# Minimizing the number of optimizations for efficient community dynamic flux balance analysis

James D. Brunner1†*, Nicholas Chia1†
1Department of Surgery, Center for Individualized Medicine Microbiome Program,
Mayo Clinic, Rochester, MN, USA
†Current Address: Mayo Clinic, 200 First St. SW, Rochester, MN, USA
*<EMAIL_ADDRESS>
## Abstract
Dynamic flux balance analysis uses a quasi-steady state assumption to
calculate an organism’s metabolic activity at each time-step of a dynamic
simulation, using the well-known technique of flux balance analysis. For
microbial communities, this calculation is especially costly and involves
solving a linear constrained optimization problem for each member of the
community at each time step. However, this is unnecessary and inefficient, as
prior solutions can be used to inform future time steps. Here, we show that a
basis for the space of internal fluxes can be chosen for each microbe in a
community and this basis can be used to simulate forward by solving a
relatively inexpensive system of linear equations at most time steps. We can
use this solution as long as the resulting metabolic activity remains within
the optimization problem’s constraints (i.e., the solution to the linear
system of equations remains a feasible solution to the linear program). Before
the solution becomes infeasible, it first becomes a feasible but degenerate
solution to the optimization problem, at which point we solve a different but
related optimization problem to choose an appropriate basis and continue the
forward simulation. We
demonstrate the efficiency and robustness of our method by comparing it with
currently used methods on a four-species community, and show that our method
requires at least $91\%$ fewer optimizations. For
reproducibility, we prototyped the method using Python. Source code is
available at `https://github.com/jdbrunner/surfin_fba`.
## Author summary.
The standard methods in the field for dynamic flux balance analysis (FBA)
carry a prohibitively high computational cost because they require solving a
linear optimization problem at each time-step. We have developed a novel
method for producing solutions to this dynamical system which greatly reduces
the number of optimization problems that must be solved. We prove
mathematically that we can solve the optimization problem once and simulate
the system forward as an ordinary differential equation (ODE) for some time
interval, and solutions to this ODE provide solutions to the optimization
problem. Eventually, the system reaches an easily checkable condition which
implies that another optimization problem must be solved. We compare our
method against typically used methods for dynamic FBA to validate that it
provides equivalent solutions while requiring fewer linear-program solutions.
## Introduction.
### Microbial communities and human health.
The makeup of microbial communities is often complex, dynamic, and hard to
predict. However, microbial community structure has a profound effect on human
health and disease [1, 2, 3, 4, 5, 6, 7]. These two facts have led to
significant interest in mathematical models which can predict relative
abundances among microbes in a community. Various dynamical models have been
proposed to explain and predict microbial community population dynamics [8, 9,
10, 11, 12]. Among these are models which propose that interactions between
species are mediated by the metabolites that each species produces and
consumes [13, 14], and there is significant evidence that these models perform
better than models which depend on direct interaction between species [15,
16].
Recently, advances in genetic sequencing have allowed the creation of genome-
scale models (GEMs) that reflect the internal network of cellular metabolism,
and can therefore be used to predict metabolite use and production [17, 18,
19]. This technique can be extended to microbial community modeling by
combining GEMs of different species. There has been significant interest in
using GEMs to predict relative populations of stable microbial communities
[20, 21, 22, 23, 24, 25, 26]. Community metabolic modeling can not only
predict relative populations, but also holds the potential to predict and
explain the community metabolite yield, which can have a profound effect on
health [4]. Furthermore, model repositories such as the online bacterial
bioinformatics resource _PATRIC_ [27] or the _BiGG model database_ [28] make
it possible to build community models using information from individual
species investigations.
GEMs can be used to predict microbial growth rates as well as metabolite
consumption and production rates using a process called _flux balance
analysis_ (FBA). Because these predictions appear in the form of rates of
change, they can be used to define a metabolite mediated dynamical model,
simply by taking as a vector field the rates of change predicted by FBA. We
can therefore combine the techniques of metabolite mediated dynamic modeling
and community metabolic modeling to produce dynamic predictions of microbial
community population size and metabolite yield. This strategy is called
_dynamic FBA_ [29, 30, 31], and has recently been used to model microbial
communities [32, 33, 34].
Dynamic FBA, when implemented naïvely, requires a linear optimization problem
to be repeatedly solved, and carries a high computational cost for even small
communities. Furthermore, _in silico_ experiments may need to be repeated many
times over various environmental conditions or using various parameter choices
in order to make robust conclusions or to accurately fit model parameters. As
a result, implementations of dynamic FBA which depend on optimization at every
time-step carry a prohibitively high computational cost when used to simulate
larger microbial communities. The implementation of dynamic FBA in the popular
COBRA toolbox software package [17] is done in this way, and essentially all
more efficient available tools for simulating dynamic FBA fundamentally use an
ODE solver approach with optimization at each time-step [31, 35, 36, 24, 37,
38]. Dynamic FBA can be improved by taking advantage of the linear structure
of the optimization problem which provides a choice of basis for an optimal
solution that may be reused at future time-steps [39, 40]. However, the
optimizations that this strategy requires involve solutions with non-unique
bases. This means that a basis chosen at random may fail at future time-steps,
yielding a solution to the linear program that is non-optimal or infeasible.
In order to implement dynamic FBA without optimizing at each time step, we use
an optimal basic set for the FBA linear optimization problem to create a
system of linear equations whose solutions at future time-steps coincide with
the solutions to the FBA optimization problem. To solve the problem of non-
uniqueness among bases, we prove that there exists a choice of basis that
allows forward simulation for a given optimal flux solution and provide a
method to choose this basis. Note that this method does not choose among a set
of non-unique optimal flux solutions, but instead chooses a basis for a single
given optimum. To choose among multiple optimal flux solutions, biological,
rather than mathematical, considerations should be used.
In this manuscript, we detail how dynamic FBA can be simulated forward without
re-optimization for some time interval, and give a method for doing so. We
propose conditions on an optimal basic set for the FBA linear optimization
problem which allows for forward simulation, and we prove that such a choice
exists. We then detail how to choose this basis set, and finally give examples
of simulations which demonstrate the power of our method. For reproducibility,
we make a prototype implementation of our method in the Python language
available at `https://github.com/jdbrunner/surfin_fba`.
## Background
### Flux balance analysis.
With the advent of genetic sequencing and the resulting genome scale
reconstruction of metabolic pathways, methods have been developed to analyze
and draw insight from such large scale models [18]. To enable computation of
relevant model outcomes, constraint based reconstruction and analysis (COBRA)
is used to model steady state fluxes $v_{i}$ through a microorganism’s
internal metabolic reactions under physically relevant constraints [18]. One
of the most basic COBRA methods, called _flux balance analysis_ (FBA)
optimizes some combination of reaction fluxes $\sum\gamma_{i}v_{i}$ which
corresponds to increased cellular biomass, subject to the constraint that the
cell’s internal metabolism is at equilibrium:
$$\Gamma\bm{v}=0 \qquad (1)$$
where $\Gamma$ is the _stoichiometric matrix_, a matrix describing the
stoichiometry of the metabolic model.
This optimization is chosen because it reflects the optimization carried out
by nature through evolution [18]. The vector
$\bm{\gamma}=(\gamma_{1},\gamma_{2},...,\gamma_{d})$ is an encoding of
cellular objectives, reflecting the belief that the cell will be optimized to
carry out these objectives. The constraint Eq. 1 means that any optimal set of
fluxes found by FBA corresponds to a steady state of the classical model of
chemical reaction networks [41]. This reflects the assumption that the cell
will approach an internal chemical equilibrium.
The optimization is done over a polytope of feasible solutions defined by the
inequalities $v_{i,min}\leq v_{i}\leq v_{i,max}$, or possibly more complicated
linear constraints. See Fig. 1 for a geometric representation of an example of
the type of linear optimization problem that is carried out. By convention,
forward and reverse reactions are not separated and so negative flux is
allowed. Linear optimization problems like FBA often give rise to an infinite
set of optimal flux vectors $\bm{v}=(v_{1},v_{2},...,v_{d})$. Geometrically,
this set will correspond to some face of the polytope of feasible solutions.
To draw conclusions despite this limitation, many methods have been developed
to either characterize the set of optimal solutions, as with flux variability
analysis (FVA), or enforce more constraints on the network to reduce the size
of this set, as with loopless FVA [18].
### Dynamic FBA.
FBA provides a rate of increase of biomass which can be interpreted as a
growth rate for a cell. Furthermore, a subset of the reactions of a GEM
represent metabolite exchange between the cell and its environment. By
interpreting constraints on nutrient exchange reactions within the metabolic
network as functions of the available external metabolites and fluxes of
exchange reactions as metabolite exchange rates between the cell and its
environment, the coupled system can be modeled. The simplest way to do this is
to use an Euler method, as in [30].
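As a sketch of that Euler loop (not the implementation of [30]): the model data `Gamma`, `gamma`, and `Gamma_star`, and the bound function `bounds_fn`, are placeholders a genome-scale model would supply, with `Gamma_star` mapping the optimal flux vector to net metabolite exchange rates.

```python
import numpy as np
from scipy.optimize import linprog

def euler_dfba(x, y, Gamma, gamma, Gamma_star, bounds_fn, dt=0.01, n_steps=500):
    """Direct-method dynamic FBA for one organism: one LP solve per step."""
    traj = [(x, y.copy())]
    for _ in range(n_steps):
        # flux bounds depend on the currently available metabolites y
        res = linprog(-gamma, A_eq=Gamma, b_eq=np.zeros(Gamma.shape[0]),
                      bounds=bounds_fn(y), method="highs")
        if res.status != 0:          # infeasible LP: stop (or shrink dt)
            break
        growth, v = -res.fun, res.x
        dydt = -x * (Gamma_star @ v)        # metabolite exchange rates
        x = x + dt * growth * x             # Euler step for biomass
        y = np.maximum(y + dt * dydt, 0.0)  # clip to avoid over-depletion
        traj.append((x, y.copy()))
    return traj
```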
In addition to Euler’s method, more sophisticated ODE solvers may be used in
the so-called “direct” method of simply recomputing the FBA optimization at
every time-step. This can provide better solution accuracy and potentially
larger time-steps, but may also require more than one FBA optimization at each
time-step. For instance, the Runge-Kutta fourth order method [42] requires
four FBA solutions at each time step. Direct methods are implemented in the
COBRA toolbox [17] and are the central algorithm in many modern tools,
including those of Zhuang et al. [31, 35], Harcombe et al. [36], Zomorrodi et
al. [24], Louca and Doebeli [37], and Popp and Centler [38]. Notably, any
direct method requires at least one complete recalculation of the network
fluxes _at each time-step_.
However, re-solving the system at each time step is not necessary, as the
solution to the optimization problem at some initial time can be used to
compute future optimal solutions. Höffner et al. [40] used this observation
to introduce a variable step-size method for dynamic FBA. In that method a
basic index set is chosen by adding biological constraints to the optimization
problem hierarchically until a unique optimal flux vector is found. The
challenge of such an approach is in choosing the basis for the optimal
solution, as the optimal basis is not guaranteed to be unique even for a
unique optimal flux solution. In fact, due to the nature of the method of
Höffner et al. and of our method, any optimization past the initial solution
that must be carried out is guaranteed to have a solution with a non-unique
basis. Furthermore, many choices of optimal basis will not provide a solution
for future time-steps, so that choosing among these bases must be done
intelligently. Unfortunately, Höffner et al. [40] do not provide a method for
choosing among non-unique bases for a single linear program solution.
Our method seeks to solve this problem by choosing a basis from among the
possibilities provided from an FBA solution which is most likely to remain
optimal as simulation proceeds forward. We therefore prioritize reducing the
number of times the linear program must be solved, choosing our basis based on
the mathematical properties of the system which gives the best chance of
providing a solution at future time-steps.
Additionally, a method described as the “dynamic optimization approach” was
introduced by Mahadevan et al. [29]; however, this method is computationally
expensive. In particular, the method given in [29] involves optimizing over
the entire time-course simulated, and so is formulated as a non-linear program
which only needs to be solved once. While this method requires only one
optimization, this optimization is itself prohibitively difficult due to the
dimensionality of the problem growing with the fineness of time-
discretization.
### The dynamic FBA model for communities.
We can write a metabolite mediated model for the population dynamics of a
community of organisms $\bm{x}=(x_{1},...,x_{p})$ on a medium composed of
nutrients $\bm{y}=(y_{1},...,y_{m})$:
$$\dot{x}_{i}=g_{i}(\bm{\psi}_{i}(\bm{y}))x_{i} \qquad (2)$$
$$\dot{y}_{j}=-\sum_{i=1}^{p}\psi_{ij}(\bm{y})x_{i} \qquad (3)$$
where $\bm{\psi}_{i}$ is a vector of the fluxes of nutrient exchange reactions
for organism $x_{i}$ as determined by FBA. Using FBA to determine
$\bm{\psi}_{i}$ is therefore a quasi-steady state assumption on the internal
metabolism of the organisms $x_{i}$ [43, 44, 45].
Recall that the basic assumption of flux balance analysis is that, given a
matrix $\Gamma_{i}$ giving the stoichiometry of the network of reactions in a
cell of organism $x_{i}$, the growth rate $g_{i}(\bm{y})$ is the maximum
determined by solving the following linear program [18]:
$$\left\{\begin{array}{r}\max(\bm{v}_{i}\cdot\bm{\gamma}_{i})\\ \Gamma_{i}\bm{v}_{i}=0\\ \bm{c}^{1}_{i}\leq\bm{v}_{i}\leq\bm{c}^{2}_{i}(\bm{y})\end{array}\right\} \qquad (4)$$
where $\bm{c}^{1}_{i}$ is some vector of lower flux bounds while
$\bm{c}^{2}_{i}(\bm{y})$ is some vector-valued function of the available
metabolites which represents upper flux bounds. The key observation allowing
dynamic FBA is that the optimal solution to this problem also determines
$\bm{\psi}_{i}$ simply by taking $\psi_{ij}$ to be the value of the flux
$v_{ij}$ of the appropriate metabolite exchange reaction. For clarity, we will
relabel the elements of $\bm{v}_{i}$ so that $\psi_{ik}=v_{ij}$ if $v_{ij}$ is
the $k^{th}$ exchange flux, and $\phi_{ik}=v_{ij}$ if $v_{ij}$ is the $k^{th}$
internal flux. The objective vector $\bm{\gamma}_{i}$ indicates which
reactions within the cell contribute directly to cellular biomass, and so is
non-zero only in elements corresponding to internal fluxes. We can therefore
rewrite this vector to include only elements corresponding to internal fluxes,
so that the objective of the optimization is to maximize
$\bm{\gamma}_{i}\cdot\bm{\phi}_{i}$.
The stoichiometry of metabolite exchange reactions is represented by standard
basis vectors [18]. Therefore, we can partition $\Gamma_{i}$ as
$$\Gamma_{i}=\begin{bmatrix}I&-\Gamma_{i}^{*}\\ 0&\Gamma_{i}^{\dagger}\end{bmatrix} \qquad (5)$$
where $I$ is the identity matrix of appropriate size, and $\Gamma_{i}^{*}$ and
$\Gamma_{i}^{\dagger}$ contain the stoichiometry of the internal reactions
[18, 46, 47]. Making this change in notation allows us to see that the
optimization problem of flux balance analysis is essentially internal to the
cell, with external reactions providing constraints.
We can see from Eq. 5 that $\ker(\Gamma_{i})$ is isomorphic to
$\ker(\Gamma^{\dagger}_{i})$, and so we can maximize over this kernel. Then,
the exchange reaction fluxes are determined by the internal fluxes according
to the linear mapping $\bm{\psi}_{i}=\Gamma^{*}_{i}\bm{\phi}_{i}$. The
maximization of FBA becomes a maximization problem over the internal fluxes
(in fact, we could project onto the kernel of the matrix
$\Gamma^{\dagger}_{i}$ and so reduce the dimensionality of the problem;
however, in practice this projection is not numerically stable). We rewrite
Eq. 4 using Eq. 5 and combine with Eqs. 2 and 3 to form the differential
algebraic system
$$\frac{dx_{i}}{dt}=x_{i}(\bm{\gamma}_{i}\cdot\bm{\phi}_{i}) \qquad (6)$$
$$\frac{d\bm{y}}{dt}=-\sum_{i}x_{i}\Gamma^{*}_{i}\bm{\phi}_{i} \qquad (7)$$
$$\left\{\begin{array}{r}\max(\bm{\phi}_{i}\cdot\bm{\gamma}_{i})\\ \Gamma^{\dagger}_{i}\bm{\phi}_{i}=0\\ \bm{c}^{1}_{i}\leq\begin{bmatrix}\Gamma^{*}_{i}\\ I\end{bmatrix}\bm{\phi}_{i}\leq\bm{c}^{2}_{i}(\bm{y})\end{array}\right\} \qquad (11)$$
where each $\bm{\phi}_{i}$ is determined by the optimization Eq. 11, all
carried out separately. Note that this is a metabolite mediated model of
community growth as defined in [15]. That is, the coupling of the growth of
the separate microbes is due to the shared pool of metabolites $\bm{y}$. Each
separate optimization which determines $\bm{\phi}_{i}$ at a single time-step
depends on $\bm{y}$, and each $\bm{\phi}_{i}$ determines some change in
$\bm{y}$. Furthermore, each optimization is carried out in a manner that
depends only on the status of the metabolite pool and is independent from the
optimizations of other organisms. There is therefore no shared “community
objective”. Instead, each organism optimizes according to only its own
internal objective.
We write, for full generality, upper and lower dynamic bounds on internal and
exchange reactions, and assume that each function $c_{ij}(\bm{y})\in
C^{\infty}$. We let
$$A_{i}=\begin{bmatrix}(\Gamma_{i}^{*})^{T},\;-(\Gamma_{i}^{*})^{T},\;I,\;-I\end{bmatrix}^{T} \qquad (12)$$
so that we can rewrite the optimization problem Eq. 11 as
$$\left\{\begin{array}{r}\max(\bm{\phi}_{i}\cdot\bm{\gamma}_{i})\\ A_{i}\bm{\phi}_{i}\leq\bm{c}_{i}(\bm{y},t)\\ \Gamma^{\dagger}_{i}\bm{\phi}_{i}=\bm{0}\end{array}\right\} \qquad (13)$$
for ease of notation.
We now hope to select a basic index set $\mathcal{I}_{i}$ for Eq. 13 for each
organism $x_{i}$ so that each $\bm{\phi}_{i}(t)$ is a solution to the
resulting linear system of equations.
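As an illustration of this bookkeeping, the following short sketch (with an arbitrary toy $\Gamma^{*}_{i}$) assembles the constraint matrix $A_{i}$ of Eq. 12.

```python
import numpy as np

def build_A(Gamma_star):
    """Stack Eq. 12: upper/lower exchange-bound rows, then upper/lower
    bound rows for each of the d internal fluxes."""
    d = Gamma_star.shape[1]
    return np.vstack([Gamma_star, -Gamma_star, np.eye(d), -np.eye(d)])

Gamma_star = np.array([[1.0, -1.0, 0.0],
                       [0.0,  1.0, -1.0]])   # toy 2-metabolite, 3-flux map
print(build_A(Gamma_star).shape)             # (2 + 2 + 3 + 3, 3) = (10, 3)
```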
## Methods.
### Linear optimization preliminaries.
In this manuscript, we will rewrite the FBA optimization problem in the form
$$\left\{\begin{array}{c}\max(\bm{\phi}\cdot\bm{\gamma})\\ A\bm{\phi}\leq\bm{c}\\ \Gamma^{\dagger}\bm{\phi}=0\end{array}\right\} \qquad (14)$$
where the matrices $A$ and $\Gamma^{\dagger}$ are derived from the
where the matrices $A$ and $\Gamma^{\dagger}$ are derived from the
stoichiometric matrix and flux constraints. Such a problem is often referred
to as a _linear program_ (LP). We now recall some well known results from the
study of linear programming (see, for example [48, 40]).
First, we note that Eq. 14 can be rewritten in the so-called _standard form_
with the addition of _slack variables_ $\bm{s}=(s_{1},...,s_{n})$ which
represent the distance each of the $n$ constraints is from its bound as
follows:
$$\left\{\begin{array}{c}\max(\bm{\tilde{\phi}}\cdot\bm{\tilde{\gamma}})\\ \begin{bmatrix}\tilde{A}&I\end{bmatrix}\begin{bmatrix}\bm{\tilde{\phi}}\\ \bm{s}\end{bmatrix}=\bm{c}\\ \tilde{\phi}_{i}\geq 0,\;s_{i}\geq 0\end{array}\right\} \qquad (15)$$
Standard form requires that we rewrite $\phi_{i}=\phi_{i}^{+}-\phi_{i}^{-}$
and then define
$\bm{\tilde{\phi}}=(\phi_{1}^{+},\phi_{2}^{+},...,\phi_{d}^{+},\phi_{1}^{-},\phi_{2}^{-},...,\phi_{d}^{-})$
so that we require non-negativity of each variable, and the matrix
$\tilde{A}=\left[A\;B\right]$, $B=-A$. We rewrite the problem in this form to
make use of established results, and for ease of notation will write
$\bm{\phi}$ instead of $\bm{\tilde{\phi}}$ when it is clear which form of the
problem we are discussing.
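A small sketch of this rewrite on toy data (illustrative only):

```python
import numpy as np

def to_standard_form(A, c):
    """Rewrite A phi <= c as [A  -A  I][phi+; phi-; s] = c with all
    variables non-negative, following Eq. 15."""
    n = A.shape[0]
    A_tilde = np.hstack([A, -A])                 # phi = phi+ - phi-
    return np.hstack([A_tilde, np.eye(n)]), c

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])   # toy constraints
c = np.array([10.0, 10.0, 30.0])
M, b = to_standard_form(A, c)
print(M.shape)   # (3, 7): two fluxes -> four signed variables plus three slacks
```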
We will make use of the well-known result that there exists an _optimal basis_
or _basic set_ for a bounded linear program [49]. To state this result, we
first define the notation $B_{\mathcal{J}}$ to be the matrix with columns of
$[\tilde{A}\;I]$ corresponding to some index set
$\{k_{1},k_{2},...,k_{n}\}=\mathcal{J}$, and if $B_{\mathcal{J}}$ is
invertible we define the notation $\bm{w}_{\mathcal{J}}(\bm{a})$ so that
$$(\bm{w}_{\mathcal{J}}(\bm{a}))_{l}=\left\{\begin{array}{cc}(B^{-1}_{\mathcal{J}}\bm{a})_{j}&l=k_{j}\in\mathcal{J}\\ 0&l\not\in\mathcal{J}\end{array}\right. \qquad (16)$$
for any $\bm{a}\in\mathbb{R}^{n}$. We may now define an _optimal basis_ and
_optimal basic set_.
###### Definition 1.
A _basic optimal solution_ to a linear program is an optimal solution along
with some index set $\{k_{1},k_{2},...,k_{n}\}=\mathcal{I}$ such that
$\bm{w}=\bm{w}_{\mathcal{I}}(\bm{c})$, where $\bm{c}$ is the vector of
constraints as in Eq. 15. The variables $\{\bm{w}_{i}|i\in\mathcal{I}\}$ are
referred to as _basic variables_ , and the index set $\mathcal{I}$ is referred
to as the _basic index set_.
Finally, if there exists a bounded, optimal solution to Eq. 15, then there
exists a basic optimal solution and corresponding basic index set.
For a given basic optimal solution vector $\bm{w}$, there may be more than one
basic index set $\mathcal{I}$ such that $\bm{w}=\bm{w}_{\mathcal{I}}(\bm{c})$.
Such a solution is called _degenerate_. Clearly a necessary condition for such
non-uniqueness is that there exists some $k\in\mathcal{I}$ such that
$w_{k}=0$. This is also a sufficient condition as long as there is some column
of $[\tilde{A}\,I]$ which is not in the column space of
$B_{\mathcal{I}\setminus\{k\}}$.
### Forward simulation without re-solving.
Consider again Eq. 13, the linear program that must be solved at each time
point of the dynamical system for each microbial population. Information from
prior solutions can inform future time-steps as long as the region of feasible
solutions has not qualitatively changed. Thus, we may only need to solve the
optimization problem a few times over the course of a simulation. The key
observation making this possible is that the simplex method of solving a
linear program provides an optimal basis for the solution. We may often re-use
this basis within some time interval, and therefore find optimal solutions
without re-solving the linear program.
In order to do this, we need to find a form of the solution which may be
evolved in time. Thus, we turn the system of linear inequalities given in the
linear program into a system of linear equations. Then, if this system has a
unique solution we have reduced the task to solving a system of equations
rather than optimizing over a system of inequalities. We can find such a
system of equations by solving the linear program once, and using this
solution to create a system of equations whose solution provides the optimal
flux $\bm{\phi}_{i}$, as described above. We then use this same system to
simulate forward, without re-solving the linear program, until the solution to
the system of equations is no longer a feasible solution to the linear program.
First, the linear program Eq. 13 is transformed into standard form (Eq. 15).
Then, a basic optimal solution is found with corresponding basic index set
$\mathcal{I}_{i}$. The dynamical system Eqs. 6, 7 and 13 can then be evolved
in time using Eq. 16. This evolution is accurate until some $w_{ij}$ becomes
negative (meaning that the solution is no longer a feasible solution to the
linear program). At this point, a new basis must be chosen. That is, until
$\bm{w}_{\mathcal{I}_{i}}(\bm{c}(t))$ becomes infeasible, we let
$(\phi_{j_{1}}(\bm{c}_{i}(t)),...,\phi_{j_{m}}(\bm{c}_{i}(t)),s_{1}(\bm{c}_{i}(t)),...,s_{n}(\bm{c}_{i}(t)))=\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$
and replace Eqs. 6, 7 and 13 with
$$\frac{dx_{i}}{dt}=x_{i}(\bm{\gamma}_{i}\cdot\bm{\phi}_{i}(\bm{c}_{i}(t))) \qquad (17)$$
$$\frac{d\bm{y}}{dt}=-\sum_{i}x_{i}\Gamma^{*}_{i}\bm{\phi}_{i}(\bm{c}_{i}(t)) \qquad (18)$$
One major difficulty in this technique is that a unique $\bm{w}_{i}$ does not
guarantee a unique basis set $\mathcal{I}_{i}$. If we have some
$(w_{\mathcal{I}_{i}})_{j}=0$ for $j\in\mathcal{I}_{i}$, then there exists
some alternate set $\hat{\mathcal{I}}_{i}$ such that
$\bm{{w}}_{\hat{\mathcal{I}}_{i}}=\bm{{w}}_{\mathcal{I}_{i}}$. Such a solution
$\bm{{w}}_{\mathcal{I}_{i}}$ is called _degenerate_. In a static
implementation of a linear program, the choice of basis of a degenerate
solution is not important, as one is interested in the optimal vector and
optimal value. However, as we will demonstrate with Example 1, the choice of
basis of a degenerate solution is important in a dynamic problem. In fact, if
the system given in Eqs. 17 and 18 is evolved forward until
$\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ becomes infeasible, the time at
which the system becomes infeasible is the time at which we have some
$(w_{\mathcal{I}_{i}})_{j}=0$ for $j\in\mathcal{I}_{i}$. Thus, we need to
resolve Eq. 13 whenever $\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ becomes
degenerate, which will be the final time-point at which the
$\bm{w}_{\mathcal{I}_{i}}(\bm{c}_{i}(t))$ is feasible.
###### Example 1.
Consider the dynamic linear program
$$\left\{\begin{array}{c}\max((1,1)\cdot\bm{v})\\ \begin{bmatrix}1&0\\ 0&1\\ 1&2\end{bmatrix}\bm{v}\leq\begin{bmatrix}10\\ 10\\ 30-t\end{bmatrix}\\ v_{i}\geq 0\end{array}\right\} \qquad (19)$$
In standard form at $t=0$, this linear program becomes
$$\left\{\begin{array}{c}\max((1,1)\cdot\bm{v})\\ \begin{bmatrix}1&0&1&0&0\\ 0&1&0&1&0\\ 1&2&0&0&1\end{bmatrix}\begin{bmatrix}\bm{v}\\ \bm{s}\end{bmatrix}=\begin{bmatrix}10\\ 10\\ 30\end{bmatrix}\\ v_{i},s_{i}\geq 0\end{array}\right\} \qquad (20)$$
which has the unique solution $\bm{w}=(10,10,0,0,0)$. There are three choices
of basic index sets: $\mathcal{I}_{1}=\{1,2,3\}$,
$\mathcal{I}_{2}=\{1,2,4\}$, and $\mathcal{I}_{3}=\{1,2,5\}$. The
resulting bases are
$$B_{\mathcal{I}_{1}}=\begin{bmatrix}1&0&1\\ 0&1&0\\ 1&2&0\end{bmatrix}\quad B_{\mathcal{I}_{2}}=\begin{bmatrix}1&0&0\\ 0&1&1\\ 1&2&0\end{bmatrix}\quad B_{\mathcal{I}_{3}}=\begin{bmatrix}1&0&0\\ 0&1&0\\ 1&2&1\end{bmatrix}$$
Computing Eq. 16 at $t>0$ for each, we have that $B_{\mathcal{I}_{1}}$ yields
$\bm{w}_{\mathcal{I}_{1}}(\bm{c}(t))=(10-t,10,t,0,0)$, $B_{\mathcal{I}_{2}}$
yields
$\bm{w}_{\mathcal{I}_{2}}(\bm{c}(t))=(10,10-\nicefrac{{t}}{{2}},0,\nicefrac{{t}}{{2}},0)$,
and $B_{\mathcal{I}_{3}}$ yields
$\bm{w}_{\mathcal{I}_{3}}(\bm{c}(t))=(10,10,0,0,-t)$, shown in Fig. 1 for
$t>0$. Thus, only $\bm{w}_{\mathcal{I}_{2}}(\bm{c}(t))$ solves the dynamic
problem because $\bm{w}_{\mathcal{I}_{1}}(\bm{c}(t))$ is not optimal and
$\bm{w}_{\mathcal{I}_{3}}(\bm{c}(t))$ is not feasible for $t>0$. We may follow
$\bm{w}_{\mathcal{I}_{2}}$ and be assured of remaining at an optimal solution
to the linear program until $t=20+\varepsilon$, at which point
$\bm{w}_{\mathcal{I}_{2}}=(10,-\varepsilon/2,0,10+\varepsilon/2,0)$, which is not a feasible
solution to the linear program. At time $t=20$, a re-optimization is required
to choose a new basis.
Notice that the correct choice of basis fundamentally depends on the time-
varying bound function $\bm{c}(t)=(10,10,30-t)$. To see this, consider other
possible time-varying bounds $\bm{c}(t)$ which have $\bm{c}(0)=(10,10,30)$.
For example, if $\bm{c}(t)=(10-t,10-t,30)$, then only $B_{\mathcal{I}_{3}}$
would give the correct $\bm{w}(\bm{c}(t))$ for $t>0$.
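These computations are easy to check numerically; a short NumPy verification of the three candidate bases at $t=1$ (indices shifted to $0$-based):

```python
import numpy as np

M = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 2., 0., 0., 1.]])          # [A  I] from Eq. 20
c = lambda t: np.array([10., 10., 30. - t])

for name, idx in {"I1": [0, 1, 2], "I2": [0, 1, 3], "I3": [0, 1, 4]}.items():
    B = M[:, idx]                              # candidate basis columns
    w = np.zeros(5)
    w[idx] = np.linalg.solve(B, c(1.0))        # basic variables at t = 1
    print(name, w)
# I1 -> [ 9. 10.  1.  0.  0.]  feasible but suboptimal (objective 19)
# I2 -> [10.  9.5 0.  0.5 0.]  feasible and optimal (objective 19.5)
# I3 -> [10. 10.  0.  0. -1.]  infeasible (negative slack)
```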
Fig 1: Geometric representation of Example 1 for $t_{3}>t_{2}>t_{1}>0$,
showing the three options for bases which are equivalent at $t=0$. Note that
the best choice depends on the function $\bm{c}(t)=(10,10,30-t)$ and cannot be
chosen using the static problem alone. The feasible region of the optimization
problem is shown in gray.
### A basis for the flux vector.
We now provide a method to choose a basis $\mathcal{I}_{i}$ for each organism
$x_{i}$ in the case of a degenerate solution. Consider an optimal solution
$\bm{w}_{i}$ to the linear program Eq. 15. To simulate forward according to
Eqs. 17 and 18, we need for each organism $x_{i}$ a basic index set
$\mathcal{I}_{i}$ such that
$$\left\{\begin{array}{c}\bm{\dot{w}}_{i}=\bm{w}_{\mathcal{I}_{i}}\!\left(\frac{d}{dt}\bm{c}_{i}\right)\\ \begin{bmatrix}\tilde{A}&I\end{bmatrix}\bm{\dot{w}}_{i}=\frac{d}{dt}\bm{c}_{i}\\ (\bm{w}_{\mathcal{I}_{i}})_{j}=0\Rightarrow\dot{w}_{ij}\geq 0\end{array}\right\} \qquad (21)$$
so that the solution remains feasible, and furthermore that $\bm{\dot{w}}_{i}$
is optimal over the possible choice of basic index sets for $\bm{w}_{i}$. This
is obviously a necessary condition for forward simulation within some non-
empty time interval, and can be made sufficient (although no longer necessary)
by making the inequality
$(\bm{w}_{\mathcal{I}_{i}})_{j}=0\Rightarrow\dot{w}_{ij}\geq 0$ strict. We use
the relaxed condition for more practical applicability.
In order to develop a method based on the above observation (i.e., Eq. 21), we
must know that Eq. 15 has such a solution. We therefore require the following
lemma, which is proved in Appendix A:
###### Lemma 1.
For a linear program with the form given in Eq. 15 with a basic optimal
solution $\bm{w}$, there exists a basic index set $\mathcal{I}$ such that Eq.
21 holds and $\bm{\dot{w}}$ is optimal over the possible choice of basic index
sets for $\bm{w}$.
If Eq. 15 has only a non-degenerate solution, the unique basis will satisfy
this requirement. The challenge remains to choose from among the possible
bases of a degenerate solution.
To do this, we form a second linear program analogous to Eq. 21 in the
following way. We first find all constraints $\bm{a}_{ij}$ (i.e. rows of
$A_{i}$ or $\Gamma^{\dagger}_{i}$) such that
$\bm{a}_{ij}\cdot\bm{\phi}_{i}=c_{ij}(t)$, calling this set $\mathcal{S}_{i}$.
Note that this set contains all the rows of $\Gamma^{\dagger}_{i}$, for which
we regard $c_{ij}(t)=0$ for all $t>0$. Note that if the solution given is a
basic optimal solution, the rank of the matrix whose rows are $\bm{a}_{ij}$
for $\bm{a}_{ij}\in\mathcal{S}_{i}$ is $d$, where again $d$ is the number of
internal fluxes. This is true because we include constraints of the type
$a<\phi_{ij}<b$ as rows of $A_{i}$.
Then, we solve the linear program
$$\left\{\begin{array}{c}\max(\bm{\dot{\phi}}_{i}\cdot\bm{\gamma}_{i})\\ \bm{a}_{ij}\cdot\bm{\dot{\phi}}_{i}\leq\frac{dc_{ij}}{dt},\;\;\bm{a}_{ij}\in\mathcal{S}_{i}\end{array}\right\} \qquad (22)$$
We may then use any basis $B_{\mathcal{I}}^{i}$ which solves Eq. 22 as long as
it has exactly $d$ non-basic slack variables. Lemma 1 tells us that such a
choice exists, although it may be necessary to manually pivot non-slack
variables into the basis set given by the numerical solver (in testing the
algorithm, this was necessary when using IBM ILOG CPLEX Optimization Studio,
but not when using the Gurobi Optimizer). Note that we do not need the
entire basis $B_{\mathcal{I}}^{i}$, but instead only need the $d\times d$
submatrix formed by rows of $A_{i}$ or $\Gamma_{i}^{\dagger}$ which correspond
to non-basic slack variables in the solution to Eq. 22. These appear as rows
$(\bm{a}_{i},\bm{0})$ in $B_{\mathcal{I}}^{i}$, and so this sub-matrix
uniquely determines $\bm{\phi}_{i}$. We call this smaller matrix $B_{i}$, and
label the set of row indices as $\mathcal{J}$.
The chosen basis $\mathcal{J}$ and corresponding constraints are used to
simulate forward until that particular solution becomes infeasible. At that
time, we have an optimal solution to Eq. 13 simply by continuity. We therefore
do not need to resolve Eq. 13 but instead re-form and solve Eq. 22.
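In practice the second linear program can be posed to the same LP solver. A minimal sketch, assuming dense NumPy arrays and an illustrative activity tolerance (the authors' prototype is in the `surfin_fba` repository):

```python
import numpy as np
from scipy.optimize import linprog

def choose_basis_lp(A, Gamma_dag, c, dc_dt, phi, gamma, tol=1e-9):
    """Form and solve Eq. 22. Rows of Gamma-dagger belong to S_i with
    c_ij = 0; here they are kept as equalities, which is equivalent to
    including both +/- inequality rows."""
    active = np.abs(A @ phi - c) < tol     # rows with a_ij . phi_i = c_ij(t)
    res = linprog(-gamma,                  # linprog minimizes, so negate
                  A_ub=A[active], b_ub=dc_dt[active],
                  A_eq=Gamma_dag, b_eq=np.zeros(Gamma_dag.shape[0]),
                  bounds=[(None, None)] * len(phi), method="highs")
    # res.x is phi-dot; the rows active at this solution (the non-basic
    # slacks) supply the d x d submatrix B_i used for forward simulation.
    return res
```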
### Pseudo-Code of the method.
Below, we present as pseudo-code an outline of the method. A practical
implementation may need to adaptively adjust the time-step $\Delta t$ to
ensure that no resource is artificially over-depleted past $0$.
Input: Final time $T$, initial microbial biomasses $x_{i}(0)$, initial
nutrient concentrations $y_{j}(0)$, maximum inflow rates of nutrients
$\alpha_{i}$, stoichiometric matrices $\Gamma_{i}$
Output: Timecourse simulation of biomass and nutrient concentrations
1. for each microbial population $i$ do
2.     Set $\bm{w}_{i}(0)$ to be a solution to Eq. 13 which lies on a vertex of the feasible polytope.
3.     Solve Eq. 22 to find an initial basis $B_{i}$.
4. end for
5. while $t<T$ do
6.     Integrate Eqs. 17 and 18 from $t$ to $t+\Delta t$ with $\bm{\phi}_{i}=B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t),t)$.
7.     if $B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t+\Delta t),t+\Delta t)$ is not a feasible solution then
8.         Reset $x_{i}=x_{i}(t)$, $y_{j}=y_{j}(t)$.
9.         Solve Eq. 22 to find a new basis $B_{i}$, with additional constraints representing the bounds violated by $B_{i}^{-1}\bm{c}_{\mathcal{J}}(\bm{y}(t),t)$.
10.     end if
11. end while
Algorithm 1 Dynamic FBA algorithm following Lemma 1. Note that for numerical
stability and speed, we may store the matrices $Q_{i},R_{i}$ such that
$Q_{i}R_{i}=B_{i}$ is the QR-factorization of $B_{i}$ rather than either
storing $B^{-1}_{i}$ or solving completely during each time step of numerical
integration.
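The control flow of Algorithm 1 for a single organism can be sketched as follows; the callables `c_J` (bounds for the chosen constraint rows), `feasible` (the feasibility check on Eq. 13), and `reoptimize` (re-forming and solving Eq. 22) are placeholders for the machinery described above.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def surfin_dfba(x, y, B, c_J, feasible, reoptimize, gamma, Gamma_star,
                dt=0.01, T=5.0):
    """Sketch of Algorithm 1's main loop (single organism, Euler steps)."""
    Q, R = qr(B)                  # store the QR factors rather than B^{-1}
    t = 0.0
    while t < T:
        # phi = B^{-1} c_J(y, t), via the stored QR factorization
        phi = solve_triangular(R, Q.T @ c_J(y, t))
        x_new = x + dt * x * (gamma @ phi)        # Eq. 17
        y_new = y - dt * x * (Gamma_star @ phi)   # Eq. 18
        if not feasible(phi, y_new, t + dt):
            # discard the step (i.e., reset to x(t), y(t)), pick a new basis
            B = reoptimize(y, t)                  # re-form and solve Eq. 22
            Q, R = qr(B)
            continue
        x, y, t = x_new, y_new, t + dt
    return x, y
```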
## Results.
### Number of optimizations.
We can compare the efficiency of Algorithm 1 with modern dynamic FBA methods
by counting the number of times a large linear program must be carried out
over the course of a simulation. At their core, state-of-the-art dynamic FBA tools
such as _d-OptCom_ [24] and _COMETS_ [36] employ the direct method of calling
an ODE-solving method with the linear program set as the right-hand-side. In
the case of Euler’s method, the resulting ODE can be integrated by hand
between time-steps. This last strategy is often referred to as the “static
optimization approach” [40].
We compared simulation of various combinations of the organisms _Escherichia
coli str. K-12 substr. MG1655_ (model iJR904), _Saccharomyces cerevisiae
S288C_ (model iND750), _Pseudomonas putida KT2440_ (model iJN746) and
_Mycobacterium tuberculosis H37Rv_ (model iEK1008), using models from the BiGG
database [28] (see S2 File. for details). We counted the optimizations
required for our model, as well as for direct methods using the numerical ODE
solvers _vode_ , _zvode_ , _lsoda_ , _dopri5_ , and _dop853_ from the SciPy
library. All of these numerical ODE solvers use adaptive step sizes for
accuracy and stability, and so represent optimized choices of time-steps.
Additionally, we compared the method of Höffner et al. as implemented in the
MatLab package _DFBAlab_ [39].
For our method and the direct method, we allowed exchange of every metabolite
detailed in S1 File. with initial metabolite concentrations given by that same
file, and with initial biomass of $0.3$ for each species. The file
`sim_comm.py` in the supplementary repository S3 Software. contains complete
simulation set-up.
To compare with the method of Höffner et al. [40], we use the newly available
Python package from the research group of Dr. David Tourigny titled _dynamic-
fba_ [50] for single organisms. This package allows simulation without
secondary optimizations, as ours does, and so is more similar to our prototype
tool for comparison. Unfortunately, this package is currently only able to
simulate single organisms at the time of publishing. For microbial
communities, we can compare with the MatLab package DFBAlab [39] which
requires all dynamic variables to be optimized in a secondary optimization.
For simulations with DFBAlab, we use only the low-concentration metabolites
D-glucose, oxygen, and cob(I)alamin from the M9 medium detailed in S1 File. as
dynamically varying metabolites. It is worth noting that these are the most
favorable conditions we could find for the method of Höffner et al. [40, 39]
which are still biologically equivalent to our other simulations.
| Model Combination | Algorithm 1 | Höffner | vode | zvode | lsoda | dopri5 | dop853 |
|---|---|---|---|---|---|---|---|
| iJR904 | 7 | 1 | 62 | 62 | 116 | 3313 | 6228 |
| iND750 | 4 | 1 | 91 | 91 | 85 | 3508 | 6514 |
| iJN746 | 4 | 13 | 166 | 167 | 376 | 1176 | 2249 |
| iEK1008 | 4 | 4 | 120 | 120 | 208 | 2768 | 5148 |
| iJR904 + iND750 | 4 | 24 | 240 | 211 | 346 | 5586 | 10469 |
| iJR904 + iJN746 | 30 | 479 | 420 | 420 | 744 | 2695 | 5579 |
| iJR904 + iEK1008 | 20 | 136 | 216 | 216 | 454 | 3385 | 6411 |
| iND750 + iEK1008 | 8 | 32 | 311 | 311 | 509 | 5284 | 9888 |
| iJR904 + iND750 + iEK1008 | 18 | 32* | 451 | 451 | 1282 | 6225 | 11961 |
| iJR904 + iND750 + iJN746 + iEK1008 | 56 | 672 | 1122 | 1122 | 2242 | 6837 | 13529 |
Table 1: Number of optimizations required to simulate to time $t=5$ with no
cell death or metabolite flow, using M9 minimal medium. *Simulation failed at
$t=3.034277$.

Fig 2: Time-points of re-optimizations required in simulations using the
proposed method, the method of Höffner et al. [40], and various direct
methods, shown in blue. Shown in orange are times at which the direct method
solver encountered an infeasible linear program due to numerical error.
### Error estimation.
Our method provides much less theoretical error in dynamic FBA solutions than
traditional methods. In fact, Algorithm 1 implies that a simulation of a
microbial community can be divided into time intervals on which the algorithm
is exact. Of course, this assumes that the linear ODE solved in these
intervals is solved exactly rather than numerically.
Precisely, there exists some sequence $t_{0}=0<t_{1}<\cdots<t_{n-1}<t_{n}=T$
such that if we know the optimal flux vectors $\bm{w}_{i}(t_{l})$ at time
$t_{l}$, then Lemma 1 implies the existence of a set of invertible matrices
$B_{i}^{l}$ such that solutions to Eqs. 17 and 18 are solutions to Eqs. 6, 7
and 13 for $t\in[t_{l},t_{l+1}]$. Therefore, if we are able to identify the
$t_{l}$ exactly, then Algorithm 1 provides exact solutions to the dynamic FBA
problem Eqs. 6, 7 and 13. Of course, numerical limitations imply that we will
not re-optimize precisely at each $t_{l}$, and so we must investigate the
impact of this error. However, once re-optimization is done, the method is
again exact. The result is that we have no local truncation error for any time
step taken between re-optimization after $t_{l}$ and the interval endpoint
$t_{l+1}$, except for error due to numerical integration. In comparison,
direct methods incur some integration error at every time step. This error
depends on the integration strategy used; for example, the Euler's-method-based
static optimization approach carries first-order local truncation error at
each time step. This can easily lead to ODE overshoot and infeasible
linear programs at future time-steps.
Assume that $t_{l-1}$ is known exactly, and $N$ is such that
$t^{1}=t_{l-1}+(N-1)\Delta t\leq t_{l}<t_{l-1}+N\Delta t=t^{2}$, so that there
is some possible error in the interval $[t^{1},t^{2}]$. We can estimate the
accumulated error in this time interval using a power series expansion. Let
$\bm{x}(t),\bm{y}(t)$ be solutions to Eqs. 6, 7 and 13 and
$\bm{\tilde{x}},\bm{\tilde{y}}$ be solutions given by Algorithm 1 for
$t\in[t^{1},t^{2})$. Furthermore, let $B_{i}^{l-1}$ be the invertible matrices
derived by solving Eq. 13 at $t_{l-1}$ and $B_{i}^{l}$ those derived by
solving at $t_{l}$. Then, $\bm{x}(t^{1})=\bm{\tilde{x}}(t^{1})$ and
$\bm{y}(t^{1})=\bm{\tilde{y}}(t^{1})$. For each $x_{i}$ we expand, assuming
some regularity of the functions $\bm{c}(\bm{y})$,
$$x_{i}(t^{2})-\tilde{x}_{i}(t^{2})=(\Delta t)\,x_{i}(t^{1})\left(\bm{\gamma}_{i}\cdot\left((B_{i}^{l})^{-1}-(B_{i}^{l-1})^{-1}\right)\bm{\hat{c}}_{i}(\bm{y}(t^{1}))\right)+o(\Delta t) \qquad (23)$$
and see that this method gives first order local error in time steps that
require a re-optimization.
The local error, while first order, only appears at time steps in which a re-
optimization occurred, and so global error will scale with the number of
necessary re-optimizations. This is in contrast with the classical use of
Euler’s method, which gives first order local error at every time-step, or any
other direct ODE method, whose error is dependent on the solver used.
We may compare the solutions provided by direct methods with those provided by
the method presented in Algorithm 1 and by the method of Höffner et al. [40].
The root-sum-square ($l_{2}$) differences in results are shown in Table 2. As
we argue above, direct methods are less accurate in theory than the algorithm
presented in Algorithm 1. Furthermore, direct simulations routinely failed to
simulate to time $t=5$ without encountering an infeasible linear program. This
infeasibility is the result of numerical error accumulating throughout the
simulation. The comparisons in Table 2 can be summarized by three distinct
characteristics. First, in the case of _S.cerevisiae_ , the direct methods
agree well with the newly presented method. Secondly, in the case of _E.coli_
and _M.tuberculosis_ , error seems to begin accumulating immediately. Finally,
in the case of _P.putida_ , the simulations agree well up to some time-point
at which the direct method fails and either quits entirely (as in the case of
the _dopri5_ solver which returns small error) or continues at a constant
value.
We note that discrepancies in dynamic FBA simulation may not always be due to
numerical error, but instead due to non-uniqueness in optimal flux solutions.
Our method provides a strategy for choosing between non-unique representations
(in the form of a basis) of a single optimal flux solution. The method of
Höffner et al. [40] provides a lexicographic strategy for choosing between
non-unique optimal flux solutions based on biological, rather than
mathematical, considerations. We note that for complete reproducibility, our
method should be integrated with some biologically based strategy for choosing
between non-unique optima.
| | vode | zvode | lsoda | dopri5 | dop853 | Höffner et al. |
|---|---|---|---|---|---|---|
| _E.coli_ | 5.09933 | 5.09933 | 4.61467 | 5.09928 | 5.09928 | 4.68578 |
| _M.tuberculosis_ | 1.45401 | 1.45401 | 1.45417 | 1.45415 | 1.45415 | 2.48691 |
| _S.cerevisiae_ | 0.00426 | 0.00426 | 0.00430 | 0.00429 | 0.00429 | 3.06105 |
| _P.putida_ | 15.29177 | 15.29177 | 0.07080 | 15.23826 | 15.26221 | 4.78751 |
Table 2: $l_{2}$ difference in solutions to single-organism simulations
between direct methods and the method presented in Algorithm 1.

Fig 3: Simulations of _E.coli_ , _S.cerevisiae_ , _M.tuberculosis_ and
_P.putida_ using Algorithm 1, direct solvers, and the method of Höffner et al.
In simulations of _E.coli_ and _M.tuberculosis_ , there is discrepancy early
in the simulation. In contrast, simulations of _P.putida_ agree up to the
point that an ODE solver fails.
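The comparison metric of Table 2 is simple to compute; a minimal sketch, assuming both biomass time-courses are sampled on the same time grid:

```python
import numpy as np

def l2_difference(traj_a, traj_b):
    """Root-sum-square (l2) difference between two time-courses."""
    a, b = np.asarray(traj_a), np.asarray(traj_b)
    return float(np.sqrt(np.sum((a - b) ** 2)))
```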
## Examples & applications.
There has been a recent surge in interest in modeling microbial communities
using genome-scale metabolic models, much of which has focused on equilibrium
methods [22, 21, 4, 51, 26]. In order to capture transient behavior and
dynamic responses to stimuli, dynamic FBA has also been applied to microbial
communities [24, 52, 34]. However, community dynamic FBA invariably leads to
a large dynamical system with a high-dimensional parameter space, often with
little to no knowledge of parameter values. Any parameter fitting therefore
requires repeated numerical simulation of the system. Existing tools to do
this are built around a direct simulation approach, requiring many linear
program solutions. By drastically reducing the number of optimizations
required for numerical simulation, our approach offers the promise of
efficient numerical simulation of dynamic FBA which will make parameter
fitting more tractable, and may even allow conclusions without well-fit
parameters.
Below, we demonstrate that the problem of parameter fitting is an important
one by showing that experimental outcomes in even small communities are
sensitive to changes in kinetic parameters. Precisely, the kinetic parameters
the uptake rate of nutrients (i.e., the parameters of the functions
$\bm{c}^{2}_{i}$ in Eq. 4) have a profound effect on species competition.
Next, we show how repeated simulation with randomly sampled parameters can
provide some insight into community structure even without a well-fit set of
nutrient uptake parameters. These examples demonstrate the importance of
efficient dynamic FBA to microbial community modeling.
### Prediction dependence on nutrient uptake.
The set of unknown functions $\bm{c}_{i}^{2}(\bm{y})$ in Eq. 4 present a
profound problem for dynamic FBA simulation. If the behavior of the system is
sensitive to the functions chosen and parameters of those functions, a single
simulation will be of little use in drawing biological conclusions. In order to
demonstrate that such a sensitivity exists, we repeatedly simulated the same
simple community with different randomly drawn parameters. While a more
realistic choice of function may be saturating or sigmoidal (as with Hill or
Michaelis-Menten kinetics), for the following experiment we take these
functions to be linear:
$c_{ij}^{2}(\bm{y})=\kappa_{ij}y_{j},$ (24)
meaning that the maximum uptake rate of nutrient $y_{j}$ by organism $x_{i}$
is proportional to the concentration of $y_{j}$. This choice minimizes the
number of parameters that must be chosen for our analysis of parameter
sensitivity, and is in line with an assumption of simple mass action
kinetics [53, 54].
The choice of $\kappa_{ij}$ may have a profound effect on the outcome of a
community simulation, as it represents how well an organism can sequester a
resource when doing so will optimize the organism’s growth. In order to study this
effect in a small community, we sampled a three-species community model with
$\kappa_{ij}\in(0,1)$ chosen uniformly at random. We used models for _E.coli_
, _S.cerevisiae_ and _M.tuberculosis_ downloaded from the BiGG model
database[28].
We simulated with no dilution of metabolites or microbes, and no replenishment
of nutrients. In every simulation, some critical metabolite was eventually
depleted and the organisms stopped growing. We recorded the simulated final
biomass of each organism from each simulation, and the results are shown in
Fig. 4.
Fig 4: (Top) Histogram of the final simulated biomass of each of _E.coli_ ,
_S.cerevisiae_ and _M.tuberculosis_ from 95 simulations, each with different
metabolite uptake rates $\kappa_{ij}$. (Bottom) Pair-wise comparison of the
final simulated biomass densities using a kernel density estimation. In red is
the result of uniform uptake rates $\kappa_{ij}=1$ for all $i,j$.
### Community growth effects.
As we saw in the previous section, community growth outcomes depend on the choice
of nutrient uptake rates $\kappa_{ij}$. Using Algorithm 1, we can perform
Monte-Carlo sampling in order to understand the possible effects on some
microorganism growing in some community. To do this, we randomly sample the
set of uptake rates $\kappa_{ij}$ and run simulations of various communities
for the chosen uptake rates. Then, the correlation between communities of
final simulated biomass of some organism can be interpreted as the effect of
the community on the growth of that organism. A correlation less than $1$
between growth of an organism in different communities indicates that the
community is having some effect. To see the direction of this effect, we can
fit a simple linear regression model (best fit line) to the final simulated
biomasses. Then, the slope of this line tells us if the organism benefits or
is harmed by being in one community over another.
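A sketch of this Monte-Carlo experiment follows; `simulate_community` is a hypothetical wrapper around an Algorithm 1 run returning final simulated biomass per organism, and the nutrient count is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nutrients = 20                # illustrative; set by the medium in practice
alone, trio = [], []
for _ in range(100):
    # draw uptake rates kappa_ij uniformly in (0, 1), as in Eq. 24
    kappa = rng.uniform(0.0, 1.0, size=(3, n_nutrients))
    # simulate_community: hypothetical dynamic-FBA wrapper (not defined here)
    alone.append(simulate_community(["e_coli"], kappa[:1])["e_coli"])
    trio.append(simulate_community(
        ["e_coli", "s_cerevisiae", "m_tuberculosis"], kappa)["e_coli"])

slope, intercept = np.polyfit(alone, trio, deg=1)   # best-fit line of Fig. 5
r = np.corrcoef(alone, trio)[0, 1]                  # cross-community correlation
print(f"slope={slope:.3f}, r={r:.3f}")
```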
We again simulated _E.coli_ , _S.cerevisiae_ and _M.tuberculosis_ downloaded
from the BiGG model database [28]. Simulations were run with the M9 medium
described in S1 File., with no replenishment of resources.
Each organism grew to a larger final simulated biomass when alone compared to
when in a trio with the other two, which is unsurprising given the finite
resources. This difference was the least pronounced for _S.cerevisiae_ ,
suggesting that this organism is the least negatively affected by the
competition. However, this can be seen as only a preliminary observation
without better estimates of uptake parameters. Best-fit lines are shown in
Fig. 5. Efficient dynamic FBA allows repeated simulation with randomly sampled
parameters, which gives an indication of likely behavior even without accurate
parameter fitting.
Fig 5: Final simulated biomass of _E.coli_ , _S.cerevisiae_ and
_M.tuberculosis_ when grown alone or in pairs, for randomly sampled model
parameters. Best fit lines indicate the average effect of the community on an
organism’s growth.
## Conclusion
Understanding, predicting, and manipulating the make-up of microbial
communities requires understanding a complex dynamic process. Genome-scale
metabolic models provide an approximation to this process through the quasi-
steady state assumption which leads to dynamic flux balance analysis. However,
this system is large and hard to simulate numerically, let alone analyze for
qualitative behaviors. As a first step towards a thorough analysis of
community of organisms modeled with dynamic FBA, an efficient method of
numerical simulation would provide an essential tool. However, modern tools
for simulating dynamic FBA rely on repeatedly solving an optimization problem
at every time step [31, 35, 36, 24, 37, 38].
Dynamic FBA simulation can be improved by considering the structure of these
linear programs so that many fewer optimizations are required. As of now, the
algorithm of Höffner et al. [40] is the only published method which takes
advantage of this observation. However, that method does not account for the
degeneracy of solutions to the relevant linear programs, meaning that it can
choose a solution that cannot be carried forward in time. We present a method
that chooses a basis to for forward simulation. In contrast to the method of
Höffner et al., we choose this basis in such a way that increases the
likelihood that this forward simulation is actually possible.
Efficient dynamic FBA will allow better parameter fitting to time-longitudinal
data. Furthermore, it allows for a search of parameter space which can help
predict likely model outcomes or learn maps from parameter values to model
outcomes.
## Supporting information.
#### S1 File.
M9 medium File. `m9med.csv` defines an M9 minimal medium as adapted from Monk
et al. [55].
#### S2 File.
List of Models Used. `modelsUsed.csv` provides name, ID, and URL for the four
models used in analysis of the method.
#### S3 Software.
`https://github.com/jdbrunner/surfin_fba`. Available code for the algorithm
described in the Python language. This code requires the popular COBRAPy
package for metabolic models.
## Acknowledgments
This work was supported by funding from the DeWitt and Curtiss Family
Foundation, National Cancer Institute grant R01 CA179243, and the Center for
Individualized Medicine, Mayo Clinic.
## References
* 1. Braundmeier AG, Lenz KM, Inman KS, Chia N, Jeraldo P, Walther-António MRS, et al. Individualized medicine and the microbiome in reproductive tract. Frontiers in Physiology. 2015;6:97. doi:10.3389/fphys.2015.00097.
* 2. Calcinotto A, Brevi A, Chesi M, Ferrarese R, Perez LG, Grioni M, et al. Microbiota-driven interleukin-17-producing cells and eosinophils synergize to accelerate multiple myeloma progression. Nature communications. 2018;9(1):4832.
* 3. Flemer B, Lynch DB, Brown JM, Jeffery IB, Ryan FJ, Claesson MJ, et al. Tumour-associated and non-tumour-associated microbiota in colorectal cancer. Gut. 2017;66(4):633–643.
* 4. Hale VL, Jeraldo P, Chen J, Mundy M, Yao J, Priya S, et al. Distinct microbes, metabolites, and ecologies define the microbiome in deficient and proficient mismatch repair colorectal cancers. Genome Medicine. 2018;10(1):78. doi:10.1186/s13073-018-0586-6.
* 5. Ng KM, Ferreyra JA, Higginbottom SK, Lynch JB, Kashyap PC, Gopinath S, et al. Microbiota-liberated host sugars facilitate post-antibiotic expansion of enteric pathogens. Nature. 2013;502:96 EP –.
* 6. Round JL, Mazmanian SK. The gut microbiota shapes intestinal immune responses during health and disease. Nature Reviews Immunology. 2009;9:313 EP –.
* 7. Walsh DM, Mert I, Chen J, Hou X, Weroha SJ, Chia N, et al. The Role of Microbiota in Human Reproductive Tract Cancers. In: AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY. vol. 168. WILEY 111 RIVER ST, HOBOKEN 07030-5774, NJ USA; 2019. p. 260–261.
* 8. Fisher CK, Mehta P. Identifying keystone species in the human gut microbiome from metagenomic timeseries using sparse linear regression. PloS one. 2014;9(7):e102451.
* 9. Friedman J, Higgins LM, Gore J. Community structure follows simple assembly rules in microbial microcosms. Nature Ecology & Evolution. 2017;1:0109 EP –.
* 10. Goyal A, Maslov S. Diversity, Stability, and Reproducibility in Stochastically Assembled Microbial Ecosystems. Phys Rev Lett. 2018;120:158102. doi:10.1103/PhysRevLett.120.158102.
* 11. Stein RR, Bucci V, Toussaint NC, Buffie CG, Rätsch G, Pamer EG, et al. Ecological modeling from time-series inference: insight into dynamics and stability of intestinal microbiota. PLoS computational biology. 2013;9(12):e1003388.
* 12. Sung J, Kim S, Cabatbat JJT, Jang S, Jin YS, Jung GY, et al. Global metabolic interaction network of the human gut microbiota for context-specific community-scale analysis. Nature communications. 2017;8:15393; 15393–15393. doi:10.1038/ncomms15393.
* 13. Niehaus L, Boland I, Liu M, Chen K, Fu D, Henckel C, et al. Microbial coexistence through chemical-mediated interactions. bioRxiv. 2018;doi:10.1101/358481.
* 14. Posfai A, Taillefumier T, Wingreen NS. Metabolic Trade-Offs Promote Diversity in a Model Ecosystem. Phys Rev Lett. 2017;118:028103. doi:10.1103/PhysRevLett.118.028103.
* 15. Brunner JD, Chia N. Metabolite-mediated modelling of microbial community dynamics captures emergent behaviour more effectively than species–species modelling. Journal of the Royal Society Interface. 2019;16(159):20190423.
* 16. Momeni B, Xie L, Shou W. Lotka-Volterra pairwise modeling fails to capture diverse pairwise microbial interactions. Elife. 2017;6:e25051.
* 17. Heirendt L, Arreckx S, Pfau T, Mendoza S, Richelle A, Heinken A, et al. Creation and analysis of biochemical constraint-based models: the COBRA toolbox v3.0. arXiv preprint arXiv:1710.04038. 2017;.
* 18. Lewis NE, Nagarajan H, Palsson BO. Constraining the metabolic genotype–phenotype relationship using a phylogeny of in silico methods. Nature Reviews Microbiology. 2012;10:291 EP –.
* 19. Lloyd CJ, Ebrahim A, Yang L, King ZA, Catoiu E, O’Brien EJ, et al. COBRAme: A computational framework for genome-scale models of metabolism and gene expression. PLOS Computational Biology. 2018;14(7):1–14. doi:10.1371/journal.pcbi.1006302.
* 20. Chan SHJ, Simons MN, Maranas CD. SteadyCom: Predicting microbial abundances while ensuring community stability. PLOS Computational Biology. 2017;13(5):1–25. doi:10.1371/journal.pcbi.1005539.
* 21. Diener C, Resendis-Antonio O. Micom: metagenome-scale modeling to infer metabolic interactions in the microbiota. bioRxiv. 2018;doi:10.1101/361907.
* 22. Gottstein W, Olivier BG, Bruggeman FJ, Teusink B. Constraint-based stoichiometric modelling from single organisms to microbial communities. Journal of the Royal Society Interface. 2016;13(124):20160627.
* 23. Mendes-Soares H, Mundy M, Soares LM, Chia N. MMinte: an application for predicting metabolic interactions among the microbial species in a community. BMC Bioinformatics. 2016;17(1):343. doi:10.1186/s12859-016-1230-3.
* 24. Zomorrodi AR, Islam MM, Maranas CD. d-OptCom: Dynamic Multi-level and Multi-objective Metabolic Modeling of Microbial Communities. ACS Synthetic Biology. 2014;3(4):247–257. doi:10.1021/sb4001307.
* 25. Borer B, Ataman M, Hatzimanikatis V, Or D. Modeling metabolic networks of individual bacterial agents in heterogeneous and dynamic soil habitats (IndiMeSH). PLoS computational biology. 2019;15(6).
* 26. Koch S, Kohrs F, Lahmann P, Bissinger T, Wendschuh S, Benndorf D, et al. RedCom: A strategy for reduced metabolic modeling of complex microbial communities and its application for analyzing experimental datasets from anaerobic digestion. PLoS computational biology. 2019;15(2):e1006759.
* 27. Wattam AR, Davis JJ, Assaf R, Boisvert S, Brettin T, Bun C, et al. Improvements to PATRIC, the all-bacterial bioinformatics database and analysis resource center. Nucleic acids research. 2017;45(D1):D535–D542.
* 28. King ZA, Lu J, Dräger A, Miller P, Federowicz S, Lerman JA, et al. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models. Nucleic acids research. 2016;44(D1):D515–D522.
* 29. Mahadevan R, Edwards JS, Doyle FJ. Dynamic Flux Balance Analysis of Diauxic Growth in Escherichia coli. Biophysical Journal. 2002;83(3):1331–1340. doi:10.1016/S0006-3495(02)73903-9.
* 30. Varma A, Palsson BO. Stoichiometric flux balance models quantitatively predict growth and metabolic by-product secretion in wild-type Escherichia coli W3110. Applied and Environmental Microbiology. 1994;60(10):3724–3731.
* 31. Zhuang K, Izallalen M, Mouser P, Richter H, Risso C, Mahadevan R, et al. Genome-scale dynamic modeling of the competition between Rhodoferax and Geobacter in anoxic subsurface environments. The ISME journal. 2011;5(2):305.
* 32. Henson MA, Hanly TJ. Dynamic flux balance analysis for synthetic microbial communities. IET systems biology. 2014;8(5):214–229.
* 33. Song HS, Cannon WR, Beliaev AS, Konopka A. Mathematical modeling of microbial community dynamics: a methodological review. Processes. 2014;2(4):711–752.
* 34. Succurro A, Segrè D, Ebenhöh O. Emergent subpopulation behavior uncovered with a community dynamic metabolic model of Escherichia coli diauxic growth. Msystems. 2019;4(1).
* 35. Zhuang K, Ma E, Lovley DR, Mahadevan R. The design of long-term effective uranium bioremediation strategy using a community metabolic model. Biotechnology and bioengineering. 2012;109(10):2475–2483.
* 36. Harcombe WR, Riehl WJ, Dukovski I, Granger BR, Betts A, Lang AH, et al. Metabolic Resource Allocation in Individual Microbes Determines Ecosystem Interactions and Spatial Dynamics. Cell Reports. 2014;7(4):1104–1115. doi:10.1016/j.celrep.2014.03.070.
* 37. Louca S, Doebeli M. Calibration and analysis of genome-based models for microbial ecology. Elife. 2015;4:e08208.
* 38. Popp D, Centler F. $\mu$bialSim: constraint-based dynamic simulation of complex microbiomes. bioRxiv. 2019:716126.
* 39. Gomez JA, Höffner K, Barton PI. DFBAlab: a fast and reliable MATLAB code for dynamic flux balance analysis. BMC bioinformatics. 2014;15(1):409.
* 40. Höffner K, Harwood SM, Barton PI. A reliable simulator for dynamic flux balance analysis. Biotechnology and Bioengineering. 2012;110(3):792–802. doi:10.1002/bit.24748.
* 41. Feinberg M, Horn F. Dynamics of open chemical systems and the algebraic structure of the underlying reaction network. Chemical Engineering Science. 1973;29:775–787.
* 42. Bradie B. A Friendly Introduction to Numerical Analysis. Pearson Education Inc.; 2006.
* 43. Baroukh C, Muñoz-Tamayo R, Steyer JP, Bernard O. DRUM: a new framework for metabolic modeling under non-balanced growth. Application to the carbon metabolism of unicellular microalgae. PloS one. 2014;9(8).
* 44. Øyås O, Stelling J. Genome-scale metabolic networks in time and space. Current Opinion in Systems Biology. 2018;8:51–58.
* 45. Zazueta CL, Bernard O, Gouzé JL. Reduction of Metabolic Networks keeping Core Dynamics. Discrete Applied Mathematics. 2018;157(10):2483–2493.
* 46. Kondo A, Ishii J, Hara KY, Hasunuma T, Matsuda F. Development of microbial cell factories for bio-refinery through synthetic bioengineering. Journal of biotechnology. 2013;163(2):204–216.
* 47. Bordbar A, Monk JM, King ZA, Palsson BO. Constraint-based models predict metabolic and associated cellular functions. Nature Reviews Genetics. 2014;15(2):107–120.
* 48. Bertsimas D, Tsitsiklis JN. Introduction to linear optimization. vol. 6. Athena Scientific Belmont, MA; 1997.
* 49. Tardella F. The fundamental theorem of linear programming: extensions and applications. Optimization. 2011;60(1-2):283–301.
* 50. Tourigny DS, Muriel JC, Beber ME. dfba: Software for efficient simulation of dynamic flux-balance analysis models in Python; 2020. https://gitlab.com/davidtourigny/dynamic-fba.
* 51. Islam MM, Fernando SC, Saha R. Metabolic modeling elucidates the transactions in the rumen microbiome and the shifts upon virome interactions. Frontiers in microbiology. 2019;10:2412.
* 52. Xu X, Zarecki R, Medina S, Ofaim S, Liu X, Chen C, et al. Modeling microbial communities from atrazine contaminated soils promotes the development of biostimulation solutions. The ISME journal. 2019;13(2):494–508.
* 53. Horn F, Jackson R. General mass action kinetics. Archive for Rational Mechanics and Analysis. 1972;47.
* 54. Feinberg M. Lectures on Chemical Reaction Networks; 1979. http://www.crnt.osu.edu/LecturesOnReactionNetworks.
* 55. Monk JM, Charusanti P, Aziz RK, Lerman JA, Premyodhin N, Orth JD, et al. Genome-scale metabolic reconstructions of multiple Escherichia coli strains highlight strain-specific adaptations to nutritional environments. Proceedings of the National Academy of Sciences. 2013;110(50):20338–20343.
## Appendix A Existence of desired optimal basis.
###### Lemma 1.
For a linear program with the form given in Eq. 15 with a basic optimal
solution $\bm{w}$, there exists a basic index set $\mathcal{I}$ such that Eq.
21 holds and $\bm{\dot{w}}$ is optimal over the possible choice of basic index
sets for $\bm{w}$.
###### Proof.
For convenience, we now restate Eq. 15:
$\left\\{\begin{array}[]{c}\max(\bm{\tilde{\phi}}\cdot\bm{\tilde{\gamma}})\\\
\begin{bmatrix}\tilde{A}&I\end{bmatrix}\begin{bmatrix}\bm{\tilde{\phi}}\\\
\bm{s}\end{bmatrix}=\bm{c}\\\ \tilde{\phi}_{i}\geq 0,s_{i}\geq
0\end{array}\right\\}$
where we write $(\bm{\tilde{\phi}},\bm{s})=\bm{w}$.
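As a concrete illustration, a program of this standard form can be solved numerically as follows (a minimal sketch with hypothetical $\tilde{A}$, $\bm{\tilde{\gamma}}$, and $\bm{c}$; SciPy's `linprog` is used only for illustration and returns a vertex, i.e., basic, optimal solution):

```python
# A minimal sketch of solving a standard-form program like Eq. 15 with SciPy.
# A_tilde, gamma, c are hypothetical; linprog minimizes, so gamma is negated.
import numpy as np
from scipy.optimize import linprog

A_tilde = np.array([[1.0, 2.0], [3.0, 1.0]])   # constraint matrix (n x r)
gamma = np.array([1.0, 1.0])                   # objective on the fluxes phi
c = np.array([4.0, 6.0])                       # right-hand-side bounds

n, r = A_tilde.shape
# Variables are w = (phi, s); equality constraints [A_tilde I] w = c, w >= 0.
res = linprog(c=-np.concatenate([gamma, np.zeros(n)]),
              A_eq=np.hstack([A_tilde, np.eye(n)]), b_eq=c,
              bounds=[(0, None)] * (r + n), method="highs")
phi, s = res.x[:r], res.x[r:]                  # basic optimal solution w
```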
We note that there is a finite number of basic index sets for $\bm{w}$, and so
we need only show that there exists $\mathcal{I}$ such that Eq. 21 holds.
Then, the existence of an optimal such $\mathcal{I}$ follows trivially.
If $\bm{w}$ is not degenerate, then the unique choice of basic index set
$\mathcal{I}$ satisfies Eq. 21. To see this, simply note that if $\bm{w}$ is
non-degenerate, then for every $i\in\mathcal{I}$, $w_{i}>0$. Thus, Eq. 21 only
includes non-negativity constraints on $\dot{w}_{i}$ if $i\not\in\mathcal{I}$,
and for any $i\not\in\mathcal{I}$, $\dot{w}_{i}=0$. Thus, the non-negativity
constraints are enforced. The equality constraints are enforced by the
definition of $\bm{w}_{\mathcal{I}}(\bm{a})$ given in Eq. 16, which implies
that $[\tilde{A}\;I]\bm{w}_{\mathcal{I}}(\bm{a})=\bm{a}$ for any vector
$\bm{a}\in\mathbb{R}^{n}$.
In the case of a degenerate solution $\bm{w}$, we use the following procedure
to choose a set of basic variables. Let $\mathcal{J}\subset\\{1,...,n\\}$ be
the indices of the $n_{1}$ slack variables such that $s_{j}=0$ if
$j\in\mathcal{J}$ (recalling that each $s_{i}$ is a component of the vector
$\bm{w}$). Then, let $\tilde{A}_{\mathcal{J}}$ be the matrix with rows $m_{j}$
of $\tilde{A}$ for $j\in\mathcal{J}$. Next, let $\mathcal{J}^{*}$ be the
indices of the $n_{2}$ non-slack variables such that $\phi_{j}=0$ and
$I_{\mathcal{J}^{*}}$ the corresponding rows of the identity matrix $I$.
Notice that we now have that
$M\bm{\tilde{\phi}}=\begin{bmatrix}\tilde{A}_{\mathcal{J}}\\\
-I_{\mathcal{J}^{*}}\end{bmatrix}\bm{\tilde{\phi}}=\begin{bmatrix}\bm{c}_{\mathcal{J}}\\\
\bm{0}\end{bmatrix}.$ (25)
and that if $w_{j}=0$ then either $j\in\mathcal{J}^{*}$ or $w_{j}=s_{k}$ where
$k\in\mathcal{J}$ so that $\bm{m}_{k}\cdot\bm{\tilde{\phi}}=c_{k}$ (i.e.
$s_{k}$ is a slack variable and $s_{k}=0$). Notice that because Eq. 15 has a bounded solution, we can assume without loss of generality that if $M\in\mathbb{R}^{q\times r}$, then $\mathit{rank}(M)=r$ (i.e., $M$ has full column rank), since $\bm{w}$ must satisfy at least $r$ linearly independent constraints. If this is not the case, the problem can be projected onto a lower-dimensional subspace.
Consider the linear program
$\left\\{\begin{array}[]{c}\max(\bm{y}\cdot\bm{\gamma})\\\
\begin{bmatrix}M&I\end{bmatrix}\begin{bmatrix}\bm{y}_{\bm{\tilde{\phi}}}\\\
\bm{y}_{\bm{s}}\end{bmatrix}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}}\\\
\bm{0}\end{bmatrix}\\\ y_{j}\geq 0\end{array}\right\\}.$ (26)
Assume for the moment that there is some basic optimal solution to Eq. 26 with a basic index set $\hat{\mathcal{I}}$ such that exactly $r$ slack variables are non-basic, where again $r=|\bm{\tilde{\phi}}|$ is the rank of the matrix $M$ (this is proved below). This implies that
there are $r$ linearly independent rows of $M$ (which we index by
$\mathcal{J}^{{\dagger}}$) which form an invertible matrix $\tilde{M}$ such
that
$\tilde{M}\bm{y}_{\bm{\tilde{\phi}}}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}^{{\dagger}}}\\\
\bm{0}\end{bmatrix}$ (27)
and we can then determine $\bm{y}_{\bm{s}}$ by
$\bm{y}_{\bm{s}}=\begin{bmatrix}\frac{d}{dt}\bm{c}_{\mathcal{J}}\\\
\bm{0}\end{bmatrix}-M\bm{y}_{\bm{\tilde{\phi}}}$ (28)
and note that each $(\bm{y}_{\bm{s}})_{i}\geq 0$. We now rewrite $\bm{\dot{w}}=(\bm{\dot{w}}_{\bm{\tilde{\phi}}},\bm{\dot{w}}_{\bm{s}})$ from Eq. 21 and define
$\bm{\dot{w}}_{\bm{\tilde{\phi}}}=\bm{y}_{\bm{\tilde{\phi}}}\quad\text{and}\quad\bm{\dot{w}}_{\bm{s}}=\frac{d}{dt}\bm{c}-\tilde{A}\bm{\dot{w}}_{\bm{\tilde{\phi}}}$ (29)
and conclude that this satisfies the constraints of Eq. 21. Next, we take
$\bm{\tilde{\phi}}$ to be the unique solution to
$\tilde{M}\bm{\tilde{\phi}}=\begin{bmatrix}\bm{c}_{\mathcal{J}^{{\dagger}}}\\\
\bm{0}\end{bmatrix}$ (30)
and $\bm{s}=\bm{c}-\tilde{A}\bm{\tilde{\phi}}$.
Finally, we take $\mathcal{I}=(\hat{\mathcal{I}}\setminus\mathcal{J}^{*})\cup\mathcal{J}^{c}$ and note that this basic index set enforces exactly the same $r$ linearly independent constraints as $\tilde{M}$ (in practice, we may simply use $\tilde{M}$ to find $\bm{\tilde{\phi}}$).
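A rough computational sketch of this basis-selection procedure follows (illustrative only; `A_t`, `c`, `dcdt`, `gamma` and the degenerate solution `(phi, s)` are assumed given, numerical tolerances stand in for exact zeros, and QR pivoting stands in for the choice of $\mathcal{J}^{{\dagger}}$ made in the proof):

```python
# A rough sketch of the degenerate-case basis selection described above:
# form M (Eq. 25), solve the auxiliary program (Eq. 26) for the derivative
# direction, pick r independent rows (a stand-in for J-dagger), solve Eq. 30.
import numpy as np
from scipy.linalg import qr
from scipy.optimize import linprog

def choose_basis(A_t, c, dcdt, gamma, phi, s, tol=1e-9):
    n, r = A_t.shape
    J = np.where(s <= tol)[0]                     # active slack constraints
    Jstar = np.where(phi <= tol)[0]               # active non-negativity bounds
    M = np.vstack([A_t[J], -np.eye(r)[Jstar]])    # Eq. 25
    rhs = np.concatenate([dcdt[J], np.zeros(len(Jstar))])
    q = M.shape[0]
    # Auxiliary program, Eq. 26: max gamma.y_phi s.t. [M I](y_phi, y_s) = rhs
    res = linprog(c=-np.concatenate([gamma, np.zeros(q)]),
                  A_eq=np.hstack([M, np.eye(q)]), b_eq=rhs,
                  bounds=[(0, None)] * (r + q), method="highs")
    y_phi = res.x[:r]                             # candidate derivative dw_phi
    # r linearly independent rows of M give the invertible submatrix M-tilde
    _, _, piv = qr(M.T, pivoting=True)
    rows = np.sort(piv[:r])
    M_tilde = M[rows]
    b_full = np.concatenate([c[J], np.zeros(len(Jstar))])
    phi_new = np.linalg.solve(M_tilde, b_full[rows])   # Eq. 30
    return phi_new, y_phi, rows
```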
We now prove that there is some basic optimal solution to Eq. 26 with a basic
index set $\hat{\mathcal{I}}$ such that exactly $r$ slack variables are non-
basic, where $r$ is the rank of the matrix $M$.
First we note that for any basic optimal solution, if there are $r^{*}>r$
slack variables which are non-basic, then there are $r^{*}$ rows of
$B_{\hat{\mathcal{I}}}$ which are non-zero only in the columns of ${M}$.
Therefore, $B_{\hat{\mathcal{I}}}$ is not invertible. We can conclude that the
number of non-basic slack variables is at most $r$.
Next, suppose $\bm{\dot{w}}^{*}$ is a basic optimal solution with basis $\mathcal{I}^{*}$ such that there are $r^{*}<r$ slack variables which are non-basic. We first show that we may assume that there are at least $r$ slack variables $s_{k}^{*}$ corresponding to $r$ linearly independent constraints such that $s_{k}^{*}=0$. Recall that $\tilde{A}$ was formed with repeated (negated) columns in order to write the problem in standard form (the non-negativity bounds of Eq. 15 are artificial). Therefore, we can find some vector $\bm{x}$ in the
kernel of the matrix formed by the rows of $\tilde{A}$ corresponding to zero
slacks which also has $\bm{x}\cdot\bm{\gamma}=0$. We can therefore find a
vector $\bm{y}$ in the kernel of
$\begin{bmatrix}\tilde{A}_{\mathcal{J}}&I&0\\\
-I_{\mathcal{J}^{*}}&0&I\end{bmatrix}$
which has $y_{k}=0$ if $s_{k}=0$ and $y_{j}\neq 0$ if $s_{j}\neq 0$ and $s_{j}$ corresponds to a constraint that is not a linear combination of the constraints corresponding to the $s_{k}=0$. There is at least one such constraint as long as the $0$ slack variables correspond to constraints with span less than dimension $r$, and so we can take $\bm{\dot{w}}^{*}+\lambda\bm{y}$ for some $\lambda$ and thereby increase the number of non-zero slack variables. We can therefore assume without loss of generality that there are at least $r$ slack variables $s_{k}^{*}$ corresponding to $r$ linearly independent constraints such that $s_{k}^{*}=0$, as desired.
We can finally choose some linearly independent set of $r$ constraints which
correspond to $0$ slack variables, and call the matrix whose rows are these
constraint vectors $M^{*}$. Now, because there are $r^{*}<r$ non-slack basic
variables, there is some non-slack, non-basic variable $v_{j}$ such that the
column $m_{j}^{*}$ of $M^{*}$ (and ${m}_{j}$ of ${M}$) is linearly independent
from the columns corresponding to the $r^{*}$ non-slack basic variables. We
can conclude that if
$B_{\mathcal{I}^{*}}\bm{\lambda}={m}_{j}$ (31)
then there is some $\lambda_{k}\neq 0$ where $k$ corresponds to the index of a slack variable with $s_{k}=0$. We can remove $k$ from the basic index set and add $j$ without changing $\bm{\dot{w}}^{*}$, thereby preserving optimality and feasibility. We have then increased the number of non-basic
slack variables, and we can repeat if necessary to form $\hat{\mathcal{I}}$
with exactly $r$ non-basic slack variables.
∎
# Discovering contemporaneous and lagged causal relations in autocorrelated nonlinear time series datasets
Jakob Runge
German Aerospace Center
Institute of Data Science
07745 Jena, Germany
and
Technische Universität Berlin
10623 Berlin, Germany
###### Abstract
The paper introduces a novel conditional independence (CI) based method for
linear and nonlinear, lagged and contemporaneous causal discovery from
observational time series in the causally sufficient case. Existing CI-based
methods such as the PC algorithm and also common methods from other frameworks
suffer from low recall and partially inflated false positives for strong autocorrelation, which is a ubiquitous challenge in time series. The novel
method, PCMCI+, extends PCMCI [Runge et al., 2019b] to include discovery of
contemporaneous links. PCMCI+ improves the reliability of CI tests by
optimizing the choice of conditioning sets and even benefits from
autocorrelation. The method is order-independent and consistent in the oracle
case. A broad range of numerical experiments demonstrates that PCMCI+ has
higher adjacency detection power and especially more contemporaneous
orientation recall compared to other methods while better controlling false
positives. Optimized conditioning sets also lead to much shorter runtimes than those of the PC algorithm. PCMCI+ can be of considerable use in many real world
application scenarios where often time resolutions are too coarse to resolve
time delays and strong autocorrelation is present.
## 1 INTRODUCTION
A number of frameworks address the problem of causal discovery from
observational data utilizing different assumptions. Next to Bayesian score-
based methods [Chickering, 2002], classical Granger causality (GC) [Granger,
1969], and the more recent restricted structural causal models (SCM) framework
[Peters et al., 2017, Spirtes and Zhang, 2016], conditional independence (CI)
based network learning algorithms [Spirtes et al., 2000] form a main pillar. A
main representative of the CI framework in the causally sufficient case (no
unobserved common drivers) is the PC algorithm [Spirtes and Glymour, 1991].
Its advantages lie, firstly, in the flexibility of utilizing a wide and
growing class of CI tests, from linear partial correlation (ParCorr) and non-
parametric residual-based approaches [Ramsey, 2014, Runge et al., 2019b] to
Kernel measures [Zhang et al., 2011], tests based on conditional mutual
information [Runge, 2018b], and neural networks [Sen et al., 2017]. Secondly,
the PC algorithm utilizes sparsity making it applicable also to large numbers
of variables while score- and SCM-based methods are more difficult to adapt to
nonlinear high-dimensional causal discovery.
Causal discovery in the time series case is partially less and partially more
challenging [Runge et al., 2019a]. Obviously, time-order greatly helps in
identifying causal directions for lagged links (causes precede effects). This
forms the basis of GC which, however, cannot deal with contemporaneous links
and suffers from the curse of dimensionality [Runge et al., 2019b]. SCM-based
methods such as LiNGAM [Hyvärinen et al., 2010] and also CI-based methods
[Runge et al., 2019b, Entner and Hoyer, 2010, Malinsky and Spirtes, 2018] have
been adapted to the time series case. In [Moneta et al., 2011] GC is augmented
by the PC algorithm. However, properties such as non-stationarity and
especially autocorrelation can make causal discovery much less reliable.
Here I show that autocorrelation, a ubiquitous property of time series (e.g.,
temperature data), is especially detrimental and propose a novel CI-based
method, PCMCI+, that extends the PCMCI method from [Runge et al., 2019b] to
also include discovery of contemporaneous links, which requires substantial
changes. PCMCI+ is based on two central ideas that deviate from the PC
algorithm and the time-series adaptations of FCI in [Entner and Hoyer, 2010,
Malinsky and Spirtes, 2018]: First, an edge removal phase is conducted
separately for lagged and contemporaneous conditioning sets and the lagged
phase uses much fewer CI tests. Secondly, and more importantly, PCMCI+
optimizes the choice of conditioning sets for the individual CI tests to make
them better calibrated under autocorrelation and increase detection power by
utilizing the momentary conditional independence idea [Runge et al., 2019b].
The paper is structured as follows. Section 2 briefly introduces the problem
and Sect. 3 describes the method and states theoretical results. Numerical
experiments in Sect. 4 show that PCMCI+ benefits from strong autocorrelation
and yields much more adjacency detection power and especially more orientation
recall for contemporaneous links while better controlling false positives at
much shorter runtimes than the PC algorithm. A Supplementary Material (SM)
contains proofs and further numerical experiments.
## 2 TIME SERIES CAUSAL DISCOVERY
### 2.1 PRELIMINARIES
We are interested in discovering time series graphs (e.g., [Runge, 2018a])
that can represent the temporal dependency structure underlying complex
dynamical systems. Consider an underlying discrete-time structural causal
process $\mathbf{X}_{t}=(X^{1}_{t},\ldots,X^{N}_{t})$ with
$\displaystyle X^{j}_{t}$
$\displaystyle=f_{j}\left(\mathcal{P}(X^{j}_{t}),\,\eta^{j}_{t}\right)$ (1)
where $f_{j}$ are arbitrary measurable functions with non-trivial dependencies
on their arguments and $\eta^{j}_{t}$ represents mutually ($i\neq j$) and
serially ($t^{\prime}\neq t$) independent dynamical noise. The nodes in a time
series graph $\mathcal{G}$ (example in Fig. 1) represent the variables
$X^{j}_{t}$ at different lag-times and the set of variables that $X^{j}_{t}$
depends on defines the causal parents
$\mathcal{P}(X^{j}_{t})\subset\mathbf{X}^{-}_{t+1}=(\mathbf{X}_{t},\mathbf{X}_{t-1},\ldots){\setminus}\\{X^{j}_{t}\\}$.
We denote _lagged parents_ by
$\mathcal{P}^{-}_{t}(X^{j}_{t})=\mathcal{P}(X^{j}_{t})\cap\mathbf{X}^{-}_{t}$.
A lagged ($\tau>0$) or contemporaneous ($\tau=0$) causal link
$X^{i}_{t-\tau}\to X^{j}_{t}$ exists if
$X^{i}_{t-\tau}\in\mathcal{P}(X^{j}_{t})$. Throughout this work the graph
$\mathcal{G}$ is assumed _acyclic_ and the causal links _stationary_ meaning
that if $X^{i}_{t-\tau}\to X^{j}_{t}$ for some $t$, then
$X^{i}_{t^{\prime}-\tau}\to X^{j}_{t^{\prime}}$ for all $t^{\prime}\neq t$.
Then we can always fix one variable at $t$ and take $\tau\geq 0$. Note that
the stationarity assumption may be relaxed. The graph is actually infinite in
time, but in practice only considered up to some maximum time lag
$\tau_{\max}$. We define the set of adjacencies $\mathcal{A}(X^{j}_{t})$ of a
variable $X^{j}_{t}$ to include all $X^{i}_{t-\tau}$ for $\tau\geq 0$ that
have a (lagged or contemporaneous) link with $X^{j}_{t}$ in $\mathcal{G}$. We
define contemporaneous adjacencies as
$\mathcal{A}_{t}(X^{j}_{t})=\mathcal{A}(X^{j}_{t})\cap\mathbf{X}_{t}$. A
sequence of $m$ contemporaneous links is called a _directed contemporaneous
path_ if for all $k\in\\{1,\ldots,m\\}$ the link $X^{i+k-1}_{t}\to
X^{i+k}_{t}$ occurs. We call $X^{i}_{t}$ a _contemporaneous ancestor_ of
$X^{j}_{t}$ if there is a directed contemporaneous path from $X^{i}_{t}$ to
$X^{j}_{t}$ and we denote the set of all contemporaneous ancestors as
$\mathcal{C}_{t}(X^{j}_{t})$ (which excludes $X^{j}_{t}$ itself). We denote
separation in the graph by $\bowtie$, see [Runge, 2018a] for further notation
details.
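As a minimal illustration of a process of the form (1) (hypothetical coefficients; $X^{1}_{t}$ has only an autodependency, while $X^{2}_{t}$ has a lagged autodependency and the contemporaneous parent $X^{1}_{t}$):

```python
# A toy instance of process (1): P(X^1_t) = {X^1_{t-1}} and
# P(X^2_t) = {X^2_{t-1}, X^1_t}, so X^1_t -> X^2_t is a contemporaneous link.
import numpy as np

rng = np.random.default_rng(42)
T = 500
X1, X2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    X1[t] = 0.8 * X1[t - 1] + rng.standard_normal()                # + eta^1_t
    X2[t] = 0.7 * X2[t - 1] + 0.5 * X1[t] + rng.standard_normal()  # + eta^2_t
```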
### 2.2 PC ALGORITHM
The PC algorithm is the most wide-spread CI-based causal discovery algorithm
for the causally sufficient case and utilizes the Markov and Faithfulness
assumptions as formally defined in Sect. S1. Adapted to time series
(analogously to the methods for the latent case in [Entner and Hoyer, 2010,
Malinsky and Spirtes, 2018]), it consists of three phases: First, a skeleton
of adjacencies is learned based on iteratively testing which pairs of
variables (at different time lags) are conditionally independent at some
significance level $\alpha_{\rm PC}$ (Alg. 2 with the PC option). For lagged
links, time-order automatically provides orientations, while for
contemporaneous links a collider phase (Alg. S2) and rule phase (Alg. S3)
determine the orientation of links. CI-based discovery algorithms can identify
the contemporaneous graph structure only up to a Markov equivalence class
represented as a completed partially directed acyclic graph (CPDAG). We denote
links for which more than one orientation occurs in the Markov equivalence
class by $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. Here we consider a
modification of PC that removes an undesired dependence on the order of
variables, called PC-stable [Colombo and Maathuis, 2014]. These modifications
also include either the _majority_ or _conservative_ [Ramsey et al., 2006]
rule for handling ambiguous triples where separating sets are inconsistent,
and conflicting links where different triples in the collider or orientation
phase lead to conflicting link orientations. With the _conservative_ rule the
PC algorithm is consistent already under the weaker Adjacency Faithfulness
condition [Ramsey et al., 2006]. Another approach for the time series case
(considered in the numerical experiments) is to combine vector-autoregressive
modeling to identify lagged links with the PC algorithm for the
contemporaneous causal structure [Moneta et al., 2011].
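To make the CI-test ingredient concrete, the following is a minimal ParCorr sketch (a generic illustration, not tigramite's implementation): dependence between $X$ and $Y$ given $\mathbf{Z}$ is tested via the correlation of OLS residuals with an analytic $t$-test.

```python
# A minimal linear partial correlation (ParCorr) CI test: regress x and y on
# the conditioning set z, correlate the residuals, and return a two-sided
# p-value from the t-distribution.
import numpy as np
from scipy import stats

def parcorr_test(x, y, z=None):
    """x, y: (T,) arrays; z: (T, p) array or None. Returns (parcorr, p-value)."""
    T = len(x)
    p = 0 if z is None else z.shape[1]
    if z is not None:
        Z = np.column_stack([np.ones(T), z])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x | Z
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y | Z
    r = np.corrcoef(x, y)[0, 1]
    dof = T - p - 2
    t = r * np.sqrt(dof / (1.0 - r**2))
    return r, 2 * stats.t.sf(np.abs(t), dof)
```

In the skeleton phases below, such a test would be called with the conditioning set $\mathbf{Z}$ chosen according to the respective variant (PC, PCMCI${}^{+}_{0}$, or PCMCI+).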
### 2.3 AUTOCORRELATION
Figure 1: The curse and blessing of autocorrelation. Linear example of model
(3) with ground truth links shown for the PCMCI+ case (right panel). All
autodependency coefficients are $0.95$ (except $0.475$ for $X^{5,6}$) and all
cross-coupling coefficients are $0.4$ ($\pm$ indicated by red/blue links). The
graphs show true and false link detection rates as the link width (if $>$
0.06) for true (color indicating ParCorr) and incorrect links (grey) for the
PC algorithm, Alg. 1, and the variants PCMCI+ and PCMCI${}^{+}_{0}$ as
explained in the text (detection rates based on $500$ realizations run at
$\alpha_{\rm PC}=0.01$ for $T=500$).
To illustrate the challenge of autocorrelation, in Fig. 1 we consider a linear
example with lagged and contemporaneous ground truth links shown for the
PCMCI+ case (right panel). The PC algorithm (Alg. 2 with ParCorr CI test)
starts by testing all unconditional independencies ($p=0$). Here the coupled
pairs $(X^{5},X^{6})$ as well as $(X^{7},X^{8})$ are independent of the other
variables and removed from each others adjacency sets, which shows how PC
exploits sparsity and reduces the estimation dimension compared to fitting a
full model on the whole past as in the GC framework. Due to the strong
autocorrelation the remaining variables, on the other hand, are almost all
adjacent to each other at multiple time lags in this iteration. In the next
iteration, CI for all remaining links is tested conditional on all one-
dimensional ($p=1$) conditioning sets. Here the PC algorithm removes the true
lagged link $X^{1}_{t-1}\to X^{0}_{t}$ (black dots) due to the incorrect CI
result $X^{1}_{t-1}\perp\\!\\!\\!\perp X^{0}_{t}|X^{1}_{t-2}$ (condition
marked by blue box). Later this then leads to the false positive
$X^{1}_{t-2}\to X^{0}_{t}$ (grey link) since $X^{1}_{t-1}$ is not conditioned
on. In a similar way the true link $X^{1}_{t-2}\to X^{3}_{t}$ is missed, leading to the false positive $X^{0}_{t-1}\to X^{3}_{t}$. Further, the true
contemporaneous link $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ (similarly
$X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{4}_{t}$) is removed when conditioning on
$\mathcal{S}=(X^{4}_{t-1},X^{3}_{t-1})$ (blue boxes), which leads to the false
positive autodependencies at lag $2$ for $X^{2}_{t},X^{4}_{t}$, while the
false autodependency $X^{3}_{t-2}\to X^{3}_{t}$ is due to missing
$X^{1}_{t-2}\to X^{3}_{t}$. This illustrates the pattern of a cascade of false
negative errors (missing links) leading to false positives in later stages of
the PC algorithm.
What determines the removal of a true link in the finite sample case?
Detection power depends on sample size, the significance level $\alpha_{\rm
PC}$, the CI test dimension ($p+2$), and effect size, e.g., the absolute
ParCorr (population) value, here denoted
$I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})$ for some conditioning set
$\mathcal{S}$. Within each $p$-iteration the sample size, $\alpha_{\rm PC}$,
and the dimension are the same and a link will be removed if
$I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})$ falls below the $\alpha_{\rm
PC}$-threshold for _any_ considered $\mathcal{S}$. Hence, the overall minimum
effect size $\min_{\mathcal{S}}[I(X^{i}_{t-\tau};X^{j}_{t}|\mathcal{S})]$
determines whether a link is removed. The PC algorithm will iterate through
_all_ subsets of adjacencies so that this minimum can become very small. Low
effect size can be understood as a low (causal) signal-to-noise ratio: Here
$I(X^{1}_{t-1};X^{0}_{t}|X^{1}_{t-2})$ is small since the signal $X^{1}_{t-1}$
is reduced by conditioning on its autodependency $X^{1}_{t-2}$ and the ‘noise’
in $X^{0}_{t}$ is large due to its strong autocorrelation.
But autocorrelation can also be a blessing. The contemporaneously coupled pair
$(X^{7},X^{8})$ illustrates a case where autocorrelation helps to identify the
orientation of the link. Without autocorrelation the output of PC would be an
unoriented link to indicate the Markov equivalence class. On the other hand,
the detection rate here is rather weak since, as above, the signal (link from
$X^{8}_{t}$) is small compared to the noise (autocorrelation in $X^{7}$).
This illustrates the curse and blessing of autocorrelation. In summary, the PC
algorithm often results in false negatives (low recall) and these then lead to
false positives. Another reason for false positives are ill-calibrated tests:
To correctly model the null distribution, each individual CI test would need
to account for autocorrelation, which is difficult in a complex multivariate
and potentially nonlinear setting [Runge, 2018a]. In the experiments we will
see that the PC algorithm features inflated false positives.
As a side comment, the pair $(X^{5},X^{6})$ depicts a feedback cycle. These
often occur in real data and the example shows that time series graphs allow
to resolve time-delayed feedbacks while an aggregated _summary graph_ would
contain a cyclic dependency and summary graph-based methods assuming acyclic
graphs would not work. The orientation of the contemporaneous link
$X^{6}_{t}\to X^{5}_{t}$ is achieved via rule R1 in the orientation phase of
PC (Alg. S3).
## 3 PCMCI+
Figure 2: Schematic of PCMCI+. Note that for ease of visualization repeating
edges due to stationarity are only partially shown.
### 3.1 ALGORITHM
The goal of PCMCI+ is to optimize the choice of conditioning sets in CI tests
in order to increase detection power and at the same time maintain well-
calibrated tests. The approach is based on two central ideas, (1) separating
the skeleton edge removal phase into a lagged and contemporaneous conditioning
phase with much fewer CI tests and (2) utilizing the momentary conditional
independence (MCI) test [Runge et al., 2019b] idea in the contemporaneous
conditioning phase. Below, I explain the reasoning behind. Figure 2
illustrates the steps.
First, the goal of PC’s skeleton phase is to remove all those adjacencies that
are due to indirect paths and common causes by conditioning on subsets
$\mathcal{S}$ of the variables’ neighboring adjacencies in each iteration.
Consider a variable $X^{j}_{t}$. If we test lagged adjacencies from nodes
$X^{i}_{t-\tau}\in\mathbf{X}^{-}_{t}$ conditional on the whole past, i.e.,
$\mathcal{S}=\mathbf{X}^{-}_{t}\setminus\\{X^{i}_{t-\tau}\\}$, the only
indirect adjacencies remaining are due to paths through contemporaneous
parents of $X^{j}_{t}$. This is in contrast to conditioning sets on
contemporaneous adjacencies which can also open up paths $X^{j}_{t}\to
X^{k}_{t}\leftarrow X^{i}_{t-\tau}$ if $X^{k}_{t}$ is conditioned on. One
reason why the PC algorithm tests _all_ combinations of subsets $\mathcal{S}$
is to avoid opening up such collider paths. Therefore, one approach would be
to start by $\mathcal{S}=\mathbf{X}^{-}_{t}\setminus\\{X^{i}_{t-\tau}\\}$ and
then iterate through contemporaneous conditions. A similar idea lies behind
the combination of GC and the PC algorithm in [Moneta et al., 2011]. However,
conditioning on large-dimensional conditioning sets strongly affects detection
power [Runge et al., 2019b]. To avoid this, the lagged conditioning phase of
PCMCI+ (Alg. 1, see Fig. 2 left panels) tests all pairs
$(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau>0$ conditional on only the _strongest_
$p$ adjacencies of $X^{j}_{t}$ in each $p$-iteration without going through all
$p$-dimensional subsets of adjacencies. This choice $(i)$ improves the causal
signal-to-noise ratio and recall since for a given test
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ the ‘noise’ in
$X^{j}_{t}$ due to other lagged adjacencies is conditioned out, $(ii)$ leads
to fewer CI tests further improving recall, and $(iii)$ speeds up the skeleton
phase. We denote the lagged adjacency set resulting from Alg. 1 as
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$. Lemma 1 in Sect. 3.2 states that
the only remaining indirect adjacencies in
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ are then due to paths passing
through contemporaneous parents of $X^{j}_{t}$. In the schematic in Fig. 2
this is the link $Y_{t-1}\to X_{t}$.
Secondly, in Alg. 2 (Fig. 2 center panels) the graph $\mathcal{G}$ is
initialized with all contemporaneous adjacencies plus all lagged adjacencies
from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$. Algorithm
2 tests all (unordered lagged and ordered contemporaneous) adjacent pairs
$(X^{i}_{t-\tau},X^{j}_{t})$ and iterates through contemporaneous conditions
$\mathcal{S}\subseteq\mathcal{A}_{t}(X^{j}_{t})$ with the MCI test
$\displaystyle X^{i}_{t-\tau}$
$\displaystyle{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}).$
(2)
In the schematic in Fig. 2 the condition on $\mathcal{S}=Y_{t}$, as part of
the full conditioning set
$\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$,
removes the link $X_{t}{\circ\\!{\\--}\\!\circ}Z_{t}$. The condition on
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ blocks paths through lagged parents
and the advantage of the additional conditioning on
$\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ is discussed in the
following. We denote the variant without the condition on
$\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ as PCMCI${}^{+}_{0}$.
Both versions are followed by the collider orientation phase (Alg. S2) and
rule orientation phase (Alg. S3) which are deferred to the SM since they are
equivalent to the PC algorithm with the modification that the additional CI
tests in the collider phase for the conservative or majority rule are also
based on the test (2) (Fig. 2 right panel).
We now discuss PCMCI${}^{+}_{0}$ and PCMCI+ on the example in Fig. 1.
Algorithm 1 tests $X^{1}_{t-1}\to X^{0}_{t}$ conditional on
$\mathcal{S}=\\{X^{0}_{t-1}\\}$ for $p=1$ and
$\mathcal{S}=\\{X^{0}_{t-1},X^{1}_{t-2}\\}$ for $p=2$ as the two strongest
adjacencies (as determined by the test statistic value, see pseudo-code). In
both of these tests the effect size $I$ (causal signal-to-noise ratio) is much
larger than for the condition on $\mathcal{S}=\\{X^{1}_{t-2}\\}$, which led to
the removal of $X^{1}_{t-1}\to X^{0}_{t}$ in the PC algorithm. In Sect. 3.2 we
elaborate more rigorously on effect size. In the example
$\widehat{\mathcal{B}}^{-}_{t}(X^{2}_{t})$ is indicated as blue boxes in the
second panel and contains lagged parents as well as adjacencies due to paths
passing through contemporaneous parents of $X^{2}_{t}$. One false positive,
likely due to an ill-calibrated test caused by autocorrelation, is marked by a
star.
Based on these lagged adjacencies, Alg. 2 with the PCMCI${}^{+}_{0}$ option
then recovers all lagged links (3rd panel), but it still misses the contemporaneous adjacencies $X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ and $X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{4}_{t}$, and we also see strong lagged false positives from $X^{3}$ to $X^{2}$ and $X^{4}$. What happened here? The problem now lies in the tests on contemporaneous links: The CI test for
PCMCI${}^{+}_{0}$ in the $p=0$ loop, like the original PC algorithm, will test
_ordered_ contemporaneous pairs. Hence, first
$X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ conditional on
$\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t})$ and, if the link is not removed,
$X^{3}_{t}{\circ\\!{\\--}\\!\circ}X^{2}_{t}$ conditional on
$\widehat{\mathcal{B}}^{-}_{t}(X^{2}_{t})$. Here
$X^{2}_{t}{\circ\\!{\\--}\\!\circ}X^{3}_{t}$ is removed conditional on
$\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t})$ (indicated by blue boxes in the
panel) because
$I(X^{2}_{t};X^{3}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{3}_{t}))$ falls below
the significance threshold.
The second central idea of PCMCI+ is to improve the effect size of CI tests
for contemporaneous links by conditioning on _both_ lagged adjacencies
$\widehat{\mathcal{B}}^{-}_{t}$ in the CI test (2) (see blue and red boxes in
Fig. 1 right panel). At least for the initial phase $p=0$ one can prove that
for non-empty $\widehat{\mathcal{B}}^{-}_{t}$ the effect size of the PCMCI+ CI
test is always strictly larger than that of the PCMCI${}^{+}_{0}$ test (Thm.
4). I conjecture that this similarly holds for PCMCI+ vs. the PC algorithm.
Higher effect size leads to higher recall and PCMCI+ now recovers all lagged
as well as contemporaneous links and also correctly removes the lagged false
positives that PCMCI${}^{+}_{0}$ obtains. Also the contemporaneous coupled
pair $(X^{7},X^{8})$ is now much better detected since the MCI effect size
$I(X^{7}_{t};X^{8}_{t}|X^{7}_{t-1})$ is larger than $I(X^{7}_{t};X^{8}_{t})$,
one of the two PCMCI${}^{+}_{0}$ and PC algorithm effect sizes tested here.
Another advantage, discussed in [Runge et al., 2019b] is that PCMCI+ CI tests
are better calibrated, in contrast to PCMCI${}^{+}_{0}$ and PC algorithm
tests, since the condition on both parents removes autocorrelation effects.
Note that for lagged links the effect size of PCMCI+ is generally smaller than
that of PCMCI${}^{+}_{0}$ since the extra condition on
$\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$ can only reduce
effect size (see [Runge et al., 2012]). This is the cost of avoiding inflated
false positives.
In summary, the central PCMCI+ idea is to increase effect size in individual
CI tests to achieve higher detection power and at the same time maintain well-
controlled false positives also for high autocorrelation. Correct adjacency
information then leads to better orientation recall in Alg. S2, S3. The other
advantage of PCMCI+ compared to the PC algorithm is a much faster and, as
numerical examples show, also much less variable runtime.
The full algorithm is detailed in pseudo-code Algorithms 1,2,S2,S3 with
differences to PC and PCMCI${}^{+}_{0}$ indicated. Note that pairs
$(X^{i}_{t-\tau},X^{j}_{t})$ in lines 5 and 6 of Alg. 2 are ordered for
$\tau=0$ and unordered for $\tau>0$. One can construct (rather conservative)
$p$-values for the skeleton adjacencies $(X^{i}_{t-\tau},X^{j}_{t})$ by taking
the maximum $p$-value over all CI tests conducted in Alg. 2. A link strength
can be defined corresponding to the test statistic value of the maximum
$p$-value. Based on the PC stable variant, PCMCI+ is fully order-independent.
Shown here is the majority-rule implementation of the collider phase; the version without handling of ambiguous triples and the version for the conservative rule are
detailed in Alg. S2. Note that the tests in the collider phase also use the CI
tests (2).
Like other CI-based methods, PCMCI+ has the free parameters $\alpha_{\rm PC}$,
$\tau_{\max}$, and the choice of the CI test. $\alpha_{\rm PC}$ can be chosen
based on cross-validation or an information criterion (implemented in
tigramite). $\tau_{\max}$ should be larger or equal to the maximum true time
lag of any parent and can in practice also be chosen based on model selection.
However, the numerical experiments indicate that, in contrast to GC, a too
large $\tau_{\max}$ does not degrade performance much and $\tau_{\max}$ can
also be chosen based on the lagged dependence functions, see [Runge et al.,
2019b]. PCMCI+ can flexibly be combined with different CI tests for nonlinear
causal discovery, and for different variable types (discrete or continuous,
univariate or multivariate).
The computational complexity of PCMCI+ strongly depends on the network
structure. The sparser the causal dependencies, the faster the convergence.
Compared to the original PC algorithm with worst-case exponential complexity,
the complexity is much reduced since Alg. 1 only has polynomial complexity
[Runge et al., 2019b] and Alg. 2 only iterates through contemporaneous
conditioning sets, hence the worst-case exponential complexity only applies to
$N$ and not to $N\tau_{\max}$.
Algorithm 1 (PCMCI+ / PCMCI${}^{+}_{0}$ lagged skeleton phase)
1: Time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$, max. time lag $\tau_{\max}$, significance threshold $\alpha_{\rm PC}$, CI test ${\rm CI}(X,\,Y,\,\mathbf{Z})$ returning $p$-value and test statistic value $I$
2: for all $X^{j}_{t}$ in $\mathbf{X}_{t}$ do
3: Initialize $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){=}\mathbf{X}^{-}_{t}{=}(\mathbf{X}_{t-1},\dots,\mathbf{X}_{t-\tau_{\max}})$ and $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\infty~{}~{}\forall~{}X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$
4: Let $p=0$
5: while any $X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ satisfies $|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
6: for all $X^{i}_{t-\tau}$ in $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ satisfying $|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
7: $\mathcal{S}=$ first $p$ variables in $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\}$
8: $(\text{$p$-value},\,I)\leftarrow$ CI($X^{i}_{t-\tau},\,X^{j}_{t},\,\mathcal{S}$)
9: $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\min(|I|,I^{\min}(X^{i}_{t-\tau},X^{j}_{t}))$
10: if $p$-value $>\alpha_{\rm PC}$ then mark $X^{i}_{t-\tau}$ for removal
11: Remove non-significant entries and sort $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ by $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})$ from largest to smallest
12: Let $p=p+1$
13: return $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$
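A compact Python sketch of this lagged phase (illustrative; `ci_test(i, tau, j, conds, data)` is an assumed interface returning a $p$-value and a test statistic, e.g., the ParCorr sketch from Sect. 2.2):

```python
# A sketch of Alg. 1: lagged variables are (i, tau) pairs; within each
# p-iteration the condition S is the p currently strongest remaining
# adjacencies of X^j_t, rather than all p-dimensional subsets.
def lagged_skeleton(N, tau_max, alpha, ci_test, data):
    B = {}
    for j in range(N):
        adj = [(i, tau) for i in range(N) for tau in range(1, tau_max + 1)]
        I_min = {a: float("inf") for a in adj}
        p = 0
        while adj and len(adj) - 1 >= p:
            remove = set()
            for a in list(adj):
                S = [b for b in adj if b != a][:p]      # strongest p others
                pval, stat = ci_test(a[0], a[1], j, S, data)
                I_min[a] = min(abs(stat), I_min[a])
                if pval > alpha:
                    remove.add(a)                       # mark for removal
            adj = sorted((a for a in adj if a not in remove),
                         key=lambda a: I_min[a], reverse=True)
            p += 1
        B[j] = adj                                      # estimate of B^-_t(X^j_t)
    return B
```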
Algorithm 2 (PCMCI+ / PCMCI${}^{+}_{0}$ contemporaneous skeleton phase / PC full skeleton phase)
1: Time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$, max. time lag $\tau_{\max}$, significance threshold $\alpha_{\rm PC}$, ${\rm CI}(X,\,Y,\,\mathbf{Z})$, PCMCI+ / PCMCI${}^{+}_{0}$: $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$
2: PCMCI+ / PCMCI${}^{+}_{0}$: Form time series graph $\mathcal{G}$ with lagged links from $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all $X^{j}_{t}$ in $\mathbf{X}_{t}$ and fully connect all contemporaneous variables, i.e., add $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ for all $X^{i}_{t}\neq X^{j}_{t}\in\mathbf{X}_{t}$
3: PC: Form fully connected time series graph $\mathcal{G}$ with lagged and contemporaneous links
4: PCMCI+ / PCMCI${}^{+}_{0}$: Initialize contemporaneous adjacencies $\widehat{\mathcal{A}}(X^{j}_{t}):=\widehat{\mathcal{A}}_{t}(X^{j}_{t})=\\{X^{i}_{t}{\neq}X^{j}_{t}\in\mathbf{X}_{t}:X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}~{}\text{in $\mathcal{G}$}\\}$
5: PC: Initialize full adjacencies $\widehat{\mathcal{A}}(X^{j}_{t})$ for all (lagged and contemporaneous) links in $\mathcal{G}$
6: Initialize $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\infty$ for all links in $\mathcal{G}$
7: Let $p=0$
8: while any adjacent pairs $(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau\geq 0$ in $\mathcal{G}$ satisfy $|\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$ do
9: Select new adjacent pair $(X^{i}_{t-\tau},X^{j}_{t})$ for $\tau\geq 0$ satisfying $|\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}|\geq p$
10: while $(X^{i}_{t-\tau},X^{j}_{t})$ are adjacent in $\mathcal{G}$ and not all $\mathcal{S}\subseteq\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ with $|\mathcal{S}|=p$ have been considered do
11: Choose new $\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$ with $|\mathcal{S}|{=}p$
12: PCMCI+: Set $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}))$
13: PCMCI${}^{+}_{0}$: Set $\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\})$
14: PC: Set $\mathbf{Z}{=}\mathcal{S}$
15: $(\text{$p$-value},\,I)\leftarrow$ CI($X^{i}_{t{-}\tau},\,X^{j}_{t},\,\mathbf{Z}$)
16: $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})=\min(|I|,I^{\min}(X^{i}_{t-\tau},X^{j}_{t}))$
17: if $p$-value $>\alpha_{\rm PC}$ then
18: Delete link $X^{i}_{t-\tau}\to X^{j}_{t}$ for $\tau>0$ (or $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ for $\tau=0$) from $\mathcal{G}$
19: Store (unordered) ${\rm sepset}(X^{i}_{t-\tau},X^{j}_{t})=\mathcal{S}$
20: Let $p=p+1$
21: Re-compute $\widehat{\mathcal{A}}(X^{j}_{t})$ from $\mathcal{G}$ and sort by $I^{\min}(X^{i}_{t-\tau},X^{j}_{t})$ from largest to smallest
22: return $\mathcal{G}$, sepset
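The distinguishing step is line 12, the MCI conditioning set of Eq. (2). A sketch of its construction (illustrative; `B` denotes the output of Alg. 1, `contemp_adj[j]` the current contemporaneous adjacencies of $X^{j}_{t}$, and variables are encoded as `(i, tau)` pairs relative to time $t$):

```python
# A sketch of the PCMCI+ conditioning sets of Eq. (2): contemporaneous subsets
# S are combined with both estimated lagged adjacency sets.
from itertools import combinations

def pcmciplus_condition_sets(i, tau, j, contemp_adj, B, p):
    """Yield the full conditioning sets Z of Eq. (2) for (X^i_{t-tau}, X^j_t),
    one per contemporaneous subset S with |S| = p."""
    lagged_j = [a for a in B[j] if a != (i, tau)]      # B^-_t(X^j_t) \ {X^i_{t-tau}}
    lagged_i = [(k, lag + tau) for (k, lag) in B[i]]   # B^-_{t-tau}(X^i_{t-tau}),
                                                       # lags shifted to time t
    candidates = [a for a in contemp_adj[j] if a != (i, tau)]
    for S in combinations(candidates, p):
        yield list(S) + lagged_j + lagged_i
```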
### 3.2 THEORETICAL RESULTS
This section states asymptotic consistency, finite sample order-independence,
and further results regarding effect size and false positive control. The
consistency of network learning algorithms is separated into _soundness_ ,
i.e., the returned graph has correct adjacencies, and _completeness_ , i.e.,
the returned graph is also maximally informative (links are oriented as much
as possible). We start with the following assumptions.
###### Assumptions 1 (Asymptotic case).
Throughout this paper we assume Causal Sufficiency, the Causal Markov
Condition, the Adjacency Faithfulness Conditions, and consistent CI tests
(oracle). In the present time series context we also assume stationarity and
time-order and that the maximum time lag
$\tau_{\max}\geq\tau^{\mathcal{P}}_{\max}$, where $\tau^{\mathcal{P}}_{\max}$
is the maximum time lag of any parent in the SCM (1). Furthermore, we rule out
_selection variables_ and _measurement error_.
Definitions of these assumptions, adapted from [Spirtes et al., 2000] to the
time series context, are in Sect. S1 and all proofs are in Sect. S2. We start
with the following lemma.
###### Lemma 1.
Under Assumptions 1 Alg. 1 returns a set that always contains the parents of
$X^{j}_{t}$ and, _at most_ , the lagged parents of all contemporaneous
ancestors of $X^{j}_{t}$, i.e.,
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})=\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ contains _all_ lagged parents of
all contemporaneous ancestors if the weaker Adjacency Faithfulness assumption
is replaced by standard Faithfulness.
This establishes that the conditions
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ estimated in the first phase of
PCMCI+ will suffice to block all lagged confounding paths that do not go
through contemporaneous links. This enables to prove the soundness of Alg. 2,
even though Alg. 2 is a variant of the PC algorithm that only iterates through
contemporaneous conditioning sets.
###### Theorem 1 (Soundness of PCMCI+).
Algorithm 2 returns the correct adjacencies under Assumptions 1, i.e.,
$\widehat{\mathcal{G}^{*}}=\mathcal{G}^{*}$, where the $\mathcal{G}^{*}$
denotes the skeleton of the time series graph.
To prove the completeness of PCMCI+, we start with the following observation.
###### Lemma 2.
Due to time-order and the stationarity assumption, the considered triples in
the collider phase (Alg. S2) and rule orientation phase (Alg. S3) can be
restricted as follows: In the collider orientation phase only unshielded
triples $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for
$\tau>0$) or
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
(for $\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not
adjacent are relevant. For orientation rule R1 triples $X^{i}_{t-\tau}\to
X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ where $(X^{i}_{t-\tau},X^{j}_{t})$
are not adjacent, for orientation rule R2 triples $X^{i}_{t}\to X^{k}_{t}\to
X^{j}_{t}$ with $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$, and for
orientation rule R3 pairs of triples
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}\to X^{j}_{t}$ and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{l}_{t}\to X^{j}_{t}$ where
$(X^{k}_{t},X^{l}_{t})$ are not adjacent and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ are relevant. These restrictions
imply that only contemporaneous parts of separating sets are relevant for the
collider phase.
###### Theorem 2 (PCMCI+ is complete).
PCMCI+ (Algorithms 1,2,S2,S3) when used with the conservative rule for
orienting colliders in Alg. S2 returns the correct CPDAG under Assumptions 1.
Under standard Faithfulness also PCMCI+ when used with the majority rule or
the standard orientation rule is complete.
Also the proof of order-independence follows straightforwardly from the proof
in [Colombo and Maathuis, 2014]. Of course, order independence does not apply
to time-order.
###### Theorem 3 (Order independence).
Under Assumptions 1 PCMCI+ with the conservative or majority rule in Alg. S2
is independent of the order of variables $(X^{1},\ldots,X^{N})$.
Next, we consider effect size. The toy example showed that a major problem of
PCMCI${}^{+}_{0}$ (and also PC) is lack of detection power for contemporaneous
links. A main factor of statistical detection power is effect size, i.e., the
population value of the test statistic considered (e.g., absolute partial
correlation). In the following, I will base my argument in an information-
theoretic framework and consider the conditional mutual information as a
general test statistic, denoted $I$. In Alg. 2 PCMCI${}^{+}_{0}$ will test a
contemporaneous dependency $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ first
with the test statistic
$I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}))$, and, if
that test was positive, secondly with
$I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))$. If either
of these tests finds (conditional) independence, the adjacency is removed.
Therefore, the minimum test statistic value determines the relevant effect
size. On the other hand, PCMCI+ treats both cases symmetrically since the test
statistic is always
$I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))$.
###### Theorem 4 (Effect size of MCI tests for $p=0$).
Under Assumptions 1 the PCMCI+ oracle case CI tests in Alg. 2 for $p=0$ for
contemporaneous true links $X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$ have an
effect size that is always greater than that of the PCMCI${}^{+}_{0}$ CI
tests, i.e.,
$I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))>\min(I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})),\,I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})))$
if both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with the
other.
I conjecture that this result holds similarly for $p>0$ and also that PCMCI+
has greater effect sizes than the PC algorithm since the latter iterates over
_all_ subsets of adjacencies and, hence, the minimum is taken generally over
an even larger set leading to even smaller effect sizes. For lagged links the
effect size of the PCMCI+ tests is always smaller than (or equal to) that of the
PCMCI${}^{+}_{0}$ tests (see [Runge et al., 2012]).
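A quick numerical illustration of this comparison (a hypothetical bivariate model with a contemporaneous link and strong autocorrelation; ParCorr stands in for the information-theoretic effect size, and the specific numbers are only indicative):

```python
# Illustrative check of the Thm. 4 comparison on a toy model X^1_t -> X^2_t:
# the MCI-style ParCorr conditions on both lagged parent sets, the two
# one-sided values mimic the PCMCI+_0-style tests.
import numpy as np

rng = np.random.default_rng(1)
T = 20_000
x1, x2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x1[t] = 0.9 * x1[t - 1] + rng.standard_normal()
    x2[t] = 0.9 * x2[t - 1] + 0.3 * x1[t] + rng.standard_normal()

def parcorr(x, y, conds):
    # partial correlation via OLS residuals, as in the ParCorr sketch above
    Z = np.column_stack([np.ones(len(x))] + conds)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

a, b = x1[1:], x2[1:]
B_i, B_j = [x1[:-1]], [x2[:-1]]     # lagged parents of X^1_t and of X^2_t
mci = parcorr(a, b, B_i + B_j)      # PCMCI+ (MCI) effect size
one_sided = [parcorr(a, b, B_j), parcorr(a, b, B_i)]
print(mci, one_sided)               # mci exceeds min(one_sided) here
```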
Last, we discuss false positive control. While the effect size result regards
detection power, in the following I give a mathematical intuition why the MCI
tests are better calibrated than the PC algorithm CI tests and control false
positives below the expected significance level. Lemma 1 implies that even
though Alg. 1 does not aim to estimate the contemporaneous parents, it still
yields a set of conditions that shields $X^{j}_{t}$ from the ‘infinite’ past
$\mathbf{X}^{-}_{t}$, either by blocking the parents of $X^{j}_{t}$ or by
blocking indirect contemporaneous paths through contemporaneous ancestors of
$X^{j}_{t}$. Blocking paths from the infinite past, I conjecture, is key to
achieve well-calibrated CI tests in Alg. 2. The authors in [Runge et al.,
2019b] showed that under certain model assumptions the MCI tests reduce to CI
tests among the noise terms $\eta$ from model (1) which are assumed to be
i.i.d. and help to achieve well-calibrated CI tests. In the numerical
experiments below we can see that the PC algorithm has inflated false positives
for high autocorrelation, while PCMCI+ well controls false positives, but a
formal proof of correct false positive control for this challenging nonlinear,
high-dimensional setting is beyond the scope of this paper.
## 4 NUMERICAL EXPERIMENTS
We consider a number of typical challenges [Runge et al., 2019a]: contemporaneous and time-lagged causal dependencies, strong autocorrelation, large numbers of variables and considered time lags, different noise distributions, and nonlinearity, in the following additive variant of model (1):
$\displaystyle X_{t}^{j}$
$\displaystyle=a_{j}X^{j}_{t-1}+\textstyle{\sum_{i}}c_{i}f_{i}(X^{i}_{t-\tau_{i}})+\eta^{j}_{t}$
(3)
for $j\in\\{1,\ldots,N\\}$. Autocorrelations $a_{j}$ are uniformly drawn from
$[\max(0,a-0.3),\,a]$ for $a$ as indicated in Fig. 3 and $\eta^{j}$ is
_i.i.d._ and follows a zero-mean Gaussian $\mathcal{N}$ or Weibull
$\mathcal{W}$ (scale parameter $2$) distribution (depending on setup) with
standard deviation drawn from $[0.5,\,2]$. In addition to autodependency
links, for each model $L=\lfloor 1.5\cdot N\rfloor$ (except for $N=2$ with
$L=1$) cross-links are chosen whose functional dependencies are linear or
$f_{i}(x)=f^{(2)}(x)=(1+5xe^{-x^{2}/20})x$ (depending on setup), with
$f^{(2)}$ designed to yield more stationary dynamics. Coefficients $c_{i}$ are
drawn uniformly from $\pm[0.1,0.5]$. 30% of the links are contemporaneous
($\tau_{i}=0$) and the remaining $\tau_{i}$ are drawn from $[1,\,5]$. Only
stationary models are considered. We have an average cross-in-degree of
$d=1.5$ for all network sizes (plus an auto-dependency) implying that models
become sparser for larger $N$. We consider several model setups: linear
Gaussian, linear mixed noise (among the $N$ variables: 50% Gaussian, 50%
Weibull), and nonlinear mixed noise (50% linear, 50% $f^{(2)}(x)$; 66%
Gaussian, 34% Weibull).
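A minimal generator for the linear Gaussian variant of model (3) (a sketch with illustrative draws; the paper additionally discards non-stationary realizations, which is omitted here):

```python
# A sketch of the linear Gaussian variant of model (3); contemporaneous links
# are kept acyclic by enforcing i < j and updating variables in order.
import numpy as np

rng = np.random.default_rng(0)
N, T, tau_max, a = 5, 500, 5, 0.95
auto = rng.uniform(max(0, a - 0.3), a, N)          # autocorrelations a_j
sigma = rng.uniform(0.5, 2.0, N)                   # noise std. deviations
links = []
for _ in range(int(1.5 * N)):                      # L cross-links
    i, j = rng.choice(N, size=2, replace=False)
    tau = 0 if rng.random() < 0.3 else int(rng.integers(1, 6))
    if tau == 0 and i > j:
        i, j = j, i                                # acyclic contemporaneous part
    links.append((int(i), int(j), tau, rng.uniform(0.1, 0.5) * rng.choice([-1, 1])))

X = np.zeros((T + tau_max, N))
for t in range(tau_max, T + tau_max):
    X[t] = auto * X[t - 1] + sigma * rng.standard_normal(N)
    for j in range(N):                             # increasing j => valid order
        for (i, jj, tau, c) in links:
            if jj == j:
                X[t, j] += c * X[t - tau, i]
data = X[tau_max:]                                 # (T x N) sample
```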
For the linear model setups we consider the PC algorithm and PCMCI+ in the
majority-rule variant with ParCorr and compare these with GCresPC [Moneta et
al., 2011], a combination of GC with PC applied to residuals, and a
autoregressive model version of LiNGAM [Hyvärinen et al., 2010], a
representative of the SCM framework (implementation details in Sect. S4). For
the LiNGAM implementation I could not find a way to set a significance level
and used the LASSO option which prunes ‘non-active’ links to zero. Both
GCresPC and LiNGAM assume linear dependencies and LiNGAM also non-Gaussianity.
For the nonlinear setup the PC algorithm and PCMCI+ are implemented with the
GPDC test [Runge et al., 2019b] that is based on Gaussian process regression
and a distance correlation test on the residuals, which is suitable for a
large class of nonlinear dependencies with additive noise.
Performance is evaluated as follows: True (TPR) and false positive rates (FPR,
shown to evaluate false positive control, not applicable to LiNGAM) for
adjacencies are distinguished between lagged cross-links ($i\neq j$),
contemporaneous, and autodependency links. Due to time order, lagged links
(and autodependencies) are automatically oriented. Contemporaneous orientation
precision is measured as the fraction of correctly oriented links
(${\circ\\!{\\--}\\!\circ}$ or $\to$) among all estimated adjacencies, and
recall as the fraction of correct orientations among all true contemporaneous
links. Further shown is the fraction of conflicting links among all detected
contemporaneous adjacencies (not applicable to LiNGAM). All metrics (and their
std. errors) are computed across all estimated graphs from $500$ realizations
of model (3) at time series length $T$. The average runtimes were evaluated on
Intel Xeon Platinum 8260. In Fig. 3 results for the linear Gaussian setup with
default model parameters $N=5,\,T=500,\,a=0.95$ and method parameters
$\tau_{\max}=5$ and $\alpha=0.01$ (not applicable to LiNGAM) are shown. Each
of the four panels shows results for varying one of $a,\,N,\,T,\,\tau_{\max}$.
The insets show ANOVA statistics $r\pm\bar{\Delta}r$ [per unit], where $r$ is
the performance metric at the leftmost parameter on the $x$-axis
($a,\,N,\,T,\,\tau_{\max}$, respectively) and $\bar{\Delta}r$ denotes the
average change per parameter unit. In the adjacency subplots the statistics
refer to lagged links.
Figure 3: Numerical experiments with linear Gaussian setup for varying (A)
autocorrelation strength $a$ (B) number of variables $N$ (C) sample size $T$
and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters
indicated in the top right. Errorbars show std. errors or the 90% range (for
runtime). The insets show ANOVA statistics.
Figure 3A demonstrates that the TPR of PCMCI+ and GCresPC for contemporaneous
links is stable even under high autocorrelation while PC and LiNGAM show
strong declines. Since LiNGAM has no $\alpha_{\rm PC}$ for FPR-control we
focus on its relative changes rather than absolute performance. Lagged TPR
decreases strongly for PC while the other methods are more robust. FPR is
well-controlled for PCMCI+ while PC and slightly also GCresPC show inflated
lagged FPR for high autocorrelation. LiNGAM features a strong increase of
lagged FPR. These adjacency results translate into higher contemporaneous
orientation recall for PCMCI+ which increases with autocorrelation, while it
decreases for all other methods. GCresPC has steady low recall since it does
not use lagged links in the orientation phase. Except for GCresPC, all methods
have increasing precision with PCMCI+ and PC outperforming LiNGAM. PCMCI+
shows almost no conflicts while PC’s conflicts increase with autocorrelation
until low power reduces them again. Finally, runtimes are almost constant for
GCresPC and LiNGAM, while they increase for PCMCI+ and much stronger for PC.
Figure 3B shows that PCMCI+ and GCresPC have the highest TPR for increasing
number of variables $N$, especially for contemporaneous links. FPR is well
controlled only for PCMCI+ while PC has false positives for small $N$, where model connectivity is denser and false negatives are more likely, which in turn lead to false positives. For high $N$ PC has false positives only regarding
autodependencies while inflated FPR appears for GCresPC. PCMCI+ has more than
twice as much contemporaneous recall compared to the other methods and is
almost not affected by higher $N$. Orientation precision is decreasing for all
methods (except PC) with a higher decrease for PCMCI+. Runtime is increasing
at a much smaller rate for PCMCI+ compared to PC, which also has a very high
runtime variability across the different model realizations. LiNGAM and
especially GCresPC are fastest.
Regarding TPR, PCMCI+, GCresPC, and LiNGAM benefit similarly from increasing sample size, while PC benefits less (Fig. 3C). FPR is still not controlled for PC for
large sample sizes, lagged FPR increases for GCresPC. PCMCI+ shows the highest
increases in contemporaneous recall and precision. Runtime increases are
moderate compared to PC, conflicts decrease.
Last, Fig. 3D shows that all methods are relatively robust to large maximum
time lags $\tau_{\max}$ (beyond the true max. time lag $5$) for the considered
sample size $T=500$. Contemporaneous FPR and runtime increase for PC.
In the SM further results are shown. For too large $N\tau_{\max}$ (relative to
$T$) GCresPC and LiNGAM (despite LASSO-regularization) sharply drop in
performance. For the linear mixed noise setup (Fig. S2) results are almost
unchanged for all methods except for LiNGAM for which recall and precision
rise, as expected. Recall is then higher than PCMCI+ for low autocorrelation,
but still much lower for high autocorrelation and large $N$ or $\tau_{\max}$,
at similar precision.
In the nonlinear mixed noise setup (Fig. S3), the difference between PC and
PCMCI+ is similar. We observe slight FPR inflation for high autocorrelation.
GPDC seems not to work well in high-dimensional, highly autocorrelated
settings. Runtime for GPDC compared to ParCorr is orders of magnitude longer,
especially for PC. Further figures in the SM show many combinations of
$a,\,N,\,T,\,\tau_{\max}$ and $\alpha_{\rm PC}$ for the model setups and
demonstrate that the above findings are robust.
## 5 CONCLUSIONS
PCMCI+ improves the reliability of CI tests by optimizing the choice of
conditioning sets and yields much higher recall, well-controlled false
positives, and faster runtime than the original PC algorithm for highly
autocorrelated time series, while maintaining similar performance for low
autocorrelation. The algorithm well exploits sparsity in high-dimensional
settings and can flexibly be combined with different CI tests for nonlinear
causal discovery, and for different variable types (discrete or continuous,
univariate or multivariate). Autocorrelation is actually key to increase
contemporaneous orientation recall since it creates triples $X^{i}_{t-1}\to
X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ that can often be oriented while
an isolated link $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ stays undirected
in the Markov equivalence class, a drawback of CI-based methods. If the data
is at least non-Gaussian, a SCM method like LiNGAM can exploit this property
and recover directionality in such cases. Still, we saw that LiNGAM suffers
from large autocorrelation. PCMCI+ is available as part of the _tigramite_
Python package at https://github.com/jakobrunge/tigramite. A next step will be
to extend the present ideas to an algorithm accounting for latent confounders
and to explore combinations between SCM-based methods and PCMCI+. The
numerical results will be contributed to the causality benchmark platform
`www.causeme.net` [Runge et al., 2019a] to facilitate a further expanded
method evaluation.
#### Acknowledgments
DKRZ provided computational resources (grant no. 1083). I thank Andreas
Gerhardus for helpful comments.
## References
* [Chickering, 2002] Chickering, D. M. (2002). Learning Equivalence Classes of Bayesian-Network Structures. J. Mach. Learn. Res., 2:445–498.
* [Colombo and Maathuis, 2014] Colombo, D. and Maathuis, M. H. (2014). Order-Independent Constraint-Based Causal Structure Learning. J. Mach. Learn. Res., 15:3921–3962.
* [Entner and Hoyer, 2010] Entner, D. and Hoyer, P. O. (2010). On causal discovery from time series data using FCI. In Proc. Fifth Eur. Work. Probabilistic Graph. Model., pages 121–128.
* [Granger, 1969] Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3):424–438.
* [Hyvärinen et al., 2010] Hyvärinen, A., Zhang, K., Shimizu, S., and Hoyer, P. O. (2010). Estimation of a structural vector autoregression model using non-gaussianity. J. Mach. Learn. Res., 11:1709–1731.
* [Malinsky and Spirtes, 2018] Malinsky, D. and Spirtes, P. (2018). Causal structure learning from multivariate time series in settings with unmeasured confounding. In Proc. of 2018 ACM SIGKDD Work. on Causal Discovery, pages 23–47.
* [Moneta et al., 2011] Moneta, A., Chlaß, N., Entner, D., and Hoyer, P. (2011). Causal search in structural vector autoregressive models. In NIPS Mini-Symp. on Causality in Time Series, pages 95–114.
* [Peters et al., 2017] Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of causal inference: foundations and learning algorithms. MIT Press, Cambridge, MA.
* [Ramsey et al., 2006] Ramsey, J., Spirtes, P., and Zhang, J. (2006). Adjacency-faithfulness and conservative causal inference. In Proc. 22nd Conf. on Uncertainty in Art. Int., pages 401–408.
* [Ramsey, 2014] Ramsey, J. D. (2014). A Scalable Conditional Independence Test for Nonlinear, Non-Gaussian Data. https://arxiv.org/abs/1401.5031.
* [Runge, 2018a] Runge, J. (2018a). Causal network reconstruction from time series: From theoretical assumptions to practical estimation. Chaos: An Interdiscip. J. Nonlinear Sci., 28(7):075310.
* [Runge, 2018b] Runge, J. (2018b). Conditional independence testing based on a nearest-neighbor estimator of conditional mutual information. In Storkey, A. & Perez-Cruz, F., editor, Proc. 21st Int. Conf. Artif. Intell. Stat. Playa Blanca, Lanzarote, Canary Islands: PMLR.
* [Runge et al., 2019a] Runge, J., Bathiany, S., Bollt, E., Camps-Valls, G., Coumou, D., Deyle, E., Glymour, C., Kretschmer, M., Mahecha, M. D., Muñoz-Marí, J., van Nes, E. H., Peters, J., Quax, R., Reichstein, M., Scheffer, M., Schölkopf, B., Spirtes, P., Sugihara, G., Sun, J., Zhang, K., and Zscheischler, J. (2019a). Inferring causation from time series in earth system sciences. Nature Comm., 10(1):2553.
* [Runge et al., 2012] Runge, J., Heitzig, J., Marwan, N., and Kurths, J. (2012). Quantifying causal coupling strength: A lag-specific measure for multivariate time series related to transfer entropy. Phys. Rev. E, 86(6):061121.
* [Runge et al., 2019b] Runge, J., Nowack, P., Kretschmer, M., Flaxman, S., and Sejdinovic, D. (2019b). Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances, eaau4996(5).
* [Sen et al., 2017] Sen, R., Suresh, A. T., Shanmugam, K., Dimakis, A. G., and Shakkottai, S. (2017). Model-Powered Conditional Independence Test. In Proc. 30th Conf. Adv. Neural Inf. Process. Syst., pages 2955–2965.
* [Spirtes and Glymour, 1991] Spirtes, P. and Glymour, C. (1991). An Algorithm for Fast Recovery of Sparse Causal Graphs. Soc. Sci. Comput. Rev., 9(1):62–72.
* [Spirtes et al., 2000] Spirtes, P., Glymour, C., and Scheines, R. (2000). Causation, Prediction, and Search. MIT Press, Boston, MA.
* [Spirtes and Zhang, 2016] Spirtes, P. and Zhang, K. (2016). Causal discovery and inference: concepts and recent methodological advances. Appl. Informatics, 3(1):3.
* [Zhang et al., 2011] Zhang, K., Peters, J., Janzing, D., and Schölkopf, B. (2011). Kernel-based Conditional Independence Test and Application in Causal Discovery. In Proc. 27th Conf. Uncertain. Artif. Intell., pages 804–813.
## Appendix S1 Definitions
The following definitions are adaptations of the standard assumptions of
causal discovery to the time series case. Here we consider the causally
sufficient case and assume that all variables
$\mathbf{X}=(X^{1},\ldots,X^{N})$ of the underlying SCM (1) are observed.
Additionally, we assume that the maximum PCMCI+ time lag
$\tau_{\max}\geq\tau^{\mathcal{P}}_{\max}$, where $\tau^{\mathcal{P}}_{\max}$
is the maximum time lag of any parent in the SCM (1).
###### Definition S1 (Causal Markov Condition).
The joint distribution of a process $\mathbf{X}$ whose causal structure can be
represented in a time series graph $\mathcal{G}$ fulfills the Causal Markov
Condition iff for all $X^{j}_{t}\in\mathbf{X}_{t}$ every non-descendent of
$X^{j}_{t}$ in $\mathcal{G}$ is independent of $X^{j}_{t}$ given the parents
$\mathcal{P}(X^{j}_{t})$. In particular,
$\mathbf{X}_{t}^{-}{\setminus}\mathcal{P}(X^{j}_{t})\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t})$ since all variables in
$\mathbf{X}_{t}^{-}$ are non-descendants of $X^{j}_{t}$ by time-order.
Note that for the SCM (1) with independent noise terms the Causal Markov
Condition is automatically fulfilled.
###### Definition S2 (Adjacency and standard faithfulness Condition).
The joint distribution of a process $\mathbf{X}$ whose causal structure can be
represented in a time series graph $\mathcal{G}$ fulfills the Adjacency
Faithfulness Condition iff for all disjoint
$X^{i}_{t-\tau},X^{j}_{t},\mathcal{S}\in\mathbf{X}^{-}_{t+1}$ with $\tau>0$
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\to X^{j}_{t}\notin\mathcal{G}$

$X^{i}_{t-\tau}\to X^{j}_{t}\in\mathcal{G}~{}\Rightarrow~{}X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}$

and with $\tau=0$

$X^{i}_{t}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}\notin\mathcal{G}$

$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}\in\mathcal{G}~{}\Rightarrow~{}X^{i}_{t}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}\,.$

Furthermore, the variables fulfill the (standard) Faithfulness Condition iff for $\tau\geq 0$

$X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\bowtie X^{j}_{t}~{}|~{}\mathcal{S}$

$X^{i}_{t-\tau}\cancel{\bowtie}X^{j}_{t}~{}|~{}\mathcal{S}~{}\Rightarrow~{}X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}~{}~{}\text{(contrapositive)}\,.$
## Appendix S2 Proofs
### S2.1 Proof of Lemma 1
We first consider the following Lemma:
###### Lemma S1.
Algorithm 1 returns a superset of lagged parents under Assumptions 1, i.e.,
$\mathcal{P}^{-}_{t}(X^{j}_{t})\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$
for all $X^{j}_{t}$ in $\mathbf{X}_{t}$.
###### Proof.
We need to show that for arbitrary $(X^{i}_{t-\tau},X^{j}_{t})$ with $\tau>0$
we have
$X^{i}_{t-\tau}\notin\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$.
Algorithm 1 removes $X^{i}_{t-\tau}$ from
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ iff
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ for some
$\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\}$
in the iterative CI tests. Then Adjacency Faithfulness directly implies that
$X^{i}_{t-\tau}$ is not adjacent to $X^{j}_{t}$ and in particular
$X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$. ∎
With this step we can prove Lemma 1.
###### Proof.
The lemma states that under Assumptions 1 with Adjacency Faithfulness replaced
by standard Faithfulness Alg. 1 for all $X^{j}_{t}\in\mathbf{X}_{t}$ returns
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})=\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$
where $\mathcal{C}_{t}(X^{j}_{t})$ denotes the contemporaneous ancestors of
$X^{j}_{t}$. We need to show that for arbitrary
$X^{i}_{t-\tau},X^{j}_{t}\in\mathbf{X}^{-}_{t+1}$ with $\tau>0$: (1)
$X^{i}_{t-\tau}\notin\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$
and (2)
$X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})~{}\Rightarrow~{}X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
Ad 1) Algorithm 1 removes $X^{i}_{t-\tau}$ from
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ iff
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp X^{j}_{t}~{}|~{}\mathcal{S}$ for some
$\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$
in the iterative CI tests. Then standard Faithfulness implies that
$X^{i}_{t-\tau}\bowtie X^{j}_{t}~{}|~{}\mathcal{S}$ and in particular
$X^{i}_{t-\tau}\notin\mathcal{P}^{-}_{t}(X^{j}_{t})$, as proven already in
Lemma S1 under the weaker Adjacency Faithfulness Condition. To show that
$X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$
we note that
$\mathcal{S}\subseteq\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$
does not include any contemporaneous conditions and, hence, all
contemporaneous directed paths from contemporaneous ancestors of $X^{j}_{t}$
are open and also paths from parents of those ancestors are open. If
$X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$,
by the contraposition of standard Faithfulness we should observe
$X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\mathcal{S}$. Then
the fact that on the contrary we observe $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{S}$ implies that
$X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
Ad 2) Now we have $X^{i}_{t-\tau}\in\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$
which implies that
$X^{i}_{t-\tau}\cancel{\perp\\!\\!\\!\perp}X^{j}_{t}~{}|~{}\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$
in the last iteration step of Alg. 1. By (1),
$\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is a superset of
$\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
Define the lagged extra conditions as
$W^{-}_{t}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t}),X^{i}_{t-\tau}\\}$.
Since $W^{-}_{t}$ contains only lagged variables, its elements are non-descendants of $X^{j}_{t}$ and of any $X^{k}_{t}\in\mathcal{C}_{t}(X^{j}_{t})$. We now argue by contradiction. Suppose to the contrary that
$X^{i}_{t-\tau}\notin\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
The Causal Markov Condition applies to both $X^{i}_{t-\tau}$ and $W^{-}_{t}$
and implies that $(X^{i}_{t-\tau},W^{-}_{t})\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
From the weak union property of conditional independence we get
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t}),W^{-}_{t}$
which is equivalent to $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\}$,
contrary to the assumption, hence
$X^{i}_{t-\tau}\in\bigcup_{X^{i}_{t}\in\\{X^{j}_{t}\\}\cup\mathcal{C}_{t}(X^{j}_{t})}\mathcal{P}^{-}_{t}(X^{i}_{t})$.
∎
### S2.2 Proof of Theorem 1
###### Proof.
The theorem states that under Assumptions 1
$\widehat{\mathcal{G}^{*}}=\mathcal{G}^{*}$, where the $\mathcal{G}^{*}$
denotes the skeleton of the time series graph. We denote the two types of
skeleton links $\to$ and ${\circ\\!{\\--}\\!\circ}$ here generically as
${\star\\!{\\--}\\!\star}$ and can assume $\tau_{\max}\geq\tau\geq 0$. We need
to show that for arbitrary $X^{i}_{t-\tau},X^{j}_{t}\in\mathbf{X}^{-}_{t+1}$:
(1)
$X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\widehat{\mathcal{G}^{*}}~{}\Rightarrow~{}X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}$
and (2)
$X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}~{}\Rightarrow~{}X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\widehat{\mathcal{G}^{*}}$.
Ad (1): Algorithm 2 deletes a link
$X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}$ from
$\widehat{\mathcal{G}^{*}}$ iff $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$
for some $\mathcal{S}\subseteq\widehat{\mathcal{A}}_{t}(X^{j}_{t})$ in the
iterative CI tests with $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ estimated
in Alg. 1. $\widehat{\mathcal{A}}_{t}(X^{j}_{t})$ denotes the contemporaneous
adjacencies. Then Adjacency Faithfulness directly implies that
$X^{i}_{t-\tau}$ is not adjacent to $X^{j}_{t}$:
$X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}\notin\mathcal{G}^{*}$.
Ad (2): By Lemma 1 we know that $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is
a superset of the lagged parents of $X^{j}_{t}$. Denote the lagged, extra
conditions occurring in the CI tests of Alg. 2 as
$W^{-}_{t}=(\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau}))\setminus\mathcal{P}(X^{j}_{t})$.
$W^{-}_{t}$ does not contain parents of $X^{j}_{t}$ and by the assumption also
$X^{i}_{t-\tau}$ is not a parent of $X^{j}_{t}$. We further assume that for
$\tau=0$ $X^{i}_{t}$ is also not a descendant of $X^{j}_{t}$ since that case
is covered if we exchange $X^{i}_{t}$ and $X^{j}_{t}$. Then the Causal Markov
Condition implies $(X^{i}_{t-\tau},W^{-}_{t})\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t})$. By the weak union property of
conditional independence this leads to $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),W^{-}_{t}$ which is equivalent to
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$.
Now Alg. 2 iteratively tests $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$
for all $\mathcal{S}\subseteq\widehat{\mathcal{A}_{t}}(X^{j}_{t})$. By the
first part of this proof, the estimated contemporaneous adjacencies are always
a superset of the true contemporaneous adjacencies, i.e.,
$\mathcal{A}_{t}(X^{j}_{t})\subseteq\widehat{\mathcal{A}_{t}}(X^{j}_{t})$, and
by Lemma 1 $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ is a superset of the
lagged parents. Hence, at some iteration step
$\mathcal{S}=\mathcal{P}_{t}(X^{j}_{t})$ and Alg. 2 will find
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{P}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\\{X^{i}_{t-\tau}\\},\widehat{\mathcal{B}}^{-}_{t-\tau}(X^{i}_{t-\tau})$
and remove $X^{i}_{t-\tau}{\star\\!{\\--}\\!\star}X^{j}_{t}$ from
$\widehat{\mathcal{G}^{*}}$. ∎
For empty conditioning sets $\mathcal{S}$ ($p=0$), Alg. 2 is equivalent to the
MCI algorithm [Runge et al., 2019b] with the slight change that the latter is
initialized with a fully connected (lagged) graph, which has no effect
asymptotically. In [Runge et al., 2019b] the authors prove the consistency of
PCMCI assuming no contemporaneous causal links under the standard Faithfulness
Condition. The proof above implies that PCMCI is already consistent under the
weaker Adjacency Faithfulness Condition.
### S2.3 Proof of Lemma 2
###### Proof.
Time order and stationarity can be used to constrain the four cases as
follows. Let us first consider a generic triple
$X^{i}_{t_{i}}{\star\\!{\\--}\\!\star}X^{k}_{t_{k}}{\star\\!{\\--}\\!\star}X^{j}_{t_{j}}$.
By stationarity we can fix $t=t_{j}$. We only need to consider cases with
$t_{i},t_{k}\leq t$. If $t_{k}>t_{j}$, the triple is oriented already by time
order and the case $t_{i}>t_{j}$ is symmetric.
The possible triples in the collider phase of the original PC algorithm are
$X^{i}_{t_{i}}{\star\\!{\\--}\\!\star}X^{k}_{t_{k}}{\star\\!{\\--}\\!\star}X^{j}_{t}$
where $(X^{i}_{t_{i}},X^{j}_{t})$ are not adjacent. For $t_{k}<t$ the time-
order constraint automatically orients $X^{k}_{t_{k}}\to X^{j}_{t}$ and hence
$X^{k}_{t_{k}}$ is a parent of $X^{j}_{t}$ and must always be in the
separating set that makes $X^{i}_{t_{i}}$ and $X^{j}_{t}$ independent. Hence
we only need to consider $t_{k}=t$ and can set $\tau=t-t_{i}$
($\tau_{\max}\geq\tau\geq 0$), leaving the two cases of unshielded triples
$X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau>0$)
or
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
(for $\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not
adjacent. Since $X^{k}_{t}$ is contemporaneous to $X^{j}_{t}$, this
restriction implies that only contemporaneous parts of separating sets are
relevant for the collider orientation phase.
For rule R1 in the orientation phase the original PC algorithm considers the
remaining triples with $X^{i}_{t-\tau}\to X^{k}_{t}$ that were not oriented by
the collider phase (or by time order). This leaves $X^{i}_{t-\tau}\to
X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ where $\tau_{\max}\geq\tau\geq 0$.
For rule R2 the original PC algorithm considers $X^{i}_{t_{i}}\to
X^{k}_{t_{k}}\to X^{j}_{t}$ with
$X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. The latter type of link
leads to $t_{i}=t$ and time order restricts the triples to $X^{i}_{t}\to
X^{k}_{t}\to X^{j}_{t}$ with $X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$.
For rule R3 the original PC algorithm considers
$X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{k}_{t_{k}}\to X^{j}_{t}$ and
$X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{l}_{t_{l}}\to X^{j}_{t}$ where
$(X^{k}_{t_{k}},X^{l}_{t_{l}})$ are not adjacent and
$X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. The latter constraint leads
to $t_{i}=t$ and $X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{k}_{t_{k}}$ and
$X^{i}_{t_{i}}{\circ\\!{\\--}\\!\circ}X^{l}_{t_{l}}$ imply $t_{k}=t_{l}=t$.
Hence we only need to check triples
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}\to X^{j}_{t}$ and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{l}_{t}\to X^{j}_{t}$ where
$(X^{k}_{t},X^{l}_{t})$ are not adjacent and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$. ∎
### S2.4 Proof of Theorem 2
###### Proof.
We first consider the case under Assumptions 1 with Adjacency Faithfulness and
PCMCI+ in conjunction with the conservative collider orientation rule in Alg.
S2. We need to show that all separating sets estimated in Alg. S2 during the
conservative orientation rule are correct. From the soundness (Theorem 1) and
correctness of the separating sets follows the correctness of the collider
orientation phase and the rule orientation phase which implies the
completeness.
By Lemma 2 we only need to prove that in Alg. S2 for unshielded triples
$X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ (for $\tau>0$)
or
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
(for $\tau=0$) the separating sets among subsets of contemporaneous neighbors
of $X^{j}_{t}$ and, if $\tau=0$, of $X^{i}_{t}$, are correct. Algorithm S2
tests $X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$
for all
$\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$
and for all
$\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{i}_{t}){\setminus}\\{X^{j}_{t}\\}$
(if $\tau=0$). Since PCMCI+ is sound, all adjacency information is correct and
since all CI tests are assumed correct, all information on separating sets is
correct. Furthermore, with the conservative rule those triples where only
Adjacency Faithfulness, but not standard Faithfulness, holds will be correctly
marked as ambiguous triples.
Under standard Faithfulness the completeness requires to prove that PCMCI+
without the conservative orientation rule yields correct separating set
information. By Lemma 2 also here we need to consider only separating sets
among subsets of contemporaneous neighbors of $X^{j}_{t}$. Algorithm 2 tests
$X^{i}_{t-\tau}\perp\\!\\!\\!\perp
X^{j}_{t}~{}|~{}\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau})$
for all
$\mathcal{S}{\subseteq}\widehat{\mathcal{A}}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$.
And again, since PCMCI+ is sound, all adjacency information is correct and
since all CI tests are assumed correct, all information on separating sets is
correct, from which the completeness for this case follows. ∎
### S2.5 Proof of Theorem 3
###### Proof.
Order-independence follows straightforwardly from sticking to the PC algorithm
version in [Colombo and Maathuis, 2014]. In particular, Alg. 1 and Alg. 2 are
order-independent since they are based on PC stable where adjacencies are
removed only after each loop over conditions of cardinality $p$. Furthermore,
the collider phase (Alg. S2) and rule orientation phase (Alg. S3) are order-
independent by marking triples with inconsistent separating sets as ambiguous
and consistently marking conflicting link orientations by ${x\\!{\\--}\\!x}$.
∎
### S2.6 Proof of Theorem 4
###### Proof.
The theorem states that under Assumptions 1 the effect size for the PCMCI+
oracle case CI tests in Alg. 2 for $p=0$ for contemporaneous true links
$X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$ is greater than that of
PCMCI${}^{+}_{0}$:
$I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}),\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t}))>\min(I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})),\,I(X^{i}_{t};X^{j}_{t}|\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})))$
if both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with the
other. We will use an information-theoretic framework here and consider the
conditional mutual information.
To prove this statement, we denote by
$\mathcal{B}_{i}=\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})\setminus\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$
the lagged conditions of $X^{i}_{t}$ that are not already contained in those
of $X^{j}_{t}$ and, correspondingly,
$\mathcal{B}_{j}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\setminus\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})$.
Since both $X^{i}_{t}$ and $X^{j}_{t}$ have parents that are not shared with
the other and we assume the oracle case, both these sets are non-empty.
Further, we denote the common lagged conditions as
$\mathcal{B}_{ij}=\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})\cap\widehat{\mathcal{B}}^{-}_{t}(X^{i}_{t})$
and make use of the following conditional independencies, which hold by the
Markov assumption: (1) $\mathcal{B}_{i}\perp\\!\\!\\!\perp
X^{j}_{t}|\mathcal{B}_{j},\mathcal{B}_{ij},X^{i}_{t}$ and (2)
$\mathcal{B}_{j}\perp\\!\\!\\!\perp
X^{i}_{t}|\mathcal{B}_{i},\mathcal{B}_{ij}$. We first prove that, given a
contemporaneous true link $X^{i}_{t}\to X^{j}_{t}\in\mathcal{G}$,
$I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$
by using the following two ways to apply the chain rule of conditional mutual
information:
$I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})=I(X^{i}_{t},\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})$

$=I(\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})}_{=0~~\text{(Markov)}}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})+\underbrace{I(\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j},X^{i}_{t})}_{=0~~\text{(Markov)}}$ (S1)

and

$I(X^{i}_{t},\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})=I(\mathcal{B}_{i};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij})+I(X^{i}_{t};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})$

$=I(\mathcal{B}_{i};\mathcal{B}_{j}|\mathcal{B}_{ij})+\underbrace{I(\mathcal{B}_{i};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})}_{>0~~\text{since $X^{i}_{t}\to X^{j}_{t}$}}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i},X^{j}_{t})}_{>0~~\text{since $X^{i}_{t}\to X^{j}_{t}$}}$ (S2)

where (S1) and (S2) denote two different applications of the chain rule. From this it follows that
$I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$.
Hence, it remains to prove that
$I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{j},\mathcal{B}_{i})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$,
which we also do by the chain rule:
$I(X^{i}_{t};X^{j}_{t},\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})=I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})+\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i},X^{j}_{t})}_{>0~~\text{since $X^{i}_{t}\to X^{j}_{t}$}}$ (S3)

$=\underbrace{I(X^{i}_{t};\mathcal{B}_{j}|\mathcal{B}_{ij},\mathcal{B}_{i})}_{=0~~\text{(Markov)}}+I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i},\mathcal{B}_{j})$ (S4)

Equating (S3) and (S4) yields $I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i},\mathcal{B}_{j})>I(X^{i}_{t};X^{j}_{t}|\mathcal{B}_{ij},\mathcal{B}_{i})$, which completes the proof. ∎
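As an informal numerical illustration of this inequality (not part of the proof), the conditional mutual informations can be estimated in a linear-Gaussian toy model, where $I(X;Y|\mathbf{Z})=-\tfrac{1}{2}\ln(1-\rho^{2})$ with $\rho$ the partial correlation of $X$ and $Y$ given $\mathbf{Z}$; the variable names and coefficients below are illustrative assumptions, not taken from the model setups of the paper:

import numpy as np

rng = np.random.default_rng(0)
T = 100000

# B_i and B_j play the roles of non-shared (lagged) parents of X^i_t and
# X^j_t, with a true contemporaneous link X^i_t -> X^j_t; B_ij is empty.
B_i = rng.standard_normal(T)
B_j = rng.standard_normal(T)
X_i = 0.8 * B_i + rng.standard_normal(T)
X_j = 0.8 * B_j + 0.5 * X_i + rng.standard_normal(T)

def cmi_gauss(x, y, conditions):
    # Gaussian CMI via the partial correlation of regression residuals
    Z = np.column_stack(conditions)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rho = np.corrcoef(rx, ry)[0, 1]
    return -0.5 * np.log(1.0 - rho ** 2)

both = cmi_gauss(X_i, X_j, [B_i, B_j])   # PCMCI+-style two-sided condition
only_j = cmi_gauss(X_i, X_j, [B_j])      # condition on B^-(X^j_t) only
only_i = cmi_gauss(X_i, X_j, [B_i])      # condition on B^-(X^i_t) only
assert both > min(only_j, only_i)

With these coefficients the two-sided estimate comes out near $0.11$ nats, versus roughly $0.17$ and $0.07$ for the two one-sided conditions, so the two-sided CMI indeed exceeds their minimum.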
## Appendix S3 Further pseudo code
Algorithms S2 and S3 detail the pseudo-code for the PCMCI+ / PCMCI${}^{+}_{0}$
/ PC collider phase with different collider rules and the orientation phase.
Algorithm S2 (Detailed PCMCI+ / PCMCI${}^{+}_{0}$ / PC collider phase with
different collider rules)
1:$\mathcal{G}$ and sepset from Alg. 2, rule $=\\{$’none’, ’conservative’,
’majority’$\\}$, time series dataset $\mathbf{X}=(X^{1},\,\ldots,X^{N})$,
significance threshold $\alpha_{\rm PC}$, ${\rm CI}(X,\,Y,\,\mathbf{Z})$,
PCMCI+ / PCMCI${}^{+}_{0}$: $\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t})$ for all
$X^{j}_{t}$ in $\mathbf{X}_{t}$
2:for all unshielded triples $X^{i}_{t-\tau}\to
X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ ($\tau>0$) or
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
($\tau=0$) in $\mathcal{G}$ where $(X^{i}_{t-\tau},X^{j}_{t})$ are not
adjacent do
3: if rule $=$ ’none’ then
4: if $X^{k}_{t}$ is not in sepset$(X^{i}_{t-\tau},X^{j}_{t})$ then
5: Orient $X^{i}_{t-\tau}\to X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
($\tau>0$) or
$X^{i}_{t-\tau}{\circ\\!{\\--}\\!\circ}X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$
($\tau=0$) as $X^{i}_{t-\tau}\to X^{k}_{t}\leftarrow X^{j}_{t}$
6: else
7: PCMCI+ / PCMCI${}^{+}_{0}$: Define contemporaneous adjacencies
$\widehat{\mathcal{A}}(X^{j}_{t})=\widehat{\mathcal{A}}_{t}(X^{j}_{t})=\\{X^{i}_{t}{\neq}X^{j}_{t}\in\mathbf{X}_{t}:X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}~{}\text{in
$\mathcal{G}$}\\}$
8: PC: Define full adjacencies $\widehat{\mathcal{A}}(X^{j}_{t})$ for all
(lagged and contemporaneous) links in $\mathcal{G}$
9: for all
$\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{j}_{t}){\setminus}\\{X^{i}_{t-\tau}\\}$
and for all
$\mathcal{S}{\subseteq}\widehat{\mathcal{A}}(X^{i}_{t}){\setminus}\\{X^{j}_{t}\\}$
(if $\tau=0$) do
10: Evaluate CI($X^{i}_{t{-}\tau},X^{j}_{t},\mathbf{Z})$ with
11: PCMCI+:
$\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\},\widehat{\mathcal{B}}^{-}_{t{-}\tau}(X^{i}_{t{-}\tau}))$
12: PCMCI${}^{+}_{0}$:
$\mathbf{Z}{=}(\mathcal{S},\widehat{\mathcal{B}}^{-}_{t}(X^{j}_{t}){\setminus}\\{X^{i}_{t{-}\tau}\\})$
13: PC: $\mathbf{Z}{=}\mathcal{S}$
14: Store all subsets $\mathcal{S}$ with $p$-value $>\alpha_{\rm PC}$ as
separating subsets
15: if no separating subsets are found then
16: Mark triple as ambiguous
17: else
18: Compute fraction $n_{k}$ of separating subsets that contain $X^{k}_{t}$
19: if rule $=$ ’conservative’ then
20: Orient triple as collider if $n_{k}{=}0$, leave unoriented if $n_{k}{=}1$,
and mark as ambiguous if $0{<}n_{k}{<}1$
21: else if rule $=$ ’majority’ then
22: Orient triple as collider if $n_{k}{<}0.5$, leave unoriented if
$n_{k}{>}0.5$, and mark as ambiguous if $n_{k}{=}0.5$
23: Mark links in $\mathcal{G}$ with conflicting orientations as
${x\\!{\\--}\\!x}$
24:return $\mathcal{G}$, sepset, ambiguous triples, conflicting links
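For illustration, the decision logic of lines 15-22 in Alg. S2 can be summarised in a few lines of Python (a sketch only; the function name and string labels are ours, not tigramite API):

def collider_decision(n_k, rule):
    # n_k: fraction of separating subsets that contain X^k_t (line 18)
    if rule == 'conservative':
        if n_k == 0.0:
            return 'collider'
        if n_k == 1.0:
            return 'unoriented'
        return 'ambiguous'
    if rule == 'majority':
        if n_k < 0.5:
            return 'collider'
        if n_k > 0.5:
            return 'unoriented'
        return 'ambiguous'
    raise ValueError("rule must be 'conservative' or 'majority'")

The case where no separating subsets are found at all (line 15) is handled before $n_{k}$ is computed and always yields an ambiguous triple.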
Algorithm S3 (Detailed PCMCI+ / PCMCI${}^{+}_{0}$ / PC rule orientation phase)
1:$\mathcal{G}$, ambiguous triples, conflicting links
2:while any unambiguous triples suitable for rules R1-R3 are remaining do
3: Apply rule R1 (orient unshielded triples that are not colliders):
4: for all unambiguous triples $X^{i}_{t-\tau}\to
X^{k}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ where $(X^{i}_{t-\tau},X^{j}_{t})$
are not adjacent do
5: Orient as $X^{i}_{t-\tau}\to X^{k}_{t}\to X^{j}_{t}$
6: Mark links with conflicting orientations as ${x\\!{\\--}\\!x}$
7: Apply rule R2 (avoid cycles):
8: for all unambiguous triples $X^{i}_{t}\to X^{k}_{t}\to X^{j}_{t}$ with
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ do
9: Orient as $X^{i}_{t}\to X^{j}_{t}$
10: Mark links with conflicting orientations as ${x\\!{\\--}\\!x}$
11: Apply rule R3 (orient unshielded triples that are not colliders and avoid
cycles):
12: for all pairs of unambiguous triples
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{k}_{t}\to X^{j}_{t}$ and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{l}_{t}\to X^{j}_{t}$ where
$(X^{k}_{t},X^{l}_{t})$ are not adjacent and
$X^{i}_{t}{\circ\\!{\\--}\\!\circ}X^{j}_{t}$ do
13: Orient as $X^{i}_{t}\to X^{j}_{t}$
14: Mark links with conflicting orientations as ${x\\!{\\--}\\!x}$
15:return $\mathcal{G}$, conflicting links
## Appendix S4 Implementation details
In the linear and nonlinear numerical experiments PCMCI+ is compared with the
PC algorithm, both implemented with the appropriate CI test (ParCorr for the
linear case, GPDC for the nonlinear case). For the linear numerical
experiments we additionally consider representatives from two further
frameworks: GCresPC, a combination of GC with PC applied to residuals, and an
autoregressive model version of LiNGAM [Hyvärinen et al., 2010], a
representative of the SCM framework. Their implementations are as follows.
### S4.1 LiNGAM
For LiNGAM the code was taken from https://github.com/cdt15/lingam which
provides a class VARLiNGAM. The method was called as follows:
Input: data, tau_max
import lingam
model = lingam.VARLiNGAM(lags=tau_max, criterion=None, prune=True)
model.fit(data)
# Reorder to shape (N, N, tau_max + 1), entry [i, j, tau] for X^i_{t-tau} -> X^j_t
val_matrix = model.adjacency_matrices_.transpose(2, 1, 0)
graph = (val_matrix != 0.).astype('int')
Output: graph
The causal graph `graph` encodes the causal relations in an array of shape
`(N, N, tau_max + 1)`. The option `criterion=None` just ignores the optional
automatic selection of `lags`, which is here set to the same `tau_max` for all
methods. I could not find a way to obtain p-values in the VARLiNGAM
implementation, but with the parameter setting `prune=True` the resulting
adjacency matrices are regularized with an adaptive LASSO approach using the
BIC criterion to find the optimal regularization hyper-parameter
(`sklearn.linear_model.LassoLarsIC(criterion='bic')`). Non-zero adjacencies were then
evaluated as causal links. Note that all other methods can be intercompared at
different $\alpha_{\rm PC}$ levels while for comparison against LiNGAM we
focus on its relative changes rather than absolute performance.
### S4.2 GCresPC
There was no code available for the method proposed in [Moneta et al., 2011].
The present implementation first fits a VAR model up to $\tau_{\max}$ and
applies the PC algorithm on the residuals. To remove spurious lagged links
(due to contemporaneous paths), the PC algorithm was additionally run on
significant lagged and contemporaneous links, but the orientation phase was
restricted to contemporaneous links, as proposed in [Moneta et al., 2011]. The
following Python pseudo-code utilizes functionality from the tigramite
package, numpy, and statsmodels:
Input: data (array of shape (T, N)), tau_max, alpha

import numpy as np
from statsmodels.tsa.api import VAR
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr
from tigramite.data_processing import DataFrame

N = data.shape[1]
graph = np.zeros((N, N, tau_max + 1))
# 1. Estimate lagged adjacencies (to be updated in step 3.)
tsamodel = VAR(data)
results = tsamodel.fit(maxlags=tau_max, trend='nc')
pvalues = results.pvalues
values = results.coefs
residuals = results.resid
# (pseudo-step) keep lagged links whose VAR coefficient p-value is below alpha
lagged_parents = significant lagged links at alpha
# 2. Run PC algorithm on residuals (with tau_max=0)
pcmci = PCMCI(dataframe=DataFrame(residuals), cond_ind_test=ParCorr())
pcmcires = pcmci.run_pcalg(pc_alpha=alpha, tau_min=0, tau_max=0)
# Update contemporaneous graph
graph[:, :, 0] = pcmcires['graph'][:, :, 0]
# 3. Run PC algorithm on significant lagged and contemporaneous adjacencies
#    to remove spurious lagged links due to contemporaneous parents
# (pseudo-step) union of lagged_parents and contemporaneous adjacencies
selected_links = lagged_parents + significant contemporaneous adjacencies
pcmci = PCMCI(dataframe=DataFrame(data), cond_ind_test=ParCorr())
pcmcires = pcmci.run_pcalg(selected_links=selected_links,
                           pc_alpha=alpha, tau_min=0, tau_max=tau_max)
# Update lagged part of graph
graph[:, :, 1:] = pcmcires['graph'][:, :, 1:]

Output: graph
Note that the contemporaneous graph structure in `graph` comes only from
applying the PC algorithm to the residuals and, hence, does not utilize
triples containing lagged adjacencies. Step 3 is necessary to remove spurious
lagged links due to contemporaneous parents. The output of GCresPC depends on
$\alpha_{\rm PC}$ as for PCMCI+ and the PC algorithm.
## Appendix S5 Further numerical experiments
Next to repeating the overview figure for the linear Gaussian model setup from
the main text in Fig. S1, in Fig. S2 we show the linear mixed noise setup, and
in Fig. S3 the nonlinear mixed noise setup. The remaining pages contain
results of further numerical experiments that evaluate different
$a,\,N,\,T,\,\tau_{\max}$ and $\alpha_{\rm PC}$ for the linear model setups.
All results and more will be contributed to the causality benchmark platform
`www.causeme.net` [Runge et al., 2019a] to facilitate a further expanded
method evaluation.
Figure S1: Numerical experiments with linear Gaussian setup for varying (A)
autocorrelation strength $a$ (B) number of variables $N$ (C) sample size $T$
and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters
indicated in the top right. Errorbars show std. errors or the 90% range (for
runtime). The insets show ANOVA statistics.
Figure S2: Numerical experiments with linear mixed noise setup for varying
(A) autocorrelation strength $a$ (B) number of variables $N$ (C) sample size
$T$ and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters
indicated in the top right. Errorbars show std. errors or the 90% range (for
runtime). The insets show ANOVA statistics.
Figure S3: Numerical experiments with nonlinear mixed noise setup for varying
(A) autocorrelation strength $a$ (B) number of variables $N$ (C) sample size
$T$ and (D) maximum time lag $\tau_{\max}$. All remaining setup parameters
indicated in the top right. Errorbars show std. errors or the 90% range (for
runtime). The insets show ANOVA statistics.
Figure S4: Numerical experiments with linear Gaussian setup for varying
autocorrelation $a$ and $T=200$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S5: Numerical experiments with linear Gaussian setup for varying
autocorrelation $a$ and $T=500$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S6: Numerical experiments with linear Gaussian setup for varying
autocorrelation $a$ and $T=1000$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S7: Numerical experiments with linear Gaussian setup for varying
number of variables $N$ and $T=200$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S8: Numerical experiments with linear Gaussian setup for varying
number of variables $N$ and $T=500$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S9: Numerical experiments with linear Gaussian setup for varying
number of variables $N$ and $T=1000$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S10: Numerical experiments with linear Gaussian setup for varying
sample size $T$ for $N=5$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S11: Numerical experiments with linear Gaussian setup for varying
sample size $T$ for $N=10$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S12: Numerical experiments with linear Gaussian setup for varying
sample size $T$ for $N=20$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S13: Numerical experiments with linear Gaussian setup for varying
maximum time lag $\tau_{\max}$ and $T=200$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
Figure S14: Numerical experiments with linear Gaussian setup for varying
maximum time lag $\tau_{\max}$ and $T=500$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
Figure S15: Numerical experiments with linear Gaussian setup for varying
maximum time lag $\tau_{\max}$ and $T=1000$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
Figure S16: Numerical experiments with linear mixed noise setup for varying
autocorrelation $a$ and $T=200$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S17: Numerical experiments with linear mixed noise setup for varying
autocorrelation $a$ and $T=500$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S18: Numerical experiments with linear mixed noise setup for varying
autocorrelation $a$ and $T=1000$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
$N=2,\,3,\,5,\,10$ (top to bottom). All model and method parameters are
indicated in the upper right of each panel.
Figure S19: Numerical experiments with linear mixed noise setup for varying
number of variables $N$ and $T=200$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S20: Numerical experiments with linear mixed noise setup for varying
number of variables $N$ and $T=500$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S21: Numerical experiments with linear mixed noise setup for varying
number of variables $N$ and $T=1000$ . The left (right) column shows results
for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results
for increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S22: Numerical experiments with linear mixed noise setup for varying
sample size $T$ for $N=5$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S23: Numerical experiments with linear mixed noise setup for varying
sample size $T$ for $N=10$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S24: Numerical experiments with linear mixed noise setup for varying
sample size $T$ for $N=20$ . The left (right) column shows results for
significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict results for
increasing autocorrelations $a$ (top to bottom). All model and method
parameters are indicated in the upper right of each panel.
Figure S25: Numerical experiments with linear mixed noise setup for varying
maximum time lag $\tau_{\max}$ and $T=200$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
Figure S26: Numerical experiments with linear mixed noise setup for varying
maximum time lag $\tau_{\max}$ and $T=500$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
Figure S27: Numerical experiments with linear mixed noise setup for varying
maximum time lag $\tau_{\max}$ and $T=1000$ . The left (right) column shows
results for significance level $\alpha=0.01$ ($\alpha=0.05$). The rows depict
results for increasing autocorrelations $a$ (top to bottom). All model and
method parameters are indicated in the upper right of each panel.
† These authors contributed equally.
# Bandgap Control in Two-Dimensional Semiconductors via Coherent Doping of
Plasmonic Hot Electrons
Yu-Hui Chen (School of Physics, Beijing Institute of Technology, Beijing 10081, China)
Ronnie R. Tamming (MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand)
Kai Chen (MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand)
Zhepeng Zhang (Department of Materials Science and Engineering, College of Engineering, Center for Nanochemistry (CNC), College of Chemistry and Molecular Engineering, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China)
Yanfeng Zhang (Department of Materials Science and Engineering, College of Engineering, Center for Nanochemistry (CNC), College of Chemistry and Molecular Engineering, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China)
Justin M. Hodgkiss (MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6012, New Zealand)
Richard J. Blaikie (MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, Department of Physics, University of Otago, PO Box 56, Dunedin 9016, New Zealand)
Boyang Ding <EMAIL_ADDRESS> (MacDiarmid Institute for Advanced Materials and Nanotechnology, Dodd-Walls Centre for Photonic and Quantum Technologies, Department of Physics, University of Otago, PO Box 56, Dunedin 9016, New Zealand)
Min Qiu <EMAIL_ADDRESS> (Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China)
###### Abstract
Bandgap control is of central importance for semiconductor technologies. The
traditional means of control is to dope the lattice chemically, electrically
or optically with charge carriers. Here, we demonstrate for the first time a
widely tunable bandgap (renormalisation up to 650 meV at room temperature) in
two-dimensional (2D) semiconductors by coherently doping the lattice with
plasmonic hot electrons. In particular, we integrate tungsten-disulfide (WS2)
monolayers into a self-assembled plasmonic crystal, which enables coherent
coupling between semiconductor excitons and plasmon resonances. Accompanying
this process, the plasmon-induced hot electrons can repeatedly fill the WS2
conduction band, leading to population inversion and a significant
reconstruction in band structures and exciton relaxations. Our findings
provide an innovative and effective measure to engineer optical responses of
2D semiconductors, allowing great flexibility in the design and optimisation of
photonic and optoelectronic devices.
Two-dimensional (2D) semiconductors, such as transition metal dichalcogenides
(TMDCs)Mak _et al._ (2010); Splendiani _et al._ (2010), have direct bandgap
at their monolayer limit, exhibiting tremendous potential in development of
next-generation nanoscale devices. Like in their bulk counterparts, bandgap
control plays a vital role in 2D semiconductor technologies, since it enables
the creation of desirable optoelectronic properties that are required in
numerous applications, ranging from lasersYe _et al._ (2015) to modulatorsMak
and Shan (2016), photodetectorsLopez-Sanchez _et al._ (2013) and
photocatalysisVoiry _et al._ (2013). The traditional means of control is to
dope the lattice chemicallyKim _et al._ (2015), electricallyChernikov _et
al._ (2015a) or opticallyChernikov _et al._ (2015b) with charge carriers, the
practicality of which is, however, limited by many factors, e.g. the
irreversible bandgap modification, contact-type control and requirement of
ultrastrong pump.
Here we report that one can flexibly and effectively modify the electronic
band structures of 2D semiconductors by establishing coherent strong coupling
between the semiconductor excitons and a plasmonic resonatorEbbesen _et al._
(1998); Liu and Lalanne (2008). In particular, plasmonic resonators are
metallic nanostructures that support collective oscillation of electrons,
known as plasmons. The excitation of plasmons can produce hot electrons, i.e.
highly energetic electrons with non-equilibrium thermal distributionsClavero
(2014); Brongersma _et al._ (2015), which, in the strong coupling regime, can
repeatedly dope the lattice along with the coherent plasmon-exciton energy
exchange. As a result, the bandgap of 2D semiconductors is significantly
renormalised and the renormalisation can be easily altered through changing
the detuning between plasmons and excitons.
The schematic of our sample in Fig.1a demonstrates a WS2 monolayer (ML)
deposited onto a plasmonic crystal (PC)Ding _et al._ (2013, 2019), which
comprises a periodic array of silver-capped silica nanospheres that are
coated with an ultrathin Al2O3 spacer. This metal-insulator-semiconductor
configuration constitutes PC-WS2 hybrid systems, supporting plasmon lattice
modes propagating on the PC-WS2 interface. Here the top WS2 MLs belong to the
family of atomically thin TMDCs, having been extensively studiedYe _et al._
(2014); Sie _et al._ (2017); Ruppert _et al._ (2017); Cunningham _et al._
(2017); Steinhoff _et al._ (2017) for their unusual exciton-dominated optical
responses, such as high absorption and emission efficiency. These properties
make the PC-WS2 systems a suitable platform to study plasmon-exciton
interactionsDing _et al._ (2019).
The PC geometries were chosen to excite plasmon lattice modesEbbesen _et al._
(1998); Liu and Lalanne (2008); Ding _et al._ (2013, 2019) that can match the
frequency of exciton A in WS2 MLs at certain incident angles
$\theta$. The plasmon modes show red-shift dispersion at higher $\theta$
(yellow curve in Fig.1b), matching the frequency of exciton A ($E=2.061$ eV)
at $\theta=22^{\circ}$. In this case, plasmon modes can coherently couple with
excitons, leading to the formation of plasmon-exciton polaritons, i.e. half-
light half-matter quasiparticles that inherit properties from both the
plasmonic and excitonic components. As a result, the transmission maxima
exhibit pronounced splitting features that follow the dispersions of upper
polariton (UP) and lower polariton (LP), indicating the establishment of
strong coupling between plasmons and excitons. When the frequency of the
plasmon mode is tuned in resonance with exciton A ($\theta=22^{\circ}$), the
hybrid system is characterised by a vacuum Rabi splitting of
$\hbar\cdot\Omega_{\text{R}}\approx 136$ meV. More detailed analysis of strong
plasmon-exciton coupling in equilibrium states can be found in a previous
workDing _et al._ (2019) and Fig.S1 in the Supplementary Information (SI).
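The measured dispersions can be rationalised with the standard coupled-oscillator model of strong coupling, in which the polariton branch energies follow $E_{\pm}=\tfrac{1}{2}(E_{\text{pl}}+E_{\text{ex}})\pm\tfrac{1}{2}\sqrt{(E_{\text{pl}}-E_{\text{ex}})^{2}+(\hbar\Omega_{\text{R}})^{2}}$, so that the UP-LP splitting at zero detuning equals $\hbar\Omega_{\text{R}}$. A minimal numerical sketch (the plasmon energies below are placeholders, not the fitted dispersion of our samples):

import numpy as np

E_ex = 2.061   # exciton A energy (eV), quoted above
rabi = 0.136   # vacuum Rabi splitting (eV), quoted above

def polariton_branches(E_pl):
    # Upper/lower polariton energies of the two-coupled-oscillator model
    mean = 0.5 * (E_pl + E_ex)
    half_split = 0.5 * np.sqrt((E_pl - E_ex) ** 2 + rabi ** 2)
    return mean + half_split, mean - half_split

for E_pl in np.linspace(1.95, 2.15, 5):   # placeholder plasmon energies (eV)
    E_up, E_lp = polariton_branches(E_pl)
    print(f"E_pl={E_pl:.3f} eV -> UP={E_up:.3f} eV, LP={E_lp:.3f} eV")

At $E_{\text{pl}}=E_{\text{ex}}=2.061$ eV this reproduces the $136$ meV splitting quoted above.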
Upon photoexcitation, the transient optical responses of PC-WS2 samples can be
characterised using femtosecond transient absorption (TA) spectroscopy (Fig.2a
and Methods), which enables incident angle-resolved probes of the optical
properties and dynamics of WS2 MLs that are strongly coupled with plasmon
resonancesDarby _et al._ (2016). Fig.2b shows the transient transmission
spectra ($\Delta\text{T}/\text{T}$) with a pump fluence of
$12\mu\text{J/cm}^{2}$ as a function of time delay and energy at the tuned
state ($\theta=22^{\circ}$), which displays two split relaxation traces
flanking the spectral position of exciton A ($E=2.061$ eV), corresponding to
UP and LP. This sharply contrasts with the single-trace relaxations of exciton
B ($E=2.471$ eV, Fig.2b) and uncoupled exciton A in bare WS2 MLs (Fig.S2 in
SI). When the PC is detuned from exciton A, e.g. at $\theta=30^{\circ}$
(Fig.2c), a single relaxation trace appears, highly resembling the trace of
bare exciton A. These ultrashort timescale results confirm again the strong
coupling nature of our PC-WS2 systems.
It is worth noting that the photoinduced absorption minimum associated with
tuned polaritons appears in the 1 to 10 ps range (blue area centred at
$E=1.946$ eV in Fig. 2b and the corresponding $\Delta\text{T}/\text{T}$
transient with negative magnitudes in Fig.2f), obviously delayed compared to
the minimum near exciton B (Fig.2b) and its counterpart in the detuned
polaritons (Fig. 2c), which all emerge simultaneously after the arrival of the
pump pulse. Similar postponed minima have been found in transient spectra of
bare TMDC MLs, which typically arise from enhanced exciton-exciton and/or
exciton-electron interactions under high-power pump that can populate high-
density carriers in the lattice (Ceballos _et al._, 2016; Ruppert _et al._,
2017; Cunningham _et al._, 2017; Sie _et al._, 2017) (see Section 2 in SI
for detailed discussions). What is different is that, in our hybrid systems,
the delayed minima appear under much lower pump intensity than that in the
reference experiments for bare WS2 MLs and are only associated with tuned
polaritons.
More importantly, a $\Delta\text{T}/\text{T}$ maximum lasting for $\sim 1$ ps
arises in the range $E=1.6$ to $1.8$ eV in the tuned polariton spectra
(Fig.2b), which, in contrast, is remarkably weaker in the detuned state
(Fig.2c) and is completely absent in bare WS2 MLs (Fig.S2 in SI). The
integrated $\Delta\text{T}/\text{T}$ spectrum near zero probe delay (Fig.2d)
shows that the broad maximum has positive magnitudes, indicating negative
optical absorption, i.e. positive gain, which is clear evidence of bandgap
renormalisation accompanied by population inversion (Chernikov _et al._,
2015b). Such phenomena are typically induced by the population of high-density
carriers in a 2D semiconductor lattice (Meckbach _et al._, 2018), which leads
to the non-equilibrium occupation of electron and/or hole states that can
induce the formation of new quasiparticle bandgaps. This process can be
described by (Peyghambarian _et al._, 1993):
$\Delta E_{\text{g}}=-\underset{q\neq 0}{\sum}V_{\text{s}}(q)\,[f_{\text{e}}(q)+f_{\text{h}}(q)]-\underset{q\neq 0}{\sum}[V_{\text{s}}(q)-V(q)]$ (1)
where $V_{\text{s}}(q)$ and $V(q)$ represent the Fourier transforms of the
screened and unscreened Coulomb potentials, while $f_{\text{e}}(q)$ and
$f_{\text{h}}(q)$ are the occupation probabilities of electrons and holes with
momentum $q$. The onset of the new bandgap can be extracted from the low-
energy end of the broad maximum. This means that in our experiments the
renormalised bandgap starts at $E_{\text{g}}\approx 1.60$ eV, lying $\sim 400$
meV below the LP and $\sim 650$ meV below the initial bandgap of WS2 MLs
(given that the binding energy of exciton A is $\sim 200$ meV (Cunningham _et
al._, 2017)). This is, to the best of our knowledge, the largest bandgap
renormalisation in 2D semiconductors under such a low pump intensity
(12$\mu$J$/$cm2) to date, which meanwhile results in the inversion of the
carrier population near the newly formed band edge (Chernikov _et al._, 2015b;
Meckbach _et al._, 2018), manifesting as optical gain, i.e. the broad maximum
in Fig.2b and 2d.
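As a quick consistency check of these offsets, the sketch below reproduces the
$\sim 400$ meV and $\sim 650$ meV figures from the numbers quoted above; the
only added assumption is that, at zero detuning, the LP sits roughly half the
Rabi splitting below exciton A.

```python
# Consistency check of the quoted offsets, using only numbers given in the
# text. Assumption (not stated explicitly above): at zero detuning the lower
# polariton sits ~half the Rabi splitting below exciton A.
E_A = 2.061      # exciton A energy (eV)
rabi = 0.136     # vacuum Rabi splitting, hbar*Omega_R (eV)
E_bind = 0.200   # exciton A binding energy (eV)
E_g_new = 1.60   # onset of the renormalised bandgap (eV)

E_LP = E_A - rabi / 2.0        # assumed LP energy, ~1.993 eV
E_g_initial = E_A + E_bind     # initial free-particle gap, ~2.261 eV

print(f"below LP:          {(E_LP - E_g_new) * 1000:.0f} meV")         # ~393 meV
print(f"below initial gap: {(E_g_initial - E_g_new) * 1000:.0f} meV")  # ~661 meV
```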
These unusual spectral and transient features are broadly understood to arise
from the presence of high-density carriers, which, in our tuned PC-WS2
systems, have surprisingly been achieved at room temperature and extremely low
pump intensity ($12\mu$J$/$cm2). This sharply contrasts with similar
observations (Chernikov _et al._, 2015b) in bare WS2 single/bi-layers under
ultrastrong photoexcitation ($840\mu$J$/$cm2 at 70 K or $3400\mu$J$/$cm2 at
room temperature). In that study, the population of high-density carriers is a
result of a Mott transition, induced by enhanced exciton-exciton interactions
under high-power pump, which reduce the exciton binding energy and finally
break excitons into an unbound electron-hole plasma (Chernikov _et al._,
2015b; Steinhoff _et al._, 2017). In our experiments, the pump power is too
low to develop a Mott transition, suggesting that there must be other sources
that can provide large numbers of additional carriers.
To understand the origin of these carriers, we turn to one unique property of
plasmon-exciton polaritons, i.e. the generation of hot electrons inherited
from the polaritons’ plasmonic root. In particular, hot electrons are
electrons with non-equilibrium thermal distributions, generated by plasmon
dephasing from wave-like states through non-radiative decay (Brongersma _et
al._, 2015), which can electrically dope adjacent semiconductors (Fang _et
al._, 2012), modifying their photovoltaic and photocatalytic performance
(Clavero, 2014). When plasmons are coupled to exciton-like resonances in
semiconductors, the hot electron density can be highly enhanced in the lattice
through direct electron tunneling (García De Arquer _et al._, 2013) or
dipole-dipole interaction (Cushing _et al._, 2012). Therefore it is very
likely that the high-density carriers in tuned PC-WS2 systems are the hot
electrons introduced during the strong coupling process (see Section 3 in SI
for detailed discussions).
The analysis of the relaxation dynamics of tuned and detuned polaritons
supports the hot electron model. We note that both the UP and LP in Fig.2f
demonstrate slower decays than that of the detuned state in Fig.2g (Table S2
in SI for fitting parameters). This observation coincides with a previous
study (Boulesbaa _et al._, 2016), clearly indicating the involvement of
plasmonic hot electrons in the strong plasmon-exciton coupling process.
Specifically, as the system sits in the strong coupling regime, after
photoexcitation excitons and plasmons coherently exchange energy at the Rabi
frequency ($\sim 136$ meV) (Vasa _et al._, 2013), while the plasmon-to-exciton
process is accompanied by hot electron population in the lattice. Such charge
population runs at an ultrashort period of $\sim 30$ fs
($T_{\text{R}}=2\pi/\Omega_{\text{R}}$), which is too short to be resolved by
our equipment, and is also much shorter than exciton formation ($<1$ ps)
(Ceballos _et al._, 2016), non-radiative decay (at scales of $10$ ps), and
radiative decay (up to a few hundred ps) in WS2 MLs (Ruppert _et al._, 2017;
Sie _et al._, 2017). This means that during exciton relaxation there is
frequent tunneling/generation of hot electrons that can repeatedly fill the
unoccupied states in the conduction band of WS2 monolayers, which slows down
the exciton bleaching via Pauli blocking and leads to the extended lifetimes
(Section 4 in SI for more details).
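The $\sim 30$ fs figure follows directly from
$T_{\text{R}}=2\pi/\Omega_{\text{R}}$; a minimal check, assuming only the
quoted $\hbar\Omega_{\text{R}}\approx 136$ meV and the standard value of
$\hbar$ in eV·s:

```python
import math

hbar = 6.582119569e-16            # reduced Planck constant (eV*s)
E_R = 0.136                       # hbar*Omega_R (eV)
T_R = 2 * math.pi * hbar / E_R    # T_R = 2*pi/Omega_R
print(f"T_R = {T_R * 1e15:.1f} fs")  # ~30.4 fs, matching the quoted ~30 fs
```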
Given that there is little evidence for other possible carrier sources, e.g.
polariton condensates (Byrnes _et al._, 2014), we conclude that coherent
doping of plasmonic hot electrons is the origin of the spectral and transient
features that require a high-density population. In particular, hot electron
population repeatedly takes place throughout the whole relaxation process,
while the Al2O3 spacer forms a Schottky-like barrier that prevents charges
from returning to the metal (Cushing _et al._, 2012; García De Arquer _et
al._, 2013). As a result, hot electrons can accumulate in the lattice before
they decay (within 1 ps (Brongersma _et al._, 2015)), which simultaneously
competes with rapid exciton relaxation, transiently converting the intrinsic
WS2 monolayers into “n-doped” ones. This leads to the giant bandgap
renormalisation with population inversion that peaks at a few hundred
femtoseconds (Fig.S10 in SI), and also induces the delayed absorption minima
in Fig.2b and 2f (Section 5 in SI).
To confirm our observations, we have performed measurements under $\sim 10$
times higher pump fluence ($100\mu$J$/$cm2) (Fig. 3a). Apart from the
pronounced broad maxima at low energies, we can see a large spectral shift as
well as a remarkably delayed occurrence of the UP and LP maxima, revealing
that the accumulation of hot electrons competes with the relaxation dynamics,
which significantly enhances the system’s nonlinear responses on ultrashort
timescales (detailed discussions in Section 6 of SI). Similar to the low-power
case, the transient variation of the broad maximum (Fig. 3c) under intense
photoexcitation takes $\sim 1.5$ ps from initial excitation to fading. Fig. 3d
shows the evolution of the population inversion, where the magnitude and width
of the maximum are highly dependent on pump intensity. Under $100\mu$J$/$cm2
pump fluence, the full-width at half-maximum can reach $\sim 200$ meV with
highly enhanced magnitudes compared to the maximum under $5\mu$J$/$cm2 pump,
also contrasting with the unchanged flat spectral features in bare WS2 MLs.
Even in this case, however, the pump fluence is still significantly lower than
that in Ref. Chernikov _et al._ (2015b).
As discussed above, the strong plasmon-exciton coupling dramatically modifies
the electronic band structure of WS2 monolayers, which is induced, to a large
degree, by plasmonic hot electron doping via strong coupling. This effect is
extremely hard to observe in traditional exciton-polaritons (Byrnes _et al._,
2014) and is a non-trivial factor that has to be considered when studying
light-matter interactions using plasmonic resonators; on the other hand, it
provides a new and effective means to engineer the bandgap of 2D
semiconductors.
## Acknowledgments
The authors acknowledge the New Idea Research Funding 2018 (Dodd-Walls Centre
for photonic and quantum technologies), the Marsden Fast-start Fund by Royal
Society of New Zealand through contract MFP-UOO1827 and the Smart Ideas Fund
by Ministry of Business, Innovation and Employment, New Zealand through
contract UOOX1802. In addition, this work was supported in part by the
National Key Research and Development Program of China (no. 2017YFA0205700)
and the National Natural Science Foundation of China (nos. 61425023, 61235007,
61575177 and 51861135201). The authors also acknowledge the visiting
Fellowship awarded by New Zealand Centre at Peking University. We thank Dr. M.
Yan and Dr. F. Hong for their help with thin-film deposition, AFM, and SEM
measurements.
## Author Contributions
B.D. and Y.-H.C. conceived the project; Z.Z. and B.D. prepared the samples;
R.T., K.C., Y.-H.C. and B.D. carried out the optical and other
characterization; Y.-H.C. and B.D. performed the simulation; Y.Z., M.Q.,
R.J.B., and B.D. supervised the projects; Y.-H.C. and B.D. prepared the
manuscript; all authors discussed and analyzed the results.
## References
* Mak _et al._ (2010) K. F. Mak, C. Lee, J. Hone, J. Shan, and T. F. Heinz, Phys. Rev. Lett. 105, 136805 (2010), arXiv:1004.0546 .
* Splendiani _et al._ (2010) A. Splendiani, L. Sun, Y. Zhang, T. Li, J. Kim, C. Y. Chim, G. Galli, and F. Wang, Nano Lett. 10, 1271 (2010), arXiv:1308.1834 [cond-mat.mtrl-sci] .
* Ye _et al._ (2015) Y. Ye, Z. J. Wong, X. Lu, X. Ni, H. Zhu, X. Chen, Y. Wang, and X. Zhang, Nat. Photonics 9, 733 (2015), arXiv:1503.06141 .
* Mak and Shan (2016) K. F. Mak and J. Shan, Nat. Photonics 10, 216 (2016).
* Lopez-Sanchez _et al._ (2013) O. Lopez-Sanchez, D. Lembke, M. Kayci, A. Radenovic, and A. Kis, Nat. Nanotechnol. 8, 497 (2013).
* Voiry _et al._ (2013) D. Voiry, H. Yamaguchi, J. Li, R. Silva, D. C. Alves, T. Fujita, M. Chen, T. Asefa, V. B. Shenoy, G. Eda, and M. Chhowalla, Nat. Mater. 12, 850 (2013), arXiv:1212.1513 .
  * Kim _et al._ (2015) J. Kim, S. S. Baik, S. H. Ryu, Y. Sohn, S. Park, B.-G. Park, J. Denlinger, Y. Yi, H. J. Choi, and K. S. Kim, Science 349, 723 (2015).
* Chernikov _et al._ (2015a) A. Chernikov, A. M. Van Der Zande, H. M. Hill, A. F. Rigosi, A. Velauthapillai, J. Hone, and T. F. Heinz, Phys. Rev. Lett. 115, 1 (2015a).
* Chernikov _et al._ (2015b) A. Chernikov, C. Ruppert, H. M. Hill, A. F. Rigosi, and T. F. Heinz, Nat. Photonics 9, 466 (2015b).
* Ebbesen _et al._ (1998) T. W. Ebbesen, H. J. Lezec, H. F. Ghaemi, T. Thio, and P. A. Wolf, Nature 391, 667 (1998).
* Liu and Lalanne (2008) H. Liu and P. Lalanne, Nature 452, 728 (2008).
* Clavero (2014) C. Clavero, Nat. Photonics 8, 95 (2014).
* Brongersma _et al._ (2015) M. L. Brongersma, N. J. Halas, and P. Nordlander, Nat. Nanotechnol. 10, 25 (2015).
* Ding _et al._ (2013) B. Ding, C. Hrelescu, N. Arnold, G. Isic, and T. A. Klar, Nano Lett. 13, 378 (2013).
* Ding _et al._ (2019) B. Ding, Z. Zhang, Y.-H. Chen, Y. Zhang, R. J. Blaikie, and M. Qiu, ACS Nano 13, 1333 (2019).
* Ye _et al._ (2014) Z. Ye, T. Cao, K. O’Brien, H. Zhu, X. Yin, Y. Wang, S. G. Louie, and X. Zhang, Nature 513, 214 (2014), arXiv:1403.5568 .
  * Sie _et al._ (2017) E. J. Sie, A. Steinhoff, C. Gies, C. H. Lui, Q. Ma, M. Rösner, G. Schönhoff, F. Jahnke, T. O. Wehling, Y.-H. Lee, J. Kong, P. Jarillo-Herrero, and N. Gedik, Nano Lett. 17, 4210 (2017).
* Ruppert _et al._ (2017) C. Ruppert, A. Chernikov, H. M. Hill, A. F. Rigosi, and T. F. Heinz, Nano Lett. 17, 644 (2017).
* Cunningham _et al._ (2017) P. D. Cunningham, A. T. Hanbicki, K. M. McCreary, and B. T. Jonker, ACS Nano 11, 12601 (2017).
* Steinhoff _et al._ (2017) A. Steinhoff, M. Florian, M. Rösner, G. Schönhoff, T. O. Wehling, and F. Jahnke, Nat. Commun. 8, 1166 (2017), arXiv:1705.05202 .
* Darby _et al._ (2016) B. L. Darby, B. Auguié, M. Meyer, A. E. Pantoja, and E. C. L. Ru, Nat. Photonics 10, 40 (2016), arXiv:1509.07216 .
* Ceballos _et al._ (2016) F. Ceballos, Q. N. Cui, M. Z. Bellus, and H. Zhao, Nanoscale 8, 11681 (2016), arXiv:1607.04856 .
* Meckbach _et al._ (2018) L. Meckbach, T. Stroucken, and S. W. Koch, Appl. Phys. Lett. 112 (2018), 10.1063/1.5017069.
* Peyghambarian _et al._ (1993) N. Peyghambarian, S. W. Koch, and A. Mysyrowicz, _Introduction to semiconductor optics_ (Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 1993).
* Fang _et al._ (2012) Z. Fang, Y. Wang, Z. Liu, A. Schlather, P. M. Ajayan, F. H. Koppens, P. Nordlander, and N. J. Halas, ACS Nano 6, 10222 (2012).
* García De Arquer _et al._ (2013) F. P. García De Arquer, A. Mihi, D. Kufer, and G. Konstantatos, ACS Nano 7, 3581 (2013).
* Cushing _et al._ (2012) S. K. Cushing, J. Li, F. Meng, T. R. Senty, S. Suri, M. Zhi, M. Li, A. D. Bristow, and N. Wu, J. Am. Chem. Soc. 134, 15033 (2012).
* Boulesbaa _et al._ (2016) A. Boulesbaa, V. E. Babicheva, K. Wang, I. I. Kravchenko, M. W. Lin, M. Mahjouri-Samani, C. B. Jacobs, A. A. Puretzky, K. Xiao, I. Ivanov, C. M. Rouleau, and D. B. Geohegan, ACS Photonics 3, 2389 (2016).
* Vasa _et al._ (2013) P. Vasa, W. Wang, R. Pomraenke, M. Lammers, M. Maiuri, C. Manzoni, G. Cerullo, and C. Lienau, Nat. Photon. 7, 128 (2013).
* Byrnes _et al._ (2014) T. Byrnes, N. Y. Kim, and Y. Yamamoto, Nat. Phys. 10, 803 (2014).
Figure 1: Structures of a PC-WS${}_{\text{2}}$ sample and steady-state optical
properties. a, schematic of polariton formation in a WS${}_{\text{2}}$ ML that
is supported on a self-assembled plasmonic crystal. The Al2O3 spacer is not
depicted for simplicity. Right insets: side and top-view scanning electron
microscope (SEM) images; b, angle-resolved transmission spectra under
p-polarised illumination and their projection (top x-y plane), in which the
spectral positions of exciton A (X${}_{\text{A}}$) and B (X${}_{\text{B}}$),
calculated dispersions of plasmon lattice modes (yellow curve), and upper and
lower branches of polaritons (orange curves) are indicated. The tuned angle
($\theta=22^{\circ}$) is marked with a black dashed line. Refer to Section 1
in the SI for detailed discussion of the strong plasmon-exciton coupling and
its dispersion. Figure 2: Transient optical responses. a, schematic of angle-
resolved ultrafast pump-probe spectroscopy; b, d and f refer to normalised
differential transmission spectra ($\Delta\text{T}/\text{T}$) at the tuned
angle ($\theta=22^{\circ}$), while c, e and g refer to
$\Delta\text{T}/\text{T}$ at the detuned angle ($\theta=30^{\circ}$); b and c
are intensity plots of $\Delta\text{T}/\text{T}$ as function of time delay and
probe photon energy, using the same colour bar (which is also used by Fig.3a);
d and e are $\Delta\text{T}/\text{T}$ spectra averaged within the time span
from 0.1 to 0.7 ps after pump; f and g are $\Delta\text{T}/\text{T}$ transient
at specific energies (labelled with different colours), in which scatter
symbols and solid curves represent measured and fitted data, respectively.
Dashed frames in panel b, d and e mark the spectral region of the broad maxima
(see main text). All measurements were carried out using 400 nm ($E=3.1$ eV)
pump pulses that have 100 fs duration and pump fluence of 12 $\mu$J/cm2 at
room temperature. The instrument-response-function is shown as the grey area
in panel g. Figure 3: Bandgap renormalisation and evolution of population
inversion. a, intensity plot of $\Delta\text{T}/\text{T}$ spectra of PC-WS2
under $100\mu$J$/$cm2 pump fluence at $\theta=22^{\circ}$, where orange (blue)
colour represents the maximum (minimum) value. b, delay time dependent spectra
($\Delta\text{T}/\text{T}$) at energies of UP, LP and exciton B extracted from
panel a. Solid curves are plotted only for visual guidance. c,
$\Delta\text{T}/\text{T}$ spectra at different delay times, extracted from the
white dashed frame in panel a; red dashed vertical line indicates the onset of
renormalised bandgap. d, comparison of $\Delta\text{T}/\text{T}$ spectra at
delay of 0.96 ps between PC-WS2 (left) and WS2 MLs (right) under gradually
increasing pump fluence.
|
2024-09-04T02:54:58.158122 | 2020-03-08T07:12:20 | 2003.03734 | {
"authors": "Qingxia Liu, Gong Cheng, Kalpa Gunaratna, Yuzhong Qu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26102",
"submitter": "Gong Cheng",
"url": "https://arxiv.org/abs/2003.03734"
} | arxiv-papers | 1 National Key Laboratory for Novel Software Technology,
Nanjing University, China
<EMAIL_ADDRESS><EMAIL_ADDRESS>
2 Samsung Research America, Mountain View CA, USA
<EMAIL_ADDRESS>
# ESBM: An Entity Summarization BenchMark
Qingxia Liu1, Gong Cheng1, Kalpa Gunaratna2, Yuzhong Qu1
###### Abstract
Entity summarization is the problem of computing an optimal compact summary
for an entity by selecting a size-constrained subset of triples from RDF data.
Entity summarization supports a multiplicity of applications and has led to
fruitful research. However, there is a lack of evaluation efforts that cover
the broad spectrum of existing systems. One reason is a lack of benchmarks for
evaluation. Some benchmarks are no longer available, while others are small
and have limitations. In this paper, we create an Entity Summarization
BenchMark (ESBM) which overcomes the limitations of existing benchmarks and
meets standard desiderata for a benchmark. Using this largest available
benchmark for evaluating general-purpose entity summarizers, we perform the
most extensive experiment to date where 9 existing systems are compared.
Considering that all of these systems are unsupervised, we also implement and
evaluate a supervised learning based system for reference.
###### Keywords:
Entity summarization · Triple ranking · Benchmarking.
## 1 Introduction
RDF data describes entities with triples representing property values. In an
RDF dataset, the description of an entity comprises all the RDF triples where
the entity appears as the subject or the object. An example entity description
is shown in Fig. 1. Entity descriptions can be large. An entity may be
described in dozens or hundreds of triples, exceeding the capacity of a
typical user interface. A user served with all of those triples may suffer
information overload and find it difficult to quickly identify the small set
of triples that are truly needed. To solve the problem, an established
research topic is _entity summarization_ [15], which aims to compute an
optimal compact summary for the entity by selecting a size-constrained subset
of triples. An example entity summary under the size constraint of 5 triples
is shown in the bottom right corner of Fig. 1.
Figure 1: Description of entity Tim Berners-Lee and a summary thereof.
Entity summarization supports a multiplicity of applications [6, 21]. Entity
summaries constitute entity cards displayed in search engines [9], provide
background knowledge for enriching documents [26], and facilitate research
activities with humans in the loop [3, 4]. These far-reaching applications
have led to fruitful research, as reviewed in our recent survey paper [15].
Many entity summarizers have been developed, most of which generate summaries for
general purposes.
Research Challenges. However, two challenges face the research community.
First, there is a _lack of benchmarks_ for evaluating entity summarizers. As
shown in Table 1, some benchmarks are no longer available. Others are
available [22, 7, 8] but they are small and have limitations. Specifically,
[22] has a task-specific nature, and [7, 8] exclude classes and/or literals.
These benchmarks could not support a comprehensive evaluation of general-
purpose entity summarizers. Second, there is a _lack of evaluation efforts_
that cover the broad spectrum of existing systems to compare their performance
and assist practitioners in choosing solutions appropriate to their
applications.
Contributions. We address the challenges with two contributions. First, we
create an Entity Summarization BenchMark (ESBM) which overcomes the
limitations of existing benchmarks and meets the desiderata for a successful
benchmark [18]. ESBM has been published on GitHub with extended documentation
and a permanent identifier on w3id.org (https://w3id.org/esbm) under the ODC-
By license. As the largest available benchmark for evaluating general-purpose
entity summarizers, ESBM contains 175 heterogeneous entities sampled from two
datasets, for which 30 human experts create 2,100 general-purpose ground-truth
summaries under two size constraints. Second, using ESBM, we evaluate 9
existing general-purpose entity summarizers. It represents the most extensive
evaluation effort to date. Considering that existing systems are unsupervised,
we also implement and evaluate a supervised learning based entity summarizer
for reference.
In this paper, for the first time we comprehensively describe the creation and
use of ESBM. We report ESBM v1.2—the latest version, while early versions have
successfully supported the entity summarization shared task at the EYRE 2018
workshop (https://sites.google.com/view/eyre18/sharedtasks) and the EYRE 2019
workshop (https://sites.google.com/view/eyre19/sharedtasks). We will also
educate on the use of ESBM at an ESWC 2020 tutorial on entity summarization
(https://sites.google.com/view/entity-summarization-tutorials/eswc2020).
The remainder of the paper is organized as follows. Section 2 reviews related
work and limitations of existing benchmarks. Section 3 describes the creation
of ESBM, which is analyzed in Section 4. Section 5 presents our evaluation. In
Section 6 we discuss limitations of our study and perspectives for future
work.
Table 1: Existing benchmarks for evaluating entity summarization. | Dataset | Number of entities | Availability
---|---|---|---
WhoKnows?Movies! [22] | Freebase | 60 | Available (http://yovisto.com/labs/iswc2012)
Langer et al. [13] | DBpedia | 14 | Unavailable
FRanCo [1] | DBpedia | 265 | Unavailable
Benchmark for evaluating RELIN [2] | DBpedia | 149 | Unavailable
Benchmark for evaluating DIVERSUM [20] | IMDb | 20 | Unavailable
Benchmark for evaluating FACES [7] | DBpedia | 50 | Available (http://wiki.knoesis.org/index.php/FACES)
Benchmark for evaluating FACES-E [8] | DBpedia | 80 | Available (http://wiki.knoesis.org/index.php/FACES)
## 2 Related Work
We review methods and evaluation efforts for entity summarization.
Methods for Entity Summarization. In a recent survey [15] we have categorized
the broad spectrum of research on entity summarization. Below we briefly
review _general-purpose_ entity summarizers which mainly rely on generic
technical features that can apply to a wide range of domains and applications.
We will not address methods that are domain-specific (e.g., for movies [25] or
timelines [5]), task-specific (e.g., for facilitating entity resolution [3] or
entity linking [4]), or context-aware (e.g., contextualized by a document [26]
or a query [9]).
RELIN [2] uses a weighted PageRank model to rank triples according to their
statistical informativeness and relatedness. DIVERSUM [20] ranks triples by
property frequency and generates a summary with a strong constraint that
avoids selecting triples having the same property. SUMMARUM [24] and LinkSUM
[23] mainly rank triples by the PageRank scores of property values that are
entities. LinkSUM also considers backlinks from values. FACES [7], and its
extension FACES-E [8] which adds support for literals, cluster triples by
their bag-of-words based similarity and choose top-ranked triples from as many
different clusters as possible. Triples are ranked by statistical
informativeness and property value frequency. CD [28] models entity
summarization as a quadratic knapsack problem that maximizes the statistical
informativeness of the selected triples and in the meantime minimizes the
string, numerical, and logical similarity between them. In ES-LDA [17], ES-
LDAext [16], and MPSUM [27], a Latent Dirichlet Allocation (LDA) model is
learned where properties are treated as topics, and each property is a
distribution over all the property values. Triples are ranked by the
probabilities of properties and values. MPSUM further avoids selecting triples
having the same property. BAFREC [12] categorizes triples into meta-level and
data-level. It ranks meta-level triples by their depths in an ontology and
ranks data-level triples by property and value frequency. Triples having
textually similar properties are penalized to improve diversity. KAFCA [11]
ranks triples by the depths of properties and values in a hierarchy
constructed by performing the Formal Concept Analysis (FCA). It tends to
select triples containing infrequent properties but frequent values, where
frequency is computed at the word level.
Limitations of Existing Benchmarks. For evaluating entity summarization,
compared with task completion based _extrinsic evaluation_ , ground truth
based _intrinsic evaluation_ is more popular because it is easy to perform and
the results are reproducible. Its idea is to create a benchmark consisting of
human-made ground-truth summaries, and then compute how much a machine-
generated summary is close to a ground-truth summary.
Table 1 lists known benchmarks, including dedicated benchmarks [22, 13, 1] and
those created for evaluating a particular entity summarizer [2, 20, 7, 8]. It
is not surprising that these benchmarks are not very large since it is
expensive to manually create high-quality summaries for a large set of
entities. Unfortunately, some of these benchmarks are not publicly available
at this moment. Three are available [22, 7, 8] but they are relatively small
and have limitations. Specifically, WhoKnows?Movies! [22] is not a set of
ground-truth summaries but annotates each triple with the ratio of movie
questions that were correctly answered based on that triple, as an indicator
of its importance. This kind of task-specific ground truth may not be suitable
for evaluating general-purpose entity summarizers. The other two available
benchmarks were created for evaluating FACES/-E [7, 8]. Classes and/or
literals are not included because they could not be processed by FACES/-E and
hence were filtered out. Such benchmarks could not comprehensively evaluate
most of the existing entity summarizers [2, 20, 28, 27, 12, 11] that can
handle classes and literals. These limitations of available benchmarks
motivated us to create a new ground truth consisting of _general-purpose
summaries_ for a _larger set of entities_ involving _more comprehensive
triples_ where property values can be entities, classes, or literals.
## 3 Creating ESBM
To overcome the above-mentioned limitations of existing benchmarks, we created
a new benchmark called ESBM. To date, it is the largest available benchmark
for evaluating general-purpose entity summarizers. In this section, we will
first specify our design goals. Then we describe the selection of entity
descriptions and the creation of ground-truth summaries. We partition the data
to support cross-validation for parameter fitting. Finally we summarize how
our design goals are achieved and how ESBM meets standard desiderata for a
benchmark.
### 3.1 Design Goals
The creation of ESBM has two main design goals. First, a successful benchmark
should meet seven desiderata [18]: accessibility, affordability, clarity,
relevance, solvability, portability, and scalability, which we will detail in
Section 3.5. Our design of ESBM aims to satisfy these basic requirements.
Second, in Section 2 we discussed the limitations of available benchmarks,
including task specificness, small size, and triple incomprehensiveness.
Besides, all the existing benchmarks use a single dataset and hence may weaken
the generalizability of evaluation results. We aim to overcome these
limitations when creating ESBM. In Section 3.5 we will summarize how our
design goals are achieved.
### 3.2 Entity Descriptions
To choose entity descriptions to summarize, we sample entities from selected
datasets and filter their triples. The process is detailed below.
Datasets. We sample entities from two datasets of different kinds: an
encyclopedic dataset and a domain-specific dataset. For the encyclopedic
dataset we choose DBpedia [14], which has been used in other benchmarks [13,
1, 2, 7, 8]. We use the English version of DBpedia
2015-10555http://wiki.dbpedia.org/dbpedia-dataset-version-2015-10—the latest
version when we started to create ESBM. For the domain-specific dataset we
choose LinkedMDB [10], which is a popular movie database. The movie domain is
also the focus of some existing benchmarks [22, 20] possibly because this
domain is familiar to the lay audience so that it would be easy to find
qualified human experts to create ground-truth summaries. We use the latest
available version of
LinkedMDB (http://www.cs.toronto.edu/~oktie/linkedmdb/linkedmdb-latest-dump.zip).
Entities. For DBpedia we sample entities from five large classes: Agent,
Event, Location, Species, and Work. They collectively contain 3,501,366
entities (60%) in the dataset. For LinkedMDB we sample from Film and Person,
which contain 159,957 entities (24%) in the dataset. Entities from different
classes are described by very different properties as we will see in Section
4.3, and hence help to assess the generalizability of an entity summarizer.
Given the human effort we could afford, we randomly sample 25 entities from
each class. The total number of selected entities is 175. Each
selected entity should be described in at least 20 triples so that
summarization would not be a trivial task. This requirement follows common
practice in the literature [1, 2, 20, 7] where a minimum constraint in the
range of 10–20 was posed.
(a) Average number of triples describing an entity.
(b) Average number of distinct properties describing an entity.
Figure 2: Composition of entity descriptions (the left bar in each group),
top-5 ground-truth summaries (the middle bar), and top-10 ground-truth
summaries (the right bar), grouped by class in DBpedia (D) and LinkedMDB (L).
Triples. For DBpedia, entity descriptions comprise triples in the following
dump files: _instance types_ , _instance types transitive_ , _YAGO types_ ,
_mappingbased literals_ , _mappingbased objects_ , _labels_ , _images_ ,
_homepages_ , _persondata_ , _geo coordinates mappingbased_ , and _article
categories_. We do not import dump files that provide metadata about Wikipedia
articles such as _page links_ and _page length_. We do not import _short
abstracts_ and _long abstracts_ as they provide handcrafted textual entity
summaries; it would be inappropriate to include them in a benchmark for
evaluating entity summarization. For LinkedMDB we import all the triples in
the dump file except sameAs links which do not express facts about entities
but are of more technical nature. Finally, as shown in Fig. 2a (the left bar
in each group), the mean number of triples in an entity description is in the
range of 25.88–52.44 depending on the class, and the overall mean value is
37.62.
### 3.3 Ground-Truth Summaries
We invite 30 researchers and students to create ground-truth summaries for
entity descriptions. All the participants are familiar with RDF.
Task Assignment. Each participant is assigned 35 entities consisting of 5
entities randomly selected from each of the 7 classes in ESBM. The assignment
is controlled to ensure that each entity in ESBM is processed by 6
participants. A participant creates two summaries for each entity description
by selecting different numbers of triples: a _top-5 summary_ containing 5
triples, and a _top-10 summary_ containing 10 triples. Therefore, we will be
able to evaluate entity summarizers under different size constraints. The
choice of these two numbers follows previous work [2, 7, 8]. Participants work
independently and they may create different summaries for an entity. It is not
feasible to ask participants to reach an agreement. It is also not reasonable
to merge different summaries into a single version. So we keep different
summaries and will use all of them in the evaluation. The total number of
ground-truth summaries is $175\cdot 6\cdot 2=2,100$.
Figure 3: User interface for creating ground-truth entity summaries.
Procedure. Participants are instructed to create _general-purpose summaries_
that are not specifically created for any particular task. They read and
select triples using a Web-based user interface shown in Fig. 3. All the
triples in an entity description are listed in random order but those having a
common property are placed together for convenient reading and comparison. For
IRIs, their human-readable labels (rdfs:label) are shown if available. To help
participants understand a property value that is an unfamiliar entity, a click
on it will open a pop-up showing a short textual description extracted from
the first paragraph of its Wikipedia/IMDb page. Any triple can be selected
into the top-5 summary, the top-10 summary, or both. The top-5 summary is not
required to be a subset of the top-10 summary.
### 3.4 Training, Validation, and Test Sets
Some entity summarizers need to tune hyperparameters or fit models. To make
their evaluation results comparable with each other, we specify a split of our
data into training, validation, and test sets. We provide a partition of the
175 entities in ESBM into 5 equally sized subsets $P_{0},\ldots,P_{4}$ to
support 5-fold cross-validation. Entities of each class are partitioned evenly
among the subsets. For $0\leq i\leq 4$, the $i$-th fold uses
$P_{i},P_{i+1\text{ mod }5},P_{i+2\text{ mod }5}$ as the training set (e.g.,
for model fitting), uses $P_{i+3\text{ mod }5}$ for validation (e.g., tuning
hyperparameters), and retains $P_{i+4\text{ mod }5}$ as the test set.
Evaluation results are averaged over the 5 folds.
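For concreteness, the modular scheme above can be written down as follows;
this is a sketch in which the subsets $P_{0},\ldots,P_{4}$ are placeholders
for the actual partition shipped with ESBM.

```python
# Sketch of the 5-fold split described above. P[0..4] stand for the five
# equally sized entity subsets shipped with ESBM (placeholders here).
P = [[f"entity_{5 * i + j}" for j in range(5)] for i in range(5)]

folds = []
for i in range(5):
    train = P[i] + P[(i + 1) % 5] + P[(i + 2) % 5]  # model fitting
    valid = P[(i + 3) % 5]                          # hyperparameter tuning
    test = P[(i + 4) % 5]                           # held-out evaluation
    folds.append((train, valid, test))
# Evaluation results are then averaged over the five folds.
```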
### 3.5 Conclusion
ESBM overcomes the limitations of available benchmarks discussed in Section 2.
It contains 175 entities which is 2–3 times as large as available benchmarks
[22, 7, 8]. In ESBM, property values are not filtered as in [7, 8] but can be
any entity, class, or literal. Different from the task-specific nature of
[22], ESBM provides general-purpose ground-truth summaries for evaluating
general-purpose entity summarizers.
Besides, ESBM meets the seven desiderata proposed in [18] as follows.
* •
Accessibility. ESBM is publicly available and has a permanent identifier on
w3id.org.
* •
Affordability. ESBM comes with an open-source program and example code for
evaluation. The cost of using ESBM is minimized.
* •
Clarity. ESBM is documented clearly and concisely.
* •
Relevance. ESBM samples entities from two real datasets that have been widely
used. The summarization tasks are natural and representative.
* •
Solvability. An entity description in ESBM has at least 20 triples and a mean
number of 37.62 triples, from which 5 or 10 triples are to be selected. The
summarization tasks are not trivial and not too difficult.
* •
Portability. ESBM can be used to evaluate any general-purpose entity
summarizer that can process RDF data.
* •
Scalability. ESBM samples 175 entities from 7 classes. It is reasonably large
and diverse to evaluate mature entity summarizers but is not too large to
evaluate research prototypes.
However, ESBM has its own limitations, which we will discuss in Section 6.
## 4 Analyzing ESBM
In this section, we will first characterize ESBM by providing some basic
statistics and analyzing the triple composition and heterogeneity of entity
descriptions. Then we compute inter-rater agreement to show how much consensus
exists in the ground-truth summaries given by different participants.
### 4.1 Basic Statistics
The 175 entity descriptions in ESBM collectively contain 6,584 triples, of
which 37.44% are selected into at least one top-5 summary and 58.15% appear in
at least one top-10 summary, showing a wide selection by the participants.
However, many of them are selected only by a single participant; just 20.46%
and 40.23% are selected by more than one participant into top-5 and top-10
summaries, respectively. We will further analyze inter-rater agreement in Section 4.4.
We calculate the overlap between the top-5 and the top-10 summaries created by
the same participant for the same entity. The mean overlap is in the range of
4.80–4.99 triples depending on the class, and the overall mean value is 4.91,
showing that the top-5 summary is usually a subset of the top-10 summary.
### 4.2 Triple Composition
In Fig. 2 we present the composition of entity descriptions (the left bar in
each group) and their ground-truth summaries (the middle bar for top-5 and the
right bar for top-10) in ESBM, in terms of the average number of triples
describing an entity (Fig. 2a) and in terms of the average number of distinct
properties describing an entity (Fig. 2b). Properties are divided into
literal-valued, class-valued, and entity-valued. Triples are divided
accordingly.
In Fig. 2a, both class-valued and entity-valued triples occupy a considerable
proportion of the entity descriptions in DBpedia. Entity-valued triples
predominate in LinkedMDB. Literal-valued triples account for a small
proportion in both datasets. However, they constitute 30% in top-5 ground-
truth summaries and 25% in top-10 summaries. Entity summarizers that cannot
process literals [24, 23, 7, 17] have to ignore these notable proportions,
thereby significantly influencing their performance.
In Fig. 2b, in terms of distinct properties, entity-valued and literal-valued
triples have comparable numbers in entity descriptions since many entity-
valued properties are multi-valued. Specifically, an entity is described by
13.24 distinct properties, including 5.31 literal-valued (40%) and 6.93
entity-valued (52%). Multi-valued properties appear in every entity
description and they constitute 35% of the triples. However, in top-5 ground-
truth summaries, the average number of distinct properties is 4.70 and is very
close to 5, indicating that the participants are not inclined to select
multiple values of a property. Entity summarizers that prefer diverse
properties [20, 7, 8, 28, 27, 12] may exhibit good performance.
Figure 4: Jaccard similarity between property sets describing different
classes. Table 2: Popular properties in ground-truth summaries.
In top-5 summaries | In top-10 summaries
---|---
Agent | Event | Location | Species | Work | Film | Person | Agent | Event | Location | Species | Work | Film | Person
type | type | type | type | type | director | type | type | type | type | family | type | director | type
birthDate | date | country | family | | type | actor | subject | subject | country | type | subject | actor | actor
| | | | | | | birthDate | date | subject | order | genre | type | label
| | | | | | | | label | | class | | writer | page
| | | | | | | | | | genus | | producer |
| | | | | | | | | | subject | | date |
| | | | | | | | | | kingdom | | language |
### 4.3 Entity Heterogeneity
Entities from different classes are described by different sets of properties.
For each class we identify the set of properties describing at least one
entity from the class. The Jaccard similarity between property sets for each
pair of classes is very low, as shown in Fig. 4. Such heterogeneous entity
descriptions help to assess the generalizability of an entity summarizer.
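The pairwise similarity in Fig. 4 is plain Jaccard similarity over the
per-class property sets; a minimal sketch (the property sets below are
illustrative placeholders, not ESBM data):

```python
# Jaccard similarity between the property sets of two classes, as in Fig. 4.
def jaccard(a: set, b: set) -> float:
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Illustrative placeholder property sets:
props_agent = {"rdf:type", "dct:subject", "dbo:birthDate"}
props_film = {"rdf:type", "dct:subject", "movie:director", "movie:actor"}
print(jaccard(props_agent, props_film))  # low values indicate heterogeneity
```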
Table 2 shows popular properties that appear in at least 50% of the ground-
truth summaries for each class. Some universal properties like rdf:type and
dct:subject are popular for most classes. We also see class-specific
properties, e.g., dbo:birthDate for Agent, dbo:family for Species. However,
the results suggest that it would be unrealistic to generate good summaries by
manually selecting properties for each class. For example, among 13.24
distinct properties describing an entity, only 1–2 are popular in top-5
ground-truth summaries. The importance of properties is generally
contextualized by concrete entities.
### 4.4 Inter-Rater Agreement
Recall that each entity in ESBM has six top-5 ground-truth summaries and six
top-10 summaries created by different participants. We calculate the average
overlap between these summaries in terms of the number of common triples they
contain. As shown in Table 3, the results are generally comparable with those
reported for other benchmarks in the literature. There is a moderate degree of
agreement between the participants.
Table 3: Inter-rater agreement. | ESBM | [2] | [7] | [8]
---|---|---|---|---
Overlap between top-5 summaries | 1.99 (39.8$\%$) | 2.91 (58.2$\%$) | 1.92 (38.4$\%$) | 2.12 (42.4$\%$)
Overlap between top-10 summaries | 5.42 (54.2$\%$) | 7.86 (78.6$\%$) | 4.64 (46.4$\%$) | 5.44 (54.4$\%$)
Ground-truth summaries per entity | 6 | 4.43 | $\geq$ 7 | $\geq$ 4
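The overlap figures in Table 3 are mean numbers of common triples over all
pairs of ground-truth summaries for the same entity; a sketch, with summaries
represented as sets of triple identifiers (placeholder data):

```python
from itertools import combinations

# Mean pairwise overlap (common triples) among the ground-truth summaries of
# one entity, as reported in Table 3. Triple identifiers are placeholders.
def mean_pairwise_overlap(summaries):
    pairs = list(combinations(summaries, 2))
    return sum(len(a & b) for a, b in pairs) / len(pairs)

summaries = [{1, 2, 3, 4, 5}, {1, 2, 3, 6, 7}, {2, 3, 4, 5, 8},
             {1, 3, 5, 7, 9}, {1, 2, 4, 6, 8}, {2, 3, 5, 6, 9}]  # six raters
print(mean_pairwise_overlap(summaries))
```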
## 5 Evaluating with ESBM
We used ESBM to perform the most extensive evaluation of general-purpose
entity summarizers to date. In this section, we will first describe evaluation
criteria. Then we introduce the entity summarizers that we evaluate. Finally
we present evaluation results.
### 5.1 Evaluation Criteria
Let $S_{m}$ be a machine-generated entity summary. Let $S_{h}$ be a human-made
ground-truth summary. To compare $S_{m}$ with $S_{h}$ and assess the quality
of $S_{m}$ based on how much $S_{m}$ is close to $S_{h}$, it is natural to
compute precision (P), recall (R), and F1. The results are in the range of
0–1:
$\text{P}=\frac{|S_{m}\cap S_{h}|}{|S_{m}|}\,,\quad\text{R}=\frac{|S_{m}\cap
S_{h}|}{|S_{h}|}\,,\quad\text{F1}=\frac{2\cdot\text{P}\cdot\text{R}}{\text{P}+\text{R}}\,.$
(1)
In the experiments we configure entity summarizers to output at most $k$
triples and we set $k=|S_{h}|$, i.e., $k=5$ and $k=10$ are our two settings
corresponding to the sizes of ground-truth summaries. We will trivially have
P$=$R$=$F1 if $|S_{m}|=|S_{h}|$. However, some entity summarizers may output
less than $k$ triples. For example, DIVERSUM [20] disallows an entity summary
to contain triples having the same property. It is possible that an entity
description contains less than $k$ distinct properties and hence DIVERSUM has
to output less than $k$ triples. In this case, P$\neq$R and one should rely on
F1.
In the evaluation, for each entity in ESBM, we compare a machine-generated
summary with each of the 6 ground-truth summaries by calculating F1, and take
their aggregation value. Finally we report the mean F1 over all the entities.
For aggregation function, we report the results of average, to show an overall
match with all the different ground truths; on the website we also give the
results of maximum, to show the best match with each individual ground truth.
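The protocol just described amounts to the following sketch, with summaries
represented as sets of triples (placeholder data); the resulting per-entity
scores are then averaged over all entities:

```python
# F1 of a machine-generated summary S_m against a ground-truth summary S_h
# (Eq. 1), aggregated by average over the six ground truths of an entity.
def f1(s_m: set, s_h: set) -> float:
    common = len(s_m & s_h)
    if common == 0:
        return 0.0
    p, r = common / len(s_m), common / len(s_h)
    return 2 * p * r / (p + r)

def avg_f1(s_m, ground_truths):
    return sum(f1(s_m, s_h) for s_h in ground_truths) / len(ground_truths)

machine = {1, 2, 3, 4, 5}  # placeholder machine-generated top-5 summary
truths = [{1, 2, 3, 6, 7}, {2, 3, 4, 5, 8}, {1, 3, 5, 7, 9},
          {1, 2, 4, 6, 8}, {2, 3, 5, 6, 9}, {1, 2, 3, 4, 5}]
print(avg_f1(machine, truths))
```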
### 5.2 Participating Entity Summarizers
We not only evaluate existing entity summarizers but also compare them with
two special entity summarizers we create: an oracle entity summarizer which is
used to show the best possible performance on ESBM, and a new supervised
learning based entity summarizer.
Existing Entity Summarizers. We evaluate 9 out of the 12 general-purpose
entity summarizers reviewed in Section 2. We re-implement RELIN [2], DIVERSUM
[20], LinkSUM [23], FACES [7], FACES-E [8], and CD [28], while MPSUM [27],
BAFREC [12], and KAFCA [11] are open source. We exclude SUMMARUM [24], ES-LDA
[17], and ES-LDAext [16] because LinkSUM represents an extension of SUMMARUM,
and MPSUM represents an extension of ES-LDA and ES-LDAext.
We follow the original implementation and suggested configuration of existing
entity summarizers as far as possible. However, for RELIN, we replace its
Google-based relatedness measure with a string metric [19] because Google’s
search API is no longer free. We also use this metric to replace the
unavailable UMBC’s SimService used in FACES-E. For DIVERSUM, we ignore its
witness count measure since it does not apply to ESBM. For LinkSUM, we obtain
backlinks between entities in LinkedMDB via their corresponding entities in
DBpedia.
RELIN, CD, and LinkSUM compute a weighted combination of two scoring
components. We tune these hyperparameters in the range of 0–1 in 0.01
increments. Since these summarizers are unsupervised, we use both the training
set and the validation set described in Section 3.4 for tuning
hyperparameters.
Oracle Entity Summarizer. We implement an entity summarizer denoted by ORACLE
to approximate the best possible performance on ESBM and form a reference
point used for comparisons. ORACLE simply outputs $k$ triples that are
selected by the most participants into ground-truth summaries.
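ORACLE reduces to a vote count over the ground-truth summaries; a minimal
sketch (placeholder data):

```python
from collections import Counter

# ORACLE: output the k triples selected by the most participants.
def oracle_summary(ground_truths, k):
    votes = Counter(t for s in ground_truths for t in s)
    return [t for t, _ in votes.most_common(k)]  # ties broken arbitrarily

truths = [{1, 2, 3, 4, 5}, {1, 2, 3, 6, 7}, {2, 3, 4, 5, 8},
          {1, 3, 5, 7, 9}, {1, 2, 4, 6, 8}, {2, 3, 5, 6, 9}]
print(oracle_summary(truths, k=5))
```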
Supervised Learning Based Entity Summarizer. Existing general-purpose entity
summarizers are unsupervised. We implement a supervised learning based entity
summarizer with features that are used by existing entity summarizers. A
triple with property $p$ and value $v$ describing entity $e$ is represented by
the following features:
* •
$\mathtt{gf}_{\mathbb{T}}$: the number of triples in the dataset where $p$
appears [23, 12],
* •
$\mathtt{lf}$: the number of triples in the description of $e$ where $p$
appears [20, 23],
* •
$\mathtt{vf}_{\mathbb{T}}$: the number of triples in the dataset where $v$
appears [7, 8, 12], and
* •
$\mathtt{si}$: the self-information of the triple [2, 7, 8, 28].
We also add three binary features:
* •
$\mathtt{isC}$: whether $v$ is a class,
* •
$\mathtt{isE}$: whether $v$ is an entity, and
* •
$\mathtt{isL}$: whether $v$ is a literal.
Based on the training and validation sets described in Section 3.4, we
implement and tune 6 pointwise learning to rank models provided by Weka:
SMOreg, LinearRegression, MultilayerPerceptron, AdditiveRegression, REPTree,
and RandomForest. Each model outputs $k$ top-ranked triples as a summary.
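To make this concrete, the sketch below extracts the seven features and fits a
pointwise ranker. It uses scikit-learn's RandomForestRegressor as a stand-in
for Weka's RandomForest, the self-information variant shown is one plausible
reading, and all triples and relevance labels are placeholders:

```python
import math
from sklearn.ensemble import RandomForestRegressor  # stand-in for Weka's RandomForest

# Triples are (subject, property, value, value_kind), with value_kind in
# {"class", "entity", "literal"}. All data here are placeholders.
dataset = [("e1", "rdf:type", "dbo:Agent", "class"),
           ("e1", "dbo:birthDate", "1955-06-08", "literal"),
           ("e1", "dct:subject", "e9", "entity"),
           ("e2", "rdf:type", "dbo:Agent", "class")]
desc = [t for t in dataset if t[0] == "e1"]  # description of entity e1

def features(triple):
    _, p, v, kind = triple
    gf = sum(1 for t in dataset if t[1] == p)  # gf_T: global property frequency
    lf = sum(1 for t in desc if t[1] == p)     # lf: local property frequency
    vf = sum(1 for t in dataset if t[2] == v)  # vf_T: global value frequency
    si = -math.log(vf / len(dataset))          # si: one plausible self-information variant
    return [gf, lf, vf, si,
            kind == "class", kind == "entity", kind == "literal"]  # isC, isE, isL

# Pointwise ranking: fit a regressor on (features, relevance) pairs, then
# keep the k top-scored triples of an entity description as its summary.
X = [features(t) for t in desc]
y = [1.0, 0.5, 0.0]  # placeholder relevance labels from ground-truth votes
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
ranked = sorted(desc, key=lambda t: model.predict([features(t)])[0], reverse=True)
summary = ranked[:2]  # k = 2 here, purely for illustration
```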
### 5.3 Evaluation Results
We first report the overall evaluation results to show which entity summarizer
generally performs better. Then we break down the results into different
entity types (i.e., classes) for detailed comparison. Finally we present and
analyze the performance of our supervised learning based entity summarizer.
Table 4: Average F1 over all the entities in a dataset. For the nine existing
entity summarizers, significant improvements and losses over each other are
indicated by $\blacktriangle$ and $\blacktriangledown$ ($p<0.05$),
respectively. Insignificant differences are indicated by $\circ$.
| DBpedia | LinkedMDB
---|---|---
| $k=5$ | $k=10$ | $k=5$ | $k=10$
RELIN | 0.242 ${}^{\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.455 ${}^{\text{-}\blacktriangledown\circ\circ\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.203 ${}^{\text{-}\circ\circ\blacktriangledown\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.258 ${}^{\text{-}\blacktriangledown\circ\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$
DIVERSUM | 0.249 ${}^{\circ\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.507 ${}^{\blacktriangle\text{-}\blacktriangle\circ\circ\circ\circ\circ\circ}$ | 0.207 ${}^{\circ\text{-}\circ\blacktriangledown\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.358 ${}^{\blacktriangle\text{-}\blacktriangle\circ\circ\blacktriangle\blacktriangledown\circ\blacktriangledown}$
FACES | 0.270 ${}^{\circ\circ\text{-}\circ\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.428 ${}^{\circ\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.169 ${}^{\circ\circ\text{-}\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.263 ${}^{\circ\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\blacktriangledown}$
FACES-E | 0.280 ${}^{\blacktriangle\circ\circ\text{-}\circ\circ\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.488 ${}^{\circ\circ\blacktriangle\text{-}\circ\circ\circ\circ\circ}$ | 0.313 ${}^{\blacktriangle\blacktriangle\blacktriangle\text{-}\blacktriangle\blacktriangle\blacktriangledown\blacktriangle\circ}$ | 0.393 ${}^{\blacktriangle\circ\blacktriangle\text{-}\blacktriangle\blacktriangle\circ\circ\circ}$
CD | 0.283 ${}^{\blacktriangle\blacktriangle\circ\circ\text{-}\circ\blacktriangledown\circ\circ}$ | 0.513 ${}^{\blacktriangle\circ\blacktriangle\circ\text{-}\circ\circ\circ\circ}$ | 0.217 ${}^{\circ\circ\blacktriangle\blacktriangledown\text{-}\blacktriangle\blacktriangledown\circ\blacktriangledown}$ | 0.331 ${}^{\blacktriangle\circ\blacktriangle\blacktriangledown\text{-}\blacktriangle\blacktriangledown\blacktriangledown\blacktriangledown}$
LinkSUM | 0.287 ${}^{\blacktriangle\blacktriangle\circ\circ\circ\text{-}\blacktriangledown\circ\circ}$ | 0.486 ${}^{\circ\circ\blacktriangle\circ\circ\text{-}\circ\circ\circ}$ | 0.140 ${}^{\blacktriangledown\blacktriangledown\circ\blacktriangledown\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown}$ | 0.279 ${}^{\circ\blacktriangledown\circ\blacktriangledown\blacktriangledown\text{-}\blacktriangledown\blacktriangledown\blacktriangledown}$
BAFREC | 0.335 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\text{-}\circ\circ}$ | 0.503 ${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\text{-}\circ\circ}$ | 0.360 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\blacktriangle\text{-}\blacktriangle\blacktriangle}$ | 0.402 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\text{-}\circ\circ}$
KAFCA | 0.314 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\circ\circ\circ\text{-}\circ}$ | 0.509 ${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\circ\text{-}\circ}$ | 0.244 ${}^{\circ\circ\blacktriangle\blacktriangledown\circ\blacktriangle\blacktriangledown\text{-}\circ}$ | 0.397 ${}^{\blacktriangle\circ\blacktriangle\circ\blacktriangle\blacktriangle\circ\text{-}\circ}$
MPSUM | 0.314 ${}^{\blacktriangle\blacktriangle\blacktriangle\blacktriangle\circ\circ\circ\circ\text{-}}$ | 0.512 ${}^{\blacktriangle\circ\blacktriangle\circ\circ\circ\circ\circ\text{-}}$ | 0.272 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\blacktriangledown\circ\text{-}}$ | 0.423 ${}^{\blacktriangle\blacktriangle\blacktriangle\circ\blacktriangle\blacktriangle\circ\circ\text{-}}$
ORACLE | 0.595 | 0.713 | 0.619 | 0.678
SMOreg | 0.279 | 0.543 | 0.403 | 0.472
LinearRegression | 0.319 | 0.556 | 0.401 | 0.471
MultilayerPerceptron | 0.340 | 0.560 | 0.390 | 0.477
AdditiveRegression | 0.345 | 0.558 | 0.415 | 0.510
REPTree | 0.392 | 0.570 | 0.455 | 0.538
RandomForest | 0.399 | 0.576 | 0.449 | 0.506
Overall Results of Existing Entity Summarizers. Table 4 presents the results
of all the participating entity summarizers on two datasets under two size
constraints. We compare the nine existing summarizers using one-way ANOVA with
post-hoc LSD tests and show whether the difference between each pair of them
is statistically significant at the 0.05 level. Among existing summarizers, BAFREC
achieves the highest F1 under $k=5$. It significantly outperforms six existing
summarizers on DBpedia and outperforms all the eight ones on LinkedMDB. It is
also among the best under $k=10$. MPSUM follows BAFREC under $k=5$ but
performs slightly better under $k=10$. Other top-tier results belong to KAFCA
on DBpedia and FACES-E on LinkedMDB.
The F1 scores of ORACLE are in the range of 0.595–0.713. It is impossible for
ORACLE or any other summarizer to reach $\text{F1}=1$, because for each entity
in ESBM there are six ground-truth summaries which are often different and
hence cannot simultaneously match a machine-generated summary. However, the
gap between the results of ORACLE and the best results of existing summarizers
is still as large as 0.20–0.26, suggesting that there is much room for
improvement.
Results on Different Entity Types. We break down the results of existing
entity summarizers into 7 entity types (i.e., classes). When $k=5$ in Fig. 5,
there is no single winner on every class, but BAFREC and MPSUM are among top
three on 6 classes, showing relatively good generalizability over different
entity types. Some entity summarizers have limited generalizability and they
perform not well on certain classes. For example, RELIN and CD mainly rely on
the self-information of a triple, while for Location entities their latitudes
and longitudes are often unique in DBpedia but such triples with large self-
information rarely appear in ground-truth summaries. Besides, most summarizers
generate low-quality summaries for Agent, Film, and Person entities. This is
not surprising since these entities are described in more triples and/or by
more properties according to Fig. 2. Their summarization is inherently more
difficult. When $k=10$ in Fig. 6, MPSUM is still among top three on 6 classes.
KAFCA also shows relatively good generalizability—among top three on 5
classes.
Figure 5: Average F1 over all the entities in each class under $k=5$. Figure
6: Average F1 over all the entities in each class under $k=10$.
Results of Supervised Learning. As shown in Table 4, among the six supervised
learning based methods, RandomForest and REPTree achieve the highest F1 on
DBpedia and LinkedMDB, respectively. Four methods (MultilayerPerceptron,
AdditiveRegression, REPTree, and RandomForest) outperform all the existing
entity summarizers on both datasets under both size constraints, and two
methods (SMOreg and LinearRegression) only fail to outperform in one setting.
The results demonstrate the powerfulness of supervised learning for entity
summarization. Further, recall that these methods only use standard models and
rely on features that are used by existing entity summarizers. It would be
reasonable to predict that better results can be achieved with specialized
models and more advanced features. However, creating a large number of ground-
truth summaries for training is expensive, and the generalizability of
supervised methods for entity summarization still needs further exploration.
Moreover, we are interested in how much the seven features contribute to the
good performance of supervised learning. Table 5 shows the results of
RandomForest after removing each individual feature. Considering statistical
significance at the 0.05 level, two features $\mathtt{gf}_{\mathbb{T}}$ and
$\mathtt{lf}$ show effectiveness on both datasets under both size constraints,
and two features $\mathtt{vf}_{\mathbb{T}}$ and $\mathtt{si}$ are only
effective on LinkedMDB. The usefulness of the three binary features
$\mathtt{isC}$, $\mathtt{isE}$, and $\mathtt{isL}$ is not statistically
significant.
Table 5: F1 of RandomForest after removing each individual feature, its
difference from using all features ($\Delta\%$), and the significance level
for the difference ($p$).
DBpedia | LinkedMDB
---|---
$k=5$ | $k=10$ | $k=5$ | $k=10$
| F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$ | | F1 | $\Delta\%$ | $p$
All | 0.399 | — | — | All | 0.576 | — | — | All | 0.449 | — | — | All | 0.506 | — | —
-$\mathtt{gf}_{\mathbb{T}}$ | 0.346 | $-$5.360 | 0.000 | -$\mathtt{lf}$ | 0.546 | $-$0.030 | 0.000 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.383 | $-$0.066 | 0.000 | -$\mathtt{lf}$ | 0.473 | $-$0.033 | 0.008
-$\mathtt{lf}$ | 0.366 | $-$3.307 | 0.000 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.551 | $-$0.025 | 0.000 | -$\mathtt{lf}$ | 0.413 | $-$0.036 | 0.025 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.477 | $-$0.029 | 0.010
-$\mathtt{isC}$ | 0.392 | $-$0.720 | 0.261 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.569 | $-$0.007 | 0.198 | -$\mathtt{vf}_{\mathbb{T}}$ | 0.414 | $-$0.035 | 0.022 | -$\mathtt{gf}_{\mathbb{T}}$ | 0.479 | $-$0.027 | 0.007
-$\mathtt{isE}$ | 0.397 | $-$0.267 | 0.720 | -$\mathtt{isE}$ | 0.570 | $-$0.006 | 0.262 | -$\mathtt{si}$ | 0.442 | $-$0.007 | 0.574 | -$\mathtt{si}$ | 0.486 | $-$0.020 | 0.009
-$\mathtt{si}$ | 0.400 | $+$0.027 | 0.973 | -$\mathtt{isC}$ | 0.571 | $-$0.005 | 0.303 | -$\mathtt{isE}$ | 0.455 | $+$0.005 | 0.651 | -$\mathtt{isL}$ | 0.491 | $-$0.015 | 0.079
-$\mathtt{isL}$ | 0.401 | $+$0.160 | 0.816 | -$\mathtt{si}$ | 0.572 | $-$0.004 | 0.402 | -$\mathtt{isL}$ | 0.456 | $+$0.007 | 0.504 | -$\mathtt{isE}$ | 0.492 | $-$0.014 | 0.148
-$\mathtt{vf}_{\mathbb{T}}$ | 0.407 | $+$0.720 | 0.346 | -$\mathtt{isL}$ | 0.578 | $+$0.002 | 0.683 | -$\mathtt{isC}$ | 0.463 | $+$0.013 | 0.281 | -$\mathtt{isC}$ | 0.514 | $+$0.008 | 0.396
Conclusion. Among existing entity summarizers, BAFREC generally shows the best
performance on ESBM while MPSUM seems more robust. However, none of them are
comparable with our straightforward implementation of supervised learning,
which in turn is still far away from the best possible performance represented
by ORACLE. Therefore, entity summarization on ESBM is a non-trivial task. We
invite researchers to experiment with new ideas on ESBM.
## 6 Discussion and Future work
We identify the following limitations of our work to be addressed in future
work.
Evaluation Criteria. We compute F1 score in the evaluation, which is based on
common triples but ignores semantic overlap between triples. A triple $t$ in a
machine-generated summary $S$ may partially cover the information provided by
some triple $t^{\prime}$ in the ground-truth summary. It may be reasonable to
not completely penalize $S$ for missing $t^{\prime}$ but give some reward for
the presence of $t$. However, it is difficult to quantify the extent of
penalization for all possible cases, particularly when multiple triples
semantically overlap with each other. In future work, we will explore more
appropriate evaluation criteria.
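For concreteness, the set-based F1 discussed above can be sketched in a few lines of Python (a minimal illustration, not ESBM's official evaluation code; names are ours, and per-entity scores would then presumably be averaged over the available ground-truth summaries):

```python
def f1_score(machine_summary, ground_truth):
    """Set-based F1 between a machine-generated summary and one
    ground-truth summary, each given as a set of triples."""
    common = machine_summary & ground_truth
    if not common:
        return 0.0
    precision = len(common) / len(machine_summary)
    recall = len(common) / len(ground_truth)
    return 2 * precision * recall / (precision + recall)
```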
Representativeness of Ground Truth. The ground-truth summaries in ESBM are not
supposed to represent the view of the entire user population. They are
intrinsically biased towards their creators. Besides, these ground-truth
summaries are created for general purposes. Accordingly, we use them to
evaluate general-purpose entity summarizers. However, for a specific task,
these summaries may not be optimal, and the participating systems may not
represent the state of the art. Still, we believe it is valuable to evaluate
general-purpose systems not only because of their wide range of applications
but also because their original technical features have been reused by task-
specific systems. In future work, we will extend ESBM to a larger scale, and
will consider benchmarking task-specific entity summarization.
Form of Ground Truth. ESBM provides ground-truth summaries, whereas some other
benchmarks offer ground-truth scores of triples [22, 13, 1]. Scoring-based
ground truth may more comprehensively evaluate an entity summarizer than our
set-based ground truth because it not only considers the triples in a machine-
generated summary but also assesses the rest of the triples. However, on the
other hand, a set of top-scored triples may not equal an optimal summary
because they may cover limited aspects of an entity and show redundancy.
Therefore, both methods have their advantages and disadvantages. In future
work, we will conduct scoring-based evaluation to compare with the current
results.
## Acknowledgments
This work was supported in part by the NSFC under Grant 61772264 and in part
by the Qing Lan Program of Jiangsu Province.
## References
* [1] Bobic, T., Waitelonis, J., Sack, H.: FRanCo - A ground truth corpus for fact ranking evaluation. In: SumPre 2015 & HSWI 2015 (2015)
* [2] Cheng, G., Tran, T., Qu, Y.: RELIN: relatedness and informativeness-based centrality for entity summarization. In: ISWC 2011, Part I. pp. 114–129 (2011). https://doi.org/10.1007/978-3-642-25073-6_8
* [3] Cheng, G., Xu, D., Qu, Y.: C3D+P: A summarization method for interactive entity resolution. J. Web Sem. 35, 203–213 (2015). https://doi.org/10.1016/j.websem.2015.05.004
* [4] Cheng, G., Xu, D., Qu, Y.: Summarizing entity descriptions for effective and efficient human-centered entity linking. In: WWW 2015. pp. 184–194 (2015). https://doi.org/10.1145/2736277.2741094
* [5] Gottschalk, S., Demidova, E.: EventKG - the hub of event knowledge on the web - and biographical timeline generation. Semantic Web 10(6), 1039–1070 (2019). https://doi.org/10.3233/SW-190355
* [6] Gunaratna, K.: Semantics-based Summarization of Entities in Knowledge Graphs. Ph.D. thesis, Wright State University (2017)
* [7] Gunaratna, K., Thirunarayan, K., Sheth, A.P.: FACES: diversity-aware entity summarization using incremental hierarchical conceptual clustering. In: AAAI 2015. pp. 116–122 (2015)
* [8] Gunaratna, K., Thirunarayan, K., Sheth, A.P., Cheng, G.: Gleaning types for literals in RDF triples with application to entity summarization. In: ESWC 2016. pp. 85–100 (2016). https://doi.org/10.1007/978-3-319-34129-3_6
* [9] Hasibi, F., Balog, K., Bratsberg, S.E.: Dynamic factual summaries for entity cards. In: SIGIR 2017. pp. 773–782 (2017). https://doi.org/10.1145/3077136.3080810
* [10] Hassanzadeh, O., Consens, M.P.: Linked movie data base. In: LDOW 2009 (2009)
* [11] Kim, E.K., Choi, K.S.: Entity summarization based on formal concept analysis. In: EYRE 2018 (2018)
* [12] Kroll, H., Nagel, D., Balke, W.T.: BAFREC: Balancing frequency and rarity for entity characterization in linked open data. In: EYRE 2018 (2018)
* [13] Langer, P., Schulze, P., George, S., Kohnen, M., Metzke, T., Abedjan, Z., Kasneci, G.: Assigning global relevance scores to DBpedia facts. In: ICDE Workshops 2014. pp. 248–253 (2014). https://doi.org/10.1109/ICDEW.2014.6818334
* [14] Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kontokostas, D., Mendes, P.N., Hellmann, S., Morsey, M., van Kleef, P., Auer, S., Bizer, C.: DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web 6(2), 167–195 (2015). https://doi.org/10.3233/SW-140134
* [15] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: Entity summarization: State of the art and future challenges. CoRR abs/1910.08252 (2019), http://arxiv.org/abs/1910.08252
* [16] Pouriyeh, S.A., Allahyari, M., Kochut, K., Cheng, G., Arabnia, H.R.: Combining word embedding and knowledge-based topic modeling for entity summarization. In: ICSC 2018. pp. 252–255 (2018). https://doi.org/10.1109/ICSC.2018.00044
* [17] Pouriyeh, S.A., Allahyari, M., Kochut, K., Cheng, G., Arabnia, H.R.: ES-LDA: entity summarization using knowledge-based topic modeling. In: IJCNLP 2017, Volume 1. pp. 316–325 (2017)
* [18] Sim, S.E., Easterbrook, S.M., Holt, R.C.: Using benchmarking to advance research: A challenge to software engineering. In: ICSE 2003. pp. 74–83 (2003). https://doi.org/10.1109/ICSE.2003.1201189
* [19] Stoilos, G., Stamou, G.B., Kollias, S.D.: A string metric for ontology alignment. In: ISWC 2005. pp. 624–637 (2005). https://doi.org/10.1007/11574620_45
* [20] Sydow, M., Pikula, M., Schenkel, R.: The notion of diversity in graphical entity summarisation on semantic knowledge graphs. J. Intell. Inf. Syst. 41(2), 109–149 (2013). https://doi.org/10.1007/s10844-013-0239-6
* [21] Thalhammer, A.: Linked Data Entity Summarization. Ph.D. thesis, Karlsruher Institut für Technologie (2017)
* [22] Thalhammer, A., Knuth, M., Sack, H.: Evaluating entity summarization using a game-based ground truth. In: ISWC 2012, Part II. pp. 350–361 (2012). https://doi.org/10.1007/978-3-642-35173-0_24
* [23] Thalhammer, A., Lasierra, N., Rettinger, A.: LinkSUM: Using link analysis to summarize entity data. In: ICWE 2016. pp. 244–261 (2016). https://doi.org/10.1007/978-3-319-38791-8_14
* [24] Thalhammer, A., Rettinger, A.: Browsing DBpedia entities with summaries. In: ESWC 2014 Satellite Events. pp. 511–515 (2014). https://doi.org/10.1007/978-3-319-11955-7_76
* [25] Thalhammer, A., Toma, I., Roa-Valverde, A.J., Fensel, D.: Leveraging usage data for linked data movie entity summarization. In: USEWOD 2012 (2012)
* [26] Tonon, A., Catasta, M., Prokofyev, R., Demartini, G., Aberer, K., Cudré-Mauroux, P.: Contextualized ranking of entity types based on knowledge graphs. J. Web Sem. 37-38, 170–183 (2016). https://doi.org/10.1016/j.websem.2015.12.005
* [27] Wei, D., Gao, S., Liu, Y., Liu, Z., Huang, L.: MPSUM: Entity summarization with predicate-based matching. In: EYRE 2018 (2018)
* [28] Xu, D., Zheng, L., Qu, Y.: CD at ENSEC 2016: Generating characteristic and diverse entity summaries. In: SumPre 2016 (2016)
# DeepLENS: Deep Learning for Entity Summarization
Qingxia Liu, Gong Cheng, Yuzhong Qu
National Key Laboratory for Novel Software Technology, Nanjing University, China
###### Abstract
Entity summarization has been a prominent task over knowledge graphs. While
existing methods are mainly unsupervised, we present DeepLENS, a simple yet
effective deep learning model where we exploit textual semantics for encoding
triples and we score each candidate triple based on its interdependence with
other triples. DeepLENS significantly outperformed existing methods on a
public benchmark.
## 1 Introduction
Entity summarization is the task of computing a compact summary for an entity
by selecting an optimal size-constrained subset of entity-property-value
triples from a knowledge graph such as an RDF graph [7]. It has found a wide
variety of applications, for example, to generate a compact entity card from
Google’s Knowledge Graph where an entity may be described in dozens or
hundreds of triples. Generating entity summaries for general purposes has
attracted much research attention, but existing methods are mainly
unsupervised [2, 9, 3, 4, 13, 10, 6, 5, 11]. One research question that
naturally arises is _whether deep learning can solve this task much better_.
To the best of our knowledge, ESA [12] is the only supervised method in the
literature for this task. ESA encodes triples using graph embedding (TransE),
and employs BiLSTM with supervised attention mechanism. Although it
outperformed unsupervised methods, the improvement reported in [12] was rather
marginal, around $+7\%$ compared with unsupervised FACES-E [4] on the ESBM
benchmark [8]. This inspired us to explore more effective deep learning models
for the task of general-purpose entity summarization.
In this short paper, we present DeepLENS (https://github.com/nju-websoft/DeepLENS), a novel Deep Learning based approach to ENtity Summarization.
DeepLENS uses a simple yet effective model which addresses the following two
limitations of ESA, and thus achieved significantly better results in the
experiments.
1.
Different from ESA which encodes a triple using graph embedding, we use word
embedding because we consider textual semantics more useful than graph
structure for the entity summarization task.
2.
Whereas ESA encodes a set of triples as a sequence and its performance is
sensitive to the chosen order, our aggregation-based representation satisfies
permutation invariance and hence is more suitable for entity summarization.
In the remainder of the paper, Section 2 details DeepLENS, Section 3 presents
experiment results, and Section 4 concludes the paper.
## 2 Approach
#### 2.0.1 Problem Statement
An RDF graph $T$ is a set of triples. The _description_ of entity $e$ in $T$,
denoted by $\mathtt{Desc}(e)\subseteq T$, comprises triples where $e$ is the
subject or object. Each triple $t\in\mathtt{Desc}(e)$ describes a property
$\mathtt{prop}(t)$ which is the predicate of $t$, and gives a value
$\mathtt{val}(t)$ which is the object or subject of $t$ other than $e$. For a
size constraint $k$, a _summary_ of $e$ is a subset of triples
$S\subseteq\mathtt{Desc}(e)$ with $|S|\leq k$. We aim to generate an optimal
summary for general purposes.
#### 2.0.2 Overview of DeepLENS
Our approach DeepLENS generates an optimal summary by selecting $k$ most
salient triples. As a supervised approach, it learns salience from labeled
entity summaries. However, two issues remain unsolved. First, knowledge graph
like RDF graph is a mixture of graph structure and textual content. The
effectiveness of a learning-based approach to entity summarization relies on a
_proper representation of entity descriptions of such mixed nature_. Second,
the salience of a triple is not absolute but dependent on the context, i.e.,
the set of other triples in the entity description. It is essential to
_represent their interdependence_. DeepLENS addresses these issues with the
scoring model presented in Fig. 1. It has three modules which we will detail
below: triple encoding, entity description encoding, and triple scoring.
Finally, the model scores each candidate triple $t\in\mathtt{Desc}(e)$ in the
context of $\mathtt{Desc}(e)$.
Figure 1: Model of DeepLENS.
#### 2.0.3 Triple Encoding
For entity $e$, a triple $t\in\mathtt{Desc}(e)$ provides a property-value pair
$\langle\mathtt{prop}(t),\mathtt{val}(t)\rangle$ of $e$. Previous research
[12] leverages graph embedding to encode the structural features of
$\mathtt{prop}(t)$ and $\mathtt{val}(t)$. By contrast, for the task of entity
summarization we consider textual semantics more important than graph
structure, and we _solely exploit textual semantics_ for encoding $t$.
Specifically, for RDF resource $r$, we obtain its _textual form_ as follows.
For an IRI or a blank node, we retrieve its rdfs:label if it is available,
otherwise we have to use its local name; for a literal, we take its lexical
form. We represent each word in the textual form by a pre-trained word
embedding vector, and we average these vectors over all the words to represent
$r$, denoted by $\text{Embedding}(r)$. For triple $t\in\mathtt{Desc}(e)$, we
generate and concatenate such vector representations for $\mathtt{prop}(t)$
and $\mathtt{val}(t)$ to form $\boldsymbol{t}$, the _initial representation_
of $t$. Then $\boldsymbol{t}$ is fed into a multi-layer perceptron (MLP) to
generate $\boldsymbol{h}$, the _final representation_ of $t$:
$\boldsymbol{t}=\left[\text{Embedding}(\mathtt{prop}(t));~\text{Embedding}(\mathtt{val}(t))\right],\quad\boldsymbol{h}=\text{MLP}_{\text{C}}(\boldsymbol{t}).$ (1)
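A minimal NumPy sketch of this encoding (our own illustration: `word_vec` stands for a lookup table of pre-trained word vectors, and `mlp_c` is a hypothetical placeholder for the trained $\text{MLP}_{\text{C}}$):

```python
import numpy as np

def embed_resource(text, word_vec, dim=300):
    """Embedding(r): average pre-trained word vectors over the words
    of the textual form of an RDF resource."""
    vecs = [word_vec[w] for w in text.lower().split() if w in word_vec]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def encode_triple(prop_text, val_text, word_vec, mlp_c):
    """Eq. (1): t = [Embedding(prop(t)); Embedding(val(t))], h = MLP_C(t)."""
    t = np.concatenate([embed_resource(prop_text, word_vec),
                        embed_resource(val_text, word_vec)])
    return mlp_c(t)
```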
#### 2.0.4 Entity Description Encoding
To score a candidate triple in the context of other triples in the entity
description, previous research [12] captures the independence between triples
in $\mathtt{Desc}(e)$ using BiLSTM to pass information. Triples are fed into
BiLSTM as a sequence. However, $\mathtt{Desc}(e)$ is a set and the triples
lack a natural order. The performance of this model is unfavourably sensitive
to the order of input triples. Indeed, as we will show in the experiments,
different orders could lead to considerably different performance.
To generate a representation for $\mathtt{Desc}(e)$ that is _permutation
invariant_, we perform aggregation. Specifically, let
$\boldsymbol{t_{1}},\ldots,\boldsymbol{t_{n}}$ be the initial representations
of triples in $\mathtt{Desc}(e)$ computed by Eq. (1). We feed an MLP with each
$\boldsymbol{t_{i}}$ for $1\leq i\leq n$ and generate their final
representations $\boldsymbol{g_{1}},\ldots,\boldsymbol{g_{n}}$, which in turn
are weighted using attention mechanism from $\boldsymbol{h}$ computed by Eq.
(1), the final representation of the candidate triple $t$ to be scored. We
calculate the sum of these weighted representations of triples to represent
$\mathtt{Desc}(e)$, denoted by $\boldsymbol{d}$:
$\boldsymbol{g_{i}}=\text{MLP}_{\text{D}}(\boldsymbol{t_{i}}),\quad a_{i}=\frac{\exp(\cos(\boldsymbol{h},\boldsymbol{g_{i}}))}{\sum_{j}\exp(\cos(\boldsymbol{h},\boldsymbol{g_{j}}))},\quad\boldsymbol{d}=\sum_{i=1}^{n}a_{i}\boldsymbol{g_{i}}.$ (2)
The result of summation is not sensitive to the order of triples in
$\mathtt{Desc}(e)$.
#### 2.0.5 Triple Scoring
For each candidate triple $t\in\mathtt{Desc}(e)$ to be scored, we concatenate
its final representation $\boldsymbol{h}$ and the representation
$\boldsymbol{d}$ for $\mathtt{Desc}(e)$. We feed the result into an MLP to
compute the context-based salience score of $t$:
$\mathtt{score}(t|\mathtt{Desc}(e))=\text{MLP}_{\text{S}}(\left[\boldsymbol{h};~\boldsymbol{d}\right]).$ (3)
Parameters of the entire model are jointly trained based on the mean squared
error loss, supervised by labeled entity summaries.
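Continuing the sketch above, the aggregation of Eq. (2) and the scoring of Eq. (3) can be written as follows (`mlp_d` and `mlp_s` again stand in for the trained networks; names are ours):

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def score_triple(h, ts, mlp_d, mlp_s):
    """Eqs. (2)-(3): attention-weighted, permutation-invariant encoding
    of Desc(e), then scoring of the candidate triple.
    h  -- final representation of the candidate triple from Eq. (1)
    ts -- initial representations t_i of all triples in Desc(e)"""
    gs = [mlp_d(t) for t in ts]
    w = np.array([np.exp(cos(h, g)) for g in gs])
    a = w / w.sum()                                # attention weights a_i
    d = sum(ai * gi for ai, gi in zip(a, gs))      # representation d of Desc(e)
    return mlp_s(np.concatenate([h, d]))           # context-based salience
```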
## 3 Experiments
### 3.1 Datasets
We used ESBM v1.2, the largest available benchmark for evaluating general-
purpose entity summarization (https://w3id.org/esbm). For each of 125 entities
in DBpedia and 50 entities in LinkedMDB, this benchmark provided 6 ground-
truth summaries created by different human experts under $k=5$, and another 6
ground-truth summaries under $k=10$. We used the train-valid-test split
specified in the benchmark to perform five-fold cross-validation.
### 3.2 Participating Methods
We compared DeepLENS with 10 baseline methods.
Unsupervised Methods. We compared with 9 unsupervised methods that had been
tested on ESBM: RELIN [2], DIVERSUM [9], FACES [3], FACES-E [4], CD [13],
LinkSUM [10], BAFREC [6], KAFCA [5], and MPSUM [11]. We directly presented
their results reported on the ESBM website.
Supervised Methods. We compared with ESA [12], the only supervised method in
the literature to our knowledge. We reused its open-source implementation and
configuration (https://github.com/WeiDongjunGabriel/ESA). We fed it with
triples sorted in alphabetical order.
For our approach DeepLENS, we used 300-dimensional fastText [1] word embedding
vectors trained on Wikipedia to generate initial representations of triples.
The numbers of hidden units in $\text{MLP}_{\text{C}}$,
$\text{MLP}_{\text{D}}$, and $\text{MLP}_{\text{S}}$ were [64, 64], [64, 64],
and [64, 64, 64], respectively. All hidden layers used ReLU as activation
function. The final output layer of $\text{MLP}_{\text{S}}$ consisted of one
linear unit. We trained the model using Adam optimizer with learning rate
0.01.
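Read literally, this configuration might look as follows in PyTorch (a sketch under our own assumptions: $\boldsymbol{t}$ is $2\times 300=600$-dimensional and $\text{MLP}_{\text{S}}$ takes the $128$-dimensional concatenation $[\boldsymbol{h};\boldsymbol{d}]$; the paper's actual code may differ):

```python
import torch.nn as nn

def mlp(sizes):
    """Stack of Linear+ReLU hidden layers with the given unit counts."""
    layers = []
    for i, o in zip(sizes, sizes[1:]):
        layers += [nn.Linear(i, o), nn.ReLU()]
    return nn.Sequential(*layers)

mlp_c = mlp([600, 64, 64])                     # MLP_C: hidden [64, 64]
mlp_d = mlp([600, 64, 64])                     # MLP_D: hidden [64, 64]
mlp_s = nn.Sequential(mlp([128, 64, 64, 64]),  # MLP_S: hidden [64, 64, 64]
                      nn.Linear(64, 1))        # final single linear unit
```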
For both ESA and DeepLENS, we performed early stopping on the validation set
to choose the number of training epochs from 1–50.
Oracle Method. ORACLE approximated the best possible performance on ESBM and
formed a reference point used for comparisons. It outputted $k$ triples that
most frequently appeared in ground-truth summaries.
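For reference, the ORACLE baseline as described fits in a few lines (a sketch; we assume each summary is a set of hashable triples):

```python
from collections import Counter

def oracle_summary(ground_truth_summaries, k):
    """Approximate upper bound: the k triples appearing most often
    across the ground-truth summaries of an entity."""
    counts = Counter(t for s in ground_truth_summaries for t in s)
    return [t for t, _ in counts.most_common(k)]
```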
### 3.3 Results
Following ESBM, we compared machine-generated summaries with ground-truth
summaries by calculating F1 score, and reported the mean F1 achieved by each
method over all the test entities in a dataset.
Table 1: Average F1 over all the test entities. Significant and insignificant
differences ($p<0.01$) between DeepLENS and each baseline are indicated by
$\blacktriangle$ and $\circ$, respectively.
Method | DBpedia $k=5$ | DBpedia $k=10$ | LinkedMDB $k=5$ | LinkedMDB $k=10$
---|---|---|---|---
RELIN [2] | 0.242 | 0.455 | 0.203 | 0.258
DIVERSUM [9] | 0.249 | 0.507 | 0.207 | 0.358
FACES [3] | 0.270 | 0.428 | 0.169 | 0.263
FACES-E [4] | 0.280 | 0.488 | 0.313 | 0.393
CD [13] | 0.283 | 0.513 | 0.217 | 0.331
LinkSUM [10] | 0.287 | 0.486 | 0.140 | 0.279
BAFREC [6] | 0.335 | 0.503 | 0.360 | 0.402
KAFCA [5] | 0.314 | 0.509 | 0.244 | 0.397
MPSUM [11] | 0.314 | 0.512 | 0.272 | 0.423
ESA [12] | 0.331 | 0.532 | 0.350 | 0.416
DeepLENS | 0.402 ▲▲▲▲▲▲▲▲▲▲ | 0.574 ▲▲▲▲▲▲▲▲▲▲ | 0.474 ▲▲▲▲▲▲▲▲▲▲ | 0.493 ▲▲▲▲▲▲▲▲▲▲
ORACLE | 0.595 | 0.713 | 0.619 | 0.678
Comparison with Baselines. As shown in Table 1, supervised methods were
generally better than unsupervised methods. Our DeepLENS outperformed all the
baselines including ESA. Moreover, two-tailed t-test showed that all the
differences were statistically significant ($p<0.01$) in all the settings.
DeepLENS achieved new state-of-the-art results on the ESBM benchmark. However,
the notable gaps between DeepLENS and ORACLE suggested room for improvement
and were to be closed by future research.
Table 2: Average F1 over all the test entities achieved by different variants of ESA.
Variant | DBpedia $k=5$ | DBpedia $k=10$ | LinkedMDB $k=5$ | LinkedMDB $k=10$
---|---|---|---|---
ESA | 0.331 | 0.532 | 0.350 | 0.416
ESA-text | 0.379 | 0.558 | 0.390 | 0.418
ESA-rnd | 0.116$\pm$0.008 | 0.222$\pm$0.007 | 0.113$\pm$0.015 | 0.219$\pm$0.011
Ablation Study. Compared with ESA, we attributed the better performance of
DeepLENS to two improvements in our implementation: the exploitation of
textual semantics, and the permutation-invariant representation of the triple set.
They were demonstrated by the following ablation study of ESA.
First, we compared two variants of ESA by encoding triples in different ways.
For triple $t$, the original version of ESA encoded the structural features of
$\mathtt{prop}(t)$ and $\mathtt{val}(t)$ using TransE. We implemented ESA-
text, a variant that encoded both $\mathtt{prop}(t)$ and $\mathtt{val}(t)$
using fastText as in our approach. As shown in Table 2, ESA-text slightly
outperformed ESA, showing the usefulness of textual semantics compared with
graph structure used by ESA.
Second, we compared two variants of ESA by feeding with triples in different
orders. The default version of ESA was fed with triples sorted in alphabetical
order for both training and testing. We implemented ESA-rnd, a variant that
was fed with triples in alphabetical order for training but in random order
for testing. We tested ESA-rnd 20 times and reported its mean F1 with standard
deviation. In Table 2, the notable drops from ESA to ESA-rnd showed the
unfavourable sensitivity of BiLSTM used by ESA to the order of input triples.
## 4 Conclusion
We presented DeepLENS, a simple yet effective deep learning model for general-
purpose entity summarization. It has achieved new state-of-the-art results on
the ESBM benchmark, significantly outperforming existing methods. Thus, entity
summarization becomes another research field where a combination of deep
learning and knowledge graphs is likely to shine. However, in DeepLENS we only
exploit textual semantics. In future work, we will incorporate ontological
semantics into our model. We will also revisit the usefulness of structural
semantics.
## Acknowledgments
This work was supported by the National Key R&D Program of China under Grant
2018YFB1005100 and by the Qing Lan Program of Jiangsu Province.
## References
* [1] Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching word vectors with subword information. TACL 5, 135–146 (2017)
* [2] Cheng, G., Tran, T., Qu, Y.: RELIN: relatedness and informativeness-based centrality for entity summarization. In: ISWC 2011, Part I. pp. 114–129 (2011)
* [3] Gunaratna, K., Thirunarayan, K., Sheth, A.P.: FACES: diversity-aware entity summarization using incremental hierarchical conceptual clustering. In: AAAI 2015. pp. 116–122 (2015)
* [4] Gunaratna, K., Thirunarayan, K., Sheth, A.P., Cheng, G.: Gleaning types for literals in RDF triples with application to entity summarization. In: ESWC 2016. pp. 85–100 (2016)
* [5] Kim, E.K., Choi, K.S.: Entity summarization based on formal concept analysis. In: EYRE 2018 (2018)
* [6] Kroll, H., Nagel, D., Balke, W.T.: BAFREC: Balancing frequency and rarity for entity characterization in linked open data. In: EYRE 2018 (2018)
* [7] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: Entity summarization: State of the art and future challenges. CoRR abs/1910.08252 (2019)
* [8] Liu, Q., Cheng, G., Gunaratna, K., Qu, Y.: ESBM: An entity summarization benchmark. In: ESWC 2020 (2020)
* [9] Sydow, M., Pikula, M., Schenkel, R.: The notion of diversity in graphical entity summarisation on semantic knowledge graphs. J. Intell. Inf. Syst. 41(2), 109–149 (2013)
* [10] Thalhammer, A., Lasierra, N., Rettinger, A.: LinkSUM: Using link analysis to summarize entity data. In: ICWE 2016. pp. 244–261 (2016)
* [11] Wei, D., Gao, S., Liu, Y., Liu, Z., Huang, L.: MPSUM: Entity summarization with predicate-based matching. In: EYRE 2018 (2018)
* [12] Wei, D., Liu, Y., Zhu, F., Zang, L., Zhou, W., Han, J., Hu, S.: ESA: Entity summarization with attention. In: EYRE 2019. pp. 40–44 (2019)
* [13] Xu, D., Zheng, L., Qu, Y.: CD at ENSEC 2016: Generating characteristic and diverse entity summaries. In: SumPre 2016 (2016)
# Classification of Doubly Distributive skew Hyperfields and Stringent hypergroups
Nathan Bowler and Ting Su
Department of Mathematics, Universität Hamburg, Germany
###### Abstract.
A hypergroup is stringent if $a\boxplus b$ is a singleton whenever $a\neq-b$.
A hyperfield is stringent if the underlying additive hypergroup is. Every
doubly distributive skew hyperfield is stringent, but not vice versa. We
present a classification of stringent hypergroups, from which a classification
of doubly distributive skew hyperfields follows. It follows from our
classification that every such hyperfield is a quotient of a skew field.
###### Key words and phrases:
hypergroup, hyperring, hyperfield, double distributivity
## 1\. Introduction
The notion of hyperfield was first introduced by Krasner in [Kra57, Kra83]. It
is an algebraic structure similar to a field except that its addition
$\boxplus$ is multivalued. In [Vir10], Viro provided an excellent introduction
to and motivation for hyperfields and introduced several good examples of
hyperfields, including the tropical hyperfield $\mathbb{T}_{+}$, the tropical
real hyperfield $\mathbb{TR}$ and the ultratriangle hyperfield
$\mathbb{T}\triangle$. Viro has also illustrated the utility of
$\mathbb{T}_{+}$ for the foundations of tropical geometry in several
interesting papers (cf. [Vir10, Vir11]).
In [BB16], Baker and Bowler presented an algebraic framework which
simultaneously generalizes the notion of linear subspaces, matroids, oriented
matroids, and valuated matroids, and called the resulting objects matroids
over hyperfields. A matroid over a field $F$ corresponds to a subspace of some
$F^{n}$. A $\mathbb{K}$-matroid is just a matroid. An $\mathbb{S}$-matroid is
an oriented matroid. And a $\mathbb{T}\triangle$-matroid is a valuated
matroid, as defined in [DW92].
Baker and Bowler also provided two natural notions of matroids over a
hyperfield $F$, weak $F$-matroids and strong $F$-matroids, and showed that the
two notions coincide when $F$ has a property called double distributivity. A
hyperfield $F$ is doubly distributive if $(a\boxplus b)(c\boxplus
d)=ac\boxplus ad\boxplus bc\boxplus bd$ for any $a,b,c,d\in F$. Fields,
$\mathbb{K}$, $\mathbb{S}$ and $\mathbb{T}\triangle$ are all doubly
distributive. So too are the other two hyperfields mentioned above,
$\mathbb{T}_{+}$ and $\mathbb{TR}$.
It is these results in tropical geometry and matroid theory which motivate
our interest in doubly distributive hyperfields. More generally, we are also
interested in doubly distributive hyperrings, which were also analysed by
Baker and Bowler. In fact, rather than just hyperfields, they worked with a
more general kind of algebraic object known as tracts (cf. [BB19]). The other
important example of tracts, other than hyperfields, is given by partial fields,
which have also been the subject of much fruitful study. Baker and Bowler
defined a special class of tracts called partial hyperfields, objects based on
hyperrings which generalize both hyperfields and partial fields in a natural
way. The property of double distributivity also extends to hyperrings and thus
to partial hyperfields.
We will classify the doubly distributive skew hyperfields in Section 5. The
classification itself will be described in Section 4, but has the following
important consequence:
###### Definition 1.1.
A valuation $\nu$ of a skew hyperfield $F$ is a map from $F$ to
$G\cup\\{-\infty\\}$, where $(G,<)$ is a linearly ordered group, satisfying
(1)
$\nu(x)=-\infty$ if and only if $x=0$.
(2)
$\nu(xy)=\nu(x)\cdot\nu(y)$.
(3)
$\nu(x)>\nu(y)$ implies $x\boxplus y=\\{x\\}$.
###### Theorem 1.2.
For every doubly distributive skew hyperfield $F$, there is always a valuation
$\nu$ of $F$ such that $\nu^{-1}(1_{G})$ is either the Krasner hyperfield, or
the sign hyperfield, or a skew field.
This compact description is from the paper [BP19].
In particular, since any nontrivial ordered group is infinite, it follows from
our results that the only finite doubly distributive hyperfields are the
Krasner hyperfield, the sign hyperfield and the finite fields.
This classification has a number of applications. For example, we use it in
Section 7 to show that any doubly distributive skew hyperfield is a quotient
of a skew field. Bowler and Pendavingh used it in [BP19] to show that any
doubly distributive skew hyperfield is perfect and to provide vector axioms
for matroids over such skew hyperfields.
Our classification uses a property of the underlying hypergroup which we call
stringency. A hyperfield $F$ is stringent if $a\boxplus b$ is a singleton
whenever $a\neq-b$.
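For instance, the ultratriangle hyperfield $\mathbb{T}\triangle$ mentioned
above is stringent: every element there is its own hyperinverse (as $0\in
x\boxplus x$), so $a\neq-b$ just means $a\neq b$, and then $a\boxplus
b=\\{\max(a,b)\\}$ is a singleton.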
###### Proposition 1.3.
Every doubly distributive skew hyperfield is stringent.
###### Proof.
Let $F$ be a doubly distributive skew hyperfield. Let $a,b\in F^{\times}$ be
such that $a\neq-b$. Let $x,y\in F^{\times}$ be such that $x,y\in a\boxplus
b$. By double distributivity, we have
$(a\boxplus b)(x^{-1}\boxplus-y^{-1})=(a\boxplus b)\cdot
x^{-1}\boxplus(a\boxplus b)\cdot(-y^{-1})\supseteq x\cdot x^{-1}\boxplus
y\cdot(-y^{-1})=1\boxplus-1\ni 0.$
As $a\neq-b$, we have $0\notin a\boxplus b$, so $0$ must lie in
$x^{-1}\boxplus-y^{-1}$, which forces $x^{-1}=y^{-1}$ and hence $x=y$. So
$a\boxplus b$ is a singleton if $a\neq-b$. ∎
However, not every stringent skew hyperfield is doubly distributive. The
following is a counterexample.
###### Example 1.4.
Let $F:=\mathbb{Z}\cup\\{-\infty\\}$ be the stringent hyperfield with
multiplication given by $a\odot b=a+b$ and multiplicative identity $0$.
Hyperaddition is given by
$a\boxplus b=\begin{cases}\\{\max(a,b)\\}&\text{ if $a\neq b$,}\\\
\\{c\,|\,c<a\\}&\text{ if $a=b$,}\end{cases}$
so that the additive identity is $-\infty$. Here we use the standard total
order on $\mathbb{Z}$ and set $-\infty<x$ for all $x\in\mathbb{Z}$.
$F$ is not doubly distributive because
$\displaystyle(0\boxplus 0)\odot(0\boxplus 0)$
$\displaystyle=\\{z\,|\,z<0\\}\odot\\{z\,|\,z<0\\}=\\{z\,|\,z<-1\\},$
$\displaystyle 0\boxplus 0\boxplus 0\boxplus 0$
$\displaystyle=\\{z\,|\,z<0\\}\boxplus\\{z\,|\,z<0\\}=\\{z\,|\,z<0\\}.$
We use our classification of stringent skew hyperfields to derive a
classification of stringent skew hyperrings in Section 6. However, this does
not give a classification of doubly distributive skew hyperrings, since not
every doubly distributive skew hyperring is stringent (see Example 6.2).
In fact, we classify all stringent hypergroups, and our classification of
doubly distributive skew hyperfields follows from this.
###### Definition 1.5.
Let $(G,<)$ be a totally ordered set, let $(F_{g}\,|\,g\in G)$ be a family of
hypergroups with a common identity element $0$ in each $F_{g}$ but otherwise
disjoint, and let $\psi$ be the surjective function from $\bigcup_{g\in
G}F_{g}^{\times}$ to $G$ sending $f$ in $F_{g}^{\times}$ to $g$. We denote the
hyperaddition of $F_{g}$ by $\boxplus_{g}$. For any $g\in G$ we denote by
$g\downarrow$ the set of $h\in G$ with $h<g$.
Then the wedge sum $F=\bigvee_{g\in G}{F_{g}}$ is the hypergroup with ground
set $\bigcup_{g\in G}F_{g}$ and hyperaddition given by
$x\boxplus 0=0\boxplus x=\\{x\\},$
$x\boxplus y=\begin{cases}\\{x\\}&\text{if $\psi(x)>\psi(y)$,}\\\ \\{y\\}&\text{if $\psi(x)<\psi(y)$,}\\\ x\boxplus_{\psi(x)}y&\text{if $\psi(x)=\psi(y)$ and $0\not\in x\boxplus_{\psi(x)}y$,}\\\ (x\boxplus_{\psi(x)}y)\cup\psi^{-1}(\psi(x)\downarrow)&\text{if $\psi(x)=\psi(y)$ and $0\in x\boxplus_{\psi(x)}y$.}\end{cases}$
We can also define $\bigvee_{g\in G}F_{g}$ up to isomorphism if the $F_{g}$'s
don't have the same identity or aren't otherwise disjoint, by replacing the
$F_{g}$'s with suitably chosen isomorphic copies.
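The hyperaddition above is mechanical enough to implement directly. Below is a
minimal Python sketch (all names are ours; a layer is modelled naively as a
table of its hyperaddition). Taking all layers to be copies of the $2$-element
group indexed by $\mathbb{Z}$ recovers the hypergroup of Example 1.4 below:

```python
def wedge_add(x, y, psi, layers, zero=0):
    """Hyperaddition of the wedge sum in Definition 1.5.
    psi maps each nonzero element to its level g; G is ordered by `<`;
    layers[g] maps pairs (x, y) in level g to the set x +_g y."""
    if x == zero:
        return {y}
    if y == zero:
        return {x}
    gx, gy = psi[x], psi[y]
    if gx > gy:
        return {x}
    if gx < gy:
        return {y}
    s = set(layers[gx][(x, y)])        # sum inside the common layer
    if zero in s:                      # absorb all strictly lower layers
        s |= {z for z, g in psi.items() if g < gx}
    return s

# Two layers indexed by 0 < 1, each a copy of the 2-element group:
psi = {'a0': 0, 'a1': 1}
layers = {0: {('a0', 'a0'): {0}}, 1: {('a1', 'a1'): {0}}}
wedge_add('a1', 'a1', psi, layers)   # {0, 'a0'}: drops into lower layers
wedge_add('a0', 'a1', psi, layers)   # {'a1'}: the higher level wins
```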
We will show in Section 3 that this construction always yields a hypergroup,
and we classify the stringent hypergroups as follows:
###### Theorem 1.6.
Every stringent hypergroup is a wedge sum $\bigvee_{g\in G}{F_{g}}$ where each
$F_{g}$ is either a copy of the Krasner hypergroup, or a copy of the sign
hypergroup, or a group.
This classification of hypergroups is used to derive the classification of
doubly distributive skew hyperfields discussed above.
### 1.1. Structure of the paper
After the classification of stringent hypergroups in Section 3, we show in
Section 4 that every stringent skew hyperfield arises from a short exact
sequence of groups, where the first group in the sequence is the
multiplicative group of either the Krasner hyperfield or the sign hyperfield
or a skew field, and the last group in the sequence is a totally ordered
group. The underlying additive hypergroup is a wedge sum of isomorphic copies
of hypergroups. Then we present the classification of doubly distributive skew
hyperfields in Section 5 following from the classification of stringent skew
hyperfields. We show the surprising result that every stringent skew hyperring
is either a skew ring or a stringent skew hyperfield in Section 6. We use our
classification to show that every stringent skew hyperfield is a quotient of a
skew field by some normal subgroup in Section 7. In Appendix A we present a
proof that a construction really gives a skew field and in Appendix B we talk
about the semirings associated to doubly distributive hyperfields.
### Acknowledgements
We thank Matthew Baker and Laura Anderson (second author’s PhD advisor) for
introducing the two authors to each other. We thank Laura Anderson and Tom
Zaslavsky, who gave us important comments on early versions of the work.
Thanks also to Pascal Gollin for asking whether our classification might hold
for all stringent hypergroups.
## 2\. Background
###### Notation 2.1.
Throughout, $G$ and $H$ denote groups.
For a hypergroup (or skew hyperring) $S$, $S^{\times}$ denotes $S-\\{0\\}$.
For a function $f$ from a hypergroup (or skew hyperring) $A$ to a hypergroup
(or skew hyperring) $B$, $\operatorname{supp}(f)$ denotes the support of $f$
(the set of elements of $A$ where the function value is not zero).
### 2.1. Hypergroups, hyperrings and hyperfields
###### Definition 2.2.
A hyperoperation on a set $S$ is a map $\boxplus$ from $S\times S$ to the
collection of non-empty subsets of $S$.
If $A$, $B$ are non-empty subsets of $S$, we define
$A\boxplus B:=\bigcup_{a\in A,b\in B}a\boxplus b$
and we say that $\boxplus$ is associative if $a\boxplus(b\boxplus
c)=(a\boxplus b)\boxplus c$ for all $a,b,c\in S$.
All hyperoperations in this paper will be associative.
###### Definition 2.3.
[Vir10] A hypergroup is a tuple $(G,\boxplus,0)$ where $\boxplus$ is an
associative hyperoperation on $G$ such that:
(1)
$0\boxplus x=x\boxplus 0=\\{x\\}$ for all $x\in G$.
(2)
For every $x\in G$ there is a unique element $x^{\prime}$ of $G$ such that
$0\in x\boxplus x^{\prime}$ and there is a unique element $x^{\prime\prime}$
of $G$ such that $0\in x^{\prime\prime}\boxplus x$. Furthermore,
$x^{\prime}=x^{\prime\prime}$. This element is denoted by $-x$ and called the
hyperinverse of $x$.
(3)
(Invertibility of sums) $x\in y\boxplus z$ if and only if $-x\in-z\boxplus-y$.
A hypergroup is said to be commutative if
(4)
$x\in y\boxplus z$ if and only if $x\in z\boxplus y$.
###### Theorem 2.4.
[Vir10] In Definition 2.3, the axiom (3) can be replaced by
(Reversibility property) $x\in y\boxplus z$ implies $y\in x\boxplus-z$ and
$z\in-y\boxplus x$.
The Reversibility property was introduced by Marshall in [Mar06].
###### Definition 2.5.
A skew hyperring is a tuple $(R,\odot,\boxplus,1,0)$ such that:
(1)
$(R,\odot,1)$ is a monoid.
(2)
$(R,\boxplus,0)$ is a commutative hypergroup.
(3)
(Absorption rule) $x\odot 0=0\odot x=0$ for all $x\in R$.
(4)
(Distributive Law) $a\odot(x\boxplus y)=(a\odot x)\boxplus(a\odot y)$ and
$(x\boxplus y)\odot a=(x\odot a)\boxplus(y\odot a)$ for all $a,x,y\in R$.
A hyperring is a skew hyperring with commutative multiplication.
A skew hyperring $F$ is called a skew hyperfield if $0\neq 1$ and every non-
zero element of $F$ has a multiplicative inverse.
A hyperfield is then a skew hyperfield with commutative multiplication.
###### Definition 2.6.
Let $F$ and $G$ be skew hyperrings. We may define a skew hyperring $F\times G$
with $(x_{1},y_{1})\boxplus(x_{2},y_{2})$ defined as
$(x_{1}\boxplus_{F}x_{2})\times(y_{1}\boxplus_{G}y_{2})$ and multiplication
defined pointwise. Its additive identity is $(0_{F},0_{G})$ and its
multiplicative identity is $(1_{F},1_{G})$. We call $F\times G$ the product of
$F$ and $G$.
For $x,y\in F$, we will sometimes write $xy$ instead of $x\odot y$ if there is
no risk of confusion.
###### Example 2.7.
In [Vir10], Viro provided a good introduction to hyperfields. Several of the
following hyperfields were first introduced there.
(1)
If $F$ is a field, then $F$ is a hyperfield with $a\odot b=a\cdot b$ and
$a\boxplus b=\\{a+b\\}$, for any $a,b\in F$.
(2)
The Krasner hyperfield $\mathbb{K}:=\\{0,1\\}$ has the usual multiplication
rule and hyperaddition is defined by $0\boxplus x=\\{x\\}$ for
$x\in\mathbb{K}$ and $1\boxplus 1=\\{0,1\\}$.
(3)
The sign hyperfield $\mathbb{S}:=\\{0,1,-1\\}$ has the usual multiplication
rule and hyperaddition is defined by $0\boxplus x=\\{x\\},x\boxplus x=\\{x\\}$
for $x\in\mathbb{S}$, and $1\boxplus-1=\\{0,1,-1\\}$.
(4)
The triangle hyperfield $\triangle:=\mathbb{R}_{\geq 0}$ has the usual
multiplication rule and hyperaddition is defined by $x\boxplus
y=\\{z\,|\,|x-y|\leq z\leq x+y\\}$.
(5)
The tropical hyperfield $\mathbb{T}_{+}:=\mathbb{R}\cup\\{-\infty\\}$ has
multiplication defined by $x\odot y=x+y$ (with $-\infty$ as an absorbing
element), for $x,y\in\mathbb{T}_{+}$. Hyperaddition is defined by
$x\boxplus y=\begin{cases}\\{\max(x,y)\\}&\text{ if $x\neq y$,}\\\
\\{z\,|\,z\leq x\\}&\text{ if $x=y$.}\end{cases}$
Here we use the standard total order on $\mathbb{R}$ and set $-\infty<x$ for
all $x\in\mathbb{R}$. The additive identity is $-\infty$ and the
multiplicative identity is $0$.
(6)
The tropical phase hyperfield $\Phi:=S^{1}\cup\\{0\\}$ has the usual
multiplication rule and hyperaddition is defined by $0\boxplus x=\\{x\\}$,
$x\boxplus-x=S^{1}\cup\\{0\\}$ and $x\boxplus
y=\\{\frac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{\geq 0},a+b\neq 0\\}$ for
$x,y\in S^{1}$ with $y\neq-x$. (This is called the phase hyperfield in Viro’s
paper, but more recent papers have often worked with the phase hyperfield (7)
described next. The confusion on this point is exacerbated by the fact that
Viro incorrectly claims that his phase hyperfield is the same as the quotient
hyperfield of the complex numbers by the positive real numbers, but this
construction actually gives the hyperfield (7).)
(7)
The phase hyperfield $\mathbb{P}:=S^{1}\cup\\{0\\}$ has the usual
multiplication rule and hyperaddition is defined by $0\boxplus x=\\{x\\}$,
$x\boxplus-x=\\{x,-x,0\\}$ and $x\boxplus
y=\\{\frac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{>0}\\}$ for $x,y\in S^{1}$
with $y\neq-x$.
(8)
The tropical real hyperfield $\mathbb{TR}:=\mathbb{R}$ has the usual
multiplication rule and hyperaddition is defined by
$x\boxplus y=\begin{cases}\\{x\\}&\text{ if $|x|>|y|$,}\\\ \\{y\\}&\text{ if
$|x|<|y|$,}\\\ \\{x\\}&\text{ if $x=y$,}\\\ \\{z\,|\,|z|\leq|x|\\}&\text{ if
$x=-y$.}\end{cases}$
(9)
The tropical complex hyperfield $\mathbb{TC}:=\mathbb{C}$ has the usual
multiplication rule and hyperaddition is defined by
$x\boxplus y=\begin{cases}\\{x\\}&\text{if $|x|>|y|$,}\\\ \\{y\\}&\text{if
$|x|<|y|$,}\\\ \\{|x|\dfrac{ax+by}{|ax+by|}\,|\,a,b\in\mathbb{R}_{\geq
0},a+b\neq 0\\}&\text{if $|x|=|y|$ and $x\neq-y$,}\\\
\\{z\,|\,|z|\leq|x|\\}&\text{if $x=-y$.}\end{cases}$
(10)
The ultratriangle hyperfield $\mathbb{T}\triangle:=\mathbb{R}_{\geq 0}$
(denoted by $\mathbb{Y}_{\times}$ in [Vir10] and $\mathbb{T}$ in [BB19]) has
the usual multiplication rule and hyperaddition is defined by
$x\boxplus y=\begin{cases}\\{\max(x,y)\\}&\text{ if $x\neq y$,}\\\
\\{z\,|\,z\leq x\\}&\text{ if $x=y$.}\end{cases}$
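As a concrete illustration (ours, not from the paper), the finite hyperfields
$\mathbb{K}$ and $\mathbb{S}$ above can be written out as Python hyperaddition
tables, which also makes the stringency of both visible:

```python
# Hyperaddition of the Krasner hyperfield K = {0, 1}; note -1 = 1 in K.
K_add = {
    (0, 0): {0}, (0, 1): {1}, (1, 0): {1},
    (1, 1): {0, 1},                       # 1 + 1 = {0, 1}
}

# Hyperaddition of the sign hyperfield S = {0, 1, -1}.
S_add = {(0, x): {x} for x in (0, 1, -1)}
S_add.update({(x, 0): {x} for x in (1, -1)})
S_add.update({(1, 1): {1}, (-1, -1): {-1},
              (1, -1): {0, 1, -1}, (-1, 1): {0, 1, -1}})

# Both are stringent: x + y is multivalued only when y is the hyperinverse -x.
assert all(len(s) == 1 for (x, y), s in K_add.items() if y != x)
assert all(len(s) == 1 for (x, y), s in S_add.items() if y != -x)
```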
###### Definition 2.8.
[Vir10, BB19] A skew hyperring $R$ is said to be doubly distributive if for
any $a$, $b$, $c$ and $d$ in $R$, we have $(a\boxplus b)(c\boxplus
d)=ac\boxplus ad\boxplus bc\boxplus bd.$
###### Example 2.9.
Fields, $\mathbb{K}$, $\mathbb{S}$, $\mathbb{T}_{+}$, $\mathbb{TR}$,
$\mathbb{T}\triangle$ are all doubly distributive, but $\triangle$,
$\mathbb{P}$, $\Phi$ and $\mathbb{TC}$ are not doubly distributive.
###### Definition 2.10.
A hypergroup $G$ is said to be stringent if for any $a,b\in G$ the set
$a\boxplus b$ is a singleton whenever $a\neq-b$.
A skew hyperring is said to be stringent if its underlying additive hypergroup
is stringent.
### 2.2. Homomorphism
###### Definition 2.11.
[BB16, Pen18] A hypergroup homomorphism is a map $f:G\rightarrow H$ such that
$f(0)=0$ and $f(x\boxplus y)\subseteq f(x)\boxplus f(y)$ for all $x,y\in G$.
A skew hyperring homomorphism is a map $f:R\rightarrow S$ which is a
homomorphism of additive commutative hypergroups as well as a homomorphism of
multiplicative monoids (i.e., $f(1)=1$ and $f(x\odot y)=f(x)\odot f(y)$ for
$x,y\in R$).
A skew hyperfield homomorphism is a homomorphism of the underlying skew
hyperrings.
A hypergroup (resp. skew hyperring, skew hyperfield) isomorphism is a
bijection $f:G\rightarrow H$ which is a hypergroup (resp. skew hyperring, skew
hyperfield) homomorphism and whose inverse is also a hypergroup (resp. skew
hyperring, skew hyperfield) homomorphism.
###### Example 2.12.
The map $\exp:\mathbb{T}_{+}\rightarrow\mathbb{T}\triangle$ is a hyperfield
isomorphism.
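Indeed, $\exp$ is a monotone bijection from $\mathbb{R}\cup\\{-\infty\\}$ to
$\mathbb{R}_{\geq 0}$ (with $\exp(-\infty)=0$) satisfying $\exp(x\odot
y)=\exp(x)\odot\exp(y)$, and it carries $\max$ to $\max$ and the down-set
$\\{z\,|\,z\leq x\\}$ to $\\{z\,|\,z\leq\exp(x)\\}$, so it respects the two
hyperadditions.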
## 3\. Classification of stringent hypergroups
Our aim in this section is to prove Theorem 1.6, the Classification Theorem
for stringent hypergroups. We will work with the definition of wedge sums
given as Definition 1.5. First we will show that $F:=\bigvee_{g\in G}F_{g}$ is
indeed a hypergroup.
###### Lemma 3.1.
$F$ is again a hypergroup. If every hypergroup in $(F_{g}\,|\,g\in G)$ is
stringent, then so is $F$. If every hypergroup in $(F_{g}\,|\,g\in G)$ is
commutative, then so is $F$.
###### Proof.
For associativity, suppose we have $x_{1},x_{2},x_{3}\in F$. If any of them is
0, then associativity is clear, so suppose that each $x_{i}$ is nonzero. If one
of the elements $\psi(x_{i})$ of $G$, say $\psi(x_{i_{0}})$, is bigger than
the others, then $x_{1}\boxplus(x_{2}\boxplus
x_{3})=\\{x_{i_{0}}\\}=(x_{1}\boxplus x_{2})\boxplus x_{3}$. If one of the
$\psi(x_{i})$ is smaller than the others, then both
$x_{1}\boxplus(x_{2}\boxplus x_{3})$ and $(x_{1}\boxplus x_{2})\boxplus x_{3}$
evaluate to the sum of the other two $x_{j}$. So we may suppose that all
$\psi(x_{i})$ are equal, taking the common value $g$. If $0\not\in
x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, then both
$x_{1}\boxplus(x_{2}\boxplus x_{3})$ and $(x_{1}\boxplus x_{2})\boxplus x_{3}$
evaluate to $x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, whereas if $0\in
x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3}$, then both evaluate to
$(x_{1}\boxplus_{g}x_{2}\boxplus_{g}x_{3})\cup\psi^{-1}(g\downarrow)$. The
hyperinverse of 0 is 0 and the hyperinverse of any other $x$ is its
hyperinverse in $F_{\psi(x)}$, and $0$ is the additive identity.
For invertibility of sums, suppose we have $x,y,z\in F$. We would like to show
that $x\in y\boxplus z$ if and only if $-x\in-z\boxplus-y$. It suffices to
prove one direction, say if $x\in y\boxplus z$, then $-x\in-z\boxplus-y$. If
$\psi(y)<\psi(z)$, then $x\in y\boxplus z=\\{z\\}$ and $\psi(-y)<\psi(-z)$. So
$-z\boxplus-y=\\{-z\\}=\\{-x\\}$. Similarly, we have if $\psi(y)>\psi(z)$,
then $-z\boxplus-y=\\{-x\\}$. If $\psi(x)=\psi(y)=\psi(z)$, then the statement
holds by the reversibility of the hypergroup
$(F_{\psi(y)},\boxplus_{\psi(y)},0)$. Otherwise we have
$\psi(x)<\psi(y)=\psi(z)$, and so $y=-z$. Then $\psi(-x)<\psi(-y)=\psi(-z)$
and so $-x\in-z\boxplus-y$.
Then, we would like to show that $F$ is stringent if every hypergroup $F_{g}$
in $(F_{g}\,|\,g\in G)$ is. By definition of $F$, we just need to show that
for any $x,y\in F$ with $\psi(x)=\psi(y)$ and $0\not\in x\boxplus_{\psi(x)}y$,
$x\boxplus y$ is a singleton. As $F_{\psi(x)}$ is stringent and $0\not\in
x\boxplus_{\psi(x)}y$, then $x\boxplus_{\psi(x)}y$ is a singleton. So
$x\boxplus y=x\boxplus_{\psi(x)}y$ is also a singleton.
Finally, it is clear that $\boxplus$ is commutative if each $\boxplus_{g}$ is
commutative. ∎
Now we begin the proof of the Classification Theorem. We first introduce a
useful lemma. Note that this lemma automatically holds for stringent
commutative hypergroups, so readers only interested in that case may skip the
proof.
###### Lemma 3.2.
Let $F$ be a stringent hypergroup. If $y\in x\boxplus y$, then $y\in y\boxplus
x$.
###### Proof.
We will divide the proof into four cases.
_Case 1:_ If $x=y$, this is immediate.
_Case 2:_ If $x=-y$, then by reversibility we get
$y\in x\boxplus y\Rightarrow y\in-y\boxplus y\Rightarrow y\in y\boxplus
y\Rightarrow y\in y\boxplus-y\Rightarrow y\in y\boxplus x.$
_Case 3:_ If $y=-y$, then by reversibility and case 2 we get
$y\in x\boxplus y\Rightarrow x\in y\boxplus-y\Rightarrow x\in-y\boxplus
y\Rightarrow y\in y\boxplus x.$
_Case 4:_ Now we suppose $x\notin\\{y,-y\\}$ and $y\neq-y$. Let $z\in
F^{\times}$ be such that $y\boxplus x=\\{z\\}$ and let $t\in F^{\times}$ be
such that $-y\boxplus-y=\\{t\\}$. Then by associativity we get
$z\boxplus y\boxplus t=(y\boxplus x)\boxplus
y\boxplus(-y\boxplus-y)=y\boxplus(x\boxplus y)\boxplus-y\boxplus-y=y\boxplus
y\boxplus-y\boxplus-y\ni 0.$
So we get $0\in z\boxplus y\boxplus t$, thus $-z\in y\boxplus t$. As
$t\in-y\boxplus-y$, we have $-y\in y\boxplus t$. So $-z,-y\in y\boxplus t$.
Then by stringency we get either $z=y$ or $t=-y$. If $z=y$, then we are done.
Now assume $t=-y$. Thus $-z\in y\boxplus t=y\boxplus-y$, and so
$-y\in-y\boxplus z$. As $y\boxplus x=\\{z\\}$, we have $x\in-y\boxplus z$. So
$-y,x\in-y\boxplus z$. Then by stringency we get either $x=-y$ or $z=y$. By
case 2, the statement holds. ∎
Now we define a relation on $F^{\times}$ which roughly corresponds to the
ordering of $G$.
###### Definition 3.3.
We define a relation $<_{F}$ on $F^{\times}$ by $x<_{F}y$ if $x\boxplus
y=y\boxplus x=\\{y\\}$ but $x\neq y$.
###### Lemma 3.4.
$<_{F}$ is a strict partial order on $F^{\times}$.
###### Proof.
Irreflexivity is built into the definition, so it remains to check
transitivity. Suppose that $x<_{F}y<_{F}z$. Then $x\boxplus z=x\boxplus
y\boxplus z=y\boxplus z=\\{z\\}$. Similarly, $z\boxplus x=\\{z\\}$. We cannot
have $x=z$, since then $\\{y\\}=y\boxplus x=y\boxplus z=\\{z\\}$, so $y=z$,
which is a contradiction. ∎
###### Lemma 3.5.
If $x<_{F}y$, then
(1)
$\pm x<_{F}\pm y$.
(2)
for any $z\in F^{\times}$ we have either $x<_{F}z$ or $z<_{F}y$.
###### Proof.
(1)
It suffices to prove that $-x<_{F}y$ by invertibility of sums. As $x<_{F}y$,
then $x\neq-y$ since $0\in-y\boxplus y$. As $x\boxplus y=\\{y\\}$, then
$y\in-x\boxplus y$. By stringency, $-x\boxplus y=\\{y\\}$. Similarly,
$y\boxplus-x=\\{y\\}$. So $-x<_{F}y$.
(2)
By (1), we have $\pm x<_{F}\pm y$. Suppose that $z\not<_{F}y$. If
$z\in\\{y,-y\\}$, then we have $x<_{F}z$. Otherwise, $y\not\in z\boxplus y$
and $y\notin y\boxplus z$ by Lemma 3.2. Then $0\not\in z\boxplus y\boxplus-y$
and $0\not\in-y\boxplus y\boxplus z$. So by stringency, we have $z\boxplus
y\boxplus-y=\\{z\\}$ and $-y\boxplus y\boxplus z=\\{z\\}$. However, $x\in
y\boxplus-y$ and $x\in-y\boxplus y$, since $x<_{F}y$. So $z\boxplus x=\\{z\\}$
and $x\boxplus z=\\{z\\}$. Now if $z\neq x$ this implies that $x<_{F}z$, but
if $z=x$ then we have $z<_{F}y$.
∎
Now we define a relation $\sim_{F}$ on $F^{\times}$ by $x\sim_{F}y$ if and
only if both $x\not<_{F}y$ and $y\not<_{F}x$.
###### Lemma 3.6.
$\sim_{F}$ is an equivalence relation.
###### Proof.
$\sim_{F}$ is clearly reflexive and symmetric. For transitivity, suppose that
$x\sim_{F}y$ and $y\sim_{F}z$. If $x<_{F}z$ then either $x<_{F}y$,
contradicting $x\sim_{F}y$, or else $y<_{F}z$, contradicting $y\sim_{F}z$, so
this is impossible. Similarly we have $z\not<_{F}x$. So $x\sim_{F}z$. ∎
The following results are obvious and we will put them together.
###### Lemma 3.7.
(1)
If $x\sim_{F}y<_{F}z$ or $x<_{F}y\sim_{F}z$, then $x<_{F}z$.
(2)
The relation $<_{F}$ lifts to a relation (denoted by $<_{F}^{\prime}$) on
the set $G$ of $\sim_{F}$-equivalence classes and $(G,<_{F}^{\prime})$ is a
totally ordered set.
(3)
For every $x\in F^{\times}$, $-x\sim_{F}x$.
(4)
Let $x,y,z\in F^{\times}$ with $x\neq-y$, $y\neq-z$ and $z\neq-x$. If $0\in
x\boxplus y\boxplus z$, then $x\sim_{F}y\sim_{F}z$.
###### Proof.
For (1), the proof is trivial. (1) implies (2).
(3) As $0\in x\boxplus-x$, we have $x\not<_{F}-x$ and $-x\not<_{F}x$. So
$-x\sim_{F}x$.
(4) If not, then without loss of generality we have $x<_{F}y$, and so $-z\in
x\boxplus y=\\{y\\}$, giving $y=-z$, contradicting our assumptions. ∎
###### Lemma 3.8.
Let $(x_{i}\,|\,i\in I)$ be a finite family of elements of $F$, and $z\in F$
with $x_{i}<_{F}z$ for all $i\in I$. Then for any $y\in\boxplus_{i\in I}x_{i}$
we have $y<_{F}z$.
###### Proof.
It suffices to prove this when $I$ has just two elements, say $x_{1}$ and
$x_{2}$, since the general result then follows by induction. Suppose
$x_{1},x_{2}<_{F}z$ and $y\in x_{1}\boxplus x_{2}$, then we have
$y\boxplus z\subseteq x_{1}\boxplus x_{2}\boxplus z=x_{1}\boxplus z=\\{z\\}$
and
$z\boxplus y\subseteq z\boxplus x_{1}\boxplus x_{2}=z\boxplus x_{2}=\\{z\\}.$
So $y\boxplus z=\\{z\\}$ and $z\boxplus y=\\{z\\}$. If $z\in x_{1}\boxplus
x_{2}$ then $-x_{1}\in x_{2}\boxplus-z=\\{-z\\}$, contradicting $x_{1}<_{F}z$.
So $z\not\in x_{1}\boxplus x_{2}$, and so $z\neq y$. So $y<_{F}z$. ∎
It follows from the above results that the sum $x\boxplus y$ is given by
$\\{x\\}$ if $x>_{F}y$, by $\\{y\\}$ if $x<_{F}y$, by $\\{z\\}$ for some $z$
in the $\sim_{F}$-equivalence class of $x$ and $y$ if $x\sim_{F}y$ but
$x\neq-y$, and by some subset of that class together with
$\\{t\,|\,t<_{F}x\\}\cup\\{0\\}$ if $x=-y$. This looks very similar to the
hyperaddition given in Definition 1.5.
We now want to consider the structure of the equivalence classes. Let $g$ be
an equivalence class in $G$ and let $F_{g}$ be the set $g\cup\\{0\\}$. We can
define a multivalued binary operation $\boxplus_{g}$ on $F_{g}$ by
$x\boxplus_{g}y=(x\boxplus y)\cap F_{g}$.
###### Lemma 3.9.
For any element $g$ in $G$, $F_{g}$ is again a hypergroup, with hyperaddition
given by $\boxplus_{g}$.
###### Proof.
For every $x\in F_{g}$, we have $0\boxplus_{g}x=\\{x\\}\cap F_{g}=\\{x\\}$.
Suppose $0\in x\boxplus_{g}y$, then $0\in x\boxplus y$, and so $y=-x$.
Similarly, if $0\in y\boxplus_{g}x$, then $y=-x$.
For invertibility of sums, let $x,y,z\in F_{g}$ with $x\in y\boxplus_{g}z$.
Then we have $x\in y\boxplus z$. By invertibility of sums of $F$,
$-x\in-z\boxplus-y$. So $-x\in-z\boxplus_{g}-y$.
For associativity, suppose we have $x,y,z\in F_{g}$. We would like to show
that
$(x\boxplus_{g}y)\boxplus_{g}z=x\boxplus_{g}(y\boxplus_{g}z).$
Let $t\in F_{g}$. Let us first show that $t\in x\boxplus_{g}(y\boxplus_{g}z)$
if and only if $t\in x\boxplus(y\boxplus z)$. It is clear that
$x\boxplus_{g}(y\boxplus_{g}z)\subseteq x\boxplus(y\boxplus z)$. So it
suffices to prove the other direction. We suppose that $t\in
x\boxplus(y\boxplus z)$. Then there exists $k\in F$ such that $k\in y\boxplus
z$ and $t\in x\boxplus k$. If $k\in F_{g}$, then we are done. If not, we have
$y=-z$ and $k<_{F}y$. So we also have $k<_{F}x$, and so $t=x\in
x\boxplus_{g}0\subseteq x\boxplus_{g}(y\boxplus_{g}z)$. Similarly, we can also
get $t\in(x\boxplus_{g}y)\boxplus_{g}z$ if and only if $t\in(x\boxplus
y)\boxplus z$. By associativity of $F$, $(x\boxplus y)\boxplus
z=x\boxplus(y\boxplus z)$. So
$(x\boxplus_{g}y)\boxplus_{g}z=x\boxplus_{g}(y\boxplus_{g}z).$ ∎
###### Lemma 3.10.
For any element $g$ in $G$, $F_{g}$ is either isomorphic to $\mathbb{K}$ or
isomorphic to $\mathbb{S}$ or is a group.
###### Proof.
For any $y$ and any $x$ with $x\in y\boxplus-y$, we have $y\in x\boxplus y$
and so $x<_{F}y$ unless $x\in\\{-y,0,y\\}$. So for any $y\in F_{g}$ we have
$y\boxplus_{g}-y\subseteq\\{-y,0,y\\}$. Now suppose that there is some $y\in
F_{g}$ with $y\boxplus_{g}-y\neq\\{0\\}$. Then $y$ is nonzero and
$y,-y\in-y\boxplus_{g}y$. Suppose for a contradiction that there is some $z\in
F_{g}\setminus\\{-y,0,y\\}$, and let $t$ be the unique element of $-y\boxplus
z$. Then by Lemma 3.5, $t\notin\\{y,-y\\}$, since $z\not<_{F}-y$. So
$y\boxplus t=\\{z\\}$. Thus
$y\in
y\boxplus_{g}0\subseteq(-y\boxplus_{g}y)\boxplus_{g}(t\boxplus_{g}-t)=-y\boxplus_{g}(y\boxplus_{g}t)\boxplus_{g}-t=-y\boxplus_{g}z\boxplus_{g}-t=t\boxplus_{g}-t,$
and so $y\in\\{-t,0,t\\}$, which is the desired contradiction.
So if there is any $y$ with $y\boxplus_{g}-y\neq\\{0\\}$, then
$F_{g}=\\{-y,0,y\\}$. It is now not hard to check that in this case if $y=-y$
then $F_{g}\cong\mathbb{K}$, and if $y\neq-y$ then $F_{g}\cong\mathbb{S}$. On
the other hand, if there is no such $y$ then the hyperaddition on $F_{g}$ is
single-valued, and so $F_{g}$ is a group. ∎
We can finally prove the Classification Theorem.
###### Proof of Theorem 1.6.
Let $H$ be $F^{\times}$, let $G$ be given as above and let $\psi$ be the map
sending an element $h$ of $H$ to its equivalence class in $G$. For any $x$ and
$y$ in $H$, if $\psi(x)>_{F}^{\prime}\psi(y)$ then $x>_{F}y$ and so $x\boxplus
y=\\{x\\}$. Similarly if $\psi(x)<_{F}^{\prime}\psi(y)$ then $x\boxplus
y=\\{y\\}$. If $\psi(x)=\psi(y)$ then $x\boxplus_{\psi(x)}y=(x\boxplus y)\cap
F_{\psi(x)}$. So by the remarks following Lemma 3.8 we have that the
hyperaddition of $F$ agrees with that of $\bigvee_{g\in G}F_{g}$ in this case
as well. ∎
## 4\. Classification of stringent skew hyperfields
In this section, we will present the classification of stringent skew
hyperfields. We will first introduce a construction of skew hyperfields
arising from short exact sequences.
###### Definition 4.1.
Let $F$ be a skew hyperfield and let $G$ be a totally ordered group. Suppose
that we have a short exact sequence of groups
$1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,.$
Since $\varphi$ is injective, by replacing $H$ with an isomorphic copy if
necessary we may (and shall) suppose that $\varphi$ is the identity. As usual,
we define $x^{h}$ to be $h^{-1}\cdot x\cdot h$ for $x,h\in H$. We extend this
operation by setting $0_{F}^{h}:=0_{F}$. We say that the short exact sequence
has stable sums if for each $h\in H$ the operation $x\mapsto x^{h}$ is an
automorphism of $F$ (as a skew hyperfield). Since this operation clearly
preserves the multiplicative structure, this is equivalent to the condition
that it is always an automorphism of the underlying additive hypergroup.
Furthermore, any short exact sequence as above with $H$ abelian automatically
has stable sums.
Suppose now that we have a short exact sequence with stable sums as above.
Then we may define a hyperfield with multiplicative group $H$ as follows. We
begin by choosing some object not in $H$ to serve as the additive identity,
and we denote this object by $0$. For each $g$ in $G$, let $A_{g}$ be
$\psi^{-1}(g)\cup\\{0\\}$. For any $h$ in $\psi^{-1}(g)$ there is a bijection
$\lambda_{h}$ from $F$ to $A_{g}$ sending $0_{F}$ to $0$ and $x$ to $h\cdot x$
for $x\in F^{\times}$, and so there is a unique hypergroup structure on
$A_{g}$ making $\lambda_{h}$ an isomorphism of hypergroups. Furthermore, this
structure is independent of the choice of $h$ since for
$h_{1},h_{2}\in\psi^{-1}(g)$ the map
$\lambda_{h_{1}}^{-1}\cdot\lambda_{h_{2}}$ is just left multiplication by
$h_{1}^{-1}\cdot h_{2}$, which is an automorphism of the additive hypergroup
of $F$. In this way we obtain a well defined hypergroup structure on $A_{g}$,
whose hyperaddition we denote by $\boxplus_{g}$.
Then the $G$-layering $F\rtimes_{H,\psi}G$ of $F$ along this short exact
sequence has as ground set $H\cup\\{0\\}$. Multiplication is given by $x\cdot
y=0$ if $x$ or $y$ is $0$ and by the multiplication of $H$ otherwise.
$H\cup\\{0\\}$ is the underlying set of the hypergroup $\bigvee_{g\in
G}A_{g}$, and we take the hyperaddition of $F\rtimes_{H,\psi}G$ to be given by
that of this hypergroup. Explicitly; the hyperaddition is given by taking 0 to
be the additive identity and setting
$x\boxplus y=\begin{cases}\\{x\\}&\text{if }\psi(x)>\psi(y),\\\
\\{y\\}&\text{if }\psi(x)<\psi(y),\\\ x\boxplus_{\psi(x)}y&\text{if
}\psi(x)=\psi(y)\text{ and }0\not\in x\boxplus_{\psi(x)}y,\\\
(x\boxplus_{\psi(x)}y)\cup\psi^{-1}(\psi(x)\downarrow)&\text{if
}\psi(x)=\psi(y)\text{ and }0\in x\boxplus_{\psi(x)}y.\end{cases}$
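To make the case analysis above concrete, the following is a minimal Python sketch (our illustration, not part of the paper) of this hyperaddition, instantiated so that $F$ is the Krasner hyperfield, $H=G=(\mathbb{R},+)$ and $\psi$ is the identity, which yields the tropical hyperfield of Example 4.3 below. Since hypersums may be infinite downward sets, results are returned symbolically.

```python
NEG_INF = float("-inf")   # plays the role of the additive identity 0

def tropical_hyperadd(x, y):
    """Return ('single', a) for {a}, or ('down', a) for {z : z <= a} with 0."""
    if x == NEG_INF:              # x [+] 0 = {x}
        return ("single", y)
    if y == NEG_INF:
        return ("single", x)
    if x > y:                     # psi(x) > psi(y): the higher layer wins
        return ("single", x)
    if x < y:                     # psi(x) < psi(y)
        return ("single", y)
    # psi(x) = psi(y): in the Krasner layer, 0 lies in 1 [+] 1 = {0, 1},
    # so the sum is the layer together with psi^{-1}(psi(x) down) and 0.
    return ("down", x)

assert tropical_hyperadd(3.0, 1.0) == ("single", 3.0)
assert tropical_hyperadd(2.0, 2.0) == ("down", 2.0)
```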
###### Lemma 4.2.
$F\rtimes_{H,\psi}G$ is again a skew hyperfield. If $F$ is stringent, then so
is $F\rtimes_{H,\psi}G$.
###### Proof.
As shown in Lemma 3.1, $\bigvee_{g\in G}A_{g}$ is a commutative hypergroup. So
it suffices to show that $\cdot$ distributes over $\boxplus$. For left
distributivity, we must prove an equation of the form
$x_{1}\cdot(x_{2}\boxplus x_{3})=x_{1}\cdot x_{2}\boxplus x_{1}\cdot x_{3}$.
As usual, if any of the $x_{i}$ is 0, then this is trivial, so we suppose that
each $x_{i}$ is in $H$. If $\psi(x_{2})>\psi(x_{3})$, then both sides are
equal to $x_{1}\cdot x_{2}$. If $\psi(x_{2})<\psi(x_{3})$, then both sides are
equal to $x_{1}\cdot x_{3}$. So we may assume that $\psi(x_{2})=\psi(x_{3})$
and we call their common value $g$. Then
$x_{2}\boxplus_{g}x_{3}=\lambda_{x_{2}}(1\boxplus_{F}x_{2}^{-1}\cdot x_{3})$
and $(x_{1}\cdot x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot
x_{3})=\lambda_{x_{1}\cdot x_{2}}(1\boxplus_{F}x_{2}^{-1}\cdot x_{3})$. So if
$0\not\in x_{2}\boxplus_{g}x_{3}$, then also $0\not\in(x_{1}\cdot
x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot x_{3})$, and so both sides of
the equation are equal to $x_{1}\cdot(x_{2}\boxplus_{g}x_{3})$. If $0\in
x_{2}\boxplus_{g}x_{3}$, then also $0\in(x_{1}\cdot
x_{2})\boxplus_{\psi(x_{1})\cdot g}(x_{1}\cdot x_{3})$, and so both sides of
the equation are equal to $x_{1}\cdot(x_{2}\boxplus_{g}x_{3})\cup
x_{1}\cdot\psi^{-1}(g\downarrow)$.
For the right distributivity, we need to consider bijections
$\lambda_{h}^{\prime}\colon F\to A_{\psi(h)}$ similar to the $\lambda_{h}$. We
take $\lambda_{h}^{\prime}(x)$ to be $x\cdot h$ for $x\in F^{\times}$ and to
be $0$ for $x=0_{F}$. Then since $\lambda_{h}^{\prime}(x)=\lambda_{h}(x)^{h}$
for any $x$ and the short exact sequence has stable sums, the
$\lambda_{h}^{\prime}$ are also hyperfield isomorphisms. So we may argue as
above but with the $\lambda_{h}^{\prime}$ in place of the $\lambda_{h}$.
Finally, we must show that $F\rtimes_{H,\psi}G$ is stringent if $F$ is. By
definition of $F\rtimes_{H,\psi}G$, we just need to show that for $x,y\in
F\rtimes_{H,\psi}G$ with $\psi(x)=\psi(y)$ and $0\not\in
x\boxplus_{\psi(x)}y$, $x\boxplus y$ is a singleton. As $F$ is stringent and
$0\not\in x\boxplus_{\psi(x)}y$, then $x\boxplus_{\psi(x)}y$ is a singleton.
So $x\boxplus y=x\boxplus_{\psi(x)}y$ is also a singleton. ∎
Now let us see some interesting examples of hyperfields constructed in this
way.
###### Example 4.3.
If $F$ is the Krasner hyperfield, $G$ and $H$ are both the additive group of
real numbers, and $\psi$ is the identity, then $F\rtimes_{H,\psi}G$ is the
tropical hyperfield.
###### Example 4.4.
The hyperfield $F:=\mathbb{Z}\cup\\{-\infty\\}$ of Example 1.4 arises from the
short exact sequence of groups
$0\to
GF(2)^{\times}\xrightarrow{\varphi}\mathbb{Z}\xrightarrow{\psi}\mathbb{Z}\to
0.$
###### Example 4.5.
In [AD19], Anderson and Davis drew a diagram encoding many popular and
important hyperfields and the homomorphisms between them, as follows.

[Diagram from [AD19]: the hyperfields $\mathbb{R}$, $\mathbb{C}$, $\triangle$, $\mathbb{TR}$, $\mathbb{TC}$, $\mathbb{T}\triangle$, $\mathbb{S}$, $\Phi$, $\mathbb{K}$ and $\mathbb{P}$, connected by absolute value maps $|\,\,|$ and phase maps ph.]
The diagram with the solid arrows commutes. The four dashed arrows are
inclusions giving sections (one-sided inverses). Here ph is the _phase map_
ph$(x)=x/|x|$ for $x\neq 0$ and ph$(0)=0$. In each of the ten hyperfields, the
underlying set is a subset of the complex numbers closed under multiplication,
and in each hyperfield the multiplication, the additive identity, and the
multiplicative identity coincide with those of the complex numbers.
Our classification clarifies the relationship between the hyperfields in each
column: we can construct each hyperfield in the bottom row from the
corresponding element of the row just above it and the ordered group
$\mathbb{R}_{>0}$.
1. (1)
From the short exact sequence of groups
$1\to\mathbb{S}^{\times}\rightarrow\mathbb{R}^{\times}\rightarrow\mathbb{R}_{>0}\to
1,$
we can get the tropical real hyperfield
$\mathbb{TR}=\mathbb{S}\rtimes\mathbb{R}_{>0}$.
2. (2)
From the short exact sequence of groups
$1\to\Phi^{\times}\rightarrow\mathbb{C}^{\times}\rightarrow\mathbb{R}_{>0}\to
1,$
we can get the tropical complex hyperfield
$\mathbb{TC}=\Phi\rtimes\mathbb{R}_{>0}$.
3. (3)
From the short exact sequence of groups
$1\to\mathbb{K}^{\times}\rightarrow\mathbb{R}_{>0}\rightarrow\mathbb{R}_{>0}\to
1,$
we can get the ultratriangle hyperfield
$\mathbb{T}\triangle=\mathbb{K}\rtimes\mathbb{R}_{>0}$.
Since in each column the second element is obtained as a quotient of the first
by an $\mathbb{R}_{>0}$-subgroup, this operation of putting back the factor of
$\mathbb{R}_{>0}$ yields a hyperfield on the same ground set as the top
element of the column.
Our aim is to show that every stringent skew hyperfield is of the form
$F\rtimes_{H,\psi}G$ with $F$ either the Krasner hyperfield or the sign
hyperfield or a skew field. Let’s start with a stringent skew hyperring.
Let $R$ be a stringent skew hyperring. By Theorem 1.6, we can identify $R$
with the wedge sum $\bigvee_{g\in G}R_{g}$, with a surjective mapping $\psi$
from $R^{\times}$ to the set $G$ defined in the last section and an ordering
$<_{R}^{\prime}$ on $G$ given by $\psi(x)<_{R}^{\prime}\psi(y)$ if and only if
$x\boxplus y=\\{y\\}$ but $x\neq y$, where each hypergroup $R_{g}$ is either
isomorphic to $\mathbb{K}$, or isomorphic to $\mathbb{S}$, or is a group. Thus
by distributivity of $R$, we have $\psi(x)<_{R}^{\prime}\psi(y)$ if and only
if $\psi(ax)<_{R}^{\prime}\psi(ay)$, if and only if
$\psi(xa)<_{R}^{\prime}\psi(ya)$, for any $a\in R^{\times}$. So the
multiplication of $R$ descends to a multiplication on $G$ respecting the
ordering, with identity $\psi(1):=1_{G}$. By Lemma 3.7(2), we easily obtain
the following lemma.
###### Lemma 4.6.
$(G,\cdot,<_{R}^{\prime})$ is a totally ordered monoid. If $R$ is a skew
hyperfield, then $G$ is a totally ordered group.
Now we want to consider the structure of $R_{g}$.
###### Lemma 4.7.
$R_{1_{G}}$ is again a skew hyperring, with hyperaddition given by
$\boxplus_{1_{G}}$ and multiplication by that of $R$.
###### Proof.
By Lemma 3.9, it suffices to check the distributivity. To prove left
distributivity we must show that any element $t\in R_{1_{G}}$ of
$x\cdot(y\boxplus z)$ is also an element of the same expression evaluated in
$R_{1_{G}}$. So let $w$ be an element of $y\boxplus z$ with $x\cdot w=t$. This
second equation implies that the equivalence class of $w$ is $1_{G}$, as
desired. The right distributivity is similar. ∎
###### Lemma 4.8.
If $R$ is a skew hyperfield, $R_{1_{G}}$ is either the Krasner hyperfield or
the sign hyperfield or a skew field.
###### Proof.
By Lemma 3.10 and Lemma 4.7, $R_{1_{G}}$ is either the Krasner hyperfield, or
the sign hyperfield, or a skew ring.
Since $\sim_{R}$ respects the multiplication, the multiplicative inverse of
anything equivalent to $1_{R}$ is again equivalent to $1_{R}$, so that
$R_{1_{G}}$ is a skew field if it is a skew ring. ∎
###### Lemma 4.9.
For every $g\in G$, the hypergroup of $R_{g}$ is isomorphic to the hypergroup
of $R_{1_{G}}$.
###### Proof.
Let $a\in R_{g}^{\times}$. Define $f:R_{g}\rightarrow R_{1_{G}}$ by sending
$0$ to $0$ and $x$ in $R_{g}^{\times}$ to $a^{-1}\cdot x$. Since $f$ has an
inverse operation, namely left multiplication by $a$, this is a bijection. Now
we would like to show $f(x\boxplus_{g}y)=f(x)\boxplus_{1_{G}}f(y)$.
$\displaystyle f(x\boxplus_{g}y)$ $\displaystyle=a^{-1}\cdot(x\boxplus_{g}y)$
$\displaystyle=a^{-1}\cdot\big{(}(x\boxplus y)\cap R_{g}\big{)}$
$\displaystyle=\big{(}a^{-1}\cdot(x\boxplus y)\big{)}\cap(a^{-1}\cdot R_{g})$
$\displaystyle=\big{(}(a^{-1}\cdot x)\boxplus(a^{-1}\cdot y)\big{)}\cap
R_{1_{G}}$ $\displaystyle=(a^{-1}\cdot x)\boxplus_{1_{G}}(a^{-1}\cdot y)$
$\displaystyle=f(x)\boxplus_{1_{G}}f(y)$
∎
Now using the above results, we can classify the stringent skew hyperfields as
follows.
###### Theorem 4.10.
Any stringent skew hyperfield $R$ has the form $F\rtimes_{H,\psi}G$, where $F$
is either the Krasner hyperfield or the sign hyperfield or a skew field.
###### Proof.
Let $F$ be $R_{1_{G}}$, let $H$ be $R^{\times}$ and let $G$ be given as above.
Let $\varphi$ be the injection of $F^{\times}$ as a subgroup of $H$ and let
$\psi$ be the map sending an element $h$ of $H$ to its equivalence class in
$G$. Then
$1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1$
is a short exact sequence. For any $x$ and $y$ in $H$, if
$\psi(x)>_{R}^{\prime}\psi(y)$, then $x>_{R}y$ and so $x\boxplus y=\\{x\\}$.
Similarly if $\psi(x)<_{R}^{\prime}\psi(y)$, then $x\boxplus y=\\{y\\}$. If
$\psi(x)=\psi(y)$, then $x\boxplus_{\psi(x)}y=x\cdot((1\boxplus x^{-1}\cdot
y)\cap R_{1_{G}})=(x\boxplus y)\cap R_{\psi(x)}$. So by the remarks following
Lemma 3.8 we have that the hyperaddition of $R$ agrees with that of
$F\rtimes_{H,\psi}G$ in this case as well. ∎
Using results of Marshall’s paper [Mar06], we can show that the structure is
even more constrained if the multiplication of $R$ is commutative (so that $R$
is a stringent hyperfield) and $R_{1_{G}}$ is the Krasner or the sign
hyperfield.
###### Proposition 4.11.
Let $R$ be a stringent skew hyperfield with $R_{1_{G}}=\mathbb{S}$ and let
$a\in R^{\times}-\\{1,-1\\}$. Then $a^{2}\notin\\{1,-1\\}$.
###### Proof.
Since $a\notin\\{1,-1\\}$, we have $a\not\sim_{R}1$, so $\psi(a)\neq 1$. Then
$\psi(a^{2})=(\psi(a))^{2}\neq 1$, since $G$ is a totally ordered group. So
$a^{2}\not\sim_{R}1$. That is, $a^{2}\notin\\{1,-1\\}$. ∎
The following are some useful lemmas from Marshall’s paper (cf. Section 3 of [Mar06]).
###### Definition 4.12.
[Mar06] Let $R$ be a hyperfield. A subset $P$ of $R$ is called an ordering if
$P\boxplus P\subseteq P,P\odot P\subseteq P,P\cup-P=R\text{ and
}P\cap-P=\\{0\\}.$
###### Definition 4.13.
[Mar06] A hyperfield $R$ is said to be real if $-1\notin R^{2}\boxplus R^{2}$
where $R^{2}:=\\{a^{2}\,|\,a\in R\\}$.
###### Lemma 4.14.
[Mar06, Lemma 3.3] Let $R$ be a hyperfield. $R$ has an ordering if and only if
$R$ is real.
###### Lemma 4.15.
[Mar06, Lemma 3.2, 3.3] Let $R$ be a hyperfield with $1\neq-1$. If $R$ has an
ordering $P$, then $-1\notin P$.
Based on the above lemmas, we obtain the following.
###### Proposition 4.16.
If $R$ is a stringent hyperfield with $R_{1_{G}}=\mathbb{S}$, then $R$ has an
ordering.
###### Proof.
By Lemma 4.14, we just need to show that $R$ is real.
Suppose that $-1\in R^{2}\boxplus R^{2}$. Then there exist $a,b\in R$ such
that $-1\in a^{2}\boxplus b^{2}$. By Proposition 4.11, $a^{2}\neq-1$ and
$b^{2}\neq-1$. Thus $a\neq 0$ and $b\neq 0$. By reversibility, $-b^{2}\in
1\boxplus a^{2}$. Since $a^{2}\neq-1$, we have $1\boxplus
a^{2}\subseteq\\{1,a^{2}\\}$. Thus $-b^{2}=a^{2}$. Then
$-1=a^{2}b^{-2}=(ab^{-1})^{2}$, a contradiction to Proposition 4.11.
So $R$ is real, and therefore has an ordering. ∎
###### Theorem 4.17.
If $R$ is a stringent hyperfield with
$R_{1_{G}}\in\\{\mathbb{K},\mathbb{S}\\}$, then $R$ arises from a short exact
sequence
$1\to R_{1_{G}}^{\times}\xrightarrow{\varphi}R_{1_{G}}^{\times}\times
G\xrightarrow{\psi}G\to 1.$
###### Proof.
If $R_{1_{G}}=\mathbb{K}$, this is trivial.
If $R_{1_{G}}=\mathbb{S}$, by Theorem 4.10, we may suppose
$R=\mathbb{S}\rtimes_{H,\psi}G=H\cup\\{0\\}$ with a short exact sequence of
groups
$1\to\mathbb{S}^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1.$
By Proposition 4.16, we know that $R$ has an ordering $P$. Let $O=P-\\{0\\}$.
As $P\cup-P=R$ and $P\cap-P=\\{0\\}$, we have $R=O\,\dot\cup\,-O\cup\\{0\\}$.
By Lemma 4.15, $-1\notin P$. Then $-1\notin O$, thus $1\in O$. Since $P\odot
P\subseteq P$, we have $O\odot O\subseteq O$. For any $a\in O$, $a^{-1}\in O$:
otherwise $-a^{-1}\in O$, and then $a\odot-a^{-1}=-1\in O$, which is a
contradiction. So $O$ is a multiplicative group with $1\in O$ and
$R=O\,\dot\cup\,-O\cup\\{0\\}$, and $\psi\restriction O$ is an isomorphism
from $O$ to $G$.
Now we can identify $x\in H$ with $(1,\psi(x))$ if $x\in O$, and with
$(-1,\psi(x))$ if $x\notin O$, giving a bijection from $H$ to
$\mathbb{S}^{\times}\times G$.
So $R\cong(\mathbb{S}^{\times}\times G)\cup\\{0\\}$.
∎
It is not clear whether this result extends to stringent skew hyperfields.
## 5\. Classification of Doubly Distributive Skew Hyperfields
In this section, we will present the classification of doubly distributive
skew hyperfields.
###### Proposition 5.1.
The doubly distributive skew hyperfields are precisely those of the form
$F\rtimes_{H,\psi}G$ of exactly one of the following types:
1. (1)
$F$ is the Krasner hyperfield,
2. (2)
$F$ is the sign hyperfield,
3. (3)
$F$ is a skew field and $G$ satisfies
$\\{ab\,|\,a,b<1_{G}\\}=\\{c\,|\,c<1_{G}\\}.$
The following example is a doubly distributive hyperfield of type (3) in
Proposition 5.1.
###### Example 5.2.
Let $F:=\mathbb{R}$ be the hyperfield with the usual multiplication and
hyperaddition given by
$x\boxplus y=\begin{cases}x&\text{ if }|x|>|y|,\\\ y&\text{ if }|x|<|y|,\\\
-x&\text{ if }x=y,\\\ \\{z\,|\,|z|<|x|\\}&\text{ if }x=-y.\end{cases}$
This hyperfield is stringent and arises from the short exact sequence of
groups
$1\to
GF(3)^{\times}\xrightarrow{\varphi}\mathbb{R}^{\times}\xrightarrow{\psi}\mathbb{R}_{>0}\to
1\,.$
Another natural example of a doubly distributive skew hyperfield of type (3)
in Proposition 5.1 can be found in [Pen18], which is built around such a
(noncommutative) hyperfield (the one called $L^{\sigma}$).
Before proving Proposition 5.1, we first introduce a useful lemma.
###### Lemma 5.3.
Let $R$ be a stringent skew hyperfield. $R$ is doubly distributive if and only
if
$(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$
###### Proof.
By Definition 2.8, $R$ is doubly distributive if and only if $(a\boxplus
b)(c\boxplus d)=ac\boxplus ad\boxplus bc\boxplus bd$, for any $a,b,c,d\in R$.
As $R$ is stringent, $u\boxplus v$ is a singleton whenever $u\neq-v$. So if
either $a\neq-b$ or $c\neq-d$, then the equation above follows from
distributivity alone, and it already holds.
If both $a=-b$ and $c=-d$, then
$(a\boxplus b)(c\boxplus
d)=(a\boxplus-a)(c\boxplus-c)=a(1\boxplus-1)(1\boxplus-1)c$
and
$ac\boxplus ad\boxplus bc\boxplus bd=ac\boxplus-ac\boxplus-ac\boxplus
ac=a(1\boxplus-1\boxplus 1\boxplus-1)c.$
So $R$ is doubly distributive if and only if
$(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$
∎
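The condition in Lemma 5.3, and indeed the full double distributivity identity, can be checked mechanically on a finite hyperfield. As an illustration (ours, not the authors'), the following Python sketch verifies it by brute force on the sign hyperfield $\mathbb{S}$, encoded as $\\{-1,0,1\\}$ with the usual integer multiplication:

```python
from itertools import product

S = (-1, 0, 1)                       # the sign hyperfield, multiplicatively

def hadd(x, y):
    """Hyperaddition of the sign hyperfield, returned as a frozenset."""
    if x == 0:
        return frozenset({y})
    if y == 0:
        return frozenset({x})
    if x == y:
        return frozenset({x})
    return frozenset(S)              # 1 [+] (-1) = {-1, 0, 1}

def hadd_sets(A, B):
    """Extension of hyperaddition to subsets."""
    return frozenset(z for a in A for b in B for z in hadd(a, b))

for a, b, c, d in product(S, repeat=4):
    lhs = frozenset(x * y for x in hadd(a, b) for y in hadd(c, d))
    rhs = hadd_sets(hadd_sets(hadd(a * c, a * d), {b * c}), {b * d})
    assert lhs == rhs, (a, b, c, d)
print("S is doubly distributive")
```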
Now we will present the proof of Proposition 5.1.
###### Proof of Proposition 5.1.
By Proposition 1.3 and Theorem 4.10, we know that a doubly distributive skew
hyperfield $R$ also has the form $F\rtimes_{H,\psi}G$, where $F$ is either the
Krasner hyperfield or the sign hyperfield or a skew field, with a short exact
sequence of groups
$1\to F^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,.$
So it suffices to show that the hyperfields of type (1) and (2) are doubly
distributive and the hyperfields with $F$ a skew field are doubly distributive
if and only if they are of type (3).
_Case 1:_ When $F=\mathbb{K}=\\{1,0\\}$, hyperaddition is defined by
$\displaystyle x\boxplus 0$ $\displaystyle=\\{x\\},$ $\displaystyle x\boxplus
y$ $\displaystyle=\begin{cases}\\{x\\}&\text{ if $\psi(x)>\psi(y)$,}\\\
\\{y\\}&\text{ if $\psi(x)<\psi(y)$,}\\\
\\{z\,|\,\psi(z)\leq\psi(x)\\}\cup\\{0\\}&\text{ if $\psi(x)=\psi(y)$, that is
$x=y$.}\end{cases}$
By Lemma 5.3, $R$ is doubly distributive if and only if
$(1\boxplus 1)(1\boxplus 1)=1\boxplus 1\boxplus 1\boxplus 1.$
$(1\boxplus 1)(1\boxplus 1)=(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})\cdot(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})=\\{z\,|\,\psi(z)\leq 1\\}\cup\\{0\\},$
and
$1\boxplus 1\boxplus 1\boxplus 1=(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})\boxplus(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})=\\{z\,|\,\psi(z)\leq 1\\}\cup\\{0\\}.$
So $R$ is doubly distributive when $F=\mathbb{K}$.
_Case 2:_ When $F=\mathbb{S}=\\{1,-1,0\\}$, hyperaddition is defined by
$\displaystyle x\boxplus 0$ $\displaystyle=\\{x\\},$ $\displaystyle x\boxplus
y$ $\displaystyle=\begin{cases}\\{x\\}&\text{ if $\psi(x)>\psi(y)$,}\\\
\\{y\\}&\text{ if $\psi(x)<\psi(y)$,}\\\ \\{x\\}&\text{ if $x=y$,}\\\
\\{z\,|\,\psi(z)\leq\psi(x)\\}\cup\\{0\\}&\text{ if $x=-y$.}\end{cases}$
By Lemma 5.3, $R$ is doubly distributive if and only if
$(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$
$(1\boxplus-1)(1\boxplus-1)=(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})\cdot(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})=\\{z\,|\,\psi(z)\leq 1\\}\cup\\{0\\},$
and
$1\boxplus-1\boxplus 1\boxplus-1=(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})\boxplus(\\{z\,|\,\psi(z)\leq
1\\}\cup\\{0\\})=\\{z\,|\,\psi(z)\leq 1\\}\cup\\{0\\}.$
So $R$ is doubly distributive when $F=\mathbb{S}$.
_Case 3:_ When $F$ is a skew field, hyperaddition is defined by
$\displaystyle x\boxplus 0$ $\displaystyle=\\{x\\},$ $\displaystyle x\boxplus
y$ $\displaystyle=\begin{cases}\\{x\\}&\text{ if $\psi(x)>\psi(y)$,}\\\
\\{y\\}&\text{ if $\psi(x)<\psi(y)$,}\\\ x\boxplus_{\psi(x)}y&\text{ if
$\psi(x)=\psi(y)$ and $0\notin x\boxplus_{\psi(x)}y$,}\\\
\\{z\,|\,\psi(z)<\psi(x)\\}\cup\\{0\\}&\text{ if $\psi(x)=\psi(y)$ and $0\in
x\boxplus_{\psi(x)}y$.}\\\ \end{cases}$
By Lemma 5.3, $R$ is doubly distributive if and only if
$(1\boxplus-1)(1\boxplus-1)=1\boxplus-1\boxplus 1\boxplus-1.$
$(1\boxplus-1)(1\boxplus-1)=(\\{z\,|\,\psi(z)<1\\}\cup\\{0\\})\cdot(\\{z\,|\,\psi(z)<1\\}\cup\\{0\\})=\\{xy\,|\,\psi(x),\psi(y)<1\\}\cup\\{0\\},$
and
$1\boxplus-1\boxplus
1\boxplus-1=(\\{z\,|\,\psi(z)<1\\}\cup\\{0\\})\boxplus(\\{z\,|\,\psi(z)<1\\}\cup\\{0\\})=\\{z\,|\,\psi(z)<1\\}\cup\\{0\\}.$
So $R$ is doubly distributive if and only if
$\\{xy\,|\,\psi(x),\psi(y)<1\\}\cup\\{0\\}=\\{z\,|\,\psi(z)<1\\}\cup\\{0\\}.$
We claim that
$\\{xy\,|\,\psi(x),\psi(y)<1\\}\cup\\{0\\}=\psi^{-1}(\psi(1)\downarrow)\cup\\{0\\}=\\{z\,|\,\psi(z)<1\\}\cup\\{0\\},$
if and only if
$\\{ab\,|\,a,b<1_{G}\\}=\\{c\,|\,c<1_{G}\\}.$
$(\Rightarrow):$ If
$\\{xy\,|\,\psi(x),\psi(y)<1\\}\cup\\{0\\}=\\{z\,|\,\psi(z)<1\\}\cup\\{0\\}$,
the direction $\subseteq$ is clear. We just need to consider the other
direction. Let $c\in G$ be such that $c<1_{G}$ and let $z\in\psi^{-1}(c)$.
Then there exist $x,y\in H$ such that $z=xy$ and $\psi(x),\psi(y)<1$ by our
assumption. So $c=\psi(z)=\psi(xy)=\psi(x)\psi(y)$. We have
$c\in\\{ab\,|\,a,b<1_{G}\\}$.
$(\Leftarrow):$ If $\\{ab\,|\,a,b<1_{G}\\}=\\{c\,|\,c<1_{G}\\}$, the direction
$\subseteq$ is also clear. We just need to consider the other direction. Let
$z\in H$ be such that $\psi(z)<1$ and let $c=\psi(z)$. Then there exist
$a,b\in G$ such that $c=ab$ and $a,b<1_{G}$ by our assumption. Let $x\in H$ be
such that $\psi(x)=a<1_{G}$ and let $y=x^{-1}z$. We have
$\psi(y)=\psi(x^{-1}z)=a^{-1}c=b<1_{G}$ and $z=xy$. So
$z\in\\{xy\,|\,\psi(x),\psi(y)<1\\}$.
So $R$ is doubly distributive if and only if
$\\{ab\,|\,a,b<1_{G}\\}=\\{c\,|\,c<1_{G}\\}.$
∎
## 6\. Reduction of stringent skew hyperrings to hyperfields
In this section, we will show that stringent skew hyperrings are very
restricted.
###### Theorem 6.1.
Every stringent skew hyperring is either a skew ring or a stringent skew
hyperfield.
###### Proof.
If $G$ is trivial, then $R=R_{1_{G}}$. So $R$ is either $\mathbb{K}$, or
$\mathbb{S}$, or a skew ring.
If $G$ is nontrivial, we would like to show that every element $x$ in
$R^{\times}$ is a unit. Now let $s$ and $t$ in $R^{\times}$ be such that
$x\cdot s>_{R}1$ and $t\cdot x>_{R}1$. Then by the remarks after Lemma 3.8, we
have
$1\in x\cdot s\boxplus-x\cdot s=x\cdot(s\boxplus-s),$ $1\in t\cdot
x\boxplus-t\cdot x=(t\boxplus-t)\cdot x.$
So there exist $y\in s\boxplus-s$ and $z\in t\boxplus-t$ such that $1=x\cdot
y=z\cdot x$. Thus $y=(z\cdot x)\cdot y=z\cdot(x\cdot y)=z$. So $x$ has a
multiplicative inverse $y$ in $R$, and hence $x$ is a unit of $R$.
So every stringent skew hyperring is either a skew ring or a stringent skew
hyperfield. ∎
We cannot classify doubly distributive hyperrings using our classification,
because not every doubly distributive hyperring is stringent. The following is
a counterexample.
###### Example 6.2.
The hyperring $\mathbb{K}\times\mathbb{K}$ that is the square of the Krasner
hyperfield is doubly distributive but not stringent.
## 7\. Every stringent skew hyperfield is a quotient of a skew field
In this section, we would like to show that every stringent skew hyperfield is
a quotient of a skew field by some normal subgroup. In particular, every
stringent hyperfield is a quotient of a field by a special kind of subgroup,
called ‘hüllenbildend’. Subgroups of this kind were studied by
Diller and Grenzdörffer in [DG73] when they tried to unify the treatment of
various notions of convexity in projective spaces over a field $K$ by
introducing for any subgroup $U\leq K^{\times}$ the notion of $U$-convexity.
They showed that this notion is reasonably well behaved if and only if $U$ is
as follows.
###### Definition 7.1.
[DG73] Let $K$ be a field and let $U\leq K^{\times}$. $U$ is called
‘hüllenbildend’ (hull producing) if $U$ satisfies
$x,y\in K,x+y-xy\in U\rightarrow x\in U\text{ or }y\in U.$ (1)
In [Dre77], Dress presented a simple complete classification of such
‘hüllenbildend’ subgroups $U$. We will combine our classification of stringent
(skew) hyperfields into three types with Dress’s classification of such
subgroups.
###### Theorem 7.2.
[Dre77, Theorem 1] Let $U\leq K^{\times}$ satisfy (1) and let $S_{U}=\\{x\in
K\,|\,x\notin U\text{ and }x+U\subseteq U\\}$. Then $S_{U}$ is the maximal
ideal of a valuation ring $R=R_{U}(=\\{x\in K\,|\,x\cdot S_{U}\subseteq
S_{U}\\})$ in $K$, $U$ is contained in $R$,
$\overline{U}=\\{\overline{x}\in\overline{K}_{U}=R_{U}/S_{U}\,|\,x\in U\\}$ is
either a domain of positivity in $\overline{K}_{U}$ (if $-1\notin U$, $2\in
U$) or $\overline{U}=\\{\overline{1}\\}$ or
$\overline{U}=\overline{K}_{U}^{\times}$ and, in any case, $U=\\{x\in
R_{U}\,|\,\overline{x}\in\overline{U}\\}$.
We will first explain how to choose the suitable subgroup $U$ in the case of
stringent hyperfield and then give the proof in full generality for stringent
skew hyperfields.
From our classification in Theorem 4.10, we know that every stringent
hyperfield $F$ has the form $M\rtimes_{H,\psi}G$, where $M$ is either
$\mathbb{K}$, or $\mathbb{S}$, or a field. To show that $F$ is a quotient by
some subgroup $U$, we will choose $U$ with $\overline{U}=\\{\overline{1}\\}$
if $M$ is $\mathbb{K}$, choose $U$ with $\overline{U}$ a domain of positivity
if $M$ is $\mathbb{S}$, and choose $U$ with
$\overline{U}=\overline{K}_{U}^{\times}$ if $M$ is a field.
Now we begin the proof that every stringent skew hyperfield is a quotient of a
skew field $K$ by some normal subgroup $U$. First, we recall the definition of
quotient hyperfield. The quotient hyperfield $K/U=\\{[g]=gU\,|\,g\in K\\}$ was
introduced by Krasner in [Kra83] with multiplication given by
$[g]\cdot[h]=[gh]$, for $[g],[h]\in K/U$. Hyperaddition is given by
$[g]\boxplus[0]=[g]$ and $[g]\boxplus[h]=\\{[f]\in K/U\,|\,f\in gU+hU\\}$, for
$[g],[h]\in(K/U)^{\times}$. Since the subgroups $U$ we choose will be normal,
this quotient construction also works in the skew case.
We may suppose that a stringent skew hyperfield $F=M\rtimes_{H,\psi}G$ arises
from a short exact sequence of groups
$1\to M^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,,$
where $G$ is a totally ordered group equipped with a total order $\leq$ and
$M$ is either $\mathbb{K}$, or $\mathbb{S}$, or a skew field. We define an
order $\leq^{\prime}$ on $G$ such that $x\leq^{\prime}y$ if and only if $y\leq
x$. So $\leq^{\prime}$ is also a total order on $G$. Similarly as in the non-
skew case, we will also choose $U$ with $\overline{U}=\\{\overline{1}\\}$ if
$M$ is $\mathbb{K}$, choose $U$ with $\overline{U}$ a domain of positivity if
$M$ is $\mathbb{S}$, and choose $U$ with
$\overline{U}=\overline{K}_{U}^{\times}$ if $M$ is a skew field.
Our difficulty now is to choose a suitable skew field $K$ for the quotient
hyperfield corresponding to each $U$. We will introduce two different
constructions of skew fields, as follows.
###### Example 7.3.
[FS01] (Construction 1) Let $k$ be an arbitrary field. Define $K=k((G))$ to be
the ring of formal power series whose powers come from $G$, that is, the
elements of $K$ are functions from $G$ to $k$ such that the support of each
function is a well-ordered subset of $(G,\leq^{\prime})$. Addition is
pointwise, and multiplication is the Cauchy product or convolution, that is
the natural operation when viewing the functions as power series
$\sum_{{a\in G}}p(a)x^{a}\,.$
It is well known (and easy to check) that $K$ is a skew field.
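As a concrete illustration (ours, under the simplifying assumption of finite support), the following Python sketch computes this Cauchy product with $k=\mathbb{Q}$ and $G=(\mathbb{R},+)$ written additively, so that exponents combine by addition and coefficients multiply:

```python
from collections import defaultdict
from fractions import Fraction

def cauchy_product(p, q):
    """Cauchy product of finitely supported series, given as {exponent: coeff}."""
    r = defaultdict(Fraction)
    for g, a in p.items():            # (sum_g a x^g)(sum_h b x^h)
        for h, b in q.items():
            r[g + h] += a * b         # exponents combine in G
    return {s: c for s, c in r.items() if c != 0}

p = {Fraction(0): Fraction(1), Fraction(1, 2): Fraction(2)}    # 1 + 2 x^{1/2}
q = {Fraction(0): Fraction(1), Fraction(1, 2): Fraction(-2)}   # 1 - 2 x^{1/2}
assert cauchy_product(p, q) == {Fraction(0): Fraction(1), Fraction(1): Fraction(-4)}
```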
We will construct a skew field $K=k((G))$ as in Example 7.3 by choosing $k$ to
be an arbitrary field when $M$ is $\mathbb{K}$ and choosing $k$ to be the
field $\mathbb{R}$ of real numbers (or any other ordered field) when $M$ is
$\mathbb{S}$.
The second construction is for a stringent skew hyperfield
$F=M\rtimes_{H,\psi}G$ when $M$ is a skew field.
###### Example 7.4.
(Construction 2) We define $K=M[[G]]$ to be the set of formal sums of elements
of $H$ all from different layers such that the support is well-ordered, that
is, an element of $K$ is a function $p$ from $G$ to $H$ such that for any $g$
in $G$, $p(g)\in\psi^{-1}(g)\cup\\{0\\}=A_{g}$ and the support of each
function is a well-ordered subset of $(G,\leq^{\prime})$. As $M$ is a skew
field and each $\lambda_{h}$ with $h\in H$ is an isomorphism of hypergroups,
then $(A_{g},\boxplus_{g},0)$ is always an abelian group. We claim that $K$ is
a skew field, viewing functions as power series
$\sum_{{a\in G}}p(a)x^{a}\,,$
with addition $+_{K}$ given by
$\sum_{{a\in G}}p(a)x^{a}+_{K}\sum_{{a\in G}}q(a)x^{a}=\sum_{{a\in
G}}(p(a)\boxplus_{a}q(a))x^{a},$
and the additive identity is $\sum_{{a\in G}}0x^{a}$. Multiplication
$\cdot_{K}$ is given by
$\Big{(}\sum_{{a\in G}}p(a)x^{a}\Big{)}\cdot_{K}\big{(}\sum_{{a\in
G}}q(a)x^{a}\Big{)}=\sum_{{s\in
G}}\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q),}{\underset{g\in\operatorname{supp}(p),}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}x^{s},$
and the multiplicative identity is $1x^{1_{G}}$. Since the proof that this
really gives a skew field is a long calculation and is very similar to that
for $k((G))$, we do not give it here but in Appendix A.
Now we divide the proof into three cases and show that $F\cong K/U$ in each
case. For simplicity, we denote $\min(\operatorname{supp}(p))$ by $m_{p}$ for
$p\in K^{\times}$.
1. _Case_ 1:
If $M$ is $\mathbb{K}$, then let $U=\\{p\in K^{\times}\,|\,m_{p}=1_{G}\\}$.
It’s easy to check that $U$ is normal. Then the quotient hyperfield
$K/U=\\{[q]=qU\,|\,q\in K\\}$ has
$\displaystyle[q]$ $\displaystyle=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\\},$
$\displaystyle[0]$ $\displaystyle=\\{0_{K}\\}.$
So we can identify $[q]$ in $(K/U)^{\times}$ with $m_{q}$ in $G$ and identify
$[0]$ in $K/U$ with $0$. So we have $K/U\cong(G\cup\\{0\\},\boxplus,\cdot)$
with multiplication given by
$\displaystyle 0\cdot g$ $\displaystyle=0,$ $\displaystyle g\cdot h$
$\displaystyle=g\cdot_{G}h,$
where $g,h\in G$. And hyperaddition is given by
$\displaystyle g\boxplus 0$ $\displaystyle=\\{g\\},$ $\displaystyle g\boxplus
h$ $\displaystyle=\begin{cases}\\{g\\}&\text{ if $g<^{\prime}h$, that is
$g>h$, }\\\ \\{h\\}&\text{ if $g>^{\prime}h$, that is $g<h$, }\\\ \\{f\in
G\,|\,f\geq^{\prime}g\\}\cup\\{0\\}=\\{f\in G\,|\,f\leq g\\}\cup\\{0\\}&\text{
if $g=h$,}\end{cases}$
where $g,h\in G$.
Now it is clear that
$K/U\cong(G\cup\\{0\\},\boxplus,\cdot)\cong\mathbb{K}\rtimes_{H,\psi}G=F$.
2. _Case_ 2:
If $M$ is $\mathbb{S}$, $k=\mathbb{R}$ (or any other ordered field) and
$K=k((G))$, then let $U=\\{p\in K^{\times}\,|\,m_{p}=1_{G}\text{ and
}p(1_{G})>0\\}$. It’s easy to check that $U$ is normal. Then the quotient
hyperfield $K/U=\\{[q]=qU\,|\,q\in K\\}$ has
$\displaystyle[q]$ $\displaystyle=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and
}p(m_{p})>0\\}\text{ if }q(m_{q})>0,$ $\displaystyle[q]$
$\displaystyle=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and
}p(m_{p})<0\\}\text{ if }q(m_{q})<0,$ $\displaystyle[0]$
$\displaystyle=\\{0_{K}\\}.$
We can identify $[q]$ in $(K/U)^{\times}$ with $(1,m_{q})$ if $q(m_{q})>0$,
identify $[q]$ in $(K/U)^{\times}$ with $(-1,m_{q})$ if $q(m_{q})<0$, and
identify $[0]$ with $0$. So we have $K/U\cong((\mathbb{S}^{\times}\times
G)\cup\\{0\\},\boxplus,\cdot)$ with multiplication given by
$\displaystyle(r,g)\cdot 0$ $\displaystyle=0,$
$\displaystyle(r_{1},g_{1})\cdot(r_{2},g_{2})$
$\displaystyle=(r_{1}\cdot_{\mathbb{S}}r_{2},g_{1}\cdot_{G}g_{2}),$
where $r,r_{1},r_{2}\in\mathbb{S}^{\times}$ and $g,g_{1},g_{2}\in G$. And
hyperaddition is given by
$\displaystyle(r,g)\boxplus 0$ $\displaystyle=\\{(r,g)\\},$
$\displaystyle(r_{1},g_{1})\boxplus(r_{2},g_{2})$
$\displaystyle=\begin{cases}\\{(r_{1},g_{1})\\}&\text{if
$g_{1}<^{\prime}g_{2}$, that is $g_{1}>g_{2}$, }\\\
\\{(r_{2},g_{2})\\}&\text{if $g_{1}>^{\prime}g_{2}$, that is $g_{1}<g_{2}$,
}\\\ \\{(r_{1},g_{1})\\}&\text{if $g_{1}=g_{2}$ and $r_{1}=r_{2}$ }\\\
\\{(r,f)\,|\,f\geq^{\prime}g_{1}\\}\cup\\{0\\}=\\{(r,f)\,|\,f\leq
g_{1}\\}\cup\\{0\\}&\text{if $g_{1}=g_{2}$ and $r_{1}=-r_{2}$,}\end{cases}$
where $r,r_{1},r_{2}\in\mathbb{S}^{\times}$ and $g,g_{1},g_{2}\in G$.
So by Theorem 4.17, $K/U\cong((\mathbb{S}^{\times}\times
G)\cup\\{0\\},\boxplus,\cdot)\cong\mathbb{S}\rtimes_{H,\psi}G=F$.
3. _Case_ 3:
If $M$ is a skew field and $K=M[[G]]$, then let $U=\\{p\in
K^{\times}\,|\,m_{p}=1_{G}\text{ and }p(1_{G})=1\\}$. It’s easy to check that
$U$ is normal. Then the quotient hyperfield $K/U=\\{[q]=qU\,|\,q\in K\\}$ has
$\displaystyle[q]$ $\displaystyle=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and
}p(m_{p})=q(m_{q})\\},$ $\displaystyle[0]$ $\displaystyle=0_{K}.$
We can identify $[q]$ in $(K/U)^{\times}$ with $q(m_{q})$ in $H$ (clearly
$\psi(q(m_{q}))=m_{q}$) and identify $[0]$ with $0_{F}$. So we have $K/U\cong
F$ with multiplication given by
$\displaystyle[q]\cdot 0$ $\displaystyle=0,$ $\displaystyle[q]\cdot[h]$
$\displaystyle=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\cdot_{G}m_{h}\text{ and
}p(m_{p})=q(m_{q})\cdot_{H}h(m_{h})\\}=[p]$
Hyperaddition is given by
$[q]\boxplus 0=[q],$ $\displaystyle[q]\boxplus[h]$ $\displaystyle=$
$\displaystyle\begin{cases}\\{[q]\\}&\text{if $m_{q}<^{\prime}m_{h}$, that is
$m_{q}>m_{h}$, }\\\ \\{[h]\\}&\text{if $m_{q}>^{\prime}m_{h}$, that is
$m_{q}<m_{h}$, }\\\ \\{[p]=\\{p\in K^{\times}\,|\,m_{p}=m_{q}\text{ and
}p(m_{p})=q(m_{q})\boxplus_{m_{q}}h(m_{h})\\}\\}&\text{if $m_{q}=m_{h}$ and
$0\notin q(m_{q})\boxplus_{m_{q}}h(m_{h})$,}\\\
\\{[p]\in(K/U)^{\times}\,|\,m_{p}>^{\prime}m_{q}\\}\cup\\{0\\}=\\{[p]\in(K/U)^{\times}\,|\,m_{p}<m_{q}\\}\cup\\{0\\}&\text{if
$m_{q}=m_{h}$ and $0\in q(m_{q})\boxplus_{m_{q}}h(m_{h}).$}\end{cases}$
where $[q],[h]\in(K/U)^{\times}$.
So $K/U\cong M\rtimes_{H,\psi}G=F.$
###### Theorem 7.5.
Every stringent skew hyperfield is a quotient of a skew field.
###### Corollary 7.6.
Every doubly distributive skew hyperfield is a quotient of a skew field.
It follows from the construction that the same statements with all instances
of the word ‘skew’ removed also hold.
## Appendix A Construction 2 in Example 7.4 gives a skew field
###### Lemma A.1.
Let $F=M\rtimes_{H,\psi}G$ be a stringent skew hyperfield arising from a short
exact sequence of groups
$1\to M^{\times}\xrightarrow{\varphi}H\xrightarrow{\psi}G\to 1\,,$
where $G$ is a totally ordered group and $M$ is a skew field. Define
$K=M[[G]]$ as we did in Section 7. Then $K$ is a skew field.
###### Proof.
The commutativity and associativity of $(K,+_{K},\sum_{{a\in G}}0x^{a})$
follow from those of $(H\cup\\{0\\},\boxplus,0)$. So we only need to show the
associativity of $(K,\cdot_{K},1x^{1_{G}})$, the existence of a multiplicative
inverse for every element and the distributivity.
An important principle which we will need again and again as we go along is a
kind of distributivity of the composition of $H$ over the various additions
$\boxplus_{g}$. To express it cleanly, we begin by extending $\cdot_{H}$ to
$H\cup\\{0\\}$ by setting $x\cdot 0=0\cdot x=0$ for all $x\in H\cup\\{0\\}$.
Suppose that we have elements $x$ and $y_{1},y_{2}\ldots y_{n}$ of $H$ with
$\psi(y_{i})=u\in G$ for all $i$, so that $\boxplus_{i=1}^{n}y_{i}$ is
defined. Let $v\in G$ be such that $v=\psi(x)$. Then $z\mapsto x\cdot_{H}z$ is
a bijection from $A_{u}$ to $A_{v\cdot u}$ whose composition with
$\lambda_{y_{1}}$ is $\lambda_{x\cdot_{H}y_{1}}$, so it must also be an
isomorphism of hypergroups. Thus
$x\cdot_{H}\big{(}\underset{1\leq i\leq
n}{\boxplus_{u}}y_{i}\big{)}=\underset{1\leq i\leq n}{\boxplus_{v\cdot
u}}x\cdot_{H}y_{i}\,.$
A similar argument using the $\lambda^{\prime}_{h}$ defined in the proof of
Lemma 4.2 shows
$\big{(}\underset{1\leq i\leq
n}{\boxplus_{u}}y_{i}\big{)}\cdot_{H}x=\underset{1\leq i\leq
n}{\boxplus_{u\cdot v}}y_{i}\cdot_{H}x\,.$
To show the associativity of $(K,\cdot_{K},1x^{1_{G}})$, we let $p,q,w\in K$.
Then for $s\in G$,
$\displaystyle\big{(}(p\cdot_{K}q)\cdot_{K}w\big{)}(s)$
$\displaystyle=\underset{g\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{\boxplus_{s}}}\Big{(}\big{(}\underset{a\cdot_{G}b=g}{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{g}}}}p(a)\cdot_{H}q(b)\big{)}\cdot_{H}w(c)\Big{)}$
$\displaystyle=\underset{g\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{\boxplus_{s}}}\Big{(}\underset{a\cdot_{G}b=g}{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c)\Big{)}$
$\displaystyle=\underset{a\cdot_{G}b\cdot_{G}c=s}{\underset{c\in\operatorname{supp}(w)}{{\underset{b\in\operatorname{supp}(q)}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c),$
$\displaystyle=\underset{a\cdot_{G}h=s}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}\Big{(}\underset{b\cdot_{G}c=h}{\underset{c\in\operatorname{supp}(w)}{\underset{b\in\operatorname{supp}(q)}{\boxplus_{s}}}}p(a)\cdot_{H}q(b)\cdot_{H}w(c)\Big{)}$
$\displaystyle=\underset{a\cdot_{G}h=s}{\underset{a\in\operatorname{supp}(p)}{\boxplus_{s}}}\Big{(}p(a)\cdot_{H}\big{(}\underset{b\cdot_{G}c=h}{\underset{c\in\operatorname{supp}(w)}{\underset{b\in\operatorname{supp}(q)}{\boxplus_{h}}}}q(b)\cdot_{H}w(c)\big{)}\Big{)}$
$\displaystyle=\big{(}p\cdot_{K}(q\cdot_{K}w)\big{)}(s)\,.$
So $(p\cdot_{K}q)\cdot_{K}w=p\cdot_{K}(q\cdot_{K}w)$.
Next we will show that each element of $K$ has a multiplicative inverse. We do
this first for those $p\in K=M[[G]]$ such that $m_{p}=1_{G}$ and $p(m_{p})=1$.
Let $S$ be the set of finite products of elements of $\operatorname{supp}(p)$.
$S$ is well founded.
Define $q\in K=M[[G]]$ such that $q(1_{G}):=1$, $q(s):=0$ for $s\notin S$ and,
for $s\in S$, define $q(s)$ recursively by
$q(s):=-\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in
S-\\{s\\}}{\underset{g\in\operatorname{supp}(p)-\\{1_{G}\\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}.$
So
$\displaystyle(p\cdot_{K}q)(1_{G})$ $\displaystyle=1,$ $\displaystyle
(p\cdot_{K}q)(s)$ $\displaystyle=0$ if $s\notin S$, and for $s\in
S-\\{1_{G}\\}$, $\displaystyle(p\cdot_{K}q)(s)$
$\displaystyle=\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)}{\underset{g\in\operatorname{supp}(p)}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)$
$\displaystyle=\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)-\\{s\\}}{\underset{g\in\operatorname{supp}(p)-\\{1_{G}\\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}\boxplus_{s}p(1_{G})\cdot_{H}q(s)$
$\displaystyle=\Big{(}\underset{g\cdot_{G}h=s}{\underset{h\in\operatorname{supp}(q)-\\{s\\}}{\underset{g\in\operatorname{supp}(p)-\\{1_{G}\\}}{\boxplus_{s}}}}p(g)\cdot_{H}q(h)\Big{)}\boxplus_{s}q(s)$
$\displaystyle=0.$
So $p\cdot_{K}q$ is the multiplicative identity $1x^{1_{G}}$. Therefore, $q$
is the multiplicative inverse of $p$.
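As a concrete illustration (ours, specialised to commutative coefficients), the recursion above can be run for truncated series. The sketch below computes inverse coefficients with $k=\mathbb{Q}$ and $G=(\mathbb{Z},+)$ written additively, so the identity $1_{G}$ is $0$ and supports are sets of nonnegative integers:

```python
from fractions import Fraction

def inverse_coeffs(p, bound):
    """p maps exponents >= 0 to coefficients, with p[0] == 1; return q up to bound."""
    q = {0: Fraction(1)}
    for s in range(1, bound + 1):            # exponents in increasing order
        acc = Fraction(0)
        for g, pg in p.items():
            h = s - g
            if g > 0 and h >= 0:             # g != identity forces h < s
                acc += pg * q.get(h, Fraction(0))
        q[s] = -acc
    return q

p = {0: Fraction(1), 1: Fraction(-1)}        # 1 - x
q = inverse_coeffs(p, 4)                     # expect the geometric series
assert all(q[s] == 1 for s in range(5))
```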
Next we consider elements of $K$ with only a single summand, that is, those of
the form $ax^{g}$. It is clear that each such element also has a
multiplicative inverse, namely $a^{-1}x^{g^{-1}}$.
Now every element of $K$ can be expressed as a product $p_{1}\cdot p_{2}$,
with $m_{p_{1}}=1_{G}$ and $p_{1}(m_{p_{1}})=1$ and such that $p_{2}$ has only
a single summand. As seen above, each of $p_{1}$ and $p_{2}$ has a
multiplicative inverse, and hence $p_{1}\cdot p_{2}$ also has one, namely
$p_{2}^{-1}\cdot p_{1}^{-1}$.
For distributivity, we first would like to show that
$p\cdot_{K}(q+_{K}w)=p\cdot_{K}q+_{K}p\cdot_{K}w$. For $s\in G$,
$\displaystyle(p\cdot_{K}(q+_{K}w))(s)$
$\displaystyle=\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}(q(h)\boxplus_{h}w(h))$
$\displaystyle=\underset{g\cdot_{G}h=s}{\boxplus_{s}}\big{(}p(g)\cdot_{H}q(h)\boxplus_{s}p(g)\cdot_{H}w(h)\big{)}$
$\displaystyle=\big{(}\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}q(h)\big{)}\boxplus_{s}\big{(}\underset{g\cdot_{G}h=s}{\boxplus_{s}}p(g)\cdot_{H}w(h)\big{)},$
$\displaystyle=(p\cdot_{K}q+_{K}p\cdot_{K}w)(s)\,.$
So $p\cdot_{K}(q+_{K}w)=p\cdot_{K}q+_{K}p\cdot_{K}w$. A similar calculation
shows that $(p+_{K}q)\cdot_{K}w=p\cdot_{K}w+_{K}q\cdot_{K}w$.
So $K=M[[G]]$ is a skew field. ∎
## Appendix B The semirings associated to doubly distributive hyperfields
In [GJL17], Lemma 6.2(2) provides a way to build a semiring out of a doubly
distributive hyperfield. (In [Row16, Theorem 2.5], Rowen extended this
construction of a semiring to every hyperfield.) In this section, we discuss
these semirings.
For any doubly distributive hyperfield $H$ we can define binary operations
$\oplus$ and $\odot$ on $\mathcal{P}H$ by setting $A\oplus B:=\bigcup_{a\in
A,b\in B}a\boxplus b$ (this is just the extension of $\boxplus$ to subsets of
$H$ from Definition 2.2) and $A\odot B:=\\{ab\colon a\in A,b\in B\\}$. Let
$\langle H\rangle$ be the substructure of $(\mathcal{P}H,\oplus,\odot)$
generated by the singletons of elements of $H$. Then $\langle H\rangle$ is a
semiring. We will refer to $\langle H\rangle$ as the semiring associated to $H$.
Using our classification, we can easily determine all such associated
semirings. Surprisingly, some of the basic examples have already been
intensively studied and play an important role in the foundations of tropical
geometry. In each case, we find that $\langle H\rangle$ contains only a few
elements in addition to the singletons of elements of $H$.
We have seen that any doubly distributive hyperfield has the form
$F\rtimes_{H,\psi}G$, where $F$ is the Krasner hyperfield, the sign hyperfield
or a field. We divide into cases according to the value of $F$.
### B.1. Supertropical semirings
If $F$ is the Krasner hyperfield then $\psi\colon H^{\times}\to G$ is an
isomorphism, and we can take it to be the identity. Then the elements of
$\langle H\rangle$ are the singletons of elements of $H$ and the sets
$g^{\nu}:=\\{h\in G\colon h\leq g\\}\cup\\{0\\}$. To simplify the definition
of the addition we define an operation $\nu$ on $\langle
H\rangle\setminus\\{\\{0\\}\\}$ by $\nu(\\{g\\})=\nu(g^{\nu})=g^{\nu}$ and we
transfer the total order of $G$ to the $g^{\nu}$ in the obvious way. Then
addition is given by $x\oplus\\{0\\}=x$ for any $x$ and otherwise by
$x\oplus y=\begin{cases}x&\text{if $\nu(x)>\nu(y)$,}\\\ y&\text{if
$\nu(x)<\nu(y)$,}\\\ \nu(x)&\text{if $\nu(x)=\nu(y)$.}\\\ \end{cases}$
Multiplication is given by $x\odot\\{0\\}=\\{0\\}$, by
$\\{g\\}\odot\\{h\\}=\\{g\cdot h\\}$, by $\\{g\\}\odot h^{\nu}=(g\cdot
h)^{\nu}$ and by $g^{\nu}\odot h^{\nu}=(g\cdot h)^{\nu}$. In the case that $G$
is the ordered group of real numbers, this is simply the supertropical
semiring introduced by Izhakian in [Izh09]. This associated semiring has also
been studied by Rowen in [Row16]. It would be reasonable to call such
semirings in general supertropical semirings.
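A small Python sketch (ours) of this supertropical arithmetic, with $G$ the ordered additive group of real numbers (so the group operation of $G$ is written $+$), encoding a singleton $\\{g\\}$ as `(g, False)`, a ghost element $g^{\nu}$ as `(g, True)`, and $\\{0\\}$ as `None`:

```python
def st_add(x, y):
    """Supertropical addition: compare nu-values, collapse ties to ghosts."""
    if x is None:
        return y                      # x (+) {0} = x
    if y is None:
        return x
    (g, ghost_x), (h, ghost_y) = x, y
    if g > h:
        return (g, ghost_x)
    if g < h:
        return (h, ghost_y)
    return (g, True)                  # nu(x) = nu(y): the result is g^nu

def st_mul(x, y):
    """Supertropical multiplication: ghosts absorb; G is written additively."""
    if x is None or y is None:
        return None
    (g, ghost_x), (h, ghost_y) = x, y
    return (g + h, ghost_x or ghost_y)

assert st_add((3.0, False), (3.0, False)) == (3.0, True)
assert st_mul((2.0, False), (1.0, True)) == (3.0, True)
```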
### B.2. Symmetrised $(\max,+)$-semirings
If $F$ is the sign hyperfield then by Theorem 4.17 without loss of generality
it arises from a short exact sequence
$1\to\mathbb{S}^{\times}\to\mathbb{S}^{\times}\times G\to G\to 1\,.$
The elements of $\langle H\rangle$ then have the form $0:=\\{0_{H}\\}$,
$\oplus g:=\\{(1,g)\\}$, $\ominus g:=\\{(-1,g)\\}$, or
$g^{\circ}:=\\{(i,h)\colon i\in\mathbb{S}^{\times},h\leq g\\}\cup\\{0_{H}\\}$.
There is an obvious projection map $\pi$ from $\langle
H\rangle\setminus\\{0\\}$ to $G$. Then addition is given by $x\oplus 0=x$ for
any $x$, by $x\oplus y=x$ if $\pi(x)>\pi(y)$, by $x\oplus g^{\circ}=g^{\circ}$
if $\pi(x)=g$, by $(\oplus g)\oplus(\oplus g)=\oplus g$, by $(\ominus
g)\oplus(\ominus g)=\ominus g$ and by $(\oplus g)\oplus(\ominus g)=g^{\circ}$.
Multiplication is given by $x\odot 0=0$ for any $x$, by $x\odot
g^{\circ}=(\pi(x)\cdot g)^{\circ}$, by $(\oplus g)\odot(\oplus
h)=\oplus(g\cdot h)$, by $(\ominus g)\odot(\ominus h)=\oplus(g\cdot h)$ and by
$(\oplus g)\odot(\ominus h)=\ominus(g\cdot h)$.
In the case that $G$ is the ordered group of real numbers, this is simply the
symmetrised $(\max,+)$-semiring introduced by Akian et al. in [ACG+91]. So it
would be reasonable to call such semirings in general symmetrised
$(\max,+)$-semirings.
### B.3. Linearised $(\max,+)$-semirings
If $F$ is a field, then the elements of $\langle H\rangle$ are the singletons
of elements of $H$ (which are in canonical bijection with $H$) and the sets
$\psi^{-1}(g\downarrow)\cup\\{0\\}$ (which are in canonical bijection with
$G$). So $\langle H\rangle$ is isomorphic to the semiring on $H\cup G$ with
$x\oplus y$ for $x,y\in H$ given by the unique element of $x\boxplus y$ if
this set is a singleton and by $\psi(x)$ otherwise, with $x\oplus g$ for $x\in
H$ and $g\in G$ given by $x$ if $\psi(x)\geq g$ and by $g$ otherwise, and with
$g\oplus h$ for $g,h\in G$ given by $\max(g,h)$. For multiplication, $x\odot
y=x\cdot y$ for $x,y\in H$ and $x\odot g=\psi(x)\cdot g$ for $x\in H$ and
$y\in G$ and finally $g\odot h=g\cdot h$ for $g,h\in G$.
By analogy to the previous construction, we could refer to such semirings as
linearised $(\max,+)$-semirings. So far as we know, such semirings have not
yet been seriously investigated.
## References
* [ACG+91] Marianne Akian, G. Cohen, S. Gaubert, Ramine Nikoukhah, and J.P. Quadrat. Linear systems in (max, +) algebra. In Proceedings of the 29th IEEE Conference on Decision and Control, pages 151–156 vol. 1, 1991.
* [AD19] Laura Anderson and James F. Davis. Hyperfield Grassmannians. Adv. Math., 341:336–366, 2019.
* [BB16] Matthew Baker and Nathan Bowler. Matroids over hyperfields. arXiv:1601.01204, 2016.
* [BB19] Matthew Baker and Nathan Bowler. Matroids over partial hyperstructures. Adv. Math., 343:821–863, 2019.
* [BP19] Nathan Bowler and Rudi Pendavingh. Perfect matroids over hyperfields. arXiv:1908.03420, 2019.
* [DG73] Justus Diller and Jochen Grenzdörffer. $G$-Hüllen metrischer Teilräume. Math. Ann., 200:151–164, 1973.
* [Dre77] A. Dress. On orderings and valuations of fields. Geometriae Dedicata, 6(3):259–266, 1977.
* [DW92] Andreas W. M. Dress and Walter Wenzel. Valuated matroids. Adv. Math., 93(2):214–250, 1992.
* [FS01] László Fuchs and Luigi Salce. Modules over non-Noetherian domains, volume 84 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001.
* [GJL17] Jeffrey Giansiracusa, Jaiung Jun, and Oliver Lorscheid. On the relation between hyperrings and fuzzy rings. Beitr. Algebra Geom., 58(4):735–764, 2017.
* [Izh09] Zur Izhakian. Tropical arithmetic and matrix algebra. Communications in Algebra, 37(4):1445–1468, 2009.
* [Kra57] Marc Krasner. Approximation des corps valués complets de caractéristique $p\not=0$ par ceux de caractéristique $0$. In Colloque d’algèbre supérieure, tenu à Bruxelles du 19 au 22 décembre 1956, Centre Belge de Recherches Mathématiques, pages 129–206. Établissements Ceuterick, Louvain; Librairie Gauthier-Villars, Paris, 1957.
* [Kra83] Marc Krasner. A class of hyperrings and hyperfields. Internat. J. Math. Math. Sci., 6(2):307–311, 1983.
* [Mar06] M. Marshall. Real reduced multirings and multifields. J. Pure Appl. Algebra, 205(2):452–468, 2006.
* [Pen18] Rudi Pendavingh. Field extensions, derivations, and matroids over skew hyperfields. arXiv:1802.02447, 2018.
* [Row16] Louis Halle Rowen. Algebras with a negation map. arXiv:1602.00353, 2016.
* [Vir10] Oleg Viro. Hyperfields for tropical geometry I. Hyperfields and dequantization. arXiv:1006.3034v2, 2010.
* [Vir11] O. Ya. Viro. On basic concepts of tropical geometry. Tr. Mat. Inst. Steklova, 273(Sovremennye Problemy Matematiki):271–303, 2011.
|
2024-09-04T02:54:58.201400 | 2020-03-08T19:28:53 | 2003.03837 | {
"authors": "Lucileide M. D. da Silva, Maria G. F. Coutinho, Carlos E. B. Santos,\n Mailson R. Santos, Luiz Affonso Guedes, M. Dolores Ruiz, Marcelo A. C.\n Fernandes",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26105",
"submitter": "Marcelo Fernandes",
"url": "https://arxiv.org/abs/2003.03837"
} | arxiv-papers | # Hardware Architecture Proposal for TEDA algorithm to Data Streaming Anomaly
Detection
Lucileide M. D. da Silva<EMAIL_ADDRESS>Maria G. F. Coutinho
<EMAIL_ADDRESS>Carlos E. B. Santos<EMAIL_ADDRESS>Mailson
R. Santos<EMAIL_ADDRESS>Luiz Affonso Guedes<EMAIL_ADDRESS>M. Dolores Ruiz<EMAIL_ADDRESS>Marcelo A. C. Fernandes (present
address: John A. Paulson School of Engineering and Applied Sciences, Harvard
University, Cambridge, MA 02138, USA)<EMAIL_ADDRESS>Laboratory of
Machine Learning and Intelligent Instrumentation, Federal University of Rio
Grande do Norte, Natal 59078-970, Brazil. Federal Institute of Education,
Science and Technology of Rio Grande do Norte, Paraiso, Santa Cruz, RN,
59200-000, Brazil. Department of Statistics and Operations Research,
University of Granada, Spain. Department of Computer Engineering and
Automation, Federal University of Rio Grande do Norte, Natal, RN, 59078-970,
Brazil.
###### Abstract
The amount of real-time data, such as time series and streaming data,
available today continues to grow. Being able to analyze this data the moment
it arrives can bring immense added value, but doing so requires considerable
computational effort and new acceleration techniques. As a possible solution
to this problem, this paper proposes a hardware architecture for the
Typicality and Eccentricity Data Analytic (TEDA) algorithm, implemented on
Field Programmable Gate Arrays (FPGAs), for use in data streaming anomaly
detection. TEDA is based on a new approach to outlier detection in the data
stream context. To validate the proposal, occupation and throughput results
for the proposed hardware are presented, along with bit-accurate simulation
results. The design targets a Xilinx Virtex-6 xc6vlx240t-1ff1156 FPGA.
###### keywords:
FPGA, TEDA, data streaming, reconfigurable computing
## 1 Introduction
Outlier detection, or anomaly detection, consists in detecting rare events in
a data set. It is a central problem in many application areas such as time
series forecasting, data mining and industrial process monitoring. Due to the
increasing number of sensors in the most diverse areas and applications, there
has been a huge rise in the availability of time series data. Thus, outlier
detection for temporal data has become a central problem [1], especially when
data are captured and processed continuously, in an online way. In this case,
the data are considered as data streams [2].
Some important aspects need to be considered when choosing an anomaly
detection method, such as the computational effort required to handle large
streaming data, since the received information needs to be stored and analyzed
without compromising memory and run-time. Many of the solutions presented in
the literature require prior knowledge of the process and system, such as
mathematical models, data distributions, and predefined parameters [3].
Anomaly detection is traditionally done through statistical analysis, using
probability and making a series of initial assumptions that in most cases do
not hold in practice.
A disadvantage of the traditional statistical method is that it compares a
single point with the average of all points rather than with individual
samples or data pairs, so the information loses its pointwise, local
character. Moreover, probability theory was developed from examples where
processes and variables are purely random; real processes, however, are not
purely random and show dependency between samples. In addition, traditional
approaches often address real problems offline, where the entire data set
needs to be known: all samples must be available before the algorithm starts
executing [4], making them impossible to use in real-time and data stream
applications. This type of data presents new technical challenges and
opportunities in new fields of work. Detecting anomalies in real time can
provide valuable information in critical scenarios, but it is a
computationally demanding problem that still lacks reliable solutions capable
of providing high processing capabilities.
Typicality and Eccentricity Data Analytic (TEDA) is based on a new approach to
outlier detection in the data stream context [5] and can be applied, for
example, in algorithms that detect anomalous behavior in industrial process
operation. TEDA analyzes the density of each data sample read, calculated
according to the distance from that sample to the other samples previously
read. It is an online algorithm that learns autonomously, without the need for
prior knowledge about the process or predefined parameters. Therefore, the
computational effort required is smaller, allowing its use in real-time
applications [3].
TEDA can be used as an alternative statistical framework for analyzing most
data, except for purely random processes. It relies on new metrics, all based
on the similarity/proximity of data in the data space, rather than on density
or entropy as in traditional methods. The metrics used with TEDA are
typicality, defined in [5] as the extent to which an object is a “good
example” of a concept, and eccentricity, defined as how distinct the object is
from the rest of the group. A sample with high eccentricity has low typicality
and is usually an outlier [3].
Eccentricity can be very useful for anomaly detection, image processing, fault
detection, particle physics, and other areas. It allows per-sample analysis
(which can also be done in real time for data streams) [6]. It is also
relevant in clustering processes, since elements of a cluster are naturally
opposed to atypical ones [5].
Another area where anomaly detection has been increasingly used is Industry
4.0 projects. One of the challenges of Industry 4.0 is the detection of
production failures and defects [7]. New technologies aim to add value and
increase process productivity, but face difficulties in performing complex,
massive-scale computing due to the large amount of data generated [8]. The
huge accumulation of real-time data flowing through a network, for example,
can quickly overload traditional computing systems, given the large amount of
data originating from sensors and the requirement for intensive,
high-performance processing. The development of specialized hardware presents
itself as a possible solution to overcome these bottlenecks, making it
possible to create solutions for mass data processing that, at the same time,
meet ultra-low-latency, low-power, high-throughput, security, and
ultra-high-reliability conditions, important requirements for increasing
productivity and quality in Industry 4.0 processes.
In light of these challenges, this work proposes a specialized hardware
architecture of TEDA for anomaly detection. A hardware implementation allows
systems to be made even faster than their software counterparts, extending
their use to situations where time constraints are even more severe and to
applications with large amounts of data to process. The works [9, 10, 11, 12,
13] were developed in hardware, specifically on FPGAs, for the acceleration of
complex algorithms. The development of machine learning algorithms in hardware
has grown significantly, justified by the performance gains in system sampling
times compared to software equivalents. One of the motivations for this work
is the possibility of accelerating the TEDA algorithm so that it can handle
large data flows, as in streaming and real-time scenarios.
In this work, all validation and synthesis results were obtained using a
Virtex-6 xc6vlx240t-1ff1156 FPGA, chosen for its high performance. Modern
FPGAs can deliver performance and density comparable to Application Specific
Integrated Circuits (ASICs), without the disadvantage of long development
times, while still allowing reprogramming thanks to their flexible
architecture.
The rest of this paper is organized as follows: this first section has
presented an introduction to the work, explaining the motivation behind it and
its major contributions. Section 2 discusses related works and the state of
the art. Section 3 presents the theoretical foundation of the TEDA technique.
Section 4 presents the implementation details of the proposed architecture.
Section 5 presents the validation and synthesis results of the proposed
hardware, as well as comparisons with software implementations. Finally,
Section 6 presents the conclusions drawn from the obtained results.
## 2 Related work
Real-time anomaly detection in data streams has potential applications in many
areas, such as preventive maintenance, fault detection, fraud detection and
signal monitoring, among others. These concepts can be used across many
sectors of industry, such as information technology, finance, medicine,
security, energy, e-commerce, agriculture and social media. In the literature
there are several uses of the TEDA technique for anomaly detection and even
for classification.
The article presented in [6] proposes a new TEDA-based anomaly detection
algorithm. The proposed method, called $\sigma$-gap by the author, combines
the accumulated proximity information for all samples with the comparison of
specific point pairs suspected of being anomalies, using local spatial
distribution information about the vicinity of the suspect point. In that
work, TEDA is compared to an approach using traditional statistical methods,
emphasizing that the set of initial assumptions is different. TEDA was shown
to be a generalization of traditional statistics, compared against the
well-known $n\sigma$ analysis, a widely used principle for threshold anomaly
detection. The same result was obtained for both approaches, although TEDA
does not need the initial assumptions. In addition, for various types of
proximity measures (such as Euclidean, cosine and Mahalanobis distances), it
was shown that, due to its recursive nature, TEDA is computationally more
efficient and suitable for online and real-time applications.
In [14], a study is presented on the use of TEDA for fault detection in
industrial processes. The work pioneered the use of this approach on real
industry data. For the experiments, TEDA was applied online to the dataset
provided by the DAMADICS (Development and Application of Methods for Actuator
Diagnosis in Industrial Control Systems) benchmark, one of the most widely
used benchmarks in fault detection and diagnosis applications. The experiments
showed good results in both accuracy and execution time, which demonstrates
the suitability of this approach for real-time industrial applications.
Finally, it was found that the TEDA algorithm is capable of dealing with the
limitations and particularities of the industrial environment.
The paper [15] is intended to enable the use of TEDAClass, which consists of
the TEDA algorithm applied to classification, in big data processing. The main
feature of the proposed algorithm, called TEDAClassBDp, is block-based data
processing, where each block runs TEDAClass and all blocks operate in
parallel. As with TEDAClass, the proposed algorithm does not require prior
information about the data, and it operates recursively, online and in real
time. The results indicated a reduction in time and computational complexity
without significantly compromising accuracy, which indicates the strong
possibility of using the proposal in problems where large volumes of data must
be processed quickly.
The work presented in [16] proposes a new data analysis tool that is not based
on frequency or density, classified by the author as a further development of
TEDA and an effective alternative to the probability distribution function
(pdf). The Typicality Distribution Function (TDF) can provide valuable
information for extreme process analysis and fault detection and
identification, where the number of extreme event or fault observations is
often disproportionately small. The proposal offers a closed, non-parametric,
analytical (quadratic) description extracted from the actual realizations of
the data, in contrast to the usual practice in which these distributions are
assumed or approximated. In addition, for various types of proximity and
similarity measures (such as Euclidean, Mahalanobis, and cosine distances), it
can be recursively calculated, and is thus computationally efficient and
suitable for online and real-time algorithms. As a fundamental theoretical
innovation, the application areas of TDF and TEDA can range from anomaly
detection, clustering, classification, prediction and control to regression
filtering (similar to Kalman filtering). Practical applications may be even
broader, so it is difficult to list them all.
The paper [3] proposes the application of TEDA to fault detection in
industrial processes. The effectiveness of the proposal was demonstrated on
two real industrial plants using data streams, and compared with traditional
fault detection methods. The first application uses the well-known DAMADICS
database, which provides real data on the water evaporation process of an
operating plant of a Polish sugar factory. The second application analyzes
data from a pilot plant in the authors' university laboratory, equipped with
real industrial instruments used for process control.
The work [4] presents a new unsupervised fuzzy classifier capable of
aggregating the main characteristics of evolving classifiers, as well as
performing fuzzy classification of real-time data streams fully online. The
proposed algorithm uses TEDA concepts, replacing traditional clusters with
data clouds, i.e., granular structures without predefined shape or boundaries.
For data classification, the approach uses soft labeling rather than mutually
exclusive classes. Experiments performed on data obtained from different
operational failures of a real industrial plant showed very significant
results for both unsupervised and semi-supervised learning, requiring minimal
human interaction.
The manuscript [2] introduces a new anomaly detection algorithm based on an
online sequence memory algorithm called Hierarchical Temporal Memory (HTM).
The performance of the proposed algorithm was evaluated and compared with a
set of real-time anomaly detection algorithms. The comparative analysis was
performed using the Numenta Anomaly Benchmark (NAB) [17], a benchmark of real
streaming data, as a way to evaluate anomaly detection algorithms for data
streams.
The paper [18] presents a study on anomaly detection in TCP/IP networks. Its
purpose is to detect computer network anomalies during the live migration of
virtual machines (VMs) from local infrastructure to the cloud, comparing TEDA
against K-Means clustering and static analysis. The authors used the tuple -
source IP, destination IP, source port, and destination port - to create a
signature process and validate errors, including those of traffic flows hidden
in the legitimate network. Testing was performed on the dataset of the SECCRIT
(SEcure Cloud Computing for CRitical Infrastructure IT -
http://www.seccrit.eu) project, which allows anomalies and environmental
attacks to be analyzed under Live Migration and other background traffic
conditions. The results demonstrate that the proposed method can automatically
and successfully detect anomalies in attacks, network port scans (NPS), and
network scans (NS). A major difficulty is distinguishing a high-volume attack
from, for example, a denial-of-service (DoS) attack. Accuracy and
false-negative rates were computed to compare K-Means with the proposed
solution, with TEDA achieving better rates in almost all measurements
performed.
As the amount of data to be processed grows exponentially, autonomous systems
become increasingly important and necessary, and hardware implementations of
machine learning and streaming algorithms have been studied in the literature.
The work [19] describes how to use run-time reconfiguration on FPGAs to
improve the efficiency of streaming data transmission over a shared
communication channel with real-time applications. The proposed reconfigurable
architecture consists of two subsystems: the reconfiguration subsystem, which
runs the modules, and the scheduling subsystem, which controls which modules
are loaded into the reconfiguration subsystem.
Moreover, several works in the literature have studied fault and anomaly
detection in hardware. In [20], target and anomaly detection algorithms for
real-time hyperspectral imaging were implemented on FPGA. The algorithms were
implemented in a streaming fashion, similarly to this work. The results,
obtained on a Kintex-7 FPGA using a fixed-point structure, were very
satisfactory and demonstrated that the implementation can be used in different
detection circumstances. The work [21] studied the impact of neural network
architectures, compared to statistical methods, in an FPGA implementation of
an electrocardiogram (ECG) anomaly detection algorithm. The fixed-point
implementation helped reduce the amount of required resources; however, the
design was made with High-Level Synthesis (HLS), which could not fully
optimize the FPGA resource consumption. Regarding the TEDA algorithm, no
studies exploring its hardware implementation on FPGA had been identified in
the literature by the time this paper was written, which this work proposes to
accomplish in a pioneering manner.
## 3 TEDA
TEDA was introduced in [22] as a statistical framework influenced by recursive
density estimation algorithms. However, unlike algorithms that use data
density as a measure of similarity, TEDA uses the concepts of typicality and
eccentricity to infer whether a given sample is normal or abnormal with
respect to the dataset. The methodology used in TEDA does not require prior
information about the data, and can be applied to problems involving fault
detection, clustering, classification, among others [22].
TEDA is a data-structure-based anomaly detection algorithm that aims to
generalize and avoid the well-known but very restrictive initial conditions
inherent to traditional statistics and probability theory [23]. The TEDA
approach has some advantages over traditional statistical anomaly detection
methods: its recursive formulation allows it to handle large volumes of data,
such as data streams, online and with low computational cost, enabling faster
processing.
TEDA's main features include [6]:
  * 1.
It is entirely based on the data and their distribution in the data space;
  * 2.
No prior assumptions are made;
  * 3.
Limits and parameters do not need to be pre-specified;
  * 4.
No sample independence is required;
  * 5.
An infinite number of observations is not required.
In TEDA, typicality is the similarity of a given data sample to the rest of
the samples of the dataset to which it belongs. Eccentricity, on the other
hand, is the opposite of typicality, indicating how much a sample is
dissociated from the other samples in its set. Thus, an outlier can be defined
as a sample with high eccentricity and low typicality, considering an
established comparison threshold. It is important to note that the
eccentricity and typicality calculations themselves require no parameters or
thresholds.
To calculate the eccentricity of each sample, TEDA uses the sum of the
geometric distances between the analyzed sample $\bm{x}_{k}$ and the other
samples in the set. The higher this value, the greater the eccentricity of the
sample and, consequently, the lower its typicality. [6] proposed calculating
the eccentricity recursively. Thus, the eccentricity $\xi$ can be expressed as
$\xi_{k}(x)=\frac{1}{k}+\frac{(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})}{k[\sigma^{2}]^{x}_{k}},[\sigma^{2}]^{x}_{k}>0$
(1)
where $k$ is the discretization instant; $\bm{x}_{k}$ is an input vector of
$N$ elements at the $k$-th iteration,
$\bm{x}_{k}=[x_{k}^{1}\ x_{k}^{2}\ ...\ x_{k}^{N}]$; $\bm{\mu}^{x}_{k}$ is
also an $N$-element vector, equal to the average of $\bm{x}_{k}$ at the $k$-th
iteration; and $[\sigma^{2}]^{x}_{k}$ is the variance of $\bm{x}_{k}$ at the
$k$-th iteration. The calculation of $\bm{\mu}^{x}_{k}$ and
$[\sigma^{2}]^{x}_{k}$ is also done recursively, using the following equations
$\bm{\mu}^{x}_{k}=\frac{(k-1)}{k}\bm{\mu}^{x}_{k-1}+\frac{1}{k}\bm{x}_{k},\
k\geq 1,\ \bm{\mu}^{x}_{0}=0$ (2)
and
$[\sigma^{2}]^{x}_{k}=\frac{(k-1)}{k}[\sigma^{2}]^{x}_{k-1}+\frac{1}{k}\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2},\
k\geq 1,\ [\sigma^{2}]^{x}_{0}=0.$ (3)
The typicality of a given sample $\bm{x}_{k}$ at the $k$-th iteration can be
expressed as the complement of eccentricity [6], as follows
$\tau_{k}(x)=1-\xi_{k}(x).$ (4)
In addition, [6] also defined that the normalized eccentricity can be
calculated as
$\zeta_{k}(x)=\frac{\xi_{k}(x)}{2},\quad\sum^{k}_{i=1}\zeta_{k}(x_{i})=1,\ k\geq 2.$ (5)
To separate normal-state data from abnormal-state data, a comparison threshold
must be defined. For anomaly detection, the $m\sigma$ threshold [24] is widely
used. However, this principle requires assuming a distribution for the
analyzed data, such as the Gaussian distribution [6]. The Chebyshev
inequality, in contrast, can be used for any data distribution, stating that
the probability that a data sample lies more than $m\sigma$ from the average
is less than or equal to $1/m^{2}$, where $\sigma$ is the standard deviation
of the data [25].
The condition that produces the same results as the Chebyshev inequality,
while discarding any assumptions about the data and their independence, can be
expressed as [6]
$\zeta_{k}>\frac{m^{2}+1}{2k},\ m>0$ (6)
where $m$ corresponds to the comparison threshold.
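As a concrete illustration (ours), taking $m=3$ in Equation 6 yields
$\zeta_{k}(x)>\frac{3^{2}+1}{2k}=\frac{5}{k},$
which is precisely the threshold curve $5/k$ used in the validation
experiments of Section 5: a sample is flagged as an outlier whenever its
normalized eccentricity exceeds $5/k$, a bound that becomes tighter as more
samples are observed.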
For a better understanding of the technique implemented in hardware in this
work, Algorithm 1 details the operation of TEDA, based on the equations
presented above.
Input: $\mathbf{x}_{k}$: $k$-th sample; $m$: threshold
Output: outlier: sample classification as abnormal or normal
1 begin
2   while _receive $\mathbf{x}_{k}$_ do
3     if _k=1_ then
4       $\bm{\mu}^{x}_{k}\leftarrow\mathbf{x}_{k}$;
5       $[\sigma^{2}]^{x}_{k}\leftarrow 0$;
6     else
7       update $\bm{\mu}^{x}_{k}$ using equation 2;
8       update $[\sigma^{2}]_{k}^{x}$ using equation 3;
9       update $\xi_{k}(x)$ using equation 1;
10      update $\zeta_{k}(x)$ using equation 5;
11      if _$\zeta_{k}(x)>\frac{m^{2}+1}{2k}$_ then
12        $outlier\leftarrow true$;
13      else
14        $outlier\leftarrow false$;
15    $k\leftarrow k+1$;
Algorithm 1 TEDA
As presented in Algorithm 1, only the input data samples $\bm{x}_{k}$ and a
comparison threshold $m$ are used as inputs to the algorithm. The output for
each input $\bm{x}_{k}$ is the classification of the sample as abnormal
(outlier = true) or normal (outlier = false).
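To complement the pseudocode, the listing below is a minimal software sketch
of the same recursion in Python, the language of the reference implementation
used for validation in Section 5. It is our illustrative rendering of
Equations 1-6, not the code used in this work; the name `teda_stream` and the
encoding choices are ours.

```python
import numpy as np

def teda_stream(samples, m=3.0):
    """Classify each sample of a stream as outlier (True) or normal (False),
    following Algorithm 1 and Equations 1-6."""
    mu, var, k = None, 0.0, 1
    for x in samples:
        x = np.asarray(x, dtype=float)
        if k == 1:
            mu, var = x.copy(), 0.0            # first iteration: initialize
            outlier = False                    # no reference data yet
        else:
            mu = (k - 1) / k * mu + x / k                   # Equation 2
            diff = x - mu
            var = (k - 1) / k * var + (diff @ diff) / k     # Equation 3
            if var > 0:
                ecc = 1 / k + (diff @ diff) / (k * var)     # Equation 1
            else:
                ecc = 1 / k        # degenerate case: zero variance so far
            zeta = ecc / 2                                  # Equation 5
            outlier = zeta > (m ** 2 + 1) / (2 * k)         # Equation 6
        yield outlier
        k += 1
```

Calling `list(teda_stream(data, m=3))` on a sequence of $N$-dimensional
vectors returns one Boolean per sample, mirroring the outlier flag produced by
the OUTLIER module of the hardware architecture described next.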
## 4 Implementation description
In this work, a TEDA FPGA architecture was implemented at Register Transfer
Level (RTL), following works such as [9, 10, 11, 12, 13]. The following
sections present the characteristics of the proposal, as well as details
regarding its processing time. A design overview can be seen in Figure 1.
### 4.1 Architecture proposal overview
As illustrated in Figure 1, the proposed TEDA implementation has four distinct
block structures: the MEAN module, which implements the average described in
Equation 2; the VARIANCE module, responsible for calculating the variance as
presented in Equation 3; the ECCENTRICITY module, which calculates the
eccentricity as presented in Equation 1; and the OUTLIER module, a block used
to normalize the eccentricity as in Equation 5 and compare it with the
threshold, as shown in Equation 6. The architecture was developed to pipeline
the operations presented in Algorithm 1 in order to decrease the TEDA
processing time. Thus, the outputs of the ECCENTRICITY and OUTLIER modules are
delayed by one clock cycle with respect to the VARIANCE module and by two with
respect to the MEAN module, and the VARIANCE module is delayed by one clock
cycle with respect to the MEAN module. Each of the modules is detailed in the
following sections.
The implementation takes Algorithm 1 as reference. The system receives the
FPGA clock and the $k$-th sample vector $\mathbf{x}_{k}$ as inputs. The $k$-th
iteration number is updated by incrementing a counter, and the threshold $m$
is used as a constant stored in the OUTLIER module. Following the mean-update
step of Algorithm 1 (Equation 2), the MEAN module computes the average of each
single element of the $\mathbf{x}_{k}$ vector. Note that there are $N$ MEAN
blocks, where $N$ is the vector size; this block is detailed in Section 4.2.
Next, the variance (Equation 3) is computed in the VARIANCE module, detailed
in Section 4.3. The ECCENTRICITY block takes as inputs the signals leaving the
VARIANCE block and $k$, implementing Equation 1, and is detailed in Section
4.4. The OUTLIER block, detailed in Section 4.5, receives the eccentricity
$\xi_{k}(x)$ and calculates the normalized eccentricity for comparison with
the threshold, as presented in Equations 5 and 6.
Figure 1: General architecture overview.
### 4.2 Module I - MEAN
Each $n$-th MEAN module computes the average of the $n$-th element of the
vector $\bm{x}_{k}$ acquired at run time. The implementation is based on
Equation 2 and is detailed in Figure 2. In addition to receiving the $n$-th
element of vector $\bm{x}_{k}$ as an input, the MEAN block uses a counter to
track the iteration number $k$. The implementation uses a comparator block,
identified in Figure 2 as MCOMPn, to verify whether the system is in the first
iteration, as in the initialization step of Algorithm 1. MMUXn is a
multiplexer that acts as a conditional evaluation, using the output of the
MCOMPn comparator as its select signal. The register MREGn stores the $n$-th
element of $\bm{\mu}^{x}_{k}$ ($\mu^{n}_{k}$). The $\mu^{n}_{k}$ value stored
in MREGn is multiplied by $\frac{k-1}{k}$ in MMULT1n and added in MSUMn to the
output of MMULT2n, which takes as inputs $x^{n}_{k}$ and the inverse of $k$.
Each $n$-th element of vector $\bm{x}_{k}$, $x^{n}_{k}$, requires one MEAN
block.
Figure 2: MEAN module.
### 4.3 Module II - VARIANCE
The VARIANCE module is illustrated in Figure 3. It computes the variance of
the $\bm{x}_{k}$ vector samples by receiving the $\bm{x}_{k}$ vector itself
and its average $\bm{\mu}^{x}_{k}$, calculated by the preceding MEAN blocks.
Like the MEAN module, the VARIANCE module uses a comparator, identified in
Figure 3 as VCOMP1, to verify whether the system is in the first iteration
(the initialization step of Algorithm 1). VMUX1 is a multiplexer that also
implements a conditional evaluation, releasing the value $0$ at the output of
register VREG1 in the first iteration. The register VREG1 stores the variance
value $[\sigma^{2}]^{x}_{k}$ from the second iteration onward. The other
registers in the block, VREG2 and the $N$ VREG$n$ registers, are used to delay
the iteration number $k$ and the elements of $\bm{x}_{k}$, respectively, by
one clock cycle.
Figure 3: VARIANCE module.
As shown in Equation 3, the variance is calculated recursively. To compute
$\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$, $N$ subtractors (VSUB$n$) and
$N$ multipliers (VMULT1_$n$) are used, together with an adder (VSUM1) with $N$
inputs. Each element of vector $\bm{\mu}^{x}_{k}$ is subtracted from its
respective element in vector $\bm{x}_{k}$, the result of this operation is
multiplied by itself (squared), and the squares are summed. The value
$\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ is then multiplied (in VMULT2) by
$1/k$ and added, in the VSUM2 adder, to the variance calculated in the
previous iteration, $[\sigma^{2}]^{x}_{k-1}$, multiplied (in VMULT3) by
$(k-1)/k$. From the second iteration onward, this value passes through the
VMUX1 multiplexer to the VREG1 register, delivering the variance value at the
VARIANCE block output. The values of
$\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ and $1/k$ are also delivered at
the output of the VARIANCE block to avoid redundant operations, since they are
reused by the next block, the ECCENTRICITY block.
### 4.4 Module III - ECCENTRICITY
The ECCENTRICITY module is simpler than the previous blocks because it reuses
operations already performed in the VARIANCE block. The geometric distance
$\left\|\bm{x}_{k}-\bm{\mu}_{k}\right\|^{2}$ (equivalent to
$(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})$) is stored in
register EREG3, and $1/k$ is stored in register EREG4. As the ECCENTRICITY
module is the hardware design of Equation 1 (the eccentricity update of
Algorithm 1), the variance $[\sigma^{2}]^{x}_{k}$ is multiplied by $k$
(EMULT1) and used to divide (EDIV1) the geometric distance
$(\bm{\mu}_{k}^{x}-\bm{x}_{k})^{T}(\bm{\mu}^{x}_{k}-\bm{x}_{k})$. The output
of this division is added to $1/k$ in the ESUM1 adder, producing the
eccentricity of the samples ($\xi_{k}(x)$) at the ECCENTRICITY block output.
Figure 4: ECCENTRICITY module.
### 4.5 Module IV - OUTLIER
Finally, in the OUTLIER block, the samples are classified as abnormal (outlier
= true) or normal (outlier = false). The module design can be seen in Figure
5. To classify the samples, the OUTLIER block normalizes the eccentricity by
dividing it (ODIV1) by a constant, as shown in Equation 5, and compares
(OCOMP1) this normalized eccentricity with the threshold, as shown in the
classification steps of Algorithm 1 (Equation 6). The registers OREG1 and
OREG2 are used to synchronize the iteration number $k$: since the modules act
as a pipeline, the operations carried out in the OUTLIER block (as well as in
the ECCENTRICITY module) are delayed by two clock cycles with respect to the
system input.
Figure 5: OUTLIER module.
### 4.6 Processing time
The proposed architecture has an initial delay, $d$, that can be expressed as
$d=3\times t_{c}$ (7)
where $t_{c}$ is the system critical path time.
The execution time of the circuit implemented for the TEDA algorithm is
determined by the system critical path time, $t_{c}$. Thus, after the initial
delay, the execution time of the proposed TEDA, $t_{TEDA}$, can be expressed as
$t_{TEDA}=t_{c}$ (8)
Thus, after every interval of $t_{TEDA}$, the output for one inserted sample
is obtained, that is, its classification as abnormal or normal.
The throughput of the implementation, $th_{TEDA}$, in samples per second (SPS)
can be expressed as
$th_{TEDA}=\frac{1}{t_{TEDA}}.$ (9)
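As a quick sanity check (ours, in Python), the processing-time figures
reported later in Table 4 follow directly from Equations 7-9 and the
synthesized critical path time:

```python
t_c = 138e-9              # critical path time from synthesis (138 ns)
d = 3 * t_c               # initial delay, Equation 7: 414 ns
t_teda = t_c              # per-sample execution time, Equation 8
th_teda = 1 / t_teda      # throughput, Equation 9: ~7.2 million SPS
print(f"d = {d * 1e9:.0f} ns, throughput = {th_teda / 1e6:.1f} MSPS")
```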
## 5 Results
This section presents the hardware validation and synthesis results for the
architecture proposed in this work. All cases were validated and synthesized
in floating point. The validation results were used to verify the hardware
functionality, while the synthesis results allow the system to be analyzed
with respect to parameters important for hardware design, such as hardware
occupation and processing time, considering factors such as throughput and
speedup.
### 5.1 Validation results
To validate the hardware architecture of the TEDA algorithm, we used the
DAMADICS (Development and Application of Methods for Actuator Diagnosis in
Industrial Control Systems) benchmark dataset [26]. The benchmark provides a
real dataset of the water evaporation process in a Polish sugar factory. The
plant has three actuators: a control valve, which controls the flow of water
in the pipes; a pneumatic motor, which controls variable valve openings; and a
positioner. The dataset contains faults at different times of day on specific
days, with four different fault types, as shown in Table 1.
Table 1: Fault types [26].
Fault | Description
---|---
f16 | Positioner supply pressure drop
f17 | Unexpected pressure change across the valve
f18 | Fully or partly opened bypass valves
f19 | Flow rate sensor fault
Artificial failures were introduced into the plant operation data on specific
days. The dataset contains a set of $19$ faults in these $3$ actuators. To
validate the architecture, failures of actuator $1$ were simulated. Table 2
shows a detailed description of some of the faults introduced for actuator $1$.
Table 2: List of artificial failures introduced to actuator 1 [26].
Item | Fault | Sample | Date | Description
---|---|---|---|---
1 | f18 | 58800-59800 | Oct 30, 2001 | Partly opened bypass valve
2 | f16 | 57275-57550 | Nov 9, 2001 | Positioner supply pressure drop
3 | f18 | 58830-58930 | Nov 9, 2001 | Partly opened bypass valve
4 | f18 | 58520-58625 | Nov 9, 2001 | Partly opened bypass valve
5 | f18 | 54600-54700 | Nov 17, 2001 | Partly opened bypass valve
6 | f16 | 56670-56770 | Nov 17, 2001 | Positioner supply pressure drop
7 | f17 | 37780-38400 | Nov 20, 2001 | Unexpected pressure drop across the valve
Figure 6 shows the results obtained for the signal of item 1 in Table 2.
Figure 6(a) illustrates the behavior of the two input variables simulated in
the hardware architecture ($\bm{x}_{k}=[x_{k}^{1}\ x_{k}^{2}]$). A failure
occurs between instants $k=58900$ and $k=59800$. In Figure 6(b), a sudden
change in the behavior of the eccentricity (black curve) can be observed,
surpassing the comparison threshold with $m=3$ (red curve).
(a) Fault item 1 - input vector $\bm{x}_{k}$.
(b) Fault item 1 - normalized eccentricity $\zeta_{k}(x)$ with $5/k$ $(m=3)$
threshold.
Figure 6: Detection of outliers in the dataset: Behavior of fault item 1.
Figure 7 shows the results obtained for the signal of item 7 in Table 2. As in
Figure 6, Figure 7(a) illustrates the behavior of the two elements of the
input $\bm{x}_{k}=[x^{1}_{k}\ x^{2}_{k}]$ in the hardware architecture, and
Figure 7(b) shows the change in eccentricity (black curve), surpassing the
comparison threshold (red curve), also with $m=3$. The failure occurs between
instants $k=37700$ and $k=38400$.
(a) Fault item 7 - input vector $\bm{x}_{k}$.
(b) Fault item 7 - normalized eccentricity $\zeta_{k}(x)$ with $5/k$ $(m=3)$
threshold.
Figure 7: Detection of outliers in the dataset: Behavior of fault item 7.
The validation results of the hardware architecture were compared with the
results obtained from a Python software implementation of the TEDA algorithm.
The hardware architecture was designed with a floating-point number format.
### 5.2 Synthesis results
After validating the implemented circuit, hardware synthesis was performed to
obtain the FPGA resource occupation report, as well as the critical path time
used to calculate the processing time of the proposed implementation. The
floating-point synthesis results were obtained for a Xilinx Virtex 6
xc6vlx240t-1ff1156 FPGA.
#### 5.2.1 Hardware occupation
Table 3 presents the hardware occupation of the implemented circuit on the
target FPGA. The first column shows the number of multipliers used, the second
column the number of registers, and the third column the number of logical
cells used as LUTs ($n_{LUT}$) throughout the circuit.
Table 3: Hardware occupation.
Multipliers | Registers | $n_{LUT}$
---|---|---
$27$ ($3$%) | $414$ ($<1$%) | $11{,}567$ ($7$%)
Analyzing the data presented in Table 3, it can be seen that, even using
floating-point precision, which demands more hardware resources than a
fixed-point implementation, only a small portion of the target FPGA resources
was occupied: about $3\%$ of the multipliers, less than $1\%$ of the
registers, and about $7\%$ of the logical cells used as LUTs. Hence, the
proposed circuit could also be deployed on low-cost FPGAs, where the amount of
available hardware resources is even smaller. In addition, multiple TEDA
modules could be run in parallel for anomaly detection on the same dataset, in
order to further reduce processing time.
#### 5.2.2 Processing time
Table 4 presents the processing time of the architecture implemented for the
TEDA technique (one complete pass of Algorithm 1, from the recursive updates
to the outlier classification). The first column indicates the circuit
critical path time, $t_{c}$; the second shows the initial delay, expressed by
Equation 7; the third the TEDA run time, expressed by Equation 8; and the last
the implementation throughput in samples per second (SPS), expressed by
Equation 9, i.e., the number of samples processed and classified (as normal or
outlier) by TEDA every second.
Table 4: Processing time.
Critical time | Delay | TEDA time | Throughput
---|---|---|---
$138\,\text{ns}$ | $414\,\text{ns}$ | $138\,\text{ns}$ | $7.2$ MSPS
The data presented in Table 4 are quite expressive. The circuit critical path
time, which also corresponds to the TEDA run time, was only
$t_{c}=138\,\text{ns}$. Thus, after the $414\,\text{ns}$ initial delay, the
output for one processed sample is obtained every $138\,\text{ns}$, which
guarantees a throughput of $7.2$ million classified samples per second. These
results indicate the feasibility of using the proposed design to handle large
data flows in real time.
### 5.3 Platforms comparison
To date, no previous work exploring hardware implementations of TEDA has been
found in the literature. Thus, this paper presents, for the first time, an
FPGA implementation of the TEDA technique. To verify the advantages of the
proposed hardware over implementations on other platforms, the FPGA processing
time was compared with the processing times of several software
implementations. Table 5 presents the results of these comparisons. The first
column indicates the platform used, the second presents the processing time
required to classify each sample, and the third the speedup achieved by the
proposal presented in this paper.
Table 5: Software implementations comparison.
Platform | Time | Speedup
---|---|---
This work proposal on FPGA | $138\,\text{ns}$ | $-$
Python (Colab without GPU) | $435\,\text{ms}$ | $3\text{,}000\text{,}000\times$
Python (Colab with Tesla K80 GPU) | $39.2\,\text{ms}$ | $280\text{,}000\times$
Python (Local execution with 940 MX GPU) | $23.1\,\text{ms}$ | $167\text{,}000\times$
The data presented in Table 5 reaffirm the importance of this work. The FPGA
implementation proposed here achieved speedups of up to $3$ million times
compared to a Python TEDA implementation using the Colab tool without GPU
processing. For the same Python implementation using the Colab tool with Tesla
K80 GPU processing, a speedup of $280$ thousand times was obtained. In
addition, when compared to a local Python implementation on an Intel(R)
Core(TM) i7-7500U with 16 GB of RAM and a GeForce 940 MX GPU, the FPGA
implementation was still $167$ thousand times faster. These results confirm
the advantages of accelerating the TEDA technique through the proposed FPGA
implementation.
## 6 Conclusion
This work presented a hardware implementation of the TEDA data streaming
anomaly detection technique. The hardware was implemented at RTL using a
floating-point format. Synthesis results were obtained for a Xilinx Virtex 6
xc6vlx240t-1ff1156 FPGA. The proposed implementation used a small portion of
the target FPGA resources, while allowing results to be obtained with short
processing times. The high speedups obtained in comparison with software
platforms reaffirm the importance of this work, which pioneers the hardware
implementation of the TEDA technique on FPGA. The proposed architecture is
feasible for practical fault detection applications in real industrial
processes with severe time constraints, as well as for handling large data
volumes, such as data streams, with low processing time.
## References
* [1] M. Gupta, J. Gao, C. C. Aggarwal, J. Han, Outlier detection for temporal data: A survey, IEEE Transactions on Knowledge and Data Engineering 26 (9) (2014) 2250–2267.
* [2] S. Ahmad, A. Lavin, S. Purdy, Z. Agha, Unsupervised real-time anomaly detection for streaming data, Neurocomputing 262 (2017) 134 – 147, online Real-Time Learning Strategies for Data Streams. doi:https://doi.org/10.1016/j.neucom.2017.04.070.
URL http://www.sciencedirect.com/science/article/pii/S0925231217309864
* [3] C. G. Bezerra, B. S. J. Costa, L. A. Guedes, P. P. Angelov, An evolving approach to unsupervised and real-time fault detection in industrial processes, Expert Systems with Applications 63 (2016) 134 – 144. doi:https://doi.org/10.1016/j.eswa.2016.06.035.
URL http://www.sciencedirect.com/science/article/pii/S0957417416303153
* [4] B. S. J. Costa, C. G. Bezerra, L. A. Guedes, P. P. Angelov, Unsupervised classification of data streams based on typicality and eccentricity data analytics, in: 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2016, pp. 58–63. doi:10.1109/FUZZ-IEEE.2016.7737668.
* [5] D. Osherson, E. E. Smith, On typicality and vagueness, Cognition 64 (2) (1997) 189 – 206. doi:https://doi.org/10.1016/S0010-0277(97)00025-5.
URL http://www.sciencedirect.com/science/article/pii/S0010027797000255
* [6] P. Angelov, Anomaly detection based on eccentricity analysis, in: 2014 IEEE Symposium on Evolving and Autonomous Learning Systems (EALS), 2014, pp. 1–8. doi:10.1109/EALS.2014.7009497.
* [7] P. Napoletano, F. Piccoli, R. Schettini, Anomaly detection in nanofibrous materials by cnn-based self-similarity, Sensors 18 (1). doi:10.3390/s18010209.
URL https://www.mdpi.com/1424-8220/18/1/209
* [8] I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S. Mokhtar, A. Gani, S. U. Khan, The rise of “big data” on cloud computing: Review and open research issues, Information Systems 47 (2015) 98 – 115. doi:https://doi.org/10.1016/j.is.2014.07.006.
URL http://www.sciencedirect.com/science/article/pii/S0306437914001288
* [9] M. G. F. Coutinho, M. F. Torquato, M. A. C. Fernandes, Deep neural network hardware implementation based on stacked sparse autoencoder, IEEE Access 7 (2019) 40674–40694. doi:10.1109/ACCESS.2019.2907261.
* [10] L. M. D. Da Silva, M. F. Torquato, M. A. C. Fernandes, Parallel implementation of reinforcement learning q-learning technique for fpga, IEEE Access 7 (2019) 2782–2798. doi:10.1109/ACCESS.2018.2885950.
* [11] M. F. Torquato, M. A. Fernandes, High-performance parallel implementation of genetic algorithm on fpga, Circuits, Systems, and Signal Processing 38 (9) (2019) 4014–4039.
* [12] F. F. Lopes, J. C. Ferreira, M. A. Fernandes, Parallel implementation on fpga of support vector machines using stochastic gradient descent, Electronics 8 (6) (2019) 631.
* [13] A. L. X. Da Costa, C. A. D. Silva, M. F. Torquato, M. A. C. Fernandes, Parallel implementation of particle swarm optimization on fpga, IEEE Transactions on Circuits and Systems II: Express Briefs 66 (11) (2019) 1875–1879. doi:10.1109/TCSII.2019.2895343.
* [14] B. S. J. Costa, C. G. Bezerra, L. A. Guedes, P. P. Angelov, Online fault detection based on typicality and eccentricity data analytics, in: 2015 International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–6. doi:10.1109/IJCNN.2015.7280712.
* [15] D. Kangin, P. Angelov, J. A. Iglesias, A. Sanchis, Evolving classifier tedaclass for big data, Procedia Computer Science 53 (2015) 9 – 18, iNNS Conference on Big Data 2015 Program San Francisco, CA, USA 8-10 August 2015. doi:https://doi.org/10.1016/j.procs.2015.07.274.
URL http://www.sciencedirect.com/science/article/pii/S1877050915017779
* [16] P. Angelov, Typicality distribution function — a new density-based data analytics tool, in: 2015 International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–8. doi:10.1109/IJCNN.2015.7280438.
* [17] A. Lavin, S. Ahmad, Evaluating real-time anomaly detection algorithms – the numenta anomaly benchmark, in: 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015, pp. 38–44. doi:10.1109/ICMLA.2015.141.
* [18] R. S. Martins, P. Angelov, B. Sielly Jales Costa, Automatic detection of computer network traffic anomalies based on eccentricity analysis, in: 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2018, pp. 1–8. doi:10.1109/FUZZ-IEEE.2018.8491507.
* [19] T. Ziermann, J. Teich, Adaptive traffic scheduling techniques for mixed real-time and streaming applications on reconfigurable hardware, in: 2010 IEEE International Symposium on Parallel Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010, pp. 1–4. doi:10.1109/IPDPSW.2010.5470738.
* [20] B. Yang, M. Yang, A. Plaza, L. Gao, B. Zhang, Dual-mode fpga implementation of target and anomaly detection algorithms for real-time hyperspectral imaging, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8 (6) (2015) 2950–2961. doi:10.1109/JSTARS.2015.2388797.
* [21] M. Wess, P. D. S. Manoj, A. Jantsch, Neural network based ecg anomaly detection on fpga and trade-off analysis, in: 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 2017, pp. 1–4. doi:10.1109/ISCAS.2017.8050805.
* [22] P. Angelov, Outside the box: an alternative data analytics framework, Journal of Automation Mobile Robotics and Intelligent Systems 8 (2) (2014) 29–35.
* [23] A. Plamen, Outside the box: an alternative data analytics framework, Journal of Automation, Mobile Robotics and Intelligent Systems 8 (2) (2014) 29–35.
* [24] A. Bernieri, G. Betta, C. Liguori, On-line fault detection and diagnosis obtained by implementing neural algorithms on a digital signal processor, IEEE Transactions on Instrumentation and Measurement 45 (5) (1996) 894–899. doi:10.1109/19.536707.
* [25] J. G. Saw, M. C. Yang, T. C. Mo, Chebyshev inequality with estimated mean and variance, The American Statistician 38 (2) (1984) 130–132.
* [26] E. F. R. T. Network, Damadics rtn information web site (2002).
URL http://diag.mchtr.pw.edu.pl/damadics/
# Strategic Abilities of Asynchronous Agents:
Semantic Side Effects and How to Tame Them
Wojciech Jamroga1,2 Wojciech Penczek1 Teofil Sidoruk1,3
1Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
2Interdisciplinary Centre on Security, Reliability and Trust, SnT, University
of Luxembourg
3Faculty of Mathematics and Information Science, Warsaw University of
Technology
{jamroga, penczek<EMAIL_ADDRESS>
###### Abstract
Recently, we have proposed a framework for verification of agents’ abilities
in asynchronous multi-agent systems (MAS), together with an algorithm for
automated reduction of models (?). The semantics was built on the modeling
tradition of distributed systems. As we show here, this can sometimes lead to
counterintuitive interpretation of formulas when reasoning about the outcome
of strategies. First, the semantics disregards finite paths, and yields
unnatural evaluation of strategies with deadlocks. Secondly, the semantic
representations do not allow us to capture the asymmetry between proactive agents
and the recipients of their choices. We propose how to avoid the problems by a
suitable extension of the representations and change of the execution
semantics for asynchronous MAS. We also prove that the model reduction scheme
still works in the modified framework.
## 1 Introduction
Modal logics of strategic ability. _Alternating-time temporal logic_
$\mathbf{ATL_{\mathrm{}}^{*}}$ (?; ?; ?) is probably the most popular logic to
describe interaction of agents in multi-agent systems. Formulas of
$\mathbf{ATL_{\mathrm{}}^{*}}$ allow to express statements about what agents
(or groups of agents) can achieve. For example,
$\langle\\!\langle{taxi}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{fatality}}$
says that the autonomous cab can drive in such a way that nobody is ever
killed, and
$\langle\\!\langle{taxi,passg}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{destination}}$
expresses that the cab and the passenger have a joint strategy to arrive at
the destination, no matter what any other agents do. Such statements allow to
express important functionality and safety requirements in a simple and
intuitive way. Moreover, they provide input to algorithms and tools for
verification of strategic abilities, which have been in constant development
for over 20 years (?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?; ?). Still,
there are two caveats.
First, all the realistic scenarios of agent interaction, that one may want to
specify and verify, involve imperfect information. That is, the agents in the
system do not always know exactly the global state of the system, and thus
they have to make their decisions based on their local view of the situation.
Unfortunately, verification of agents with imperfect information is hard to
very hard – more precisely, $\mathbf{{\Delta_{{2}}^{\mathbf{{P}}}}}$-complete
to undecidable, depending on the syntactic and semantic variant of the logic
(?; ?; ?). Also, the imperfect information semantics of
$\mathbf{ATL_{\mathrm{}}^{*}}$ does not admit alternation-free fixpoint
characterizations (?; ?; ?), which makes incremental synthesis of strategies
impossible, or at least difficult to achieve (?; ?; ?; ?; ?).
Secondly, the semantics of strategic logics is traditionally based on
synchronous concurrent game models. In other words, one implicitly assumes the
existence of a global clock that triggers subsequent global events in the
system; at each tick of the clock, all the agents choose their actions, and
the system proceeds accordingly with a global transition. However, many real-
life systems are inherently asynchronous, and do not operate on a global clock
that perfectly synchronizes the atomic steps of all the components. Moreover,
many systems that are synchronous at the implementation level can be more
conveniently modeled as asynchronous on a more abstract level. In many
scenarios, both aspects combine. For example, when modeling an anti-poaching
operation (?), one may take into account the truly asynchronous nature of
events happening in different national parks, but also the best level of
granularity for modeling the events happening within a single nature reserve.
Asynchronous semantics and partial-order reduction. We have recently proposed
how to adapt the semantics of $\mathbf{ATL_{\mathrm{}}^{*}}$ to asynchronous
MAS (?). We also showed that the technique of _partial order reduction (POR)_
(?; ?; ?; ?; ?; ?; ?) can be adapted to verification of strategic abilities in
asynchronous MAS. In fact, the (almost 30 years old) POR for linear time logic
$\mathbf{LTL}$ can be taken off the shelf and applied to a significant part of
$\mathbf{ATL^{*}_{\mathrm{ir}}}$, the variant of
$\mathbf{ATL_{\mathrm{}}^{*}}$ based on strategies with imperfect information
and imperfect recall. This is very important, as the practical verification of
asynchronous systems is often impossible due to the state- and transition-
space explosion resulting from interleaving of local transitions. POR allows
for a significant, sometimes even exponential, reduction of the models.
Semantic side effects. While the result is appealing, there is a sting in its
tail: the $\mathbf{ATL_{\mathrm{}}^{*}}$ semantics in (?) leads to
counterintuitive interpretation of strategic properties. First, it disregards
finite paths, and evaluates some intuitively losing strategies as winning (and
vice versa). Secondly, it provides a flawed interpretation of the _concurrency
fairness_ assumption. Thirdly, the representations and their execution
semantics do not allow to capture the asymmetry between the agents that
control which synchronization branch will be taken, and those influenced by
their choices. We tentatively indicated some of the problems in the extended
abstract (?). In this paper, we demonstrate them carefully, and propose how
they can be avoided.
Contribution. Our contribution is threefold. First, we discuss in detail the
semantic side effects of adding strategic reasoning on top of classical models
of concurrent systems (?). We identify the reasons, and demonstrate the
problematic phenomena on simple examples. Secondly, we show how to avoid these
pitfalls by extending the class of representations and slightly changing the
execution semantics of strategies. Specifically, we add “silent”
$\epsilon$-transitions in the models and on outcome paths of strategies, and
allow for nondeterministic choices in the agents’ repertoires. We also
identify a family of fairness-style conditions, suitable for the interaction
of proactive and reactive agents. No less importantly, we prove that partial
order reduction is still correct in the modified framework.
Motivation. The variant of $\mathbf{ATL_{\mathrm{}}^{*}}$ for asynchronous
systems in (?) was proposed mainly as a framework for formal verification.
This was backed by the results showing that it submits to partial order
reduction. However, a verification framework is only useful if it allows to
specify requirements in an intuitive way, so that the property we _think_ we
are verifying is indeed _the one being verified_. In this paper, we show that
this was not the case. We also propose how to overcome the problems without
spoiling the efficient reduction scheme. The solutions are not merely
technical. In fact, they lead to a better understanding of how strategic
activity influences the overall behavior of the system, and how it should be
integrated with the traditional models of asynchronous interaction.
## 2 Models of Multi-agent Systems
We first recall the models of asynchronous interaction in MAS, proposed in (?)
and inspired by (?; ?; ?).
### 2.1 Asynchronous Multi-agent Systems
In logical approaches to MAS, one usually assumes synchronous actions of all
the agents (?; ?). However, many agent systems are inherently asynchronous, or
it is useful to model them without assuming precise timing relationships
between the actions of different agents. As an example, consider a team of
logistic robots running in a factory (?). Often no global clock is available
to all the robots, and even if there is one, the precise relative timing for
robots operating in different places is usually irrelevant.
Such a system can be conveniently represented with a set of automata that
execute asynchronously by interleaving local transitions, and synchronize
their moves whenever a shared event occurs. The idea is to represent the
behavior of each agent by a finite automaton where the nodes and transitions
correspond, respectively, to the agent’s local states and the events in which
it can take part. Then, the global behavior of the system is obtained by the
interleaving of local transitions, assuming that, in order for a shared event
to occur, all the corresponding agents must execute it in their automata. This
motivates the following definition.
###### Definition 2.1 (Asynchronous MAS).
An _asynchronous multi-agent system (AMAS)_ S consists of $n$ agents
${\mathbb{A}\mathrm{gt}}=\\{{1,\dots,n}\\}$,111 We do not consider the
environment component, which may be added with no technical difficulty. each
associated with a tuple
$A_{i}=(L_{i},\iota_{i},\mathit{Evt}_{i},R_{i},T_{i}{,\mathcal{PV}_{i},V_{i}})$
including a set of _possible local states_
$L_{i}=\\{l_{i}^{1},l_{i}^{2},\dots,l_{i}^{n_{i}}\\}$, an _initial state_
$\iota_{i}\in L_{i}$, and a set of _events_
$\mathit{Evt}_{i}=\\{\alpha_{i}^{1},\alpha_{i}^{2},\ldots,\alpha_{i}^{m_{i}}\\}$.
An agent’s _repertoire of choices_ 222 In interpreted systems, this function
is usually referred to as a _protocol_. Here, we opt for a different name to
avoid possible confusion, e.g., with security protocols. $R_{i}:L_{i}\to
2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}$ selects the events available at
each local state. $T_{i}:L_{i}\times\mathit{Evt}_{i}\rightharpoonup L_{i}$ is
a (partial) _local transition function_ such that $T_{i}(l_{i},\alpha)$ is
defined iff $\alpha\in R_{i}(l_{i})$. That is, $T_{i}(l,\alpha)$ indicates the
result of executing event $\alpha$ in local state $l$ from the perspective of
agent $i$.
Let $\mathit{Evt}=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}\mathit{Evt}_{i}$ be
the set of all events, and $Loc=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}L_{i}$ be
the set of all local states in the system. For each event
$\alpha\in\mathit{Evt}$,
$Agent(\alpha)=\\{{i\in{\mathbb{A}\mathrm{gt}}\mid\alpha\in\mathit{Evt}_{i}}\\}$
is the set of agents which have $\alpha$ in their repertoires; events shared
by multiple agents are jointly executed by all of them. We assume that each
agent $i$ in the AMAS is endowed with a disjoint set of its _local
propositions $\mathcal{PV}_{i}$_, and their valuation $V_{i}:L_{i}\rightarrow
2^{\mathcal{PV}_{i}}$. The overall set of propositions
$\mathcal{PV}=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}\mathcal{PV}_{i}$ collects
all the local propositions.
As our working example, we use the following scenario.
###### Example 2.2 (Conference in times of epidemic).
Consider the AMAS in Figure 1, consisting of the Steering Committee Chair
($sc$), the General Chair ($gc$), and the Organizing Committee Chair ($oc$).
Faced with the Covid-19 epidemic, $sc$ can decide to give up the conference,
or send a signal to $gc$ to proceed and open the meeting. Then, $gc$ and $oc$
jointly decide whether the conference will be run on site or online. In the
former case, the epidemiologic risk is obviously much higher, indicated by the
atomic proposition $\mathsf{{epid}}$.
The set of events, the agents’ repertoires of choices, and the valuation of
atomic propositions can be easily read from the graph. For easier reading, all
the private events are shown in grey. Note that event $proceed$ is shared by
agents $sc$ and $gc$, and can only be executed jointly. Similarly, $onsite$
and $online$ are shared by $gc$ and $oc$. All the other events are private,
and do not require synchronization.
[Figure 1 shows the three local automata: agent $gc$ with local states
$0$–$3$, proposition $\mathsf{{open}}$, and events $proceed$, $onsite$,
$online$, $rest$; agent $oc$ with local states $0$–$3$, propositions
$\mathsf{{epid}}$ and $\mathsf{{closed}}$, and events $onsite$, $online$,
$handle$, $idle$; agent $sc$ with local states $0$–$2$ and events $proceed$,
$giveup$.]
Figure 1: Simple asynchronous MAS: agents $gc$, $oc$, and $sc$. A joint
strategy of agents $\\{{gc,oc}\\}$ is highlighted.
### 2.2 Interleaved Interpreted Systems
To understand the interaction between asynchronous agents, we use the standard
execution semantics from concurrency models, i.e., interleaving with
synchronization on shared events. To this end, we compose the network of local
automata (i.e., AMAS) to a single automaton based on the notions of _global
states_ and _global transitions_ , see below.
###### Definition 2.3 (Model).
Let $S$ be an AMAS with $n$ agents. Its _model_ $IIS(S)$ extends $S$ with: (i)
the set of global states $St\subseteq L_{1}\times\ldots\times L_{n}$,
including the _initial state_ $\iota=(\iota_{1},\dots,\iota_{n})$ and all the
states reachable from $\iota$ by $T$ (see below); (ii) the _global transition
function_ $T:St\times\mathit{Evt}\rightharpoonup St$, defined by
$T(g_{1},\alpha)=g_{2}$ iff $T_{i}(g_{1}^{i},\alpha)=g^{i}_{2}$ for all $i\in
Agent(\alpha)$ and $g_{1}^{i}=g^{i}_{2}$ for all
$i\in{\mathbb{A}\mathrm{gt}}\setminus Agent(\alpha)$; (iii) the _global
valuation_ of propositions $V:St\rightarrow 2^{\mathcal{PV}}$, defined as
$V(l_{1},\dots,l_{n})=\bigcup_{i\in{\mathbb{A}\mathrm{gt}}}V_{i}(l_{i})$.
Models, sometimes called _interleaved interpreted systems_ (IIS), are used to
provide an execution semantics to AMAS, and consequently provide us with
semantic structures to reason about AMAS. Intuitively, the global states in
$IIS(S)$ can be seen as the possible configurations of local states of all the
agents. Moreover, the transitions are labeled by events that are
simultaneously selected (in the current configuration) by all the agents that
have the event in their repertoire. Clearly, private events (i.e., events such
that $Agent(\alpha)$ is a singleton) require no synchronization.
###### Example 2.4 (Conference).
The model for the asynchronous MAS of Example 2.2 is shown in Figure 2.
We say that event $\alpha\in\mathit{Evt}$ is _enabled_ at $g\in St$ if
$T(g,\alpha)=g^{\prime}$ for some $g^{\prime}\in St$. The set of events
enabled at $g$ is denoted by $enabled(g)$. The global transition function is
assumed to be serial, i.e., at each $g\in St$ there exists at least one
enabled event.
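To make the composition of Definition 2.3 concrete, the following sketch
(ours, in Python; the dictionary encoding of agents is an illustrative
assumption, not part of the formal framework) enumerates the global
transitions available at a global state by synchronizing the local automata on
shared events:

```python
def global_successors(g, agents):
    """Yield (event, successor) pairs at global state g = (l_1, ..., l_n),
    following Definition 2.3. Each agent i is a dict with:
      'Evt': its event alphabet Evt_i,
      'R':   repertoire, mapping a local state to its set of available events,
      'T':   local transitions, mapping (local state, event) to a local state.
    """
    offered = set()
    for i, a in enumerate(agents):
        offered |= a['R'][g[i]]
    for alpha in offered:
        owners = [i for i, a in enumerate(agents) if alpha in a['Evt']]
        # a shared event executes jointly: every owner must offer it at g
        if all(alpha in agents[i]['R'][g[i]] for i in owners):
            succ = tuple(agents[i]['T'][(g[i], alpha)] if i in owners else g[i]
                         for i in range(len(agents)))
            yield alpha, succ
```

On the conference scenario, calling it at the initial configuration $(0,0,0)$
would yield exactly the transitions labeled $proceed$ and $giveup$ visible at
state $000$ of $M_{\mathit{conf}}$.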
Discussion. This modeling approach is standard in theory of concurrent
systems, where it dates back to the early 1980s and the idea of APA Nets
(asynchronous, parallel automata nets) (?). Note that APA Nets and their
models were _not_ proposed with causal interpretation in mind. In particular,
they were _not_ meant to capture the interaction of purposeful agents that
freely choose their strategies, but rather a set of reactive components
converging to a joint behavior. Despite superficial differences, the same
applies to process-algebraic approaches to concurrency, such as CSP (?), CCS
(?), ACP (?), and $\pi$-calculus (?).
Definition 2.1 extends that with the repertoire functions from synchronous
models of MAS (?; ?). Agent $i$’s repertoire lists the events available to
$i$, and is supposed to define the space of $i$’s strategies. As we show
further, this is not enough in case of asynchronous MAS.
## 3 Reasoning About Abilities: ATL*
_Alternating-time temporal logic_ $\mathbf{ATL_{\mathrm{}}^{*}}$ (?; ?; ?)
generalizes the branching-time temporal logic $\mathbf{CTL^{*}}$ (?) by
replacing the path quantifiers $\mathsf{E},\mathsf{A}$ with _strategic
modalities_ $\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$,
expressing that agents $A$ can enforce the temporal property $\gamma$. While
the semantics of $\mathbf{ATL_{\mathrm{}}^{*}}$ is typically defined for
models of synchronous systems, a variant for asynchronous MAS was proposed
recently (?). We summarize the main points in this section.
### 3.1 Syntax
Let $\mathcal{PV}$ be a set of propositional variables and
${\mathbb{A}\mathrm{gt}}$ the set of all agents. The language of
$\mathbf{ATL_{\mathrm{}}^{*}}$ is defined as below.
$\varphi::=\mathsf{{p}}\mid\neg\varphi\mid\varphi\wedge\varphi\mid\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$,
$\gamma::=\varphi\mid\neg\gamma\mid\gamma\land\gamma\mid\mathrm{X}\,\gamma\mid\gamma\,\mathrm{U}\,\gamma$,
where $\mathsf{p}\in\mathcal{PV}$, $A\subseteq{\mathbb{A}\mathrm{gt}}$,
$\mathrm{X}\,$ stands for “next”, and $\,\mathrm{U}\,$ for “strong until”
($\gamma_{1}\,\mathrm{U}\,\gamma_{2}$ denotes that $\gamma_{1}$ holds until
$\gamma_{2}$ becomes true). The other Boolean operators and constants are
defined as usual. “Release” can be defined as
$\gamma_{1}\,\mathrm{R}\,\gamma_{2}\equiv\neg((\neg\gamma_{1})\,\mathrm{U}\,(\neg\gamma_{2}))$.
“Eventually” and “always” can be defined as
$\mathrm{F}\,\gamma\equiv\mathit{true}\,\mathrm{U}\,\gamma$ and
$\mathrm{G}\,\gamma\equiv\mathit{false}\,\mathrm{R}\,\gamma$. Moreover, the
$\mathbf{CTL^{*}}$ operator “for all paths” can be defined as
$\mathsf{A}\gamma\equiv\langle\\!\langle{\emptyset}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$.
###### Example 3.1 (Conference).
Formula
$\langle\\!\langle{sc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{open}}$
expresses that the Steering Chair can enforce that the conference is
eventually opened. Moreover, formula
$\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$
says that the General Chair and the Organizing Chair have a joint strategy to
avoid high epidemiological risk.
### 3.2 Strategies and Outcomes
We adopt Schobbens’ taxonomy and notation for strategy types (?):
$\mathrm{ir}$, $\mathrm{Ir}$, $\mathrm{iR}$, and $\mathrm{IR}$, where _I_
(resp. _i_) denotes perfect (resp. imperfect) _information_ , and _R_ (resp.
_r_) denotes perfect (resp. imperfect) _recall_. In particular, an _imperfect
information/imperfect recall strategy ($\mathrm{ir}$-strategy) for $i$_ is a
function $\sigma_{i}\colon L_{i}\to\mathit{Evt}_{i}$ s.t. $\sigma_{i}(l)\in
R_{i}(l)$ for each $l\in L_{i}$. We denote the set of such strategies by
$\Sigma_{i}^{\mathrm{ir}}$. A _collective strategy_ $\sigma_{A}$ for a
coalition $A=(1,\dots,m)\subseteq{\mathbb{A}\mathrm{gt}}$ is a tuple of
strategies, one per agent $i\in A$. The set of $A$’s collective $\mathrm{ir}$
strategies is denoted by $\Sigma_{A}^{\mathrm{ir}}$. We will sometimes use
$\sigma_{A}(g)=(\sigma_{a_{1}}(g),\dots,\sigma_{a_{m}}(g))$ to denote the
tuple of $A$’s selections at state $g$.
###### Example 3.2 (Conference).
A collective strategy for the General Chair and the OC Chair in the conference
scenario is shown in Figure 1.
An infinite sequence of global states and events
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ is called an (interleaved)
_path_ if $g_{i}\stackrel{{\scriptstyle\alpha_{i}}}{{\longrightarrow}}g_{i+1}$
for every $i\geq 0$. $\mathit{Evt}(\pi)=\alpha_{0}\alpha_{1}\alpha_{2}\ldots$
is the sequence of events in $\pi$, and $\pi[i]=g_{i}$ is the $i$-th global
state of $\pi$. $\Pi_{M}(g)$ denotes the set of all paths in model $M$
starting at $g$. Intuitively, the outcome of $\sigma_{A}$ in $g$ is the set of
all the paths that can occur when the agents in $A$ follow $\sigma_{A}$ and
the agents in ${\mathbb{A}\mathrm{gt}}\setminus A$ freely choose events from
their repertoires. To define it formally, we first refine the concept of an
enabled event, taking into account the choices of $A$ in strategy
$\sigma_{A}$.
###### Definition 3.3 (Enabled events).
Let $A=(1,\dots,m)$, $g\in St$, and let
$\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{m})$ be a tuple of
events such that every $\alpha_{i}\in R_{i}(g^{i})$. That is, every
$\alpha_{i}$ can be selected by its respective agent $i$ at state $g$. We say
that event $\beta\in\mathit{Evt}$ is _enabled by $\overrightarrow{\alpha}_{A}$
at $g\in St$_ iff
* •
for every $i\in Agent(\beta)\cap A$, we have $\beta=\alpha_{i}$, and
* •
for every $i\in Agent(\beta)\setminus A$, it holds that $\beta\in
R_{i}(g^{i})$.
Thus, $\beta$ is enabled by $\overrightarrow{\alpha}_{A}$ if all the agents
that “own” $\beta$ can choose $\beta$ for execution, even when
$\overrightarrow{\alpha}_{A}$ has been selected by the coalition $A$. We
denote the set of such events by $enabled(g,\overrightarrow{\alpha}_{A})$.
Clearly, $enabled(g,\overrightarrow{\alpha}_{A})\subseteq enabled(g)$.
###### Example 3.4 (Conference).
Consider state $g=000$ and the choices of agents $A=\\{{gc,oc}\\}$ shown in
Figure 1, i.e., $\overrightarrow{\alpha}_{A}=(proceed,online)$. The only
events enabled by $\overrightarrow{\alpha}_{A}$ are $proceed$ and $giveup$.
Event $onsite$ is not enabled because $A$ chose different events for
execution; $online$ is not enabled because it requires synchronization which
is impossible at $000$.
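A companion sketch (ours, reusing the encoding of the previous listing)
computes $enabled(g,\overrightarrow{\alpha}_{A})$ from Definition 3.3: an
event survives iff it is enabled in the model and every coalition member
owning it has actually selected it (this relies on
$enabled(g,\overrightarrow{\alpha}_{A})\subseteq enabled(g)$, noted above).

```python
def enabled_by_choice(g, agents, A, choice):
    """Events enabled at g by the coalition's selection (Definition 3.3).
    A is a set of agent indices; choice[i] is the event selected by agent
    i in A, assumed to lie in R_i(g[i]). Returns a subset of enabled(g).
    """
    result = set()
    for alpha, _ in global_successors(g, agents):
        owners = {i for i, a in enumerate(agents) if alpha in a['Evt']}
        # coalition owners must have selected alpha; non-members just offer it
        if all(choice[i] == alpha for i in owners & A):
            result.add(alpha)
    return result
```

Instantiated on Example 3.4, with $A=\\{{gc,oc}\\}$ selecting
$(proceed,online)$ at state $000$, it returns $\\{{proceed,giveup}\\}$:
$onsite$ is filtered out by the coalition's choice, and $online$ is not
enabled at $000$ in the first place.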
###### Definition 3.5 (Outcome paths).
The _outcome_ of strategy $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ in state
$g\in St$ is the set $\mathit{out}_{M}(g,\sigma_{A})\subseteq\Pi_{M}(g)$ such
that
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}_{M}(g,\sigma_{A})$
iff $g_{0}=g$, and $\forall i\geq 0\quad\alpha_{i}\in
enabled(\pi[i],\sigma_{A}(\pi[i]))$.
One often wants to look only at paths that do not consistently ignore agents
whose choice is always enabled. Formally, a path $\pi$ satisfies _concurrency-
fairness_ (CF) if there is no event $\alpha$ enabled in all states of $\pi$
from $\pi[n]$ on and such that for every $\alpha_{i}$ actually executed in
$\pi[i]$, $i=n,n+1,\dots$, we have $Agent(\alpha)\cap
Agent(\alpha_{i})=\emptyset$. We denote the set of all such paths starting at
$g$ by $\Pi_{M}^{\textbf{CF}}(g)$.
###### Definition 3.6 (CF-outcome).
The _CF -outcome_ of $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ is defined as
$\mathit{out}^{\textbf{CF}}_{M}(g,\sigma_{A})=\mathit{out}_{M}(g,\sigma_{A})\cap\Pi_{M}^{\textbf{CF}}(g)$.
### 3.3 Strategic Ability for Asynchronous Systems
The semantics of $\mathbf{ATL_{\mathrm{ir}}^{*}}$ in AMAS is defined by the
following clause for strategic modalities (?):
$M,g\models_{{}_{\mathrm{ir}}}\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\gamma$
iff there is a strategy $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ s.t.
$\mathit{out}_{M}(g,\sigma_{A})\neq\emptyset$ and, for each path
$\pi\in\mathit{out}_{M}(g,\sigma_{A})$, we have
$M,\pi\models_{{}_{\mathrm{ir}}}\gamma$.
The clauses for Boolean and temporal operators are standard. Moreover, the
_concurrency-fair semantics_ $\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}$ of
$\mathbf{ATL_{\mathrm{}}}$ and $\mathbf{ATL_{\mathrm{}}^{*}}$ is obtained by
replacing $\mathit{out}_{M}(g,\sigma_{A})$ with
$\mathit{out}_{M}^{\textbf{CF}}(g,\sigma_{A})$ in the above clause.
###### Example 3.7 (Conference).
Clearly, formula
$\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$
holds in $(M_{\mathit{conf}},000)$, in both $\models_{{}_{\mathrm{ir}}}$ and
$\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}$ semantics. To see that, fix
$\sigma_{gc}(0)=proceed$ and $\sigma_{gc}(1)=\sigma_{oc}(0)=online$ in the
collective strategy of $\\{{gc,oc}\\}$. Note also that
$M_{\mathit{conf}},000\models_{{}_{\mathrm{ir}}}\neg\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{closed}}$
because, after executing $proceed$ and $online$ (or $onsite$), event $rest$
may be selected forever. On the other hand, such paths are not concurrency-
fair, and thus
$M_{\mathit{conf}},000\models_{{}_{\mathrm{ir}}}^{\textbf{CF}}\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{closed}}$.
Discussion. Strategic play assumes a proactive attitude: the agents in
$\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}$ are free to choose
_any_ available strategy $\sigma_{A}$. This is conceptually consistent with
the notion of agency (?). At the same time, it is somewhat at odds with the
standard semantics of concurrent processes, where the components cannot
stubbornly refuse to synchronize if that is the only way to proceed with a
transition. This seems a minor problem, but it is worrying that a strategy can
have the empty set of outcomes, and equally worrying that such strategies are
treated differently from the other ones. Indeed, as we will show in the
subsequent sections, the semantics proposed in (?) leads to a counterintuitive
interpretation of strategic formulas.
## 4 Semantic Problems and How to Avoid Them
[Diagram: the state-transition graph of $M_{\mathit{conf}}$ over global states $000$, $101$ ($\mathsf{open}$), $002$, $211$ ($\mathsf{epid}$), $321$, $231$ ($\mathsf{closed}$), and $331$ ($\mathsf{closed}$), with transitions labeled $proceed$, $giveup$, $onsite$, $online$, $rest$, $handle$, and $idle$.]
Figure 2: Model $M_{\mathit{conf}}$ for the conference scenario. We highlight
the transitions enabled by the strategy in Figure 1, and the resulting
reachable states.
Starting with this section, we describe some problematic phenomena that follow
from the straightforward combination of strategic ability with models of
concurrent systems, proposed in (?). We also show how to extend the
representations and modify their execution semantics to avoid the
counterintuitive interpretation of formulas.
### 4.1 Deadlock Strategies and Finite Paths
An automata network is typically required to produce no deadlock states, i.e.,
every global state in its composition must have at least one outgoing
transition. Then, all the maximal paths are infinite, and it is natural to
refer to only infinite paths in the semantics of temporal operators. In the case of AMAS, the situation is more delicate. Even if the AMAS as a whole produces no deadlocks, some strategies might, which makes the interpretation of strategic modalities cumbersome. We illustrate this with the following example.
###### Example 4.1 (Conference).
Recall the 3-agent AMAS of Figure 1, together with its model
$M_{\mathit{conf}}$ (Figure 2). Clearly, $M_{\mathit{conf}}$ has no deadlock
states. Let us now look at the collective strategies of coalition
$\\{{gc,oc}\\}$, with agent $sc$ serving as the opponent. It is easy to see
that the coalition has no way to prevent the opening of the conference, i.e.,
it cannot prevent the system from reaching state $101$. However, the strategy
depicted in Figure 1 produces only one _infinite_ path:
$(000\,giveup\,002\,giveup\,\dots)$. Since the $\mathbf{ATL_{\mathrm{}}^{*}}$
semantics in Section 3 disregards finite paths, we get that
$M_{\mathit{conf}},000\models\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{open}}$,
which is counterintuitive.
Things can get even trickier. In particular, the outcome of a strategy can be
empty – in fact, it may even happen that a coalition has only strategies with
empty outcomes.
[Diagrams: the voter module with local states $0$–$4$ (where $3$ is labeled $\mathsf{voted_{a}}$ and $4$ is labeled $\mathsf{voted_{b}}$) and events $vote_{a}$, $vote_{b}$, $send$, $idle_{v}$; the EBM module with local states $0$, $1$ and events $vote_{a}$, $vote_{b}$, $send$, $idle_{ebm}$.]
Figure 3: Casting a ballot: voter $v$ (left) and EBM $ebm$ (right)
###### Example 4.2 (Voting).
Consider the AMAS in Figure 3 that depicts a simple voting scenario. A voter
$v$ can fill in an electronic ballot with a vote for candidate $\mathsf{{a}}$
or $\mathsf{{b}}$, and then push the $send$ button. The Electronic Ballot
Machine $ebm$ duly registers the choices of the voter. Note that all the
_joint_ strategies of $\\{{v,ebm}\\}$ produce only finite sequences of
transitions. This is because $ebm$ must choose a single event at location $0$
in a memoryless strategy, and thus $v$ and $ebm$ are bound to “miscoordinate”
either at the first or at the second step. Since finite paths are not included
in the outcome sets, and the semantics in Section 3.3 rules out strategies
with empty outcomes, we get that
$IIS(S_{vote}),00\models\neg\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\top$,
which is quite strange.
Notice that removing the non-emptiness requirement from the semantic clause in
Section 3.3 does not help. In that case, any joint strategy of $\\{{v,ebm}\\}$
could be used to demonstrate that
$\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\bot$.
### 4.2 Solution: Adding Silent Transitions
To deal with the problem, we augment the model of the system with special
“silent” transitions, labeled by $\epsilon$, that are fired whenever no “real”
transition can occur. In our case, the $\epsilon$-transitions account for the
possibility that some agents miscoordinate and thus block the system.
Moreover, we redefine the outcome set of a strategy so that an
$\epsilon$-transition is taken whenever such miscoordination occurs.
###### Definition 4.3 (Undeadlocked IIS).
Let $S$ be an AMAS, and assume that no agent in $S$ has $\epsilon$ in its alphabet of events. The _undeadlocked model of $S$_, denoted
$M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$, extends the model
$M=IIS(S)$ as follows:
* $\mathit{Evt}_{M^{\text{$\epsilon$}}}=\mathit{Evt}_{M}\cup\\{{\epsilon}\\}$, where $Agent(\epsilon)=\emptyset$;
* For each $g\in St$, we add the transition $g\stackrel{{\scriptstyle\epsilon}}{{\longrightarrow}}g$ iff there is a selection of agents’ choices $\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{k})$, $\alpha_{i}\in R_{i}(g^{i})$, such that $enabled_{M}(g,\overrightarrow{\alpha}_{A})=\emptyset$. In that case, we also fix $enabled_{M^{\text{$\epsilon$}}}(g,\overrightarrow{\alpha}_{A})=\\{{\epsilon}\\}$.
In other words, “silent” loops are added in the states where a combination of
the agents’ actions can block the system.
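A sketch of the construction, reusing the hypothetical encodings and the `enabled_by` helper from Section 3 (here agents are numbered $0,\dots,k-1$, global states are tuples of local states, and the selection ranges over all agents):

```python
from itertools import product

EPS = "epsilon"  # fresh silent event; Agent(epsilon) is empty

def add_epsilon_loops(states, transitions, agents,
                      agents_of, repertoire, all_events):
    """Sketch of Definition 4.3: add a silent self-loop at every global
    state where some combination of the agents' choices blocks M.
    all_events are the "real" events of M, without epsilon."""
    agents_of = dict(agents_of, **{EPS: set()})
    for g in states:
        options = [list(repertoire[i][g[i]]) for i in agents]
        for selection in product(*options):      # one event per agent
            alpha = dict(zip(agents, selection))
            if not enabled_by(g, alpha, agents_of, repertoire, all_events):
                transitions.add((g, EPS, g))     # silent loop at g
                break
    return transitions, agents_of
```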
Paths are defined as in Section 2.2. The following is trivial.
###### Proposition 4.4.
For any AMAS $S$, any state $g\in IIS^{\text{$\epsilon$}}(S)$, and any
strategy $\sigma_{A}$, we have that
$enabled_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A}(g))\neq\emptyset$.
###### Example 4.5 (Conference).
The undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ of the
conference scenario (Example 2.2) extends the model in Figure 2 with one
$\epsilon$-loop at state $101$. The loop models the situation when the agents
choose $(onsite,online,proceed)$ or $(online,onsite,proceed)$. We leave it for
the reader to check that, at the other states, all the combinations of choices
enable at least one transition.
For the strategy in Example 4.1, notice that its outcome in
$M_{\mathit{conf}}^{\text{$\epsilon$}}$ contains _two_ infinite paths: not
only $(000\,giveup\,002\,giveup\,002\dots)$, but also
$(000\,proceed\,101\,\epsilon\,101\dots)$. Since the latter path invalidates
the temporal formula $\mathrm{G}\,\neg\mathsf{{open}}$, we get that
$M_{\mathit{conf}}^{\text{$\epsilon$}},000\not\models\langle\\!\langle{gc,oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{open}}$,
as expected.
[Diagram: global states $00$, $10$, $20$, $31$ ($\mathsf{voted_{a}}$), $41$ ($\mathsf{voted_{b}}$), with transitions $vote_{a}$, $vote_{b}$, $send$, joint $idle_{v}/idle_{ebm}$ loops, and the added $\epsilon$-loops.]
Figure 4: Undeadlocked IIS for the voting scenario
###### Example 4.6 (Voting).
The undeadlocked model for the voting scenario is presented in Figure 4. Note
that formula
$\neg\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\top$
does not hold anymore, because the joint strategies of $\\{{v,ebm}\\}$ have
nonempty outcomes in $IIS^{\text{$\epsilon$}}(S_{vote})$. On the other hand,
the formula
$\langle\\!\langle{v}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$
(and even
$\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$)
does not hold, which is contrary to the intuition behind the modeling. We will
come back to this issue in Section 7.
Discussion. Adding “silent” transitions to account for the control flow when
no observable event occurs is pretty standard. The crucial issue is _where_ to
add them. Here, we add the $\epsilon$-transitions whenever a subset of agents
might choose to miscoordinate (and stick to their choices). Again, this is in
line with the notion of agency and strategic play in MAS (?; ?). In the next
section, we will discuss a concept of “agent fairness” where the addition of
$\epsilon$-transitions is constrained by the assumption that only a given
subset of agents is fully proactive.
The examples used in this section expose an important feature of agent
systems. The execution semantics of concurrent processes is often defined by a
state-transition graph (or, alternatively, by the tree of paths generated by
the graph, i.e., the tree unfolding of the graph). For systems that involve
proactive agents, this is not enough. Rather, the execution semantics should
map from the possible coalitions and their available strategies to the outcome
sets of those strategies. In this sense, the possible behaviors of an agent
system should be understood via the _set of possible execution trees_ , rather
than a single tree. This is consistent with the theoretical model of MAS in
(?), based on path effectivity functions.
An alternative way out of the problem is to include finite maximal paths in
the outcomes of strategies. However, the interpretation of strategic
modalities over finite paths is rather nonstandard (?) and may pose new
problems in the asynchronous setting. Moreover, our approach allows us to reuse
the existing techniques and tools, which are typically built for infinite path
semantics, including the verification and partial order reduction
functionalities of tools like SPIN (?) and STV (?). In general, this is a
design dilemma between changing the logical semantics of the formulas vs.
updating the execution semantics of the representations. Here, we choose the
latter approach.
## 5 Playing Against Reactive Opponents
The solution proposed in Section 4.2 is based on the assumption that an agent
is free to choose any event in its repertoire – even one that prevents the
system from executing anything. The downside is that, for most systems, only
safety goals can be achieved (i.e., properties specified by
$\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\varphi$).
For reachability, there is often a combination of the opponents’ choices that
blocks the execution early on, and prevents the coalition from reaching their
goal. In this section, we define a fairness-style condition that constrains
the choices of more “reactive” opponents. We also show a construction to
verify the abilities of the coalition over the resulting paths in a
technically simpler way.
### 5.1 Opponent-Reactiveness
Given a strategy $\sigma_{A}$, the agents in $A$ are by definition assumed to
be proactive. Below, we propose an execution semantics for $\sigma_{A}$ which
assumes that $A$ cannot be stalled forever by miscoordination on the part of
the opponents.
###### Definition 5.1 (Opponent-reactiveness).
A path $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ in
$IIS^{\text{$\epsilon$}}(S)$ is _opponent-reactive for strategy $\sigma_{A}$_
iff we have that $\alpha_{n}=\epsilon$ implies
$enabled(g_{n},\sigma_{A}(g_{n}))=\\{{\epsilon}\\}$. In other words, whenever
the agents outside $A$ have a way to proceed, they must proceed. The _reactive
outcome_ (or _React-outcome_) of $\sigma_{A}$ in $g$, denoted
$\mathit{out}^{\textup{React}}(g,\sigma_{A})$, is the restriction of
$\mathit{out}(g,\sigma_{A})$ to its opponent-reactive paths.
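The condition is local to each state, so it is easy to test on a finite unfolding of a path. In the sketch below (same assumed encodings as before), `all_events` is assumed to contain `EPS` with `agents_of[EPS]` empty, so that `enabled_by` returns exactly `{EPS}` precisely when the coalition's selection enables no "real" event.

```python
# Sketch of Definition 5.1 on a finite unfolding [g0, a0, g1, a1, ..., gk]:
# every executed epsilon must be forced by the coalition's own selection.

def opponent_reactive(path, sigma_A, agents_of, repertoire, all_events):
    for j in range(0, len(path) - 1, 2):
        g, alpha = path[j], path[j + 1]
        if alpha == EPS:
            selection = {i: sigma_A[i][g[i]] for i in sigma_A}
            if enabled_by(g, selection, agents_of,
                          repertoire, all_events) != {EPS}:
                return False                # the opponents could have moved
    return True
```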
###### Example 5.2 (Conference).
Consider the undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ of
Example 4.5. Path $(000\,proceed\,101\,\epsilon\,101\dots)$ is opponent-
reactive for the strategy of agents $\\{{gc,oc}\\}$ shown in Figure 1.
On the other hand, consider coalition $\\{{gc,sc}\\}$, and the following
strategy of theirs:
$\sigma_{gc}(0)=proceed,\sigma_{gc}(1)=onsite,\sigma_{sc}(0)=proceed$. The
same path is _not_ opponent-reactive for the strategy because the only
opponent ($oc$) has a response at state $101$ that enables a “real” transition
($onsite$).
###### Proposition 5.3.
In $\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$,
the only possible occurrence of $\epsilon$ is as an infinite sequence of
$\epsilon$-transitions following a finite prefix of “real” transitions.
###### Proof.
Take any
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$
such that $\epsilon$ occurs on $\pi$, and let $i$ be the first position on $\pi$ s.t. $\alpha_{i}=\epsilon$. By Definition 5.1, we get that $enabled(g_{i},\sigma_{A}(g_{i}))=\\{{\epsilon}\\}$. Moreover, $g_{i+1}=g_{i}$, so also
$enabled(g_{i+1},\sigma_{A}(g_{i+1}))=\\{{\epsilon}\\}$. Thus,
$\alpha_{i+1}=\epsilon$. It follows by simple induction that
$\alpha_{j}=\epsilon$ for every $j\geq i$. ∎
The _opponent-reactive semantics_
$\models_{{}_{\mathrm{ir}}}^{\textup{React}}$ of
$\mathbf{ATL_{\mathrm{}}^{*}}$ is obtained by replacing
$\mathit{out}_{M}(g,\sigma_{A})$ with
$\mathit{out}_{M}^{\textup{React}}(g,\sigma_{A})$ in the semantic clause
presented in Section 3.3.
### 5.2 Encoding Strategic Deadlock-Freeness Under Opponent-Reactiveness in
AMAS
If we adopt the assumption of opponent-reactiveness for coalition $A$, there
is an alternative, technically simpler way to obtain the same semantics of
strategic ability as in Section 4.2. The idea is to introduce the “silent”
transitions already at the level of the AMAS.
###### Definition 5.4 (Undeadlocked AMAS).
The _undeadlocked variant of $S$_ is constructed from $S$ by adding an
auxiliary agent $A_{\epsilon}$ with $L_{\epsilon}=\\{{q_{0}^{\epsilon}}\\}$,
$\iota_{\epsilon}=q_{0}^{\epsilon}$,
$\mathit{Evt}_{\epsilon}=\\{{\epsilon}\\}$,
$R_{\epsilon}(q_{0}^{\epsilon})=\\{{\epsilon}\\}$,
$T_{\epsilon}(q_{0}^{\epsilon},\epsilon)=q_{0}^{\epsilon}$, and
$\mathcal{PV}_{\epsilon}=\emptyset$. In other words, we add a module with a
single local state and a “silent” loop labeled by $\epsilon$, as in Figure 5.
We will denote the undeadlocked variant of $S$ by $S^{\text{$\epsilon$}}$.
Note that $S^{\text{$\epsilon$}}$ can be seen as a special case of AMAS. Thus,
the outcome sets and reactive outcomes of strategies in
$IIS(S^{\text{$\epsilon$}})$ are defined exactly as before.
[Diagram: a single local state $q_{0}^{\epsilon}$ with a self-loop labeled $\epsilon$.]
Figure 5: The auxiliary agent added in $S^{\text{$\epsilon$}}$
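In the assumed dict-per-agent encoding, the construction amounts to appending one module (a sketch only; the field names are illustrative, and `EPS` is the silent event from before):

```python
# Sketch of Definition 5.4: append the auxiliary epsilon-agent to an AMAS
# encoded as a list of per-agent dicts.

def add_epsilon_agent(amas):
    eps_agent = {
        "states":     {"q0_eps"},                   # L_eps
        "init":       "q0_eps",                     # iota_eps
        "events":     {EPS},                        # Evt_eps
        "repertoire": {"q0_eps": {EPS}},            # R_eps(q0) = {epsilon}
        "transition": {("q0_eps", EPS): "q0_eps"},  # T_eps(q0, eps) = q0
        "props":      set(),                        # PV_eps is empty
    }
    return amas + [eps_agent]
```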
###### Example 5.5 (Voting).
The undeadlocked AMAS $S_{vote}^{\text{$\epsilon$}}$ is obtained by augmenting $S_{vote}$ with the auxiliary agent in Figure 5.
Obviously, the extra agent adds $\epsilon$-loops to the model of $S$, i.e., to
$IIS(S)$. We show now that, under the assumption of opponent-reactiveness, the
view of $A$’s strategic ability in the undeadlocked AMAS
$S^{\text{$\epsilon$}}$ corresponds precisely to $A$’s abilities in the
undeadlocked model of the original AMAS $S$, i.e.,
$IIS^{\text{$\epsilon$}}(S)$. This allows us to deal with deadlocks and finite
paths without redefining the execution semantics for AMAS, set in Definition
2.3, and thus use the existing tools such as SPIN (?) in a straightforward
way.
###### Proposition 5.6.
Let $A\subseteq{\mathbb{A}\mathrm{gt}}$. In
$\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$,
the only possible occurrence of $\epsilon$ is as an infinite suffix of
$\epsilon$-transitions.
###### Proof.
Analogous to Proposition 5.3. ∎
###### Theorem 5.7.
For every strategy $\sigma_{A}$ in $S$, we have that
$\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})=\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A}).$
###### Proof.
$\boldsymbol{\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})}$:
Consider any
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots\in\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$.
If there are no $\epsilon$-transitions on $\pi$, we have that
$\pi\in\mathit{out}^{\textup{React}}_{IIS(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$,
QED. Suppose that $\pi$ includes $\epsilon$-transitions, with $\alpha_{i}$
being the first one. Then, we have that $\alpha_{j}\neq\epsilon$ and
$\alpha_{j}\in enabled_{IIS^{\text{$\epsilon$}}(S)}(g_{j},\sigma_{A}(g_{j}))$
for every $j<i$, hence also $\alpha_{j}\in
enabled_{IIS(S)}(g_{j},\sigma_{A}(g_{j}))\subseteq
enabled_{IIS(S^{\text{$\epsilon$}})}(g_{j},\sigma_{A}(g_{j}))$. (*)
By Proposition 5.3, $g_{j}=g_{i}$ and $\alpha_{j}=\epsilon$ for every $j\geq
i$. By Definition 5.1,
$enabled_{IIS^{\text{$\epsilon$}}(S)}(g_{j},\sigma_{A}(g_{j}))=\\{{\epsilon}\\}$.
Hence, $enabled_{IIS(S)}(g_{j},\sigma_{A}(g_{j}))=\emptyset$ and
$enabled_{IIS(S^{\text{$\epsilon$}})}(g_{j},\sigma_{A}(g_{j}))=\\{{\epsilon}\\}$.
(**)
Thus, by (*) and (**),
$\pi\in\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})$,
QED.
$\boldsymbol{\mathit{out}^{\textup{React}}_{IIS(S^{\text{$\epsilon$}})}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})}$:
Analogous, with Proposition 5.6 used instead of Proposition 5.3. ∎
Discussion. Opponent-reactiveness is to strategic properties what fairness
conditions are to temporal properties of asynchronous systems. If an important
property cannot be satisfied in all possible executions, it may at least hold
under some reasonable assumptions about which events can be selected by whom
in response to what. Clearly, the condition can be considered intuitive by
some and problematic by others. The main point is, unlike in the previous
semantics, now it is made explicit, and can be adopted or rejected depending
on the intuition. Note that the semantic extensions proposed in this paper
(silent transitions and nondeterministic choices for strategies) make sense
both with and without opponent-reactiveness.
Note that, under the reactiveness assumption, we have that
$M_{\mathit{conf}}^{\text{$\epsilon$}},000\models_{{}_{\mathrm{ir}}}^{\textup{React}}\langle\\!\langle{gc,sc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{epid}}$
and
$M_{\mathit{conf}}^{\text{$\epsilon$}},000\models_{{}_{\mathrm{ir}}}^{\textup{React}}\langle\\!\langle{oc}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{G}\,\neg\mathsf{{epid}}$.
This seems to contradict the commonly accepted requirement of _regularity_ in
games (?). However, the contradiction is only superficial, as the two formulas
are evaluated _under different execution assumptions_ : for the former, we
assume agent $oc$ to be reactive, whereas the latter assumes $gc$ and $sc$ to
react to the strategy of $oc$.
## 6 Concurrency-Fairness Revisited
In Def. 3.6, we recalled the notion of concurrency-fair outcome of (?). The
idea was to remove from $out(g,\sigma_{A})$ the paths that consistently ignore
agents whose events are enabled _at the level of the whole model_.
Unfortunately, the definition has unwelcome side effects, too.
### 6.1 Problems with Concurrency-Fairness
We first show that, contrary to intuition, Definition 3.6 automatically
disregards _deadlock paths_ , i.e., paths with finitely many “real”
transitions.
###### Proposition 6.1.
Consider an AMAS $S$ and a path $\pi$ in $IIS^{\text{$\epsilon$}}(S)$ such
that, from some point $i$ on, $\pi$ includes only $\epsilon$-transitions.
Then, for every strategy $\sigma_{A}$ in $S$, we have that
$\pi\notin\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$.
###### Proof.
Take $\pi$ as above, i.e., $\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}\dots
g_{i}\epsilon g_{i}\epsilon g_{i}\dots$. Since the transition function in
$IIS^{\text{$\epsilon$}}(S)$ is serial, there must be some event
$\beta\neq\epsilon$ enabled in $g_{i}$. In consequence, $\beta$ is always
enabled from $i$ on, but none of its “owners” in $Agent(\beta)$ executes an
event on $\pi$ after $i$. Hence, $\pi$ does not satisfy CF, and does not
belong to
$\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$ for
any strategy $\sigma_{A}$. ∎
Thus, the CF condition eliminates all the deadlock paths from the outcome of a
strategy (for instance, the path $(000\,proceed\,101\,\epsilon\,101\dots)$ in
Example 4.5). In consequence, reasoning about concurrency-fair paths suffers
from the problems that we identified in Section 4.1, even for undeadlocked
models. Moreover, combining the temporal and strategic fairness (i.e., CF and
React) collapses the undeadlocked execution semantics altogether, see below.
###### Proposition 6.2.
Reasoning about reactive _and_ fair outcomes in an undeadlocked model reduces
to reasoning about the fair executions in the original model without
$\epsilon$-transitions. Formally, let
$\mathit{out}^{\textup{React},\textbf{CF}}_{M}(g,\sigma_{A})=\mathit{out}^{\textup{React}}_{M}(g,\sigma_{A})\cap\mathit{out}^{\textbf{CF}}_{M}(g,\sigma_{A})$.
For any AMAS $S$ and any strategy $\sigma_{A}$ in $S$, we have:
$\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})=\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$.
###### Proof.
Clearly, we have
$\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})\subseteq\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$,
since
$\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$
can only add to $\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$ new paths
that include $\epsilon$-transitions.
For the other direction, take any
$\pi\in\mathit{out}^{\textup{React},\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$,
and suppose that it contains an $\epsilon$-transition. By Proposition 5.3, it
must have an infinite suffix consisting only of $\epsilon$-transitions. Then,
by Proposition 6.1,
$\pi\notin\mathit{out}^{\textbf{CF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$,
which leads to a contradiction. Thus, $\pi$ contains only transitions from
$IIS(S)$, and hence $\pi\in\mathit{out}^{\textbf{CF}}_{IIS(S)}(g,\sigma_{A})$,
QED. ∎
### 6.2 Strategic Concurrency-Fairness
So, how should fair paths be properly defined for strategic reasoning? The
answer is simple: in relation to the outcome of the strategy being executed.
###### Definition 6.3 (Strategic CF).
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}g_{2}\dots$ is a _concurrency-fair path for
strategy $\sigma_{A}$ and state $g$_ iff $g_{0}=g$, and there is no event
$\alpha$ s.t., for some $n$ and all $i\geq n$, we have $\alpha\in
enabled(\pi[i],\sigma_{A}(\pi[i]))$ and $Agent(\alpha)\cap
Agent(\alpha_{i})=\emptyset$. That is, agents with an event always enabled _by
$\sigma_{A}$_ cannot be ignored forever.
The _SCF -outcome_ of $\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$ is defined as
$\mathit{out}^{\textbf{SCF}}_{M}(g,\sigma_{A})=\\{\pi\in\mathit{out}_{M}(g,\sigma_{A})\mid\pi\text{
is concurrency-fair for }\sigma_{A},g\\}$.
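On an ultimately periodic (lasso) path, the condition only needs to be checked on the cycle. A sketch with the same assumed encodings follows; in line with the intuition that only agents' events matter, the silent event (which no agent owns) is not taken as a candidate for $\alpha$ here.

```python
def strategically_fair(path, loop, sigma_A, agents_of, repertoire,
                       real_events):
    """Sketch of Definition 6.3 on a lasso path = [g0, a0, ..., gk] whose
    suffix from state index `loop` repeats forever. A violating event is
    kept enabled by sigma_A at every state of the cycle, while none of
    its owners ever executes there."""
    cycle = list(range(loop, len(path) - 1, 2))  # state indices on the cycle
    if not cycle:
        return True
    for alpha in real_events:
        always_enabled = all(
            alpha in enabled_by(path[j],
                                {i: sigma_A[i][path[j][i]] for i in sigma_A},
                                agents_of, repertoire, real_events)
            for j in cycle)
        owners_ignored = all(                    # epsilon owns no agent
            agents_of[alpha].isdisjoint(agents_of.get(path[j + 1], set()))
            for j in cycle)
        if always_enabled and owners_ignored:
            return False
    return True
```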
The following formal results show that SCF does not suffer from the problems
demonstrated in Section 6.1.
###### Proposition 6.4.
There is an AMAS $S$, a strategy $\sigma_{A}$ in $S$, and a deadlock path
$\pi$ in $IIS^{\text{$\epsilon$}}(S)$ such that $\pi$ is concurrency-fair for
$\sigma_{A}$.
###### Proof.
To demonstrate the property, it suffices to take the AMAS and the strategy of
$\\{{gc,oc}\\}$ depicted in Figure 1, and the path
$\pi=(000\,proceed\,101\,\epsilon\,101\dots)$. ∎
###### Theorem 6.5.
Opponent-reactiveness and strategic concurrency-fairness are incomparable.
Formally, there exists an AMAS $S$, a state $g$ in
$IIS^{\text{$\epsilon$}}(S)$, and a strategy $\sigma_{A}$ such that
$\mathit{out}^{\textbf{SCF}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})\not\subseteq\mathit{out}^{\textup{React}}_{IIS^{\text{$\epsilon$}}(S)}(g,\sigma_{A})$,
and vice versa.
###### Proof.
Consider the undeadlocked model $M_{\mathit{conf}}^{\text{$\epsilon$}}$ in
Example 4.5, and the strategy discussed in Example 5.2:
$\sigma_{gc}(0)=proceed$, $\sigma_{gc}(1)=onsite$, $\sigma_{sc}(0)=proceed$.
Let $\pi_{1}=(000\,proceed\,101\,\epsilon\,101\,onsite\,211\,rest\,211\,handle\,211\,rest\,211\dots)$. We have
$\pi_{1}\in\mathit{out}^{\textbf{SCF}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$,
but
$\pi_{1}\notin\mathit{out}^{\textup{React}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$.
On the other hand, for path
$\pi_{2}=(000\,proceed\,101\,onsite\,211\,rest\,211\,rest\,\dots)$, we have
that
$\pi_{2}\notin\mathit{out}^{\textbf{SCF}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$,
but
$\pi_{2}\in\mathit{out}^{\textup{React}}_{M_{\mathit{conf}}^{\text{$\epsilon$}}}(g,\sigma_{A})$.
∎
Discussion. Theorem 6.5 suggests that reactiveness and fairness conditions
arise from orthogonal concerns. The two concepts refer to different factors
that influence which sequences of events can occur. Opponent-reactiveness
constrains the choices that (a subset of) the agents can select. Concurrency-
fairness and its strategic variant restrict the way in which the “scheduler”
(Nature, Chance, God…) can choose from the events selected by the agents.
## 7 Strategies in Asymmetric Interaction
Now, we point out that AMAS are too restricted to model the strategic aspects
of asymmetric synchronization in a natural way (e.g., a sender sending a
message to a receiver).
### 7.1 Simple Choices are Not Enough
We demonstrate the problem on an example.
###### Example 7.1 (Voting).
As already pointed out, we have
$IIS^{\text{$\epsilon$}}(S_{vote}),00\not\models\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$
in the model of Example 4.2. This is because receiving a vote for
$\mathsf{{a}}$, a vote for $\mathsf{{b}}$, and the signal to send the vote,
belong to _different choices_ in the repertoire of the EBM, and the agent can
only select one of them in a memoryless strategy. Moreover, formula
$\langle\\!\langle{ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$
holds under the condition of opponent-reactiveness, i.e., the EBM can force a
reactive voter to vote for a selected candidate. Clearly, it was not the
intention behind the AMAS: the EBM is supposed to _listen_ to the choice of
the voter. No matter whose strategies are considered, and who reacts to whose
actions, the EBM should have no influence on what the voter votes for.
The problem arises because the repertoire functions in AMAS are based on the
assumption that the agent can choose any single event in $R_{i}(l_{i})$. This
does not allow for natural specification of situations when the exact
transition is determined by another agent. For the AMAS in Example 4.2, the
decision to vote for candidate $\mathsf{{a}}$ or $\mathsf{{b}}$ (or to press
$send$) should belong solely to the voter. Thus, setting the EBM repertoire as
$R_{ebm}(0)=\\{{vote_{a},vote_{b},send}\\}$ does not produce a good model of
strategic play in the scenario.
### 7.2 AMAS with Explicit Control
As a remedy, we extend the representations so that one can indicate which
agent(s) control the choice between events.
###### Definition 7.2 (AMAS with explicit control).
Everything is exactly as in Definition 2.1, except for the repertoires of
choices, which are now functions $R_{i}:L_{i}\to
2^{2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}}\setminus\\{\emptyset\\}$.
That is, $R_{i}(l)$ lists nonempty subsets of events
$X_{1},X_{2},\dots\subseteq\mathit{Evt}_{i}$, each capturing an available
choice of $i$ at the local state $l$. If the agent chooses
$X_{j}=\\{{\alpha_{1},\alpha_{2},\dots}\\}$, then only an event in that set
can be executed within the agent’s module; however, the agent has no firmer
control over which one will be fired. Accordingly, we assume that
$T_{i}(l,\alpha)$ is defined iff $\alpha\in\bigcup R_{i}(l)$. (For a set of sets $X$, we use $\bigcup X$ to denote its “flattening” $\bigcup_{x\in X}x$.)
Notice that the AMAS of Definition 2.1 can be seen as a special case where
$R_{i}(l)$ is always a list of singletons. The definitions of IIS and
undeadlocked IIS stay the same, as agents’ repertoires of choices are not
actually used to generate the state-transition structure for the model of $S$.
Moreover, undeadlocked AMAS with explicit control can be obtained analogously
to Definition 5.4 by adding the auxiliary “epsilon”-agent with
$R_{\epsilon}(q_{0}^{\epsilon})=\\{{\\{{\epsilon}\\}}\\}$ in its sole local
state.
Strategies still assign choices to local states; hence, the type of agent
$i$’s strategies is now $\sigma_{i}\colon L_{i}\to
2^{\mathit{Evt}_{i}}\setminus\\{\emptyset\\}$ s.t. $\sigma_{i}(l)\in
R_{i}(l)$. The definition of the outcome set is updated accordingly, see
below.
###### Definition 7.3 (Outcome sets for AMAS with explicit control).
First, we lift the set of events enabled by
$\overrightarrow{\alpha}_{A}=(\alpha_{1},\dots,\alpha_{m})$ at $g$ to match
the new type of repertoires and strategies. Formally, $\beta\in
enabled(g,\overrightarrow{\alpha}_{A})$ iff: (1) for every $i\in
Agent(\beta)\cap A$, we have $\beta\in\alpha_{i}$, and (2) for every $i\in
Agent(\beta)\setminus A$, it holds that $\beta\in\bigcup R_{i}(g^{i})$.
The outcome, React-outcome, and SCF-outcome of $\sigma_{A}$ in $M,g$ are given
as in Definitions 3.5, 5.1, and 6.3.
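Adapting the earlier `enabled_by` sketch to the lifted repertoires is straightforward (the encodings remain illustrative assumptions: `alpha_A[i]` is now a _set_ of events, and `repertoire[i][l]` is a collection of such sets):

```python
# Sketch of the lifted enabling condition from Definition 7.3.

def enabled_by_ec(g, alpha_A, agents_of, repertoire, all_events):
    result = set()
    for beta in all_events:
        ok = True
        for i in agents_of[beta]:
            if i in alpha_A:
                ok = ok and beta in alpha_A[i]            # condition (1)
            else:
                flat = set().union(*repertoire[i][g[i]])  # flattening of R_i
                ok = ok and beta in flat                  # condition (2)
        if ok:
            result.add(beta)
    return result
```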
###### Example 7.4 (Voting).
We improve our voting model by assuming repertoires of choices for the voter
and the EBM as follows: $R_{ebm}(0)=\\{{\\{{vote_{a},vote_{b},send}\\}}\\}$,
$R_{v}(0)=\\{{\\{{vote_{a}}\\},\\{{vote_{b}}\\}}\\}$,
$R_{v}(1)=R_{v}(2)=\\{{\\{{send}\\}}\\}$, etc. That is, the voter’s choices
are as before, but the EBM only listens to what the voter selects.
Clearly,
$\langle\\!\langle{v,ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$
holds in the new AMAS. Moreover,
$\langle\\!\langle{ebm}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\mathrm{F}\,\mathsf{{voted_{a}}}$
does not hold anymore, even assuming opponent-reactiveness.
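For illustration, the improved repertoires can be fed to the sketch above (the encoding is an assumption, mirroring Figure 3, and `enabled_by_ec` is the helper from the previous sketch):

```python
repertoire = {
    "v":   {0: [{"vote_a"}, {"vote_b"}],        # R_v(0): the voter decides
            1: [{"send"}], 2: [{"send"}],
            3: [{"idle_v"}], 4: [{"idle_v"}]},
    "ebm": {0: [{"vote_a", "vote_b", "send"}],  # R_ebm(0): the EBM listens
            1: [{"idle_ebm"}]},
}
agents_of = {"vote_a": {"v", "ebm"}, "vote_b": {"v", "ebm"},
             "send":   {"v", "ebm"},
             "idle_v": {"v"},        "idle_ebm": {"ebm"}}
events = set(agents_of)

g = {"v": 0, "ebm": 0}
# Whatever the EBM "chooses", both votes stay enabled at g, so the EBM
# cannot force vote_a on its own:
print(enabled_by_ec(g, {"ebm": {"vote_a", "vote_b", "send"}},
                    agents_of, repertoire, events))
# -> {'vote_a', 'vote_b'} (send is blocked by the voter's repertoire at 0)
```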
It is easy to see that Propositions 4.4, 5.3, 5.6, and 6.4, as well as
Theorems 5.7 and 6.5 still hold in AMAS with explicit control.
Discussion. When reasoning about strategic play of asynchronous agents, two
kinds of asymmetry come into the picture. On the one hand, the processes
(agents) being modeled often synchronize in an asymmetric way. For example,
the sender chooses which message to send to the receiver. On the other hand,
the agents $A$ in formula
$\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}\varphi$ choose the
strategy and thus push the other agents to respond accordingly. The variant of
AMAS introduced in (?) does not allow one to capture the former kind of asymmetry.
In consequence, the choice between the available synchronization branches
belongs solely to the agents indicated by the formula. Unfortunately, there is
no natural way to model the converse situation, i.e., when the agents in
$\langle\\!\langle{A}\rangle\\!\rangle_{{}_{\\!\mathit{}}}$ are forced by the
choices of their opponents. With the new variant of AMAS, we extend the
representations so that the modeler can explicitly specify the degree of
autonomy of each participating agent. Without that, the degree of autonomy is
implicit and comes from the formula being evaluated.
Related modeling approaches. Various forms of asymmetric synchronization are
present in most process algebras. For example, $\pi$-calculus distinguishes
between the action $\overline{c}\langle a\rangle$ of sending the value $a$ on
channel $c$, and action $c(x)$ of listening on channel $c$ and storing
whatever comes in variable $x$. CSP goes further and allows a degree of flexibility similar to ours, through suitable combinations of deterministic
choice, nondeterministic choice, and interface parallel operators. Other
synchronization primitives are also possible, see e.g. (?) for an overview.
Instead of allowing for multiple synchronization primitives, we come up with a
single general primitive that can be instantiated to cover different kinds of
interaction.
We note in passing the similarity of our new repertoire functions in
Definition 7.2 to state effectivity functions (?; ?) and especially
alternating transition systems (?).
## 8 Partial Order Reduction Still Works
_Partial order reduction (POR)_ has been defined for temporal and temporal-
epistemic logics without “next” (?; ?; ?; ?), and recently extended to
strategic specifications (?). The idea is to take a network of automata (AMAS
in our case), and use depth-first search through the space of global states to
generate a reduced model that satisfies exactly the same formulas as the full
model. Essentially, POR removes paths that change only the interleaving order
of an “irrelevant” event with another event. Importantly, the method generates
the reduced model directly from the representation, without generating the
full model at all.
### 8.1 Correctness of POR in the New Semantics
POR is a powerful technique to contain state-space explosion and facilitate
verification, cf. e.g. the experimental results in (?). In this paper, we
extend the class of models, and modify their execution semantics. We need to
show that the reduction algorithm in (?), defined for the flawed semantics of
ability, is still correct after the modifications. Our main technical result
in this respect is Theorem A.11, presented below. The detailed definitions,
algorithms and proofs are technical (and rather tedious) adaptations of those
in (?). We omit them here for lack of space, and refer the inquisitive reader
to Appendix A.
Theorem A.11. Let $M=\mathit{IIS}(S^{\text{$\epsilon$}})$,
$M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ and let
$A\subseteq{\mathbb{A}\mathrm{gt}}$ be a subset of agents. Moreover, let
${M{{}^{\prime}}}\subseteq M$ and $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq
M^{\text{$\epsilon$}}$ be the reduced models generated by DFS with the choice
of enabled events $E(g^{\prime})$ given by conditions C1, C2, C3 and the
independence relation $I_{A,\mathit{PV}}$. For each
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over
$\mathit{PV}$, that refers only to coalitions $\hat{A}\subseteq A$, we have:
1. $M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff ${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$, and
2. $M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff $M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.
Thus, the reduced models can be used to model-check the
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ properties of the full models.
Proof idea. We aim to show that the full model $M$ and the reduced one
$M^{\prime}$ satisfy the same formulas of
$\mathbf{ATL_{\mathrm{\mathrm{ir}}}^{*}}$ referring only to coalitions
$\hat{A}\subseteq A$ and containing no nested strategic operators. Thanks to
the restriction on the formulas, the proof can be reduced to showing that
${M{{}^{\prime}}}$ satisfies the condition $\textbf{AE}_{A}$, which states that, for each strategy and each path in the outcome of this strategy in $M$, there is an equivalent path in the outcome of the same strategy in $M^{\prime}$. In order to show that $\textbf{AE}_{A}$ holds, we use the
conditions on the selection of events $E(g^{\prime})$ to be enabled at state
$g^{\prime}$ in $M^{\prime}$. The conditions include the requirement that
$\epsilon$ is always selected, together with the three conditions ${\bf
C1,C2,C3}$ adapted from (?; ?; ?).
Intuitively, ${\bf C1}$ states that, along each path $\pi$ in $M$ which starts
at $g^{\prime}$, each event that is dependent on an event in $E(g^{\prime})$
cannot be executed in $M$ unless an event in $E(g^{\prime})$ is executed first
in $M$. ${\bf C2}$ says that $E(g^{\prime})$ either contains all the events,
or only events that do not change the values of relevant propositions. ${\bf
C3}$ guarantees that for every cycle in $M^{\prime}$ containing no
$\epsilon$-transitions, there is at least one node $g^{\prime}$ in the cycle
for which all the enabled events of $g^{\prime}$ are selected.
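Schematically, and only as a sketch, the generation procedure has the following worklist shape; `choose_E` stands for any heuristic that returns a subset of the enabled events satisfying C1–C3 (always including $\epsilon$), and `succ` for the transition function of the model. The actual algorithms and heuristics are those of the cited work and are not reproduced here.

```python
# Schematic sketch of reduced-model generation by search from iota.
# choose_E(g) must satisfy C1-C3 and contain epsilon; succ(g, alpha)
# returns the alpha-successor of g.

def reduced_model(init, choose_E, succ):
    states, transitions = {init}, set()
    stack = [init]
    while stack:
        g = stack.pop()
        for alpha in choose_E(g):
            h = succ(g, alpha)
            transitions.add((g, alpha, h))
            if h not in states:
                states.add(h)
                stack.append(h)
    return states, transitions
```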
First, we show that $M$ and $M^{\prime}$ are stuttering-equivalent, i.e., they
have the same sets of paths modulo stuttering (that is, finite repetition of
states on a path). The crucial observation here is that the reduction of $M$
under the conditions C1, C2, C3 is equivalent to the reduction of $M$ without
the $\epsilon$-loops under the conditions C1, C2, C3 of (?), and then adding
the $\epsilon$-loops to all the states of the reduced model. Therefore, for
the paths without $\epsilon$-loops the stuttering equivalence can be shown
similarly to (?, Theorem 12) while for the paths with $\epsilon$-loops we need
more involved arguments in the proof. It turns out that in addition to the
fact that $M$ and $M^{\prime}$ are stuttering equivalent, we can show that
stuttering equivalent paths of $M$ and $M^{\prime}$ have the same maximal
sequence of visible events. From that, we can prove that $\textbf{AE}_{A}$
holds.
## 9 Conclusions
In this paper, we reconsider the asynchronous semantics of strategic ability
for multi-agent systems, proposed in (?). We have already hinted at certain
problems with the semantics in the extended abstract (?). Here, we demonstrate
in detail how the straightforward combination of strategic reasoning and
models of distributed systems leads to a counterintuitive interpretation of
formulas. We identify three main sources of problems. First, the execution
semantics does not handle reasoning about deadlock-inducing strategies well.
Secondly, fairness conditions need to be redefined for strategic play.
Thirdly, the class of representations lacks constructions to resolve the
tension between the asymmetry imposed by strategic operators on the one hand,
and the asymmetry of interaction, e.g., between communicating parties.
We deal with the problems as follows. First, we change the execution semantics
of strategies in asynchronous MAS by adding “silent” $\epsilon$-transitions in
states where no “real” event can be executed. We also propose and study the
condition of _opponent-reactiveness_ that assumes the agents outside the
coalition to not obstruct the execution of the strategy forever. Note that,
while the assumption may produce similar interpretation of formulas as in (?),
it is now explicit – as opposed to (?), where it was “hardwired” in the
semantics. The designer or verifier is free to adopt it or reject it,
depending on their view of how the agents in the system behave and choose
their actions.
Secondly, we propose a new notion of _strategic concurrency-fairness_ that
selects the fair executions of a strategy. Thirdly, we allow for
nondeterministic choices in agents’ repertoires. This way, the modeler can explicitly specify that one agent has more control over the outcome of an
event than the other participants of the event.
The main technical result consists in proving that partial order reduction for
strategic abilities (?) is still correct after the semantic modifications.
Thus, the new, more intuitive semantics admits efficient verification.
Beyond $\mathbf{ATL_{\mathrm{ir}}}$. In this study, we have concentrated on
the logic $\mathbf{ATL^{*}_{\mathrm{ir}}}$, i.e., the variant of
$\mathbf{ATL_{\mathrm{}}^{*}}$ based on memoryless imperfect information
strategies. Clearly, the concerns raised here are not entirely (and not even primarily) logical. $\mathbf{ATL^{*}_{\mathrm{ir}}}$ can be seen as a
convenient way to specify the players and the winning conditions in a certain
class of games (roughly speaking, $1.5$-player games with imperfect
information, positional strategies, and $\mathbf{LTL}$ objectives). The
semantic problems, and our solutions, apply to all such games interpreted over
arenas given by asynchronous MAS.
Moreover, most of the claims presented here are not specific to
$\mathrm{ir}$-strategies. In fact, we conjecture that our examples of semantic
side effects carry over to the other types of strategies (except for the
existence of coalitions all of whose strategies have empty outcomes, which can
happen for neither perfect information nor perfect recall). Similarly, our
technical results should carry over to the other strategy types (except for
the correctness of POR, which does not hold for agents with perfect
information). We leave the formal analysis of those cases for future work.
Other issues. An interesting question concerns the relationship between
asynchronous and synchronous models. We conjecture that AMAS with explicit
control can be simulated by concurrent game structures and alternating
transition systems. Similarly, it should be possible to simulate CGS and ATS
by AMAS with explicit control, at the expense of using a huge space of fully
synchronized actions. For the model checking complexity in AMAS with explicit
control, we expect the same results as in (?).
## Acknowledgements
We thank the anonymous reviewers for their insightful comments. The authors
acknowledge the support of the National Centre for Research and Development,
Poland (NCBR), and the Luxembourg National Research Fund (FNR), under the
PolLux/FNR-CORE project STV (POLLUX-VII/1/2019). W. Penczek and T. Sidoruk
acknowledge support from CNRS/PAS project PARTIES.
## References
* Alur et al. 1998 Alur, R.; Henzinger, T.; Mang, F.; Qadeer, S.; Rajamani, S.; and Tasiran, S. 1998\. MOCHA: Modularity in model checking. In Proceedings of CAV, volume 1427 of Lecture Notes in Computer Science, 521–525. Springer.
* Alur et al. 2001 Alur, R.; de Alfaro, L.; Grossu, R.; Henzinger, T.; Kang, M.; Kirsch, C.; Majumdar, R.; Mang, F.; and Wang, B.-Y. 2001\. jMocha: A model-checking tool that exploits design structure. In Proceedings of ICSE, 835–836. IEEE Computer Society Press.
* Alur, Henzinger, and Kupferman 1997 Alur, R.; Henzinger, T. A.; and Kupferman, O. 1997\. Alternating-time Temporal Logic. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS), 100–109. IEEE Computer Society Press.
* Alur, Henzinger, and Kupferman 1998 Alur, R.; Henzinger, T. A.; and Kupferman, O. 1998\. Alternating-time Temporal Logic. Lecture Notes in Computer Science 1536:23–60.
* Alur, Henzinger, and Kupferman 2002 Alur, R.; Henzinger, T. A.; and Kupferman, O. 2002\. Alternating-time Temporal Logic. Journal of the ACM 49:672–713.
* Belardinelli et al. 2017a Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2017a. Verification of broadcasting multi-agent systems against an epistemic strategy logic. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, 91–97.
* Belardinelli et al. 2017b Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2017b. Verification of multi-agent systems with imperfect information and public actions. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8-12, 2017, 1268–1276.
* Belardinelli et al. 2018 Belardinelli, F.; Lomuscio, A.; Murano, A.; and Rubin, S. 2018\. Alternating-time temporal logic on finite traces. In Proceedings of IJCAI, 77–83.
* Bergstra and Klop 1985 Bergstra, J. A., and Klop, J. W. 1985\. Algebra of communicating processes with abstraction. Theoretical Computer Science 37:77–121.
* Bloem et al. 2015 Bloem, R.; Jacobs, S.; Khalimov, A.; Konnov, I.; Rubin, S.; Veith, H.; and Widder, J. 2015\. Decidability of Parameterized Verification. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers.
* Bratman 1987 Bratman, M. E. 1987\. Intentions, Plans, and Practical Reason. Harvard University Press.
* Bulling and Jamroga 2011 Bulling, N., and Jamroga, W. 2011\. Alternating epistemic mu-calculus. In Proceedings of IJCAI-11, 109–114.
* Busard et al. 2014 Busard, S.; Pecheur, C.; Qu, H.; and Raimondi, F. 2014\. Improving the model checking of strategies under partial observability and fairness constraints. In Formal Methods and Software Engineering, volume 8829 of Lecture Notes in Computer Science. Springer. 27–42.
* Busard et al. 2015 Busard, S.; Pecheur, C.; Qu, H.; and Raimondi, F. 2015\. Reasoning about memoryless strategies under partial observability and unconditional fairness constraints. Information and Computation 242:128–156.
* Cermák et al. 2014 Cermák, P.; Lomuscio, A.; Mogavero, F.; and Murano, A. 2014\. MCMAS-SLK: A model checker for the verification of strategy logic specifications. In Proc. of CAV’14, volume 8559 of Lecture Notes in Computer Science, 525–532. Springer.
* Cermák, Lomuscio, and Murano 2015 Cermák, P.; Lomuscio, A.; and Murano, A. 2015\. Verifying and synthesising multi-agent systems against one-goal strategy logic specifications. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA., 2038–2044.
* Chen et al. 2013 Chen, T.; Forejt, V.; Kwiatkowska, M.; Parker, D.; and Simaitis, A. 2013\. PRISM-games: A model checker for stochastic multi-player games. In Proceedings of TACAS, volume 7795 of Lecture Notes in Computer Science, 185–191. Springer.
* Clarke and Emerson 1981 Clarke, E., and Emerson, E. 1981\. Design and synthesis of synchronization skeletons using branching time temporal logic. In Proceedings of Logics of Programs Workshop, volume 131 of Lecture Notes in Computer Science, 52–71.
* Clarke, Grumberg, and Peled 1999 Clarke, E. M.; Grumberg, O.; and Peled, D. A. 1999\. Model Checking. Cambridge, Massachusetts: The MIT Press.
* Courcoubetis et al. 1992 Courcoubetis, C.; Vardi, M.; Wolper, P.; and Yannakakis, M. 1992\. Memory-efficient algorithms for the verification of temporal properties. Formal Methods in System Design 1(2/3):275–288.
* Dima and Tiplea 2011 Dima, C., and Tiplea, F. L. 2011\. Model-checking ATL under imperfect information and perfect recall semantics is undecidable. CoRR abs/1102.4225.
* Dima, Maubert, and Pinchinat 2014 Dima, C.; Maubert, B.; and Pinchinat, S. 2014\. The expressive power of epistemic $\mu$-calculus. CoRR abs/1407.5166.
* Dima, Maubert, and Pinchinat 2015 Dima, C.; Maubert, B.; and Pinchinat, S. 2015\. Relating paths in transition systems: The fall of the modal mu-calculus. In Proceedings of MFCS, volume 9234 of Lecture Notes in Computer Science, 179–191. Springer.
* Fagin et al. 1995 Fagin, R.; Halpern, J. Y.; Moses, Y.; and Vardi, M. Y. 1995\. Reasoning about Knowledge. MIT Press.
* Fang et al. 2017 Fang, F.; Nguyen, T. H.; Pickles, R.; Lam, W. Y.; Clements, G. R.; An, B.; Singh, A.; Schwedock, B. C.; Tambe, M.; and Lemieux, A. 2017\. PAWS - A deployed game-theoretic application to combat poaching. AI Magazine 38(1):23–36.
* Gerth et al. 1999 Gerth, R.; Kuiper, R.; Peled, D.; and Penczek, W. 1999\. A partial order approach to branching time logic model checking. Information and Computation 150:132–152.
* Godefroid and Wolper 1994 Godefroid, P., and Wolper, P. 1994\. A partial approach to model checking. Information and Computation 110(2):305–326.
* Goranko and Jamroga 2015 Goranko, V., and Jamroga, W. 2015\. State and path coalition effectivity models of concurrent multi-player games. Autonomous Agents and Multi-Agent Systems 1–40.
* Guelev, Dima, and Enea 2011 Guelev, D. P.; Dima, C.; and Enea, C. 2011\. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. Journal of Applied Non-Classical Logics 21(1):93–131.
* Hoare 1978 Hoare, C. A. R. 1978\. Communicating sequential processes. Communications of the ACM 21(8):666–677.
* Holzmann 1997 Holzmann, G. J. 1997\. The model checker SPIN. IEEE Transactions on Software Engineering 23(5):279–295.
* Huang and van der Meyden 2014 Huang, X., and van der Meyden, R. 2014\. Symbolic model checking epistemic strategy logic. In Proceedings of AAAI, 1426–1432.
* Jamroga et al. 2018 Jamroga, W.; Penczek, W.; Dembiński, P.; and Mazurkiewicz, A. 2018\. Towards partial order reductions for strategic ability. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018, 156–165. IFAAMAS.
* Jamroga et al. 2019 Jamroga, W.; Knapik, M.; Kurpiewski, D.; and Mikulski, Ł. 2019\. Approximate verification of strategic abilities under imperfect information. Artificial Intelligence 277\.
* Jamroga et al. 2020 Jamroga, W.; Penczek, W.; Sidoruk, T.; Dembiński, P.; and Mazurkiewicz, A. 2020\. Towards partial order reductions for strategic ability. Journal of Artificial Intelligence Research 68:817–850.
* Jamroga, Knapik, and Kurpiewski 2017 Jamroga, W.; Knapik, M.; and Kurpiewski, D. 2017\. Fixpoint approximation of strategic abilities under imperfect information. In Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 1241–1249. IFAAMAS.
* Jamroga, Penczek, and Sidoruk 2021 Jamroga, W.; Penczek, W.; and Sidoruk, T. 2021\. Strategic abilities of asynchronous agents: Semantic side effects. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 1545–1547. ACM.
* Kacprzak and Penczek 2004 Kacprzak, M., and Penczek, W. 2004\. Unbounded model checking for alternating-time temporal logic. In Proceedings of the 3rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2004, 646–653. IEEE Computer Society.
* Kurpiewski et al. 2021 Kurpiewski, D.; Pazderski, W.; Jamroga, W.; and Kim, Y. 2021\. STV+Reductions: Towards practical verification of strategic ability using model reductions. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2021, 1770–1772. IFAAMAS.
* Kurpiewski, Jamroga, and Knapik 2019 Kurpiewski, D.; Jamroga, W.; and Knapik, M. 2019\. STV: Model checking for strategies under imperfect information. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2019, 2372–2374. IFAAMAS.
* Lomuscio and Raimondi 2006 Lomuscio, A., and Raimondi, F. 2006\. MCMAS : A model checker for multi-agent systems. In Proceedings of TACAS, volume 4314 of Lecture Notes in Computer Science, 450–454. Springer.
* Lomuscio, Penczek, and Qu 2010a Lomuscio, A.; Penczek, W.; and Qu, H. 2010a. Partial order reductions for model checking temporal epistemic logics over interleaved multi-agent systems. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2010, 659–666.
* Lomuscio, Penczek, and Qu 2010b Lomuscio, A.; Penczek, W.; and Qu, H. 2010b. Partial order reductions for model checking temporal-epistemic logics over interleaved multi-agent systems. Fundam. Inform. 101(1-2):71–90.
* Lomuscio, Qu, and Raimondi 2017 Lomuscio, A.; Qu, H.; and Raimondi, F. 2017\. MCMAS: An open-source model checker for the verification of multi-agent systems. International Journal on Software Tools for Technology Transfer 19(1):9–30.
* Lomuscio, van der Meyden, and Ryan 2000 Lomuscio, A.; van der Meyden, R.; and Ryan, M. 2000\. Knowledge in multiagent systems: initial configurations and broadcast. ACM Transactions on Computational Logic 1(2):247–284.
* Malvone, Murano, and Sorrentino 2017 Malvone, V.; Murano, A.; and Sorrentino, L. 2017\. Hiding actions in multi-player games. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017, São Paulo, Brazil, May 8-12, 2017, 1205–1213.
* Milner, Parrow, and Walker 1992 Milner, R.; Parrow, J.; and Walker, D. 1992\. A calculus of mobile processes, I. Information and Computation 100(1):1–40.
* Milner 1980 Milner, R. 1980\. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer.
* Pauly 2001a Pauly, M. 2001a. Logic for Social Software. Ph.D. Dissertation, University of Amsterdam.
* Pauly 2001b Pauly, M. 2001b. A logical framework for coalitional effectivity in dynamic procedures. Bulletin of Economic Research 53(4):305–324.
* Pauly 2002 Pauly, M. 2002\. A modal logic for coalitional power in games. Journal of Logic and Computation 12(1):149–166.
* Peled 1993 Peled, D. 1993\. All from one, one for all: On model checking using representatives. In Proceedings of the 5th International Conference on Computer Aided Verification, LNCS 697, 409–423. Springer-Verlag.
* Peled 1994 Peled, D. 1994\. Combining partial order reductions with on-the-fly model-checking. In Proceedings of the 6th International Conference on Computer Aided Verification, LNCS 818, 377–390. Springer-Verlag.
* Peled 1996 Peled, D. 1996\. Partial order reductions: Model checking using representatives. In Proceedings of the 21st International Symposium on Mathematical Foundations of Computer Science (MFCS’96), volume 1113 of LNCS, 93–112. Springer-Verlag.
* Penczek et al. 2000 Penczek, W.; Szreter, M.; Gerth, R.; and Kuiper, R. 2000\. Improving partial order reductions for universal branching time properties. Fundamenta Informaticae 43:245–267.
* Pilecki, Bednarczyk, and Jamroga 2014 Pilecki, J.; Bednarczyk, M.; and Jamroga, W. 2014\. Synthesis and verification of uniform strategies for multi-agent systems. In Proceedings of CLIMA XV, volume 8624 of Lecture Notes in Computer Science, 166–182. Springer.
* Priese 1983 Priese, L. 1983\. Automata and concurrency. Theoretical Computer Science 25(3):221 – 265.
* Schlingloff, Stubert, and Jamroga 2016 Schlingloff, B.; Stubert, H.; and Jamroga, W. 2016\. Collaborative embedded systems - a case study. In Proceedings of the 3rd International Workshop on Emerging Ideas and Trends in Engineering of Cyber-Physical Systems (EITEC@CPSWeek), 17–22.
* Schobbens 2004 Schobbens, P. Y. 2004\. Alternating-time logic with imperfect recall. Electronic Notes in Theoretical Computer Science 85(2):82–93.
## Appendix A Partial Order Reduction: Details
All the results in this appendix are formulated and proved for the semantics
of $\mathbf{ATL_{\mathrm{ir}}}$ over undeadlocked AMAS with explicit control.
Also, we restrict the formulas to $\mathbf{ATL_{\mathrm{}}^{*}}$ without
nested strategic modalities and the next step operator $\mathrm{X}\,$ (“simple
$\mathbf{ATL_{\mathrm{}}^{*}}$”, or $\mathbf{sATL_{\mathrm{}}^{*}}$). As noted
in (?), $\mathbf{sATL_{\mathrm{}}^{*}}$ is sufficient for most practical
specifications and much more expressive than $\mathbf{LTL}$. Yet, as we prove
below, it enjoys the same efficiency of partial order reduction.
We begin by introducing the relevant notions of equivalence. Then, we propose
conditions on reduced models that preserve the stuttering equivalence with and
without the assumption of _opponent-reactiveness_ (React). We point out
algorithms that generate such models, and prove their correctness.
It should be stressed that the reduction scheme proposed here is general, in
the sense that it preserves equivalent representatives of both fair and unfair
paths in the model. In particular, we do _not_ propose a variant of POR,
optimized for strategic concurrency-fair paths, analogous to reductions of (?)
for CF. A variant of POR for $\mathbf{sATL_{\mathrm{\mathrm{ir}}}}$ under the
SCF assumption is planned for future work.
### A.1 Properties of Submodels
Given an undeadlocked AMAS $S^{\text{$\epsilon$}}$, partial order reduction
attempts to generate only a subset of states and transitions that is
sufficient for verification of $S^{\text{$\epsilon$}}$, i.e., a relevant
_submodel_ of $\mathit{IIS}(S^{\text{$\epsilon$}})$.
###### Definition A.1 (Submodel).
Let models $M,{M{{}^{\prime}}}$ extend the same AMAS $S^{\text{$\epsilon$}}$,
so that $St^{\prime}\subseteq St$, $\iota\in St^{\prime}$, $T$ is an extension
of $T^{\prime}$, and $V^{\prime}=V|_{St^{\prime}}$. Then, we write
${M{{}^{\prime}}}\subseteq M$ and call ${M{{}^{\prime}}}$ a _submodel_ of $M$.
Note that, for each $g\in St^{\prime}$, we have
$\Pi_{{M{{}^{\prime}}}}(g)\subseteq\Pi_{M}(g)$.
###### Lemma A.2.
Let ${M{{}^{\prime}}}\subseteq M$, $A\in{\mathbb{A}\mathrm{gt}}$,
$\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$. Then, we have
$\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{A})=\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})\cap\Pi_{{M{{}^{\prime}}}}(\iota)$.
_Proof._ Note that each joint $\mathrm{ir}$-strategy in $M$ is also a well-defined joint $\mathrm{ir}$-strategy in ${M{{}^{\prime}}}$, as it is defined on
the local states of each agent of an AMAS which is extended by both $M$ and
${M{{}^{\prime}}}$. The lemma follows directly from the definition of React-
outcome (Def. 5.1 and 7.3), plus the fact that
$\Pi_{{M{{}^{\prime}}}}(\iota)\subseteq\Pi_{M}(\iota)$. $\blacksquare$
###### Lemma A.3.
Let $M$ be a model, $\pi,\pi^{\prime}\in\Pi_{M}(\iota)$, and for some
$i\in{\mathbb{A}\mathrm{gt}}:$
$\mathit{Evt}(\pi)\mid_{\mathit{Evt}_{i}}=\mathit{Evt}(\pi^{\prime})\mid_{\mathit{Evt}_{i}}$.
Then, for each $\mathrm{ir}$-strategy $\sigma_{i}$, we have
$\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$ iff
$\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$.
_Proof._ Let $\mathit{Evt}(\pi)\mid_{\mathit{Evt}_{i}}=b_{0}b_{1}\ldots$ be
the sequence of the events of agent $i$ in $\pi$. For each $b_{j}$ let
$\pi[b_{j}]$ denote the global state from which $b_{j}$ is executed in $\pi$.
By induction we can show that for each $j\geq 0$, we have
$\pi[b_{j}]^{i}=\pi^{\prime}[b_{j}]^{i}$. For $j=0$ it is easy to see that
$\pi[b_{0}]^{i}=\pi^{\prime}[b_{0}]^{i}=\iota^{i}$. Assume that the thesis holds for
$j=k$. The induction step follows from the fact that the local evolution $T_{i}$ is
a function, so if $\pi[b_{k}]^{i}=\pi^{\prime}[b_{k}]^{i}=l$ for some $l\in
L_{i}$, then $\pi[b_{k+1}]^{i}=\pi^{\prime}[b_{k+1}]^{i}=T_{i}(l,b_{k})$.
Thus, by Def. 5.1 and 7.3, for each $\mathrm{ir}$-strategy $\sigma_{i}$ we
have $\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$ iff
$\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{i})$, which
concludes the proof. $\blacksquare$
Lemma A.3 can be easily generalized to joint strategies
$\sigma_{A}\in\Sigma_{A}^{\mathrm{ir}}$.
### A.2 Stuttering Equivalence
Let $M$ be a model, ${M{{}^{\prime}}}\subseteq M$, and
$\mathit{PV}\subseteq\mathcal{PV}$ a subset of propositions. Stuttering
equivalence says that two paths can be divided into corresponding finite
segments, each satisfying exactly the same propositions. Stuttering path
equivalence (usually called _stuttering trace equivalence_ (?); we use a
slightly different name to avoid confusion with Mazurkiewicz traces, also used
in this paper) requires two models to always have corresponding,
stuttering-equivalent paths.
###### Definition A.4 (Stuttering equivalence).
Two paths $\pi\in\Pi_{M}(\iota)$ and
$\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ are stuttering equivalent,
denoted $\pi\equiv_{s}\pi^{\prime}$, if there exists a partition
$B_{0}=(\pi[0],\dots,\pi[i_{1}-1]),\ B_{1}=(\pi[i_{1}],\dots,\pi[i_{2}-1]),\
\ldots$ of the states of $\pi$, and an analogous partition
$B^{\prime}_{0},B^{\prime}_{1},\ldots$ of the states of $\pi^{\prime}$, s.t.
for each $j\geq 0:$ $B_{j}$ and $B^{\prime}_{j}$ are nonempty and finite, and
$V(g)\cap\mathit{PV}=V^{\prime}(g^{\prime})\cap\mathit{PV}$ for every $g\in
B_{j}$ and $g^{\prime}\in B^{\prime}_{j}$.
Models $M$ and ${M{{}^{\prime}}}$ are stuttering path equivalent, denoted
$M\equiv_{s}{M{{}^{\prime}}}$, if for each path $\pi\in\Pi_{M}(\iota)$ there
is a path $\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ such that
$\pi\equiv_{s}\pi^{\prime}$. (Typically, the definition also contains the
symmetric condition, which in our case always holds for $M$ and its submodel
${M{{}^{\prime}}}$, as $\Pi_{{M{{}^{\prime}}}}(\iota)\subseteq\Pi_{M}(\iota)$.)
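For intuition, the block structure of Definition A.4 can be checked
mechanically on finite path prefixes. The following is a minimal illustrative
sketch (ours, not part of the formal development), assuming each state is
abstracted by the set of propositions from $\mathit{PV}$ that hold in it: two
finite prefixes are stuttering equivalent iff collapsing maximal runs of
identical valuations yields the same block sequence.

```cpp
// Illustrative sketch only: stuttering equivalence of finite path prefixes,
// with each state abstracted by its valuation restricted to PV.
#include <set>
#include <string>
#include <vector>

using Valuation = std::set<std::string>;  // V(g) intersected with PV

// Collapse maximal runs of identical valuations (the blocks B_0, B_1, ...).
std::vector<Valuation> collapse(const std::vector<Valuation>& path) {
  std::vector<Valuation> blocks;
  for (const auto& v : path)
    if (blocks.empty() || blocks.back() != v) blocks.push_back(v);
  return blocks;
}

// Two finite prefixes are stuttering equivalent iff they induce the same
// sequence of valuation blocks.
bool stutterEquivalent(const std::vector<Valuation>& pi,
                       const std::vector<Valuation>& piPrime) {
  return collapse(pi) == collapse(piPrime);
}
```

Note that the definition itself concerns infinite paths, partitioned into
infinitely many nonempty finite blocks; the sketch only covers finite prefixes
of such paths.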
###### Theorem A.5 ((?)).
If $M\equiv_{s}{M{{}^{\prime}}}$, then we have $M,\iota\models\varphi$ iff
${M{{}^{\prime}}},\iota^{\prime}\models\varphi$, for any $\mathbf{LTL_{-X}}$
formula $\varphi$ over $\mathit{PV}$.
### A.3 Independence of Events
Intuitively, an event is invisible iff it does not change the valuations of
the propositions. (This concept of invisibility is technical, and is not
connected to the view of any agent in the sense of (?).) Additionally, we can
designate a subset of agents $A$ whose events are visible by definition.
Furthermore, two events are independent iff they are not events of the same
agent and at least one of them is invisible.
###### Definition A.6 (Invisible events).
Consider a model $M$, a subset of agents $A\subseteq{\mathbb{A}\mathrm{gt}}$,
and a subset of propositions $\mathit{PV}\subseteq\mathcal{PV}$. An event
$\alpha\in\mathit{Evt}$ is invisible wrt. $A$ and $\mathit{PV}$ if
$Agent(\alpha)\cap A=\emptyset$ and for each two global states
$g,g^{\prime}\in St$ we have that
$g\stackrel{{\scriptstyle\alpha}}{{\longrightarrow}}g^{\prime}$ implies
$V(g)\cap\mathit{PV}=V(g^{\prime})\cap\mathit{PV}$. The set of all invisible
events for $A,\mathit{PV}$ is denoted by $Invis_{A,\mathit{PV}}$, and its
complement – of visible events – by $Vis_{A,\mathit{PV}}=\mathit{Evt}\setminus
Invis_{A,\mathit{PV}}$.
###### Definition A.7 (Independent events).
The notion of _independence_
$I_{A,\mathit{PV}}\subseteq\mathit{Evt}\times\mathit{Evt}$ is defined as:
$I_{A,\mathit{PV}}=\\{(\alpha,\alpha^{\prime})\in\mathit{Evt}\times\mathit{Evt}\mid
Agent(\alpha)\cap Agent(\alpha^{\prime})=\emptyset\\}\ \setminus\
(Vis_{A,\mathit{PV}}\times Vis_{A,\mathit{PV}})$. Events
$\alpha,\alpha^{\prime}\in\mathit{Evt}$ are called dependent if
$(\alpha,\alpha^{\prime})\not\in I_{A,\mathit{PV}}$. If it is clear from the
context, we omit the subscript $\mathit{PV}$.
### A.4 Preserving Stuttering Equivalence
Rather than generating the full model $M=\mathit{IIS}(S^{\text{$\epsilon$}})$,
one can generate a reduced model ${M{{}^{\prime}}}$ satisfying the following
property:
$\textbf{AE}_{A}:\>\forall\sigma_{A}\\!\in\\!\Sigma_{A}^{\mathrm{ir}}\quad\forall\pi\\!\in\\!\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$
$\qquad\exists\pi^{\prime}\\!\in\\!\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{A})\quad\pi\\!\equiv_{s}\\!\pi^{\prime}$.
We define a class of algorithms that generate reduced models satisfying
$\textbf{AE}_{A}$ (Section A.4), and then prove that these models preserve
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ (Section A.4).
Algorithms for partial order reduction. POR is used to reduce the size of
models while preserving satisfaction for a class of formulas. The standard DFS
(?) or DDFS (?) is modified so that, from each visited state $g$, each event
$\alpha$ used to compute a successor state $g_{1}$ with
$g\stackrel{{\scriptstyle\alpha}}{{\rightarrow}}g_{1}$ is selected from
$E(g)\cup\\{{\epsilon}\\}$, where $E(g)\subseteq
enabled(g)\setminus\\{{\epsilon}\\}$. That is, the algorithm always selects
$\epsilon$, plus a subset of the enabled events at $g$. Let
$A\subseteq{\mathbb{A}\mathrm{gt}}$. The conditions on the heuristic selection
of $E(g)$ given below are inspired by (?; ?; ?); a schematic sketch of the
modified search follows the list of conditions.
C1
Along each path $\pi$ in $M$ that starts at $g$, each event that is dependent
on an event in $E(g)$ cannot be executed in $\pi$ without an event in $E(g)$
being executed first in $\pi$. Formally, $\forall\pi\in\Pi_{M}(g)$ such that
$\pi=g_{0}\alpha_{0}g_{1}\alpha_{1}\ldots$ with $g_{0}=g$, and $\forall
b\in\mathit{Evt}$ such that $(b,c)\notin I_{A}$ for some $c\in E(g)$, if
$\alpha_{i}=b$ for some $i\geq 0$, then $\alpha_{j}\in E(g)$ for some $j<i$.
C2
If $E(g)\neq enabled(g)\setminus\\{{\epsilon}\\}$, then $E(g)\subseteq
Invis_{A}$.
C3
For every cycle in ${M{{}^{\prime}}}$ containing no $\epsilon$-transitions,
there is at least one node $g$ in the cycle for which
$E(g)=enabled(g)\setminus\\{{\epsilon}\\}$, i.e., for which all the successors
of $g$ are expanded.
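The sketch below (our illustration; the identifiers and data structures are
assumptions, not the authors' implementation) shows where the conditions enter
the modified depth-first search: a heuristic selectE is assumed to return a
subset satisfying C1 and C2, the cycle proviso C3 is enforced by fully
expanding any state whose selected successor closes a cycle on the DFS stack,
and the $\epsilon$ self-loop is always selected.

```cpp
// Schematic reduced-model generation by a modified DFS (illustrative only).
#include <algorithm>
#include <functional>
#include <set>
#include <vector>

using State = int;          // global state identifier
using Event = int;
const Event EPSILON = -1;   // the epsilon self-loop event

struct Model {
  std::function<std::set<Event>(State)> enabled;  // enabled events, no epsilon
  std::function<State(State, Event)> successor;   // deterministic transition
  std::function<std::set<Event>(State)> selectE;  // heuristic enforcing C1, C2
};

void reducedDFS(const Model& m, State g,
                std::set<State>& visited, std::vector<State>& stack) {
  visited.insert(g);
  stack.push_back(g);
  std::set<Event> E = m.selectE(g);
  // C3 (cycle proviso): if a selected non-epsilon event closes a cycle on the
  // current DFS stack, expand all enabled events at g instead.
  for (Event a : E)
    if (std::find(stack.begin(), stack.end(), m.successor(g, a)) != stack.end()) {
      E = m.enabled(g);
      break;
    }
  E.insert(EPSILON);        // epsilon is always selected (self-loop at g)
  for (Event a : E) {
    State g1 = (a == EPSILON) ? g : m.successor(g, a);
    if (!visited.count(g1)) reducedDFS(m, g1, visited, stack);
  }
  stack.pop_back();
}
```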
###### Theorem A.8.
Let $A\subseteq{\mathbb{A}\mathrm{gt}}$,
$M=\mathit{IIS}(S^{\text{$\epsilon$}})$, and ${M{{}^{\prime}}}\subseteq M$ be
the reduced model generated by DFS with the choice of $E(g^{\prime})$ for
$g^{\prime}\in St^{\prime}$ given by conditions C1, C2, C3 and the
independence relation $I_{A}$. Then, ${M{{}^{\prime}}}$ satisfies
$\textbf{AE}_{A}$.
_Proof._ Let ${M{{}^{\prime}}}\subseteq M=\mathit{IIS}(S^{\text{$\epsilon$}})$
be the reduced model generated as specified. Notice that the reduction of $M$
under the conditions C1, C2, C3 above is equivalent to the reduction of $M$
without the $\epsilon$-loops under the conditions C1, C2, C3 of (?), and then
adding the $\epsilon$-loops to all the states of the reduced model. Although
the setting is slightly different, it can be shown similarly to (?, Theorem
12) that the conditions C1, C2, C3 guarantee that the models: (i) $M$ without
$\epsilon$-loops and (ii) ${M{{}^{\prime}}}$ without $\epsilon$-loops are
stuttering path equivalent. More precisely, for each path
$\pi=g_{0}a_{0}g_{1}a_{1}\cdots$ with $g_{0}=\iota$ (without
$\epsilon$-transitions) in $M$ there is a stuttering equivalent path
$\pi^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$
with $g^{\prime}_{0}=\iota$ (without $\epsilon$-transitions) in $M^{\prime}$
such that
$\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$, i.e.,
$\pi$ and $\pi^{\prime}$ have the same maximal sequence of visible events for
$A$. (*)
We will now prove that this implies $M\equiv_{s}{M{{}^{\prime}}}$. Removing
the $\epsilon$-loops from $M$ eliminates two kinds of paths: (a) paths with
infinitely many “proper” events, and (b) paths ending with an infinite
sequence of $\epsilon$-transitions. Consider a path $\pi$ of type (a) from
$M$. Notice that the path $\pi_{1}$, obtained by removing the
$\epsilon$-transitions from $\pi$, is stuttering-equivalent to $\pi$.
Moreover, by (*), there exists a path $\pi_{2}$ in ${M{{}^{\prime}}}$ without
$\epsilon$-transitions, which is stuttering-equivalent to $\pi_{1}$. By
transitivity of the stuttering equivalence, we have that $\pi_{2}$ is
stuttering equivalent to $\pi$. Since $\pi_{2}$ must also be a path in
${M{{}^{\prime}}}$, this concludes this part of the proof.
Consider a path $\pi$ of type (b) from $M$, i.e., $\pi$ ends with an infinite
sequence of $\epsilon$-transitions. Let $\pi_{1}$ be the sequence obtained
from $\pi$ after removing $\epsilon$-transitions, and $\pi_{2}$ be any
infinite path without $\epsilon$-transitions such that $\pi_{1}$ is its
prefix. Then, it follows from (*) that there is a stuttering equivalent path
$\pi_{2}^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$
with $g^{\prime}_{0}=\iota$ in $M^{\prime}$ such that
$\mathit{Evt}(\pi_{2})|_{Vis_{A}}=\mathit{Evt}(\pi_{2}^{\prime})|_{Vis_{A}}$.
Consider the minimal finite prefix $\pi_{1}^{\prime}$ of $\pi_{2}^{\prime}$
such that
$\mathit{Evt}(\pi_{1}^{\prime})|_{Vis_{A}}=\mathit{Evt}(\pi_{1})|_{Vis_{A}}$.
Clearly, $\pi_{1}^{\prime}$ is a sequence in $M^{\prime}$ and can be extended
with an infinite number of $\epsilon$-transitions to the path $\pi^{\prime}$
in $M^{\prime}$. It is easy to see that $\pi$ and $\pi^{\prime}$ are
stuttering equivalent.
So far, we have shown that our reduction under the conditions C1, C2, C3
guarantees that the models $M$ and ${M{{}^{\prime}}}$ are stuttering path
equivalent, and more precisely that for each path
$\pi=g_{0}a_{0}g_{1}a_{1}\cdots$ with $g_{0}=\iota$ in $M$ there is a
stuttering equivalent path
$\pi^{\prime}=g^{\prime}_{0}a^{\prime}_{0}g^{\prime}_{1}a^{\prime}_{1}\cdots$
with $g^{\prime}_{0}=\iota$ in $M^{\prime}$ such that
$\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$, i.e.,
$\pi$ and $\pi^{\prime}$ have the same maximal sequence of visible events for
$A$. To show that ${M{{}^{\prime}}}$ satisfies $\textbf{AE}_{A}$, consider an
$\mathrm{ir}$-joint strategy $\sigma_{A}$ and
$\pi\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$. As demonstrated
above, there is $\pi^{\prime}\in\Pi_{{M{{}^{\prime}}}}(\iota)$ such that
$\pi\equiv_{s}\pi^{\prime}$ and
$\mathit{Evt}(\pi)|_{Vis_{A}}=\mathit{Evt}(\pi^{\prime})|_{Vis_{A}}$. Since
$\mathit{Evt}_{i}\subseteq Vis_{A}$ for each $i\in A$, the same sequence of
events of each $\mathit{Evt}_{i}$ is executed in $\pi$ and $\pi^{\prime}$.
Thus, by the generalization of Lemma A.3 to $\mathrm{ir}$-joint strategies we
get $\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{A})$. So,
by Lemma A.2 we have
$\pi^{\prime}\in\mathit{out}^{\textup{React}}_{M{{}^{\prime}}}(\iota,\sigma_{A})$.
$\blacksquare$
Algorithms generating reduced models, in which the choice of $E(g)$ is given
by similar conditions, can be found for instance in (?; ?; ?; ?; ?; ?).
POR for proactive opponents. The same reduction still works without the
assumption of opponent-reactiveness (React).
###### Theorem A.9.
Let $M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ be an undeadlocked IIS.
Then, its reduced model $M^{\text{$\epsilon$}}{{}^{\prime}}$, generated as in
Theorem A.8, satisfies $\textbf{AE}_{A}$.
_Proof (Sketch)._ In this setting, there is no auxiliary agent in the AMAS,
and $\epsilon$-transitions are added directly to the IIS in accordance with
Definition 4.3. Hence, not every global state of $M^{\text{$\epsilon$}}$
necessarily has an $\epsilon$ loop, but only those where a miscoordinating
combination of events exists. However, this does not impact the reduction
itself.
First, note that Lemma A.2 still holds, directly from the definition of
outcome (Definition 3.5). Furthermore, because in the undeadlocked IIS
$M^{\text{$\epsilon$}}$ the $\epsilon$-transitions do not belong to any agent,
Lemma A.3, where sequences of some agent $i$’s events are considered, also
holds. Note that the React condition only restricts the outcome sets, and not
the model itself: both $M=IIS(S^{\epsilon})$ and $M^{\text{$\epsilon$}}$
contain the same two types (a) and (b) of paths with $\epsilon$-transitions as
discussed in Theorem A.8. Hence, following its reasoning, it can first be
shown that models $M^{\text{$\epsilon$}}$ and
$M^{\text{$\epsilon$}}{{}^{\prime}}$ without their $\epsilon$-transitions are
stuttering path equivalent, and that it remains the case also when both types
of paths including $\epsilon$ loops are included.
Note that the remark about ${M{{}^{\prime}}}$ being equivalent to reducing $M$
without $\epsilon$ loops and adding them to each global state obviously does
not apply to $M^{\text{$\epsilon$}}$ (not every global state of
$M^{\text{$\epsilon$}}$ has them in the first place). However, this
observation has no bearing on the proof. As before, $\epsilon$ is explicitly
stated to be selected for the subset $E(g)$, ensuring preservation of
representative paths with $\epsilon$ in $M^{\text{$\epsilon$}}{{}^{\prime}}$.
$\blacksquare$
Correctness of reductions satisfying $\textbf{AE}_{A}$. We show that the
reduced models satisfying $\textbf{AE}_{A}$ preserve
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$.
###### Theorem A.10.
Let $A\subseteq{\mathbb{A}\mathrm{gt}}$, and let models
${M{{}^{\prime}}}\subseteq M$, $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq
M^{\text{$\epsilon$}}$ satisfy $\textbf{AE}_{A}$. For each
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over
$\mathit{PV}$, that refers only to coalitions $\hat{A}\subseteq A$, we have
that:
1. 1.
$M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff
${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$,
and
2. 2.
$M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff
$M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.
_Proof._ Proof by induction on the structure of $\varphi$. We show the case
$\varphi=\langle\\!\langle{\hat{A}}\rangle\\!\rangle\gamma$.
The cases for $\neg,\land$ are straightforward.
Notice that
$\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})\subseteq\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{\hat{A}})$,
which together with the condition $\textbf{AE}_{A}$ implies that the sets
$\mathit{out}^{\textup{React}}_{M}(\iota,\sigma_{\hat{A}})$ and
$\mathit{out}^{\textup{React}}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})$ are
stuttering path equivalent. Analogously, this is the case for
$\mathit{out}_{{M{{}^{\prime}}}}(\iota,\sigma_{\hat{A}})\subseteq\mathit{out}_{M}(\iota,\sigma_{\hat{A}})$,
i.e. without the React assumption. Hence, (1) and (2) follow from Theorem A.5.
$\blacksquare$
Together with Theorems A.8 and A.9, we obtain the following.
###### Theorem A.11.
Let $M=\mathit{IIS}(S^{\text{$\epsilon$}})$,
$M^{\text{$\epsilon$}}=IIS^{\text{$\epsilon$}}(S)$ and let
${M{{}^{\prime}}}\subseteq M$ and $M^{\text{$\epsilon$}}{{}^{\prime}}\subseteq
M^{\text{$\epsilon$}}$ be the reduced models generated by DFS with the choice
of $E(g^{\prime})$ for $g^{\prime}\in St^{\prime}$ given by conditions C1, C2,
C3 and the independence relation $I_{A,\mathit{PV}}$. For each
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ formula $\varphi$ over
$\mathit{PV}$, that refers only to coalitions $\hat{A}\subseteq A$, we have:
1. 1.
$M,\iota\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$ iff
${M{{}^{\prime}}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}^{\textup{React}}\varphi$,
and
2. 2.
$M^{\text{$\epsilon$}},\iota\models_{{}_{\mathrm{ir}}}\varphi$ iff
$M^{\text{$\epsilon$}}{{}^{\prime}},\iota^{\prime}\models_{{}_{\mathrm{ir}}}\varphi$.
This concludes the proof that the adaptation of POR for $\mathbf{LTL_{-X}}$ to
$\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$, originally presented in (?),
remains sound in the updated semantics proposed in Sections 4 and 7. That is,
the structural condition $\textbf{AE}_{A}$ is sufficient to obtain correct
reductions for $\mathbf{sATL_{\mathrm{\mathrm{ir}}}^{*}}$ with and without the
new opponent-reactiveness assumption (Theorem A.11). Thanks to that, one can
potentially reuse or adapt the existing POR algorithms and tools for
$\mathbf{LTL_{-X}}$, and the actual reductions are likely to be substantial.
# Adaptive Extremum Seeking Using Recursive Least Squares
Nursefa Zengin and Baris Fidan
School of Engineering, University of Waterloo, ON, Canada
<EMAIL_ADDRESS>
###### Abstract
The extremum seeking (ES) optimization approach has been very popular due to
its non-model based analysis and implementation. This approach has mostly been
used with gradient based search algorithms. Since least squares (LS)
algorithms are typically observed to be superior to gradient algorithms in
terms of convergence speed and robustness to measurement noise, it is expected
that LS based ES schemes will also provide faster convergence and robustness
to sensor noise. In this paper, with this motivation, a recursive least
squares (RLS) estimation based ES scheme is designed and analysed for
application to scalar parameter and vector parameter static map and dynamic
systems. Asymptotic convergence to the extremum is established for all the
cases. Simulation studies are provided to validate the performance of the
proposed scheme.
## I Introduction
Extremum seeking (ES) is a popular technique for adaptive optimization of the
performance of dynamic systems by tuning certain system parameters based on
measurements. The main advantage of this technique is that limited or no
knowledge of the plant model is required. ES is suitable for optimization of
the performance of systems with complex dynamics, unavailable suitable
measurements to validate the model, and time-varying disturbances that are
difficult to model accurately ([1]).
The most common ES algorithm used in the literature is the classical band-pass
filtering based one, in which the gradient of the output with respect to the
input determines the direction in which the input variables are adjusted. This
method was successfully applied to different application areas including
biochemical reactors ([2, 3]), ABS control in automotive brakes ([4, 1, 5, 6,
7]), mobile robots ([8, 9, 10]), and mobile sensor networks ([11, 12, 13]).
Among other types of ES algorithms, perturbation based ES relies on added
perturbation signals to estimate the gradient of the output by correlating the
perturbations. To overcome the implementation drawbacks of introducing
perturbation signals, some methods that are free of perturbation signals have
been developed by [14, 15, 16].
Convergence rate of conventional ES algorithms is a limiting factor in many
applications. Recursive Least Squares (RLS) based estimation has significant
potential in relaxing this limitation and improving robustness to measurement
noises. [17, 15, 18] used certain LS based techniques in their ES algorithms
to obtain better convergence results. [17] estimated the gradient of the
output with respect to the input using a LS based adaptive law for a class of
nonlinear dynamic systems together with a sinusoidal perturbation signal. [15]
used past data of a performance map to estimate the gradient of this
performance map by a first order LS fit. The proposed method used no dither
signal, but utilized a time window of history data of the performance map.
[18] provided general results and a framework for the design of ES schemes
applied to systems with parametric uncertainties and used LS algorithm to
estimate unknown parameters of the known system.
In the absence of parameter knowledge, a series of control/optimization
schemes have been proposed in the literature utilizing certain ES tools such
as switching methods ([19]), signal perturbation for persistence of excitation,
and band pass filtering ([19], [20], [21], [18]). [22] and [23] used a discrete
time ES scheme to estimate the gradient as a time-varying parameter using
LS-like update laws. They removed the need for an averaging system in order to
achieve the convergence of ES. The designs are simulated for static unknown
maps, systems with unknown discrete-time dynamics, and sampled-data systems.
In this paper, a continuous time RLS parameter estimation based ES scheme is
designed and analysed for scalar parameter and vector parameter static map and
dynamic systems. Asymptotic convergence to the extremum is established for
each case. Numerical simulation examples are provided to validate the
performance of the proposed scheme, comparing the results with a gradient
parameter estimation based one. A specific simulation example from [1],
antilock braking system (ABS) control, is studied to compare the performance
of RLS estimation based ES with classical gradient based ES.
Contents of this paper are as follows. Section II is dedicated to the problem
statement. In Section III, existing classical perturbation based ES is
reviewed. Proposed RLS estimation based adaptive ES is developed for scalar
parameter systems in Section IV, and for vector parameter systems in Section
V. Comparative simulation examples are presented in Section VI. Finally,
conclusions of the paper are given in Section VII.
## II Problem Statement
The ES problem of interest is defined for static map systems and dynamic
systems separately in the following subsections.
### II-A Static Maps
Consider a concave static map system
$\displaystyle
y=h_{s}(u)=\bar{h}_{s}(\theta^{*},u),\quad\theta^{*}=\begin{bmatrix}\theta^{*}_{1}&\cdots&\theta^{*}_{N}\end{bmatrix}^{T},$
(1)
where $\theta^{*}\in\mathbb{R}^{N}$ is a fixed unknown parameter vector,
$u\in\mathbb{R}^{m}$ is the input and $y\in\mathbb{R}$ is the output of the
system. Assume that the control input signal $u$ is generated by a smooth
control law
$u=\alpha(\theta)$ (2)
parametrized by a control parameter vector $\theta\in\mathbb{R}^{N}$.
###### Assumption 1
The static map $\bar{h}_{s}(\theta^{*},u)$ is smoothly differentiable.
###### Assumption 2
$h_{s}(u)=\bar{h}_{s}(\theta^{*},u)$ has a single extremum (maximum) $y^{*}$
at $u=\alpha(\theta^{*}).$
The control objective is to maximize the steady-state value of $y$ but without
requiring the knowledge of $\theta^{*}$ or the system function $h_{s}$.
### II-B Dynamic Systems
Consider a general multi-input-single-output (MISO) nonlinear system
$\dot{x}=f(x,u)=\bar{f}(\theta^{*},x,u),$ (3)
$y=h_{d}(x)=\bar{h}_{d}(\theta^{*},\theta)=h(\theta),$ (4) $\theta=\pi(x)$ (5)
where $x\in\mathbb{R}^{n}$ is the state, $u\in\mathbb{R}^{m}$ is the input,
$y\in\mathbb{R}$ is the output, all measurable, and
$f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}$ and
$h_{d}=h\circ\pi$ are smooth functions. Assume that the control input signal
$u$ is in the form (2), the control parameter $\theta\in\mathbb{R}^{N}$ is
dependant on $x$ through a map
$\pi(.):\mathbb{R}^{n}\rightarrow\mathbb{R}^{N}$.
The closed loop system can be written as follows:
$\dot{x}=f(x,\alpha(\theta))=f(x,\alpha(\pi(x))).$ (6)
The equilibria of (6) can be parameterized by $\theta$. The following
assumptions about the closed loop system (3) are made, similarly to [21].
###### Assumption 3
There exists a smooth function $l:\mathbb{R}^{N}\rightarrow\mathbb{R}^{m}$
such that
$f(x,\alpha(x,\theta))=0\quad\text{if and only if}\quad x=l(\theta),$ (7)
for any $(x,\theta)\in\mathbb{R}^{m}\times\mathbb{R}^{N}.$ For each
$\theta\in\mathbb{R}^{N}$, the equilibrium $x_{e}=l(\theta)$ of the system (6)
is locally exponentially stable with decay and overshoot constants uniformly
dependent on $\theta$.
###### Assumption 4
There exists $\theta^{*}\in\mathbb{R}^{N}$ such that for all admissible $x$
values, $h_{d}(x)$ has its unique maximum at $x=x^{*}=l(\theta^{*}),$
$y^{\prime}(x^{*})={\frac{\partial h}{\partial
x}}\Bigr{|}_{\begin{subarray}{c}x=x^{*}\end{subarray}}=0,$ (8)
and the $n\times n$ Hessian matrix
$y^{\prime\prime}(x^{*})={\frac{\partial^{2}h}{\partial
x^{2}}}\Bigr{|}_{\begin{subarray}{c}x=x^{*}\end{subarray}}$ is negative
definite.
The control objective is to maximize the steady-state value of $y$ but without
requiring the knowledge of $\theta^{*}$ or the system functions $h_{d},f$.
This objective could be achieved perfectly if $\theta^{*}$ were known and
substituted in (2).
The control parameter vector estimation can be done in different ways, leading
to different ES schemes, even for the fixed control structure (2). The
assumption that $h$ has a maximum is without loss of generality for a maximum
seeking task; the minimum seeking case would be treated identically, replacing
$y$ with $-y$ in the subsequent feedback design.
In the next section, existing classical perturbation based ES approach will be
reviewed to give an idea about our proposed design and to later use in
simulation comparisons.
## III Classical Perturbation Based Extremum Seeking for Dynamic Systems
In the classical ES approach shown in Fig. 1, a high pass filter, a multiplier,
and a low pass filter are used to find the extremum.
nonlinear system is considered in the design of [21]. A multi input ES
approach is examined in [24].
Figure 1: Classic perturbation based ES scheme for multi input dynamic systems
given by [24].
In the approach in [1, 21], the control law (2) feeding the plant (3) is tuned
via the time-varying parameter
$\theta=\begin{bmatrix}\theta_{1},\theta_{2},\cdots,\theta_{N}\end{bmatrix}^{T}$
that is produced by
$\displaystyle\theta(t)=\hat{\theta}(t)+S(t),$ (9)
where
$\displaystyle S(t)$
$\displaystyle=\begin{bmatrix}a_{1}sin(\omega_{1}t)&a_{2}sin(\omega_{2}t)&\cdots&a_{N}sin(\omega_{N}t)\end{bmatrix}^{T},$
(10)
and $\hat{\theta}(t)$ is generated by
$\displaystyle\dot{\hat{\theta}}(t)$ $\displaystyle=k\hat{G}(t),$ (11)
$\displaystyle\dot{\hat{G}}(t)$
$\displaystyle=\omega_{l}M(t)(y(t)-\eta(t))-\omega_{l}\hat{G}(t),$
$\displaystyle\dot{\eta}(t)$
$\displaystyle=\omega_{h}\left(y(t)-\eta(t)\right).$
The demodulation signal is selected as
$M(t)=\begin{bmatrix}\frac{2}{a_{1}}\sin(\omega_{1}t)&\frac{2}{a_{2}}\sin(\omega_{2}t)&\cdots&\frac{2}{a_{N}}\sin(\omega_{N}t)\end{bmatrix}^{T}$.
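For concreteness, the single-parameter $(N=1)$ version of (9)-(11) can be
simulated with a forward-Euler discretization as in the sketch below; the
plant and the numeric values are illustrative placeholders, not the tuned
values used later in the paper.

```cpp
// Illustrative forward-Euler simulation of the classical scheme (9)-(11), N=1.
#include <cmath>

double plant(double theta) {                // placeholder concave static map
  return 8.0 - 5.0 * (theta - 0.3) * (theta - 0.3);
}

int main() {
  double k = 0.08, wl = 0.8, wh = 0.6, a = 0.01, w = 3.0;  // gains, dither
  double thetaHat = 0.01, Ghat = 0.0, eta = 0.0, dt = 1e-3;
  for (double t = 0.0; t < 100.0; t += dt) {
    double theta = thetaHat + a * std::sin(w * t);         // eqs. (9)-(10)
    double y = plant(theta);
    double M = (2.0 / a) * std::sin(w * t);                // demodulation
    thetaHat += dt * (k * Ghat);                           // eq. (11)
    Ghat += dt * (wl * M * (y - eta) - wl * Ghat);
    eta += dt * (wh * (y - eta));
  }
  return 0;  // thetaHat should end near the maximizer of the map
}
```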
In the next two sections, we develop an RLS estimation based ES scheme with
forgetting factor as an alternative to the approach of Section III. Our
proposed RLS estimation based adaptive ES scheme is developed separately for
two cases: scalar parameter $(N=1)$ systems and vector parameter $(N>1)$
systems, in Sections IV and V, respectively.
## IV RLS based ES Design for Scalar Parameter Systems
### IV-A Static Maps
Consider the static map (1) and the control law (2) for scalar case, $N=1$,
under Assumptions 1 and 2 about the closed-loop system. The proposed scheme is
depicted in Fig. 2.
RLS estimation based ES block shown in Fig. 2 consists of two parts: an RLS
based adaptive parameter identifier estimating the gradient
$h_{\theta}=\frac{\partial y}{\partial\theta}$ and a control law to be fed by
this estimate.
Figure 2: RLS based ES scheme for scalar parameter static maps.
Consider the static map equation (1). In this equation, the time derivative of
the output $y$ is given by
$\dot{y}=h_{\theta}\dot{\theta}.$ (12)
Design of the RLS based estimator to generate $\hat{h}_{\theta}$ is based on
the relation (12), which is in the linear parametric model form
$z=h_{\theta}\phi,$ (13)
where
$z=\dot{y},\quad\phi=\dot{\theta}.$ (14)
If $\dot{y}$ is not available for measurement, then the regressor signals can
be generated as
$\displaystyle
z=\frac{s}{s+\omega_{l}}[y],\quad\phi=\frac{1}{s+\omega_{l}}[\dot{\theta}],$
(15)
i.e.,
$\displaystyle\dot{z}=-\omega_{l}z+\dot{y},\quad\dot{\phi}=-\phi\omega_{l}+\dot{\theta},$
(16)
where $\omega_{l}>0$ is a constant design parameter. The control law
generating $\theta$ is proposed to be
$\dot{\theta}=k\hat{h}_{\theta},\quad k>0.$ (17)
Assuming that the time variation of $h_{\theta}$ is sufficiently slow, we
design an RLS estimator for the parametric model (13) as follows:
$\dot{\hat{h}}_{\theta}=p\epsilon\phi,$ (18) $\dot{p}=\beta p-p^{2}\phi^{2},$
(19) $\epsilon=z-\hat{h}_{\theta}\phi,$ (20)
where $\beta>0$ is the forgetting factor and $p$ is the covariance term. The
overall ES scheme producing $\theta(t)$ can be summarized by (17), (18), (19),
and (20).
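A minimal discrete-time sketch of this scalar scheme is given below; the
concave map, the gains and the initial values are our placeholders,
$\dot{y}$ is approximated by a finite difference, and a nonzero initial
estimate $\hat{h}_{\theta}(0)$ provides the initial excitation (the scalar
scheme uses no dither).

```cpp
// Illustrative forward-Euler simulation of the scalar RLS-based ES (15)-(20).
#include <cmath>

double plant(double theta) {                // placeholder concave static map
  return 8.0 - 5.0 * std::pow(theta - 0.3, 2);
}

int main() {
  double k = 0.01, wl = 0.8, beta = 0.98;   // gain, filter pole, forgetting
  double theta = 0.01, hHat = 0.1, p = 1e3; // nonzero hHat(0) starts motion
  double z = 0.0, phi = 0.0, dt = 1e-3, yPrev = plant(theta);
  for (double t = 0.0; t < 50.0; t += dt) {
    double y = plant(theta);
    double ydot = (y - yPrev) / dt;          // finite-difference y-dot
    double thetadot = k * hHat;              // control law (17)
    double eps = z - hHat * phi;             // estimation error (20)
    hHat += dt * (p * eps * phi);            // gradient update (18)
    p += dt * (beta * p - p * p * phi * phi);   // covariance update (19);
                                                // bound p in practice
    z += dt * (-wl * z + ydot);              // filters (16)
    phi += dt * (-wl * phi + thetadot);
    theta += dt * thetadot;
    yPrev = y;
  }
  return 0;  // theta should end near the maximizer theta* = 0.3
}
```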
### IV-B Dynamic Systems
The RLS estimation based ES control scheme (17)-(20) applies to the dynamic
system (3)-(5) for $N=1$ with the control law (2) under Assumptions 3 and 4.
The proposed ES scheme is depicted in Fig. 3.
Figure 3: RLS based ES scheme for scalar parameter dynamic systems.
### IV-C Stability Analysis
In this section, the stability proof of the proposed schemes in Sections IV-A
and IV-B is presented. We know that $\theta^{*}$ is the equilibrium point and
that the gradient satisfies $h_{\theta}=0$ at $\theta=\theta^{*}$. We can
state our stability result as follows:
###### Theorem IV.1
Consider the RLS estimation based ES scheme given in Figs. 2, 3 and defined in
(17) - (20) with $z$ and $\phi$ as given in (14) or (15), and Assumptions 1 \-
4. For any initial condition $\hat{\theta}(0)\in\mathbb{R}^{N}$ and adaptation
gain $k$, $\theta(t)$ asymptotically converges to small neighborhood of
extremum parameter $\theta^{*}$.
###### Proof:
We consider the Lyapunov function as
$V(\theta(t))=\frac{1}{2}\left(\theta(t)-\theta^{*}\right)^{2}=\frac{1}{2}\tilde{\theta}^{2}.$
(21)
We write the time derivative of $V$ along the solutions of (17) as
$\dot{V}=\dot{\theta}\left(\theta(t)-\theta^{*}\right)=\dot{\theta}\tilde{\theta}.$
(22)
Substituting (17) into (22), we obtain
$\dot{V}=k\hat{h}_{\theta}\tilde{\theta}.$ (23)
For the maximum case, $k>0$. Negative definiteness of (23) depends on the
initial condition $\theta_{0}$ that determines the signs of $\hat{h}_{\theta}$
and $\tilde{\theta}$. If $\theta(0)<\theta^{*}$, then $\hat{h}_{\theta}>0$ and
$\tilde{\theta}<0$. On the other hand, if $\theta(0)>\theta^{*}$, then
$\hat{h}_{\theta}<0$ and $\tilde{\theta}>0$. Hence, for both cases
$\dot{V}<0$. We also need to examine the forgetting factor $\beta$ and the
persistent excitation (PE) of $\phi$. If $\phi$ is PE, then the RLS update
(18)-(19) guarantees that $p\in\mathcal{L}_{\infty}$ and
$\theta(t)\to\theta^{*}$ as $t\to\infty$.
When $\beta>0$, the convergence of $\theta(t)\to\theta^{*}$ is exponential
([25]).
∎
## V RLS based ES Design for Vector Parameter Systems
In this section, the proposed RLS estimation based ES scheme is extended to
the systems with vector parameters $(N>1)$. Similar to the classical gradient
based analysis, small sinusoidal perturbation signals with different
frequencies ($\omega_{1},\cdots,\omega_{N}$) are added to the control signals
to provide sufficiently rich excitation.
### V-A Static Maps
Figure 4: RLS based ES scheme for vector parameter static maps.
Consider the block diagram in Fig. 4 for the static map in (1). The time
derivative of (1) is given by
$\dot{y}=h_{\theta}^{T}\dot{\theta},$ (24)
which, similarly to (13), can be written in the linear parametric form
$z=h_{\theta}^{T}\phi,$ (25)
where $z$ and $\phi$ are again defined by either (14) or (15). The control law
(17) is used for updating $\theta$ in the vector case as well. The design of
the RLS estimator to produce $\hat{h}_{\theta}$ is based on the parametric
model (25) and is given as follows ([25]):
$\dot{\hat{h}}_{\theta}=P\epsilon\phi,$ (26) $\dot{P}=\beta P-P\phi\phi^{T}P,$
(27) $\epsilon=z-\hat{h}_{\theta}^{T}\phi,$ (28)
where $\beta$ is the forgetting factor and $P$ is the covariance matrix of the
RLS algorithm. The control law generating $\theta$ is proposed to be
$\dot{\hat{\theta}}=k\hat{h}_{\theta},\quad k>0.$ (29)
$\theta(t)=\hat{\theta}(t)+S(t),$ (30)
where $S(t)$ is defined as in (10). Differently from the scalar parameter
case, we use the perturbation signals $S(t)$: dither signals with different
frequencies are applied to each input channel in order to achieve overall PE.
A minimal simulation sketch of this vector scheme is given below.
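The sketch below illustrates (26)-(30) for $N=2$ on a separable concave
placeholder map; plain 2x2 arrays stand in for the covariance matrix $P$,
dithers of distinct frequencies provide the PE discussed above, and all
numeric values are our assumptions.

```cpp
// Illustrative forward-Euler simulation of the vector RLS-based ES (26)-(30).
#include <cmath>

double plant(const double th[2]) {          // placeholder concave static map
  return 5.0 - 4.0 * std::pow(th[0] - 0.2, 2) - 6.0 * std::pow(th[1] - 0.3, 2);
}

int main() {
  const double k = 0.01, wl = 0.8, beta = 0.98, dt = 1e-3;
  const double a[2] = {0.01, 0.01}, w[2] = {7.0, 10.0};    // dither S(t)
  double thetaHat[2] = {0.1, 0.1}, hHat[2] = {0.0, 0.0};
  double P[2][2] = {{1e4, 0.0}, {0.0, 1e4}};
  double z = 0.0, phi[2] = {0.0, 0.0}, yPrev = plant(thetaHat);
  for (double t = 0.0; t < 100.0; t += dt) {
    double theta[2], thetadot[2];
    for (int i = 0; i < 2; ++i) {
      theta[i] = thetaHat[i] + a[i] * std::sin(w[i] * t);            // (30)
      thetadot[i] = k * hHat[i] + a[i] * w[i] * std::cos(w[i] * t);
    }
    double y = plant(theta), ydot = (y - yPrev) / dt;
    double eps = z - (hHat[0] * phi[0] + hHat[1] * phi[1]);          // (28)
    double Pphi[2] = {P[0][0] * phi[0] + P[0][1] * phi[1],
                      P[1][0] * phi[0] + P[1][1] * phi[1]};
    for (int i = 0; i < 2; ++i) {
      thetaHat[i] += dt * (k * hHat[i]);                             // (29)
      hHat[i] += dt * (Pphi[i] * eps);                               // (26)
    }
    for (int i = 0; i < 2; ++i)                                      // (27)
      for (int j = 0; j < 2; ++j)
        P[i][j] += dt * (beta * P[i][j] - Pphi[i] * Pphi[j]);
    z += dt * (-wl * z + ydot);
    for (int i = 0; i < 2; ++i) phi[i] += dt * (-wl * phi[i] + thetadot[i]);
    yPrev = y;
  }
  return 0;  // thetaHat should approach [0.2, 0.3]
}
```

The dither derivative enters the regressor $\phi$ through $\dot{\theta}$,
which is what makes $\phi$ persistently exciting in the vector case.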
### V-B Dynamic Systems
Figure 5: RLS based ES scheme for vector parameter dynamic systems.
The RLS estimation based ES scheme (26) - (29) applies to the dynamic system
(3)-(5) with control law (2) under Assumptions 3 and 4 for vector parameter
systems. Block diagram of the proposed ES scheme is given in Fig.5.
### V-C Stability Analysis
The intuition in (30) is to satisfy persistence of excitation for
$N$-dimensional $\phi$ by introducing at least one distinct dither frequency
for each input, following the standard perturbation based ES control
approaches mentioned in Section III. Similar to the analysis in Section IV-C,
consider the Lyapunov function as
$V(\tilde{\theta}(t))=\frac{1}{2}\tilde{\theta}^{T}\tilde{\theta}.$ (31)
We write the time derivative of $V$ along the solutions of (29) as
$\dot{V}=\tilde{\theta}^{T}\dot{\tilde{\theta}}=\tilde{\theta}^{T}\dot{\theta}.$
(32)
Substituting (30) into (32), we obtain
$\dot{V}=\tilde{\theta}^{T}(k\hat{h}_{\theta}+\dot{S}).$ (33)
The relationship between $\tilde{\theta}$ and $\hat{h}_{\theta}$ in Section
IV-C applies to the vector parameter case. The stability again depends on $k$,
the initial condition $\theta(0)$, the forgetting factor $\beta$, and the PE
of $\phi$, which is guaranteed by the addition of the dither signals in (30).
Hence, $P\in\mathcal{L}_{\infty}$ and $\theta(t)\to\theta^{*}$ as $t\to\infty$.
## VI Simulations
In this section, we present simulation results to show the validity of the
proposed schemes. We present two examples, for the scalar parameter and vector
parameter cases, together with comparisons against the classical ES method of
Section III.
### VI-A Scalar Parameter Simulation Example
Consider the following model
$\displaystyle y$ $\displaystyle=10m(u),$ (34) $\displaystyle m(u)$
$\displaystyle=k_{1}\left(1-e^{-k_{2}u}\right)-k_{3}u$ $\displaystyle u$
$\displaystyle=\theta,$
where $\theta^{*}=0.3$. $\theta_{0}=0.01$ is chosen as the initial value for
both schemes, and $k_{1}=1.05$, $k_{2}=23$, $k_{3}=0.52$ are given. For the
RLS estimation based ES scheme, the following parameters are used:
$k_{ls}=0.01$, $p_{0}=10^{3}$, and $\beta=0.98$. For the classical ES scheme,
the following parameters are given: $k=0.08$, $\omega_{h}=0.6$,
$\omega_{l}=0.8$, $S(t)=0.01\sin 3t$, and $M(t)=\sin 3t$. Gaussian measurement
noise ($\sigma=0.05$) is applied for both the gradient and RLS algorithms. We
apply the RLS estimation based ES scheme of Fig. 3. The results for this
example are given in Fig. 6. The proposed scheme reaches a neighborhood of the
extremum point $\theta^{*}=0.3$ at $y^{*}=8.85$ in less than 2 seconds, while
the classical ES scheme finds the extremum point much later and cannot
maintain it under measurement noise.
Figure 6: Single parameter RLS estimation based ES results.
### VI-B Vector Parameter Simulation Example
Consider the following model
$\displaystyle y$ $\displaystyle=y_{1}+y_{2},$ (35) $\displaystyle y_{1}$
$\displaystyle=am(u_{1}),\quad m(u_{1})=(2m^{*}_{1}u^{*}_{1}u_{1})/(u^{*2}_{1}+u^{2}_{1}),$
$\displaystyle y_{2}$
$\displaystyle=am(u_{2}),\quad m(u_{2})=(2m^{*}_{2}u^{*}_{2}u_{2})/(u^{*2}_{2}+u^{2}_{2}),$
$\displaystyle u$ $\displaystyle=[u_{1},\ u_{2}]=[\theta_{1},\ \theta_{2}].$
where $[\theta^{*}_{1},\ \theta^{*}_{2}]=[0.2,\ 0.3]$. For both schemes, the
initial values are given as $u_{0}=[0.1,\ 0.1]$. We aim to reach
$y^{*}_{1}(\theta^{*}_{1})=5$ and $y^{*}_{2}(\theta^{*}_{2})=9$. For the RLS
estimation based ES scheme, the following parameters are used:
$k=[0.01,\ 0.01]$, $P_{0}=10^{4}$, $\beta=0.98$, and
$S(t)=[0.01\sin 7t,\ 0.01\sin 10t]$. For the classical ES scheme, the
following parameters are given: $k=[0.02,\ 0.01]$,
$\omega_{h}=[0.6,\ 0.6]$, $\omega_{l}=[0.8,\ 0.8]$,
$S(t)=[0.01\sin t,\ 0.01\sin 2t]$, and $M(t)=[4.5\sin 5t,\ 11\sin 5t]$.
Gaussian measurement noise ($\sigma=0.05$) is applied for both the gradient
and RLS algorithms. Simulation results are given in Fig. 7 for both the RLS
estimation based and classical ES schemes. It is clear that the RLS scheme
converges to the extremum point and finds the maximized output $y^{*}$, while
the classical ES scheme has difficulty reaching the extremum point. One reason
for this difficulty is that the classical ES scheme has many tuning parameters
that must be tuned accordingly. For the vector case, we also emphasize the
need to apply perturbation terms to the scheme in order to excite the multiple
input channels separately. When no perturbation signal is applied, the inputs
cannot be distinguished and converge to an average value, which causes the
output to settle near, but not at, the maximum. Similarly to the scalar case,
the RLS estimation based ES scheme outperforms the classical ES scheme in
terms of reaching the extremum under measurement noise.
### VI-C ABS Simulation Example
Figure 7: Vector parameter RLS estimation based ES results.
In this section, we also test our ES scheme on ABS using MATLAB/Simulink, and
compare its performance with the gradient based ES scheme developed in [1].
The wheel characteristics are given by the following set of equations
$\displaystyle m\dot{\nu}$ $\displaystyle=N\mu(\lambda),$ (36) $\displaystyle
I\dot{\omega}$ $\displaystyle=-B\omega-NR\mu(\lambda)+\tau,$
where $v,\omega,m,N,R,I$ are the linear velocity, angular velocity, mass,
weight, radius, and moment of inertia of the wheel, respectively; $B\omega$ is
the bearing friction torque, $\tau$ is the braking torque, and $\mu(\lambda)$
is the friction force coefficient. $\lambda$ is the wheel slip, which is
defined as
$\lambda(v,\omega)=\frac{R\omega-\nu}{\nu}.$ (37)
The controller design procedure is identical to the design in [1]. The
parameters that are identical in both schemes are given as follows: $m=400kg$,
$R=0.3m$, $I=1.7kgm^{2}$, $B=0.01kg/s$. The perturbation signal amplitude and
frequency are selected as $a=0.01$, $\omega=3$, and the high pass, low pass
and regulation gains are selected as $\omega_{h}=0.6$, $\omega_{l}=0.8$, $k=6$
in the gradient based scheme equations (9), (10), and (11). $k=-0.01$ and
$\beta=0.95$ are used for the RLS based scheme in the ABS case. The simulation
for both the gradient and RLS schemes is performed under Gaussian noise
($\sigma=0.1$) in the longitudinal acceleration measurement, $\dot{v}$.
Initial conditions are selected identically in both schemes for a fair
comparison. We use the approximation model (38) in the simulations to see the
effect of the proposed schemes.
$\mu(\lambda)=2\mu_{max}\frac{\lambda^{*}\lambda}{{{\lambda}^{*2}}+\lambda^{2}},$
(38)
where (38) has a maximum at $\lambda=\lambda^{*}$ with
$\mu(\lambda^{*})=\mu_{max}$. For the simulation, we choose a wet road since
it is one of the safety critical conditions. Simulation results of ABS for the
gradient/RLS based scheme comparison are given in Fig. 8. The results show
that the vehicle stopping time of the RLS parameter estimation based ES in an
emergency situation is less than that of the gradient one. Slip ratio
estimation is almost 2 s quicker with RLS parameter estimation, as can be seen
in Fig. 8(a). The RLS based ES scheme gives better results under measurement
noise and can reach the maximum deceleration in less time. A minimal sketch of
one integration step of the plant model (36)-(38) is given below.
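The sketch is ours; the wheel parameters follow the text, while the normal
load $N$, $\mu_{max}$, $\lambda^{*}$ and the brake torque input are
illustrative placeholders.

```cpp
// Illustrative Euler step of the wheel dynamics (36)-(38).
#include <cmath>

struct Wheel { double v, omega; };           // linear and angular velocity

double mu(double lambda, double muMax, double lambdaStar) {  // eq. (38)
  return 2.0 * muMax * lambdaStar * lambda /
         (lambdaStar * lambdaStar + lambda * lambda);
}

void step(Wheel& s, double tau, double dt) { // tau: braking torque input
  const double m = 400.0, R = 0.3, I = 1.7, B = 0.01;  // values from the text
  const double N = m * 9.81;                 // placeholder normal load
  const double muMax = 0.4, lambdaStar = 0.25;  // placeholder road values
  if (s.v < 1e-6) return;                    // vehicle stopped
  double lambda = (R * s.omega - s.v) / s.v; // wheel slip, eq. (37)
  double f = mu(lambda, muMax, lambdaStar);
  s.v += dt * (N * f / m);                                   // eq. (36)
  s.omega += dt * ((-B * s.omega - N * R * f + tau) / I);
}
```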
(a) Friction force coefficient and estimated slip results for ABS.
(b) Braking torque, velocity and deceleration results for ABS.
Figure 8: Wet road comparison results for ABS.
## VII Conclusion
This paper focuses on designing an RLS parameter estimation based ES scheme
for scalar parameter and vector parameter static map and dynamic systems.
Their stability conditions are stated for each case. The proposed ES scheme
does not need perturbation signals for scalar parameter systems; however, it
needs perturbation signals with different frequencies for vector parameter
systems. The proposed scheme is applied to different simulation scenarios and
compared to classical gradient estimation based ES under measurement noise.
The results show the validity and effectiveness of the RLS parameter
estimation based ES scheme over the gradient one.
## References
* [1] K. B. Ariyur and M. Krstic, _Real-time Optimization by Extremum-Seeking Control_. John Wiley & Sons, 2003.
* [2] H. Wang, M. Krstic, and G. Bastin, “Optimizing bioreactors by extremum seeking,” _International Journal of Adaptive Control and Signal Processing_ , vol. 13, no. 651, p. 669, 1999.
* [3] G. Bastin, D. Nesic, Y. Tan, and I. Mareels, “On extremum seeking in bioprocesses with multivalued cost functions,” _Biotechnology Progress_ , vol. 25, no. 3, pp. 683–689, 2009.
* [4] S. Drakunov, U. Ozguner, P. Dix, and B. Ashrafi, “Abs control using optimum search via sliding modes,” _IEEE Transactions on Control Systems Technology_ , vol. 3, no. 1, pp. 79–85, 1995.
* [5] H. Yu and U. Ozguner, “Extremum-seeking control strategy for abs system with time delay,” in _Proc. IEEE American Control Conference_ , vol. 5, 2002, pp. 3753–3758.
  * [6] E. Dincmen, B. Guvenc, and T. Acarman, “Extremum-seeking control of abs braking in road vehicles with lateral force improvement,” _IEEE Transactions on Control Systems Technology_, vol. 22, no. 1, pp. 230–237, 2014.
* [7] E. Dincmen, “Adaptive extremum seeking scheme for abs control,” in _13th IEEE International Workshop on Variable Structure Systems_ , 2014, pp. 1–6.
* [8] C. Mayhew, R. Sanfelice, and A. Teel, “Robust source-seeking hybrid controllers for autonomous vehicles,” in _Proc. IEEE American Control Conference_ , 2007, pp. 1185–1190.
* [9] C. Zhang and R. Ordonez, “Robust and adaptive design of numerical optimization-based extremum seeking control,” _Automatica_ , vol. 45, no. 3, pp. 634–646, 2009.
* [10] J. Lin, S. Song, K. You, and M. Krstic, “Overshoot-free nonholonomic source seeking in 3-d,” _International Journal of Adaptive Control and Signal Processing_ , vol. 31, no. 9, pp. 1285–1295, 2017.
* [11] E. Biyik and M. Arcak, “Gradient climbing in formation via extremum seeking and passivity-based coordination rules,” _Asian Journal of Control_ , vol. 10, no. 2, pp. 201–211, 2008.
* [12] M. Stankovic and D. Stipanovic, “Stochastic extremum seeking with applications to mobile sensor networks,” in _Proc. IEEE American Control Conference_ , 2009, pp. 5622–5627.
* [13] B. Moore and C. Canudas-de Wit, “Source seeking via collaborative measurements by a circular formation of agents,” in _Proc. IEEE American Control Conference_ , 2010, pp. 6417–6422.
* [14] L. Fu and U. Ozguner, “Extremum seeking with sliding mode gradient estimation and asymptotic regulation for a class of nonlinear systems,” _Automatica_ , vol. 47, no. 12, pp. 2595–2603, 2011.
* [15] B. G. B. Hunnekens, M. A. M. Haring, N. van de Wouw, and H. Nijmeijer, “A dither-free extremum-seeking control approach using 1st-order least-squares fits for gradient estimation,” in _53rd IEEE Conference on Decision and Control_ , 2014, pp. 2679–2684.
* [16] D. Nesic, T. Nguyen, Y. Tan, and C. Manzie, “A non-gradient approach to global extremum seeking: An adaptation of the shubert algorithm,” _Automatica_ , vol. 49, no. 3, pp. 809–815, 2009.
  * [17] M. Chioua, B. Srinivasan, M. Guay, and M. Perrier, “Performance improvement of extremum seeking control using recursive least square estimation with forgetting factor,” _IFAC-PapersOnLine_, vol. 49, no. 7, pp. 424–429, 2016.
* [18] D. Nesic, A. Mohammadi, and C. Manzie, “A framework for extremum seeking control of systems with parameter uncertainties,” _IEEE Transactions on Automatic Control_ , vol. 58, no. 2, pp. 435–448, 2013.
* [19] P. Blackman, “Extremum-seeking regulators,” in _An exposition of adaptive control_. Macmillan, 1962.
  * [20] M. Krstic, “Performance improvement and limitations in extremum seeking control,” _Systems & Control Letters_, vol. 39, no. 5, pp. 313–326, 2000.
  * [21] M. Krstic and H. H. Wang, “Stability of extremum seeking feedback for general nonlinear dynamic systems,” _Automatica_, vol. 36, no. 4, pp. 595–601, 2000.
  * [22] M. Guay, “A time-varying extremum-seeking control approach for discrete-time systems,” _Journal of Process Control_, vol. 24, no. 3, pp. 98–112, 2014.
* [23] M. Guay and D. Dochain, “A time-varying extremum-seeking control approach,” _Automatica_ , vol. 51, pp. 356–363, 2015.
* [24] A. Ghaffari, M. Krstic, and D. Nesic, “Multivariable newton-based extremum seeking,” _Automatica_ , vol. 48, no. 8, pp. 1759–1767, 2012.
* [25] P. Ioannou and B. Fidan, _Adaptive Control Tutorial_. Society for Industrial and Applied Mathematics, 2006.
# HEPLike: an open source framework for experimental likelihood evaluation
Jihyun Bhom, Marcin Chrzaszcz
Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Kraków, Poland
###### Abstract
We present a computer framework to store and evaluate likelihoods coming from
High Energy Physics experiments. Due to its flexibility it can be interfaced
with existing fitting codes and allows the interpretation of the experimental
results to be made uniform among users. The code is provided with a large open
database, which contains the experimental measurements. The code is of use for
users who perform phenomenological studies, global fits or experimental
averages.
###### keywords:
experimental high energy physics , likelihoods
Journal: Computer Physics Communications
PROGRAM SUMMARY
Program Title: HEPLike
Licensing provisions: GPLv3
Programming language: C++
Supplementary material:
Journal reference of previous version: FIRST VERSION OF PROGRAM
Nature of problem: Provide a uniform way to store, share and evaluate
experimental likelihoods in a proper statistical manner. The code can be
easily interfaced with existing global fitting codes. In addition, a large
database with the measurements is published. The program targets users who
perform, in their scientific work, phenomenological studies, global fits or
measurement averages. HEPLike was created for the FlavBit project [1], where
it was used to perform several analyses [2, 3]; here we present an updated
version, which can be used in standalone mode.
Solution method: C++ code that evaluates the statistical properties of the
measurements without user intervention. The large open database is provided
as well. The measurements are stored in YAML files, allowing for easy
readability and extension.
## References
* [1] arXiv: 1705.07933
* [2] arXiv: 1705.07935
* [3] arXiv: 1705.07917
## 1 Introduction
In High Energy Physics (HEP) the experimental measurements are performed by
several collaborations, which measure various different observables. The
experimental results are presented in various ways: some are as simple as a
measurement with a Gaussian error, some are more complicated, such as multiple
correlated measurements with asymmetric errors, and in some places even a full
likelihood function is published. To make things more complicated, in some
cases multiple representations of the same measurement are published. All of
this makes it hard to directly use and compare various different results. It
also leaves room for misinterpretation of the results by theorists, who use
these inputs in their studies. It happens that asymmetric errors are
symmetrized, or that instead of using the full likelihood only the central
value with an approximated asymmetric error is used.
The High Energy Physics Likelihoods (HEPLike) package is a computer program
that allows one to store and share the likelihoods of various measured
quantities. The published code can be useful for users performing
phenomenological studies using experimental results, for global fitting
collaborations, and for experimental averages. Thanks to its structure it is
easy to interface with existing codes. It simplifies the work of users:
instead of looking up the appropriate measurement and coding up their own
likelihood, they can download the database of measurements and choose the one
they need. Furthermore, it shifts the burden of constructing the proper
likelihood functions back to the experimentalists, who performed the
measurement in the first place and are clearly the most appropriate people to
handle this task.
The computer code described in this paper is written in C++, making it useful
for the majority of fitting programs available on the market [1, 2, 3, 4, 5, 6].
The library can be used in both $\chi^{2}$ and likelihood fits. Moreover,
it contains a statistical module with useful functions that can be used in the
statistical analysis. Besides the computer code, a database with the
likelihoods is published. The measurements are stored in YAML files, making
them easy to read by both machines and humans. This database can be easily
extended by adding new YAML files as new measurements become available. With
the software we provide useful utilities that allow the user to perform
searches inside the database, create a BibTeX file containing the publications
that entered the fit, etc.
The paper is organized as follows: in Sec. 2 construction of the likelihood
functions is presented. Sec. 3 explains the detailed code implementations and
data storage, while Sec. 4 describes how to install and use the HEPLike software.
## 2 Likelihood constructions
In this section we present how likelihoods in HEPLike are stored and
constructed. Each measurement is stored in a separate YAML file. Collaborations
publish their results in several ways, depending on the measurement itself:
* 1.
Upper limits,
* 2.
Single measurement with symmetric uncertainty,
* 3.
Single measurement with asymmetric uncertainty,
* 4.
Multiple measurements with symmetric uncertainty,
* 5.
Multiple measurements with asymmetric uncertainty,
* 6.
One dimensional likelihood function,
* 7.
n-dimensional likelihood function.
In addition, there is growing interest from the community for the experimental
collaborations to publish, besides the results of the analysis, also the
dataset that has been used to obtain them. For this future occasion we have
also implemented a way for such data to be used directly in the fits.
Each of these cases has a different module of HEPLike that is designed to
evaluate the likelihood functions. In this section we will present the
statistical treatment of the above cases and the modules that are responsible
for their evaluation. Each of the YAML files is required to have the following
information (here for example we use the $R_{K^{\ast}}$ measurement [7]):
```yaml
BibCite: Aaij:2017vbb
BibEntry: '@article{Aaij:2017vbb,
  author = "Aaij, R. and others",
  title = "{Test of lepton universality
  with $B^{0} \rightarrow
  K^{*0}\ell^{+}\ell^{-}$ decays}",
  collaboration = "LHCb",
  journal = "JHEP",
  volume = "08",
  year = "2017",
  pages = "055",
  doi = "10.1007/JHEP08(2017)055",
  eprint = "1705.05802",
  archivePrefix = "arXiv",
  primaryClass = "hep-ex",
  reportNumber = "LHCB-PAPER-2017-013,
  CERN-EP-2017-100",
  SLACcitation = "%%CITATION = ARXIV:1705.05802;%%"
  }
  '
DOI: 10.1007/JHEP08(2017)055
Process: R_{Kstar^{*}}
FileName: RKstar.yaml
Name: RKstar
Source: HEPDATA
SubmissionYear: 2017
PublicationYear: 2018
Arxiv: 1705.05802
Collaborations: LHCb
Kinematics: q2>1.1 && q2<6.
HLAuthor: Gal Anonim
HLEmail: <EMAIL_ADDRESS>
HLType: HL_ProfLikelihood
```
The above entries store the information relevant for bookkeeping. For
instance, the entries BibCite and BibEntry correspond to the information that
is used to generate a BibTeX citation file with the measurements that have
been used in the studies. The DOI corresponds to the digital object identifier
of the publication. The Decay entry defines the process that has been studied;
it can also be replaced by the Process entry. The Name is a unique name of
this measurement type. If the measurement gets updated with more data or by
another collaboration, the Name entry in the new YAML file should be the same
as in the old one. The Source entry corresponds to the source of the
measurement: this can be either HEPData or the collaboration itself. The
SubmissionYear (PublicationYear) refers to the year of appearance
(publication) of the result. The Arxiv entry encodes the arXiv number, while
Collaborations stores the information about which experimental collaboration
has performed the measurement. The Kinematics entry stores additional
information about the kinematic region that has been measured. The HLAuthor
and HLEmail entries encode the author of the YAML file and his or her email,
in case the user needs further information about the encoded measurement. Last
but not least, the entry HLType contains the information about which HEPLike
object should be used to read the file.
Reading of this content from the YAML file is implemented in the HL_Data
class. All other classes that construct the likelihood functions inherit its
capabilities from this class. Please note that if some information is missing
in the YAML file, the program will omit reading this entry. The only exception
is the FileName, which is mandatory. If a user wants to be notified by the
program that some information is missing, the HL_debug_yaml variable has to be
set to true (the default value is false).
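HEPLike's actual reader is the HL_Data class described above; purely as an
illustration of how such bookkeeping entries can be accessed, the sketch below
parses a measurement file with the yaml-cpp library (it is not HEPLike code,
and the file name is the example above).

```cpp
// Illustration only (not the HL_Data implementation): reading bookkeeping
// entries of a measurement file with yaml-cpp.
#include <iostream>
#include <string>
#include "yaml-cpp/yaml.h"

int main() {
  YAML::Node f = YAML::LoadFile("RKstar.yaml");
  std::string name = f["Name"].as<std::string>();
  std::string type = f["HLType"].as<std::string>();  // e.g. HL_ProfLikelihood
  // Optional entries may be absent; mirror the permissive reading policy.
  std::string arxiv = f["Arxiv"] ? f["Arxiv"].as<std::string>() : "";
  std::cout << name << " (" << type << ", arXiv:" << arxiv << ")\n";
  return 0;
}
```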
### 2.1 Upper limits
In cases where the measurement did not observe a significant excess of signal
candidates, the collaborations usually report an upper limit on the measured
quantity. Commonly $90\%$ or $95\%$ upper limits are quoted. Experiments use
various statistical approaches to compute these limits: the $\rm CL_{s}$
method [8], Feldman–Cousins [9] or some variation of Bayesian methods [10].
Publication of only an upper limit does not provide enough information to use
the result in global fits. However, nowadays experiments publish, besides the
aforementioned upper limits, full p-value scans. Examples of such scans are
shown in Fig. 1. The plots are usually available in digital format, which
allows the information to be extracted and used in a computer program.
Figure 1: Examples of p-value scans for
$\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mathup{{{\tau}}^{\scriptstyle{-}}}\mathup{{{\tau}}^{\scriptstyle{+}}}$
[11] (left) and $\mathup{{{D}}}\to\mathup{{{e}}}\mathup{{{\mu}}}$ [8] (right).
Please note that the $\rm CL_{s}$ value can be interpreted as a p-value, as
explained in [12]. The black line corresponds to the observed $\rm
CL_{s}$/p-value.
In HEPLike, the class HL_Limit is responsible for handling this type of
measurement. It reads the YAML file that contains the standard information
about the measurement (see Sec. 2 for details). The additional information of
the observed $\rm CL_{s}$/p-value is stored in the YAML file in the following
way (note that besides this information, the general information from Sec. 2
should also be included):
⬇
Cls:
- [0.0, 1.0]
- [1.0e-10, 0.977091694706]
- [2.0e-10, 0.954375824297]
- [3.0e-10, 0.93200355343]
- [4.0e-10, 0.910630700546]
- [5.0e-10, 0.889382721809]
The Cls entry can be replaced in the YAML file by a p-value, as they
correspond to the same information. The first number in each array is the
value of the tested hypothesis (for example, a branching fraction), while the
second is the corresponding $\rm CL_{s}$/p-value. These values are then
interpreted using a $\chi^{2}$ distribution with one degree of freedom:
$pdf(x)=\frac{1}{2^{1/2}\Gamma(1/2)}x^{1/2-1}e^{-x/2},$ (1)
which has the cumulative distribution function:
$cdf(x)=\frac{1}{\Gamma(1/2)}\gamma(1/2,x/2).$ (2)
In the above equations, $\Gamma(x)$ and $\gamma(k,x)$ correspond to the gamma
and incomplete gamma functions. By inverting the $cdf(x)$ one can obtain the
$\chi^{2}$ value:
$\chi^{2}=cdf^{-1}(1-p),$ (3)
where p corresponds to the p-value of a given hypothesis x. This $\chi^{2}$
can then be translated to the log-likelihood via Wilks' theorem [13]:
$-\log(\mathcal{L})=\frac{1}{2}\chi^{2},$ (4)
where $\mathcal{L}$ is the likelihood. The user can choose whether to obtain
the $\chi^{2}$, the likelihood or the log-likelihood value of a given
hypothesis.
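As an illustration, the following hedged C++ sketch shows how an upper-limit measurement could be evaluated with HL_Limit, using the GetCLs, GetChi2 and GetLogLikelihood functions from Tab. 1; the header name and YAML file path are assumptions:
⬇
#include "HL_Limit.h"  // assumed header name

int main() {
  HL_Limit limit("data/examples/D2emu.yaml");  // hypothetical YAML file
  limit.Read();
  double br = 2.0e-10;  // tested branching-fraction hypothesis
  double cls  = limit.GetCLs(br);            // observed CLs/p-value
  double chi2 = limit.GetChi2(br);           // via Eqs. (2)-(3)
  double logl = limit.GetLogLikelihood(br);  // via Eq. (4)
  (void)cls; (void)chi2; (void)logl;
  return 0;
}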
### 2.2 Single measurement with symmetric uncertainties
The simplest case of a published experimental result is a single value with a
symmetric uncertainty. This is, for example, a typical result of a PDG or
HFLAV average [14, 15]. The measurement is coded in the YAML file as:
⬇
Observables:
- [ "Br_A2BCZ", 0.1, 0.05, 0.01 ]
The first argument in the array, "Br_A2BCZ", corresponds to the observable
name. The first number corresponds to the measured central value. The second
and third numbers are the statistical and systematic uncertainties. In case
only one uncertainty is available, the third number should be omitted and it
will be automatically set to 0 in the software. We have decided to keep the
plural Observables to be more uniform with cases where more observables are
measured.
The module responsible for reading this YAML file is called HL_Gaussian. It
calculates the $\chi^{2}$ for an $x$ hypothesis:
$\chi^{2}=\frac{(x_{obs}-x)^{2}}{\sigma_{stat}^{2}+\sigma_{syst}^{2}},$ (5)
where $x_{obs}$ corresponds to the measured central value in the YAML file,
and $\sigma_{stat}$ and $\sigma_{syst}$ are the statistical and systematic
uncertainties, respectively. This can again be translated to the likelihood
and log-likelihood value using Eq. 4.
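For illustration, Eq. 5 together with Eq. 4 amounts to the following few lines of C++; this is a standalone sketch of the arithmetic, not the HEPLike implementation itself:
⬇
#include <cmath>

// Sketch of Eq. (5): chi2 of hypothesis x for a symmetric measurement.
double gaussian_chi2(double x, double x_obs,
                     double sigma_stat, double sigma_syst) {
  const double var = sigma_stat * sigma_stat + sigma_syst * sigma_syst;
  return (x_obs - x) * (x_obs - x) / var;
}

// Eq. (4): -log L = chi2 / 2.
double neg_log_likelihood(double chi2) { return 0.5 * chi2; }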
### 2.3 Single measurement with asymmetric uncertainties
A simple extension of the Gaussian uncertainty is the case where an asymmetric
uncertainty is reported. This type of measurement, although less frequent,
appears in the literature. In this case the publication reports the central
value and two uncertainties: $\sigma_{+}$ and $\sigma_{-}$, which correspond
to the right-side (for values larger than the measured central value) and
left-side (for values smaller than the measured central value) uncertainties.
In HEPLike we have created the HL_BifurGaussian class, which reads the
following entry in the YAML file:
⬇
Observables:
- [ "Br_A2BCZ", 0.1, 0.05, -0.06, 0.01, -0.02 ]
The first argument is again the name of the observable and the second one is
its central value. The third and fourth arguments correspond to the
statistical $\sigma_{+}$ and $\sigma_{-}$ uncertainties, while the fifth and
sixth correspond to the systematic $\sigma_{+}$ and $\sigma_{-}$
uncertainties. It is important to keep the minus sign before the left-side
uncertainties; the code will report an error in case of a missing sign. In
some cases the systematic uncertainty is reported to be symmetric, in which
case the last number can be omitted in the YAML entry.
In the literature there exist a number of ways to interpret asymmetric
uncertainties [16]. We have chosen the most commonly used one, the so-called
bifurcated Gaussian:
$\displaystyle\chi^{2}=\begin{cases}\frac{(x_{obs}-x)^{2}}{\sigma_{+}^{2}},&\text{if }x\geq x_{obs}\\ \frac{(x_{obs}-x)^{2}}{\sigma_{-}^{2}},&\text{if }x<x_{obs},\end{cases}$ (6)
where $\sigma_{\pm}^{2}$ is the sum of the squared statistical and systematic
uncertainties for the right/left case. Once the $\chi^{2}$ is calculated, it
can be translated to the log-likelihood using Eq. 4.
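The case distinction of Eq. 6 can be sketched in C++ as follows; sigma_plus and sigma_minus are assumed to already combine the statistical and systematic parts in quadrature, as described above:
⬇
// Sketch of Eq. (6): bifurcated-Gaussian chi2.
double bifur_gaussian_chi2(double x, double x_obs,
                           double sigma_plus, double sigma_minus) {
  const double d = x_obs - x;
  const double sigma = (x >= x_obs) ? sigma_plus : sigma_minus;
  return d * d / (sigma * sigma);
}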
### 2.4 Multiple measurements with symmetric uncertainties
Nowadays, simultaneous measurements of several quantities that are correlated
with each other are the most common: for instance, cross section measurements
in different kinematic bins, or measurements of angular coefficients in heavy
meson decays. In HEPLike, the class responsible for handling these cases is
called HL_nDimGaussian. It reads the following information from the YAML
file:
⬇
Observables:
- [ "BR1", 0.1, 0.02]
- [ "BR2", 0.2, 0.01, 0.01]
- [ "BR3", 0.4, 0.04]
Correlation:
- [ "BR1", "BR2", "BR3"]
- [ 1. , 0.2 , 0 ]
- [ 0.2, 1., 0. ]
- [ 0 , 0., 1. ]
The information in the "Observables" entry is exactly the same as in the
HL_Gaussian class. Please note that, similarly to the previous class, the
systematic uncertainty is not mandatory; if it is not provided, the code will
treat it as 0. The next entry in the YAML file is "Correlation", which encodes
the correlation matrix. The first "row" contains the names of the variables;
it is important to keep the same order of variables as in the "Observables"
entry. The HL_nDimGaussian class evaluates the $\chi^{2}$ in the following
way:
$\displaystyle\chi^{2}=V^{T}{\rm Cov}^{-1}V,$ (7)
where $V$ is a column vector whose i-th entry is the difference between the
measured and the tested value of the i-th observable. The $\rm Cov$ is a
square matrix constructed from the correlation matrix (${\rm Corr}$): ${\rm
Cov}_{i,j}={\rm Corr}_{i,j}\sigma_{i}\sigma_{j}$.
Often a user does not want to use the full set of measured quantities but just
a subset of them. In this case the function Restrict(vector<string>) can be
used. By passing the list of observables to be used in the form of a vector,
the program will create a new, smaller covariance matrix, which will be used
to evaluate the $\chi^{2}$. In a similar manner, the $\chi^{2}$ can be
translated to the likelihood and log-likelihood value via Eq. 4.
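A hedged usage sketch of the Restrict function (listed in Tab. 1) is shown below; the header name and YAML file path are assumptions for illustration:
⬇
#include <string>
#include <vector>
#include "HL_nDimGaussian.h"  // assumed header name

int main() {
  HL_nDimGaussian ndim("data/examples/BR123.yaml");  // hypothetical file
  ndim.Read();
  // Use only a subset of the measured observables; a smaller
  // covariance matrix is built internally for the chi2 of Eq. (7).
  ndim.Restrict(std::vector<std::string>{"BR1", "BR3"});
  return 0;
}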
### 2.5 Multiple measurements with asymmetric uncertainties
A more complicated case arises when multiple correlated measurements are
reported with asymmetric uncertainties. The case is similar to the one
discussed in Sec. 2.3, and the same statistical comments apply here. The YAML
file encoding such a measurement contains the following entries:
⬇
Observables:
- [ "BR1", 0.1, +0.02, -0.01, 0.02]
- [ "BR2", 0.2, +0.01, -0.05, +0.03, -0.02]
- [ "BR3", 0.3, +0.04, -0.03, 0.05]
Correlation:
- [ "BR1", "BR2", "BR3"]
- [ 1. , 0.1 , 0.2 ]
- [ 0.1, 1., 0.1 ]
- [ 0.2 , 0.1, 1. ]
The meaning of the "Observables" entry is the same as in the previous class
(cf. Sec. 2.3), and "Correlation" encodes the same information as in the
HL_nDimGaussian class (cf. Sec. 2.4). The rules about the minus sign and
symmetric systematic uncertainties are the same as for HL_BifurGaussian (cf.
Sec. 2.3). The difference arises when one evaluates the $\chi^{2}$: the $\rm
Cov$ matrix is constructed depending on whether the $\sigma_{+}$ or
$\sigma_{-}$ uncertainty is relevant:
$\displaystyle{\rm Cov}_{i,j}=\begin{cases}{\rm Corr}_{i,j}~{}\sigma^{i}_{+}\sigma^{j}_{+},&\text{if }x^{i}\geq x^{i}_{obs}\text{ and }x^{j}\geq x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{+}\sigma^{j}_{-},&\text{if }x^{i}\geq x^{i}_{obs}\text{ and }x^{j}<x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{-}\sigma^{j}_{+},&\text{if }x^{i}<x^{i}_{obs}\text{ and }x^{j}\geq x^{j}_{obs}\\ {\rm Corr}_{i,j}~{}\sigma^{i}_{-}\sigma^{j}_{-},&\text{if }x^{i}<x^{i}_{obs}\text{ and }x^{j}<x^{j}_{obs}\end{cases}$ (8)
The obtained $\rm Cov$ matrix is then used to calculate the $\chi^{2}$ using
Eq. 7. The rest follows the same procedure as described in Sec. 2.4.
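The sign-dependent construction of Eq. 8 can be sketched as follows; this is a standalone illustration, not the internal HEPLike code:
⬇
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Sketch of Eq. (8): build Cov for the tested point x, choosing
// sigma_plus or sigma_minus per observable depending on which side
// of the measured central value x_obs it lies.
Matrix build_cov(const Matrix& corr,
                 const std::vector<double>& x,
                 const std::vector<double>& x_obs,
                 const std::vector<double>& sig_plus,
                 const std::vector<double>& sig_minus) {
  const std::size_t n = x.size();
  Matrix cov(n, std::vector<double>(n, 0.0));
  for (std::size_t i = 0; i < n; ++i) {
    const double si = (x[i] >= x_obs[i]) ? sig_plus[i] : sig_minus[i];
    for (std::size_t j = 0; j < n; ++j) {
      const double sj = (x[j] >= x_obs[j]) ? sig_plus[j] : sig_minus[j];
      cov[i][j] = corr[i][j] * si * sj;
    }
  }
  return cov;
}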
### 2.6 One dimensional likelihood function
The best way a result can be published is by providing the (log-)likelihood
function. Results of this type are more and more common in the literature. The
simplest case is a one-dimensional likelihood scan, which can be presented in
the form of a figure; examples are shown in Fig. 2.
Figure 2: Examples of published one-dimensional likelihoods from the lepton
universality tests in
$\mathup{{{B}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\ell\ell$ [7] (left) and
$\mathup{{{B}}}\to\mathup{{{K}}}\ell\ell$ [17] (right) decays.
The biggest advantage of publishing the results in this form is completeness:
the (log-)likelihood curve contains all the information about the non-Gaussian
effects and incorporates the systematic uncertainties. The technical problem
is how to publish such information. Usually plots are published in the pdf or
png formats, which makes them hard to use. Since experiments mostly use the
ROOT [18] framework, the plots are also saved in the C format, which contains
the points in the form of arrays. This makes the points accessible, but it is
not easy to automate retrieving the data from the C file. The best solution is
provided by the HEPData portal [19], which allows the data to be downloaded in
a user-preferred format. In HEPLike we have chosen to use the ROOT format by
default, in which the data points are saved in the form of a TGraph object;
this is also the way experimentalists like to store this information. In the
YAML file we specify the path of the ROOT file in the following way:
⬇
ROOTData: data/HEPData-ins1599846-v1-Table_1.root
TGraphPath: "Table 1/Graph1D_y1"
The ROOTData entry encodes the location of the ROOT file, while TGraphPath
encodes the location of the TGraph object in that ROOT file. In HEPLike, the
class HL_ProfLikelihood is responsible for reading and encoding this
likelihood. The value of the log-likelihood can then be translated again into
the $\chi^{2}$ with Eq. 4.
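A minimal, hedged usage sketch follows; the header name and tested value are assumptions, while the functions used are listed in Tab. 1:
⬇
#include "HL_ProfLikelihood.h"  // assumed header name

int main() {
  HL_ProfLikelihood prof("data/examples/RKstar_lowq2.yaml");
  prof.Read();  // loads the TGraph given by ROOTData/TGraphPath
  double x = 0.7;  // tested value of the observable
  double logl = prof.GetLogLikelihood(x);
  double chi2 = prof.GetChi2(x);  // via Eq. (4)
  (void)logl; (void)chi2;
  return 0;
}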
### 2.7 n-dimensional likelihood function
The natural extension of the one-dimensional likelihood is an n-dimensional
likelihood, where $n\geq 2$. Currently, experimental collaborations publish
only two-dimensional likelihood functions (cf. Fig. 3).
Figure 3: Examples of published two-dimensional likelihoods. The
$\mathcal{B}(\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mu\mu)$
vs
$\mathcal{B}(\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mu\mu)$
likelihood [20] (left) and
$\sigma(\mathup{{{t}}}\mathup{{\overline{{t}}}}\mathup{{{Z}}})$ vs
$\sigma(\mathup{{{t}}}\mathup{{\overline{{t}}}}\mathup{{{W}}})$ likelihood
[21] (right).
The natural way of encoding such information is a histogram (TH2D or TH3D),
and we have chosen this way to store the information. The corresponding entry
in the YAML file looks as follows:
⬇
ROOTData: data/LHCb/RD/Bs2mumu_5fb/histB2mumu.root
TH2Path: "h_2DScan"
Similarly to the one-dimensional likelihood (Sec. 2.6), the ROOTData entry
encodes the location of the ROOT file, while TH2Path (TH3Path) encodes the
location of the TH2D (TH3D) object. In the long run, the community will have
to address the question of how to publish higher-dimensional likelihoods, and
this module (HL_nDimLikelihood) will have to be extended for such use cases.
### 2.8 Fits to experimental data
It is possible that in the future experimental collaborations will, besides
the results, also make their datasets public. The procedure and the form in
which the data should be published are not decided, and there is an ongoing
debate whether the published data should correspond to the raw detector data,
the final selected points used in the analysis, or something in between.
Clearly, publishing raw data is problematic, as people outside the
collaboration do not have the necessary knowledge about the calibration and
efficiency correction procedures or the data taking conditions. The most
useful approach is for the experimentalists to perform all the selection and
the necessary efficiency corrections, and to publish the final dataset that
has been used in the analysis. This would allow the theory community to use
the dataset directly in their fits without knowing the technicalities of the
experimental data analysis. For this case, we have implemented in HEPLike the
HL_ExpPoints class.
The data are stored in a TTree structure located in a ROOT file. The YAML
file encodes this information in the form:
⬇
ROOTData: data/toy/data.root
TTreePath: t
Observables:
- [ x ]
- [ y ]
- [ z ]
Weight: w
where ROOTData points to the ROOT file and TTreePath stores the location of
the TTree inside the ROOT file. It is assumed that the experiments will
provide all the corrections in the form of event-by-event weights. The name of
the weight inside the TTree is encoded in the Weight entry. In general, the
data points are elements of an $\mathcal{R}^{n}$ vector space, whose
coordinates are stored in the Observables entry.
The only thing the user needs to provide to the HL_ExpPoints object is a
pointer to the function to be fitted. The function should have the form double
(*fun)(vector<double> par, vector<double> point), where the par vector encodes
the parameters to be fitted and point corresponds to a data point.
HL_ExpPoints will then evaluate the likelihood:
$\mathcal{L}(\omega)=f(\textbf{x}|\omega)^{w(\textbf{x})}$ (9)
for the whole dataset. In the above, x corresponds to the n-dimensional point,
$\omega$ denotes the parameters to be fitted (par), and $f$ denotes the
fitting function (fun). HEPLike does not provide a minimizer or a scanner
tool, as that is not the purpose of this type of software; it has to be
interfaced with a proper scanner tool, for example [1]. Again, the user can
decide whether to perform a $\chi^{2}$ or log-likelihood fit.
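A hedged sketch of this workflow is shown below; the header name, YAML file path and toy fitting function are assumptions, while SetFun and InitData are listed in Tab. 1:
⬇
#include <cmath>
#include <vector>
#include "HL_ExpPoints.h"  // assumed header name

// Fitting function with the required signature: here a toy
// one-dimensional Gaussian density with parameters (mean, sigma).
double toy_pdf(std::vector<double> par, std::vector<double> point) {
  const double kPi = 3.14159265358979323846;
  const double z = (point[0] - par[0]) / par[1];
  return std::exp(-0.5 * z * z) / (std::sqrt(2.0 * kPi) * par[1]);
}

int main() {
  HL_ExpPoints exp_points("data/toy/toy.yaml");  // hypothetical file
  exp_points.Read();
  exp_points.InitData();       // load the TTree into memory
  exp_points.SetFun(toy_pdf);  // register the function to be fitted
  // The likelihood of Eq. (9) is then evaluated for given parameter
  // values by an external minimizer or scanner tool, e.g. [1].
  return 0;
}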
The biggest advantage of such a format is the compatibility with the
experimental analysis. Experimentalists can in principle also publish the
function that they have used to fit the data, and therefore theorists can
reproduce the experimental result and start where the experimentalists
finished.
## 3 Code implementation
In this section we discuss the implementation of the code used to create the
likelihoods discussed in Sec. 2. The code is built from several classes:
* 1.
HL_Data: base class from which other classes inherit their base functionality.
* 2.
HL_Limit: class that handles the upper limit measurements.
* 3.
HL_Gaussian: class that handles measurements with Gaussian uncertainty.
* 4.
HL_BifurGaussian: class that handles measurements with asymmetric uncertainty.
* 5.
HL_nDimGaussian: class that handles measurements with n-dimensional Gaussian
uncertainties.
* 6.
HL_nDimBifurGaussian: class that handles measurements with n-dimensional
asymmetric uncertainties.
* 7.
HL_ProfLikelihood: class that handles measurements with one-dimensional
likelihood function.
* 8.
HL_nDimLikelihood: class that handles measurements with 2(3)-dimensional
likelihood function.
* 9.
HL_ExpPoints: class that allows to perform the fits to experimental datasets.
In Tab. 1 we present the functionality of these classes. In addition, we
present the class inheritance hierarchy in Fig. 4.
Table 1: Functions available in the HEPLike software.

Function | Description
---|---
HL_Data() | Constructor of the HL_Data class.
HL_Data(string) | Constructor of the HL_Data class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_Limit() | Constructor of the HL_Limit class.
HL_Limit(string) | Constructor of the HL_Limit class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_Gaussian() | Constructor of the HL_Gaussian class.
HL_Gaussian(string) | Constructor of the HL_Gaussian class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_BifurGaussian() | Constructor of the HL_BifurGaussian class.
HL_BifurGaussian(string) | Constructor of the HL_BifurGaussian class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_nDimGaussian() | Constructor of the HL_nDimGaussian class.
HL_nDimGaussian(string ) | Constructor of the HL_nDimGaussian class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_nDimBifurGaussian() | Constructor of the HL_nDimBifurGaussian class.
HL_nDimBifurGaussian(string) | Constructor of the HL_nDimBifurGaussian class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_ProfLikelihood() | Constructor of the HL_ProfLikelihood class.
HL_ProfLikelihood(string) | Constructor of the HL_ProfLikelihood class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_nDimLikelihood() | Constructor of the HL_nDimLikelihood class.
HL_nDimLikelihood(string) | Constructor of the HL_nDimLikelihood class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
HL_ExpPoints() | Constructor of the HL_ExpPoints class.
HL_ExpPoints(string) | Constructor of the HL_ExpPoints class. The argument that is taken by constructor is the path for the YAML file encoding the measurement.
read_standard() | Function that reads the general information about the measurement from the YAML file.
set_debug_yaml(bool) | Function that enables debugging the YAML file. By default the debugging is switched off and can be switched on by passing a true bool argument to this function. Debugging will print a message that for a given information in the YAML file is missing.
Read() | Function reading the YAML file.
GetChi2(double) | Function that returns the $\chi^{2}$ value for a given point (passed to the function as double). Function is available for all classes besides HL_Data.
GetLogLikelihood(double) | Function that returns the log-likelihood value for a given point (passed to the function as double). Function is available for all classes besides HL_Data.
GetLikelihood(double) | Function that returns the likelihood value for a given point (passed to the function as double). Function is available for all classes besides HL_Data.
GetCLs(double) | Function that returns $\rm CL_{s}$ or p-value for a given point (passed to the function as double). The function is a member of the HL_Limit class.
Restrict(vector<string>) | Function that restricts number of observables from the YAML file. Function is a member of the HL_nDimGaussian, HL_nDimBifurGaussian and HL_nDimLikelihood classes.
InitData() | Function of HL_ExpPoints class that reads to the memory the data from the TTree object.
Profile() | Function of HL_nDimLikelihood class that creates the profile log-likelihood projections.
SetFun() | Function of HL_ExpPoints class, that sets the pointer to the function to be fitted.
Figure 4: Diagram of class inheritance of the HEPLike package.
## 4 Installation and usage
In this section we present the requirements and the installation procedure for
the HEPLike package. The software is distributed via the GitHub site:
https://github.com/mchrzasz/HEPLike.
In order to compile HEPLike, the following packages (with minimal versions)
need to be installed:
* 1.
git
* 2.
cmake, 2.8
* 3.
yaml-cpp, 1.58.0
* 4.
gsl, 2.1
* 5.
Boost, 1.58.0
* 6.
ROOT, 6.08
The compilation is done in the following way:
⬇
cd <installation dir>
git clone https://github.com/mchrzasz/HEPLike.git
cd HEPLike
mkdir build
cd build
cmake ..
make
In the above, make can be replaced with make -jN, where N is the number of
threads the user wants to use for compilation. Please note that in case of a
non-standard installation of some packages, one might have to provide cmake
with the proper path to the library. After successful compilation, the
libHEPLike.a and libHEPLike.so libraries will be created in the build
directory.
HEPLike is provided with eight examples:
* 1.
Br_example.cc: example program showing the usage of the HL_Gaussian class.
* 2.
BrBifurGaussian_example.cc: example program showing the usage of the
HL_BifurGaussian class.
* 3.
Data_Fit_example.cc: example program showing the usage of the HL_ExpPoints
class.
* 4.
Limit_example.cc: example program showing the usage of the HL_Limit class.
* 5.
Ndim_BifurGaussian_example.cc: example program showing the usage of the
HL_nDimBifurGaussian class.
* 6.
Ndim_Gaussian.cc: example program showing the usage of the HL_nDimGaussian
class.
* 7.
Ndim_Likelihood_example.cc: example program showing the usage of the
HL_nDimLikelihood class.
* 8.
ProfLikelihood_example.cc: example program showing the usage of the
HL_ProfLikelihood class.
To compile them, a proper variable has to be set during the cmake stage:
⬇
cd build
cmake -DEXECUTABLE=TRUE ..
make
After the compilation, the build directory will contain the executables for
these examples. The HEPLike package also comes with test procedures for each
of the classes. To run the tests, the user has to execute the command:
⬇
ctest
or an equivalent:
⬇
make test
If HEPLike was successfully installed, the output will look as follows:
⬇
Test project /storage/github/HEPLike/build
Start 1: HL_Test_YAML
1/7 Test #1: HL_Test_YAML ………………… Passed 0.01 sec
Start 2: HL_Limit
2/7 Test #2: HL_Limit ……………………. Passed 0.27 sec
Start 3: HL_Br_example
3/7 Test #3: HL_Br_example ……………….. Passed 0.02 sec
Start 4: HL_BrBifurGaussian_example
4/7 Test #4: HL_BrBifurGaussian_example ……. Passed 0.01 sec
Start 5: HL_Ndim_Gaussian
5/7 Test #5: HL_Ndim_Gaussian …………….. Passed 0.01 sec
Start 6: HL_ProfLikelihood_example
6/7 Test #6: HL_ProfLikelihood_example …….. Passed 0.25 sec
Start 7: HL_Ndim_BifurGaussian_example
7/7 Test #7: HL_Ndim_BifurGaussian_example …. Passed 0.01 sec
100% tests passed, 0 tests failed out of 7
Total Test time (real) = 0.57 sec
### 4.1 Available measurements
The YAML files that contain the stored measurements are located in a second,
independent repository. The reason for this separation is that the YAML files
are expected to be updated more frequently than the code itself. It is
expected that users and experiments will contribute to this repository. This
model ensures that the repository will contain the most up-to-date
measurements.
The repository can be found at: https://github.com/mchrzasz/HEPLikeData. The
repository should be downloaded or cloned:
⬇
cd <some new dir>
git clone https://github.com/mchrzasz/HEPLikeData.git
Since the repository contains only YAML files, there is no need for any
compilation. The repository contains a directory data, where all the YAML
files are kept; it should be linked by a symbolic link into the HEPLike
package. Inside data, the measurements are grouped by experiment (e.g. LHCb,
ATLAS, CMS, etc.). Inside each experiment directory, the measurements are
grouped according to the type of measurement in the collaboration, for
example: RD, Semileptonic, Charmless, Exotica, etc. The YAML files should be
named according to the publication report number, for example:
CERN-EP-2018-331.yaml. If a single publication produced several independent
measurements, the user may encode them in independent files and append further
information at the end of the file name, for example:
CERN-PH-EP-2015-314_q2_0.1_0.98.yaml.
Currently we are publishing the measurements that have been used by us in
other projects [22, 23, 24]. The list of YAML files with their content is
presented in Tab. 2.
Table 2: YAML files with the encoded measurements available in the HEPLikeData repository.

File | Description
---|---
CERN-EP-2017-100.yaml | YAML file encoding the measurement of branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mu\mu$ and $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mu\mu$ decays [20].
PH-EP-2015-314_q2_0.1_0.98.yaml PH-EP-2015-314_q2_11.0_12.5.yaml PH-EP-2015-314_q2_1.1_2.5.yaml PH-EP-2015-314_q2_15.0_19.yaml PH-EP-2015-314_q2_2.5_4.0.yaml PH-EP-2015-314_q2_4.0_6.0.yaml PH-EP-2015-314_q2_6.0_8.0.yaml | YAML files encoding the measurements of the angular coefficients of $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\mu\mu$ decay in different $q^{2}$ regions [25].
CERN-EP-2016-141_q2_0.1_0.98.yaml CERN-EP-2016-141_q2_11.0_12.5.yaml CERN-EP-2016-141_q2_1.1_2.5.yaml CERN-EP-2016-141_q2_15.0_19.yaml CERN-EP-2016-141_q2_2.5_4.0.yaml CERN-EP-2016-141_q2_4.0_6.0.yaml CERN-EP-2016-141_q2_6.0_8.0.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}^{\scriptstyle{\ast}}}\mu\mu$ decay in different $q^{2}$ regions [26].
CERN-EP-2016-215_q2_0.1_0.98.yaml CERN-EP-2016-215_q2_1.1_2.5.yaml CERN-EP-2016-215_q2_2.5_4.yaml CERN-EP-2016-215_q2_4_6.yaml CERN-EP-2016-215_q2_6_8.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{d}}}}}^{\scriptstyle{0}}}\to\mathup{{{K}}}\mathup{{{\pi}}}\mu\mu$ decay in different $q^{2}$ regions [27].
CERN-PH-EP-2015-145_0.1_2.yaml CERN-PH-EP-2015-145_11_12.5.yaml CERN-PH-EP-2015-145_15_19.yaml CERN-PH-EP-2015-145_1_6.yaml CERN-PH-EP-2015-145_2_5.yaml CERN-PH-EP-2015-145_5_8.yaml | YAML files encoding the measurements of the branching fraction of the $\mathup{{{B}}{}_{\scriptstyle{\mathup{{{s}}}}}^{\scriptstyle{0}}}\to\mathup{{{\phi}}}\mu\mu$ decay in different $q^{2}$ regions [27].
CERN-EP-2019-043.yaml | YAML file encoding the measurement of the $R_{K}$ [28].
CERN-EP-2017-100_q2_0.045_1.1.yaml CERN-EP-2017-100_q2_1.1_6.yaml | YAML file encoding the measurement of the $R_{\mathup{{{K}}^{\scriptstyle{\ast}}}}$ [7].
b2sgamma.yaml | YAML file encoding the HFLAV average of the $\mathup{{{b}}}\to\mathup{{{s}}}\mathup{{{\gamma}}}$ [15].
RD_RDstar.yaml | YAML file encoding the HFLAV average of the $R(\mathup{{{D}}})$ and $R(\mathup{{{D}}^{\scriptstyle{\ast}}})$ [15].
HFLAV_2016_157.yaml HFLAV_2016_160.yaml HFLAV_2016_161.yaml HFLAV_2016_162.yaml HFLAV_2016_164.yaml HFLAV_2016_165.yaml HFLAV_2016_166.yaml HFLAV_2016_167.yaml HFLAV_2016_168.yaml HFLAV_2016_169.yaml HFLAV_2016_170.yaml HFLAV_2016_171.yaml HFLAV_2016_176.yaml HFLAV_2016_177.yaml HFLAV_2016_178.yaml HFLAV_2016_179.yaml HFLAV_2016_180.yaml HFLAV_2016_181.yaml HFLAV_2016_182.yaml HFLAV_2016_183.yaml HFLAV_2016_211.yaml HFLAV_2016_212.yaml | YAML files encoding the upper limits of $\tau$ Lepton Flavour Violation decays [27].
As already mentioned, the set of measurements is constantly growing, and it is
expected that the community will contribute to the development of this
repository. Before a new YAML file is merged into the repository, it should be
checked whether it contains all the necessary information. This can be done
with the Test_YAML.cc program, used in the following way:
⬇
cd HEPLike
./build/Test_YAML <PATH_TO_YAML>
If an entry is missing, the user will be notified by a printout. The
HEPLikeData repository also contains a template YAML file (data/template.yaml),
which can be used to create new measurement YAML files.
As already mentioned, we provide useful utilities for the encoded
measurements. The first is the ability to create a BibTeX file for the
measurements that have been used. The user should store the BibTeX keys or
YAML file names in a text file:
⬇
Aaij:2017vbb
b2mumu.yaml
To prepare the BibTeX file, the user should run the make_citations.py script
located in the utils directory:
⬇
cd utils
python make_citations.py list.txt
After this command, a new file, references.bib, will be created, containing
the full BibTeX entries. It can be used directly when preparing a publication.
Another useful feature of HEPLike is the ability to search the measurement
database for relevant measurements. The script providing this utility is also
located in utils. Currently, the database can be searched using the year of
publication, the arXiv number, the author of the YAML file, or the unique name
of the measurement. The syntax for running a search is the following:
⬇
python lookup.py --Arxiv 1705.05802
Found files:
../data/examples/RKstar_lowq2.yaml
To see all available search options, the user can run the script with the help
option: python lookup.py -h.
## 5 Summary
We have presented the computer program HEPLike, which enables the construction
and evaluation of experimental likelihoods. The software is designed to handle
the interpretation of a wide range of published results. It also allows
performing direct fits to data, once these are provided by the experimental
collaborations. The program can be easily interfaced with other computer
programs and is aimed at helping users who perform fits to experimental
results in their scientific work. It is especially useful for large fitting
collaborations, which until now had to implement the experimental measurements
on their own. The measurements themselves are stored in YAML files in a
separate repository. This allows for easy extension of the database without
the need for compilation. Furthermore, users and experimental collaborations
can share their encoded measurements with the community.
## Acknowledgments
This work is partly supported by the CERN FCC Design Study Program. The
research of M. Chrzaszcz is funded by the Polish National Agency for Academic
Exchange under the Bekker program. M. Chrzaszcz is also grateful to Foundation
for Polish Science (FNP) for its support.
We would like to thank Mike Williams, Patrick Koppenburg, Pat Scott, Danny van
Dyk and Maria Moreno Llacer for invaluable comments about our manuscript.
## References
* [1] P. Athron, et al., GAMBIT: The Global and Modular Beyond-the-Standard-Model Inference Tool, Eur. Phys. J. C77 (11) (2017) 784, [Addendum: Eur. Phys. J.C78,no.2,98(2018)]. arXiv:1705.07908, doi:10.1140/epjc/s10052-017-5513-2,10.1140/epjc/s10052-017-5321-8.
* [2] J. C. Costa, et al., Likelihood Analysis of the Sub-GUT MSSM in Light of LHC 13-TeV Data, Eur. Phys. J. C78 (2) (2018) 158. arXiv:1711.00458, doi:10.1140/epjc/s10052-018-5633-3.
* [3] P. Bechtle, K. Desch, P. Wienemann, Fittino, a program for determining MSSM parameters from collider observables using an iterative method, Comput. Phys. Commun. 174 (2006) 47–70. arXiv:hep-ph/0412012, doi:10.1016/j.cpc.2005.09.002.
* [4] F. Mahmoudi, New constraints on supersymmetric models from $b\to s\gamma$, JHEP 12 (2007) 026. arXiv:0710.3791, doi:10.1088/1126-6708/2007/12/026.
* [5] T. Feldmann, D. Van Dyk, K. K. Vos, Revisiting $B\to\pi\pi\ell\nu$ at large dipion masses, JHEP 10 (2018) 030. arXiv:1807.01924, doi:10.1007/JHEP10(2018)030.
* [6] J. Kumar, D. London, R. Watanabe, Combined Explanations of the $b\to s\mu^{+}\mu^{-}$ and $b\to c\tau^{-}{\bar{\nu}}$ Anomalies: a General Model Analysis, Phys. Rev. D99 (1) (2019) 015007. arXiv:1806.07403, doi:10.1103/PhysRevD.99.015007.
* [7] R. Aaij, et al., Test of lepton universality with $B^{0}\rightarrow K^{*0}\ell^{+}\ell^{-}$ decays, JHEP 08 (2017) 055. arXiv:1705.05802, doi:10.1007/JHEP08(2017)055.
* [8] R. Aaij, et al., Search for the lepton-flavour violating decay $D^{0}\to e^{\pm}\mu^{\mp}$, Phys. Lett. B754 (2016) 167–175. arXiv:1512.00322, doi:10.1016/j.physletb.2016.01.029.
* [9] G. J. Feldman, R. D. Cousins, A Unified approach to the classical statistical analysis of small signals, Phys. Rev. D57 (1998) 3873–3889. arXiv:physics/9711021, doi:10.1103/PhysRevD.57.3873.
* [10] C. Rover, C. Messenger, R. Prix, Bayesian versus frequentist upper limits, in: Proceedings, PHYSTAT 2011 Workshop on Statistical Issues Related to Discovery Claims in Search Experiments and Unfolding, CERN,Geneva, Switzerland 17-20 January 2011, CERN, CERN, Geneva, 2011, pp. 158–163. arXiv:1103.2987, doi:10.5170/CERN-2011-006.158.
* [11] R. Aaij, et al., Search for the decays $B_{s}^{0}\to\tau^{+}\tau^{-}$ and $B^{0}\to\tau^{+}\tau^{-}$, Phys. Rev. Lett. 118 (25) (2017) 251802. arXiv:1703.02508, doi:10.1103/PhysRevLett.118.251802.
* [12] A. L. Read, Modified frequentist analysis of search results (the $CL_{s}$ method) (CERN-OPEN-2000-205).
URL https://cds.cern.ch/record/451614
* [13] S. S. Wilks, The large-sample distribution of the likelihood ratio for testing composite hypotheses, Ann. Math. Statist. 9 (1) (1938) 60–62. doi:10.1214/aoms/1177732360.
URL https://doi.org/10.1214/aoms/1177732360
* [14] M. Tanabashi, et al., Review of particle physics, Phys. Rev. D 98 (2018) 030001. doi:10.1103/PhysRevD.98.030001.
URL https://link.aps.org/doi/10.1103/PhysRevD.98.030001
* [15] Y. Amhis, et al., Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016, Eur. Phys. J. C77 (12) (2017) 895. arXiv:1612.07233, doi:10.1140/epjc/s10052-017-5058-4.
* [16] R. Barlow, Asymmetric systematic errors, arXiv:physics/0306138.
* [17] R. Aaij, et al., Test of lepton universality using $B^{+}\rightarrow K^{+}\ell^{+}\ell^{-}$ decays, Phys. Rev. Lett. 113 (2014) 151601. arXiv:1406.6482, doi:10.1103/PhysRevLett.113.151601.
* [18] I. Antcheva, et al., ROOT: A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499–2512. arXiv:1508.07749, doi:10.1016/j.cpc.2009.08.005.
* [19] E. Maguire, L. Heinrich, G. Watt, HEPData: a repository for high energy physics data, J. Phys. Conf. Ser. 898 (10) (2017) 102006. arXiv:1704.05473, doi:10.1088/1742-6596/898/10/102006.
* [20] R. Aaij, et al., Measurement of the $B^{0}_{s}\to\mu^{+}\mu^{-}$ branching fraction and effective lifetime and search for $B^{0}\to\mu^{+}\mu^{-}$ decays, Phys. Rev. Lett. 118 (19) (2017) 191801. arXiv:1703.05747, doi:10.1103/PhysRevLett.118.191801.
* [21] M. Aaboud, et al., Measurement of the $t\bar{t}Z$ and $t\bar{t}W$ cross sections in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector, arXiv:1901.03584.
* [22] F. U. Bernlochner, et al., FlavBit: A GAMBIT module for computing flavour observables and likelihoods, Eur. Phys. J. C77 (11) (2017) 786. arXiv:1705.07933, doi:10.1140/epjc/s10052-017-5157-2.
* [23] P. Athron, et al., Global fits of GUT-scale SUSY models with GAMBIT, Eur. Phys. J. C77 (12) (2017) 824. arXiv:1705.07935, doi:10.1140/epjc/s10052-017-5167-0.
* [24] P. Athron, et al., A global fit of the MSSM with GAMBIT, Eur. Phys. J. C77 (12) (2017) 879. arXiv:1705.07917, doi:10.1140/epjc/s10052-017-5196-8.
* [25] R. Aaij, et al., Angular analysis of the $B^{0}\to K^{*0}\mu^{+}\mu^{-}$ decay using 3 fb-1 of integrated luminosity, JHEP 02 (2016) 104. arXiv:1512.04442, doi:10.1007/JHEP02(2016)104.
* [26] R. Aaij, et al., Measurements of the S-wave fraction in $B^{0}\rightarrow K^{+}\pi^{-}\mu^{+}\mu^{-}$ decays and the $B^{0}\rightarrow K^{\ast}(892)^{0}\mu^{+}\mu^{-}$ differential branching fraction, JHEP 11 (2016) 047, [Erratum: JHEP04,142(2017)]. arXiv:1606.04731, doi:10.1007/JHEP11(2016)047,10.1007/JHEP04(2017)142.
* [27] R. Aaij, et al., Differential branching fraction and angular moments analysis of the decay $B^{0}\to K^{+}\pi^{-}\mu^{+}\mu^{-}$ in the $K^{*}_{0,2}(1430)^{0}$ region, JHEP 12 (2016) 065. arXiv:1609.04736, doi:10.1007/JHEP12(2016)065.
* [28] R. Aaij, et al., Search for lepton-universality violation in $B^{+}\to K^{+}\ell^{+}\ell^{-}$ decays, arXiv:1903.09252.
# Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule
Nikhil Iyer (Microsoft Research India, <EMAIL_ADDRESS>), V Thejas† (Atlassian India, <EMAIL_ADDRESS>), Nipun Kwatra (Microsoft Research India, <EMAIL_ADDRESS>), Ramachandran Ramjee (Microsoft Research India, <EMAIL_ADDRESS>), Muthian Sivathanu (Microsoft Research India, <EMAIL_ADDRESS>)

†Work done during an internship at Microsoft Research India
###### Abstract
Several papers argue that wide minima generalize better than narrow minima. In
this paper, through detailed experiments that not only corroborate the
generalization properties of wide minima, we also provide empirical evidence
for a new hypothesis that the density of wide minima is likely lower than the
density of narrow minima. Further, motivated by this hypothesis, we design a
novel explore-exploit learning rate schedule. On a variety of image and
natural language datasets, compared to their original hand-tuned learning rate
baselines, we show that our explore-exploit schedule can result in either up
to 0.84% higher absolute accuracy using the original training budget or up to
57% reduced training time while achieving the original reported accuracy. For
example, we achieve state-of-the-art (SOTA) accuracy for IWSLT’14 (DE-EN)
dataset by just modifying the learning rate schedule of a high performing
model.
Keywords: deep learning, generalization, learning rate schedule, optimization
## 1 Introduction
One of the fascinating properties of deep neural networks (DNNs) is their
ability to generalize well, i.e., deliver high accuracy on the unseen test
dataset. It is well-known that the learning rate (learning rate) schedules
play an important role in the generalization performance (Keskar et al., 2016;
Wu et al., 2018; Goyal et al., 2017). In this paper, we study the question,
what are the key properties of a learning rate schedule that help DNNs
generalize well during training?
We start with a series of experiments training Resnet18 on Cifar-10 over 200
epochs. We vary the number of epochs trained at a high learning rate of $0.1$,
called the explore epochs, from 0 to 100 and divide up the remaining epochs
equally for training with learning rates of $0.01$ and $0.001$. Note that the
training loss typically stagnates around 50 epochs with $0.1$ learning rate.
Despite that, we find that as the number of explore epochs increase to 100,
the average test accuracy also increases. We also find that the minima found
in higher test accuracy runs are wider than the minima from lower test
accuracy runs, corroborating past work on wide-minima and generalization
(Keskar et al., 2016; Hochreiter and Schmidhuber, 1997; Jastrzebski et al.,
2017; Wang et al., 2018). Moreover, what was particularly surprising was that,
even when using fewer explore epochs, a few runs out of many trials still
resulted in high test accuracies!
Thus, we not only find that an initial exploration phase with a high learning
rate is essential to the good generalization of DNNs, but that this
exploration phase needs to be run for sufficient time, even if the training
loss stagnates much earlier. Further, we find that, even when the exploration
phase is not given sufficient time, a few runs still see high test accuracy
values.
To explain these observations, we hypothesize that, in the DNN loss landscape,
the density of narrow minima is significantly higher than that of wide minima.
Intuitively, a large learning rate can escape narrow minima easily (as the
optimizer can jump out of them with large steps). However, once it reaches a
wide minima, it is likely to get stuck in it (if the ”width” of the wide
minima is large compared to the step size). With fewer explore epochs, a large
learning rate might still get lucky occasionally in finding a wide minima but
invariably finds only a narrower minima due to their higher density. As the
explore duration increases, the probability of eventually landing in a wide
minima also increases. Thus, a minimum duration of explore is necessary to
land in a wide minimum with high probability.
An observation on the rarity of wide minima has been hinted at by prior work
(Wu et al., 2018; Baldassi et al., 2020) based on theoretical analysis of
simple neural networks (see Section 2). In this paper, we add significant
empirical evidence to these theoretical observations. We believe that all
these results together constitute sufficient evidence for this observation to
now be classified as a hypothesis, that we term the wide-minima density
hypothesis.
The hypothesis helps explain not only our experiments but also the
generalization out-performance of prior heuristic-based learning rate decay
schemes such as cosine decay (Loshchilov and Hutter, 2016). Cosine decay
implicitly maintains a higher learning rate during the first half of training
compared to schemes like linear decay. Based on the hypothesis, the higher
learning rate allows cosine decay to find wider minima with higher
probability, resulting in cosine decay’s better generalization compared to
linear decay.
Apart from helping explain empirical observations, the hypothesis also enables
a principled learning rate schedule design that explicitly accounts for the
requisite explore duration. Motivated by the hypothesis, we design a novel
Explore-Exploit learning rate schedule, where the initial explore phase
optimizes at a high learning rate in order to arrive in the vicinity of a wide
minimum. This is followed by an exploit phase which descends to the bottom of
this wide minimum. We give explore phase enough time so that the probability
of landing in a wide minima is high. For the exploit phase, we experimented
with multiple schemes, and found a simple, parameter-less, linear decay to
zero to be effective. Thus, our proposed learning rate schedule optimizes at a
constant high learning rate for a given duration, followed by a linear decay
to zero. We call this learning rate schedule the Knee schedule.
We extensively evaluate the Knee schedule across a wide range of models and
datasets, ranging from NLP (BERT pre-training, Transformer on WMT’14(EN-DE)
and IWSLT’14 (DE-EN)) to CNNs (ImageNet on ResNet-50, Cifar-10 on ResNet18),
and spanning multiple optimizers: SGD Momentum, Adam, RAdam, and LAMB. In all
cases, Knee schedule improves the test accuracy of state-of-the-art hand-tuned
learning rate schedules, when trained using the original training budget. The
explore duration is a hyper-parameter in Knee schedule but even if we set the
explore duration to a fixed 50% fraction of total training budget, we find
that it still outperforms prior schemes.
We also experimented with reducing the training budget, and found that Knee
schedule can achieve the same accuracy as the baseline under significantly
reduced training budgets. For the BERTLARGE pretraining, WMT’14(EN-DE) and
ImageNet experiments, we are able to train in 33%, 57% and 44% less training
budget, respectively, for the same test accuracy. This corresponds to
significant savings in GPU compute, e.g. savings of over 1000 V100 GPU-hours
for BERTLARGE pretraining.
The main contributions of our work (available at
https://github.com/nikhil-iyer-97/wide-minima-density-hypothesis) are:
1. A hypothesis of lower density of wide minima in the DNN loss landscape, backed by extensive experiments, that explains why a high learning rate needs to be maintained for sufficient duration to achieve good generalization.
2. The hypothesis explains the good performance of heuristic-based schemes such as cosine decay, and promotes a principled design of learning rate decay schemes.
3. Motivated by the hypothesis, we design an Explore-Exploit learning rate schedule called Knee schedule that outperforms prior heuristic-based learning rate schedules, including achieving state-of-the-art results on the IWSLT'14 (DE-EN) dataset.
## 2 Related Work
Generalization. There has been a lot of work on understanding the
generalization characteristics of DNNs. Kawaguchi (2016) found that DNNs have
many local minima, but all local minima were also the global minima. It has
been observed by several authors that wide minima generalize better than
narrow minima (Arora et al., 2018; Hochreiter and Schmidhuber, 1997; Keskar et
al., 2016; Jastrzebski et al., 2017; Wang et al., 2018) but there have been
other works questioning this hypothesis as well (Dinh et al., 2017; Golatkar
et al., 2019; Guiroy et al., 2019; Jastrzebski et al., 2019; Yoshida and
Miyato, 2017).
Keskar et al. (2016) found that small batch SGD generalizes better and lands
in wider minima than large batch SGD. However, recent work has been able to
generalize quite well even with very large batch sizes (Goyal et al., 2017;
McCandlish et al., 2018; Shallue et al., 2018), by scaling the learning rate
linearly as a function of the batch size. Jastrzebski et al. (2019) analyze
how batch size and learning rate influence the curvature of not only the SGD
endpoint but also the whole trajectory. They found that small batch or large
step SGD have similar characteristics, and yield smaller and earlier peak of
spectral norm as well as smaller largest eigenvalue. Chaudhari et al. (2019);
Baldassi et al. (2019) propose methods to drive the optimizer to wide minima.
Wang et al. (2018) analytically show that generalization of a model is related
to the Hessian and propose a new metric for the generalization capability of a
model that is unaffected by model reparameterization of Dinh et al. (2017).
Yoshida and Miyato (2017) argue that regularizing the spectral norm of the
weights of the neural network help them generalize better. On the other hand,
Arora et al. (2018) derive generalization bounds by showing that networks with
low stable rank (high spectral norm) generalize better. Guiroy et al. (2019)
looks at generalization in gradient-based meta-learning and they show
experimentally that generalization and wide minima are not always correlated.
Finally, Golatkar et al. (2019) show that regularization results in higher
test accuracy specifically when it is applied during initial phase of
training, similar to the importance of Knee schedule’s explore phase during
initial phase of training. In a similar vein, Li et al. (2019) explain the
regularization benefits of the initial higher learning rate by showing that
higher learning rate helps networks learn easier-to-fit general patterns.
Neural network loss landscapes. The landscape of loss in neural networks have
been extensively studied (Draxler et al., 2018; Freeman and Bruna, 2016;
Garipov et al., 2018; Sagun et al., 2017). These papers point out that the
loss landscape contains both wide and narrow minima, and there may even exist
a path from one minima to another without barriers. However, there are
multiple paths between these minima and some paths indeed face barriers (e.g.,
see Figure 1 in Draxler et al. (2018)). Since we don’t know which path SGD and
other optimizers might follow, even if wide and narrow minima are part of a
single basin, SGD and other optimizers might still require higher learning
rates to navigate from narrow to wide minima.
Lower density of wide minima. Wu et al. (2018) compares the sharpness of
minima obtained by full-batch gradient descent (GD) with different learning
rates for small neural networks on FashionMNIST and Cifar10 datasets. They
find that GD with a given learning rate finds the theoretically sharpest
feasible minima for that learning rate. Thus, in the presence of several
flatter minimas, GD with lower learning rates does not find them, leading to
the conjecture that density of sharper minima is perhaps larger than density
of wider minima. Baldassi et al. (2020) show analytically for simple, two-
layer non-convex networks that wide minima exists and are rare, compared to
narrow minima, local minima and saddle points. In this paper, we add
significant evidence to these theoretical observations based on empirical
results obtained on large-scale, state-of-the-art neural networks through
carefully designed experiments.
## 3 Wide-Minima Density Hypothesis
Many popular learning rate schedules, such as the step decay schedules for
image datasets, start the training with high learning rate, and then reduce
the learning rate periodically. For example, consider the case of Cifar-10 on
Resnet-18, trained using a typical step learning rate schedule of $0.1,0.01,$
and $0.001$ for 100, 50, 50 epochs each. In many such schedules, even though
training loss stagnates after several epochs of high learning rate, one still
needs to continue training at high learning rate in order to get good
generalization.
For example, Figure 1 shows the training loss for Cifar-10 on Resnet-18,
trained with a fixed learning rate of 0.1 (orange curve), compared to a model
trained via a step schedule with learning rate reduced at epoch 50 (blue
curve). As can be seen from the figure, the training loss stagnates after
$\approx$ 50 epochs for the orange curve, and locally it makes sense to reduce
the learning rate to decrease the loss. However, as shown in Table 2,
generalization is directly correlated with duration of training at high
learning rate, with the highest test accuracy achieved when the high learning
rate is used for 100 epochs, well past the point where training loss
stagnates. Note that the final training loss remains similar for all runs.
To understand the above phenomena, we perform another experiment. We train
Cifar-10 on Resnet-18 for 200 epochs, using a high learning rate of $0.1$ for
only 30 epochs and then use learning rate of $0.01$ and $0.001$ for 85 epochs
each. We repeat this training 50 times with different random weight
initializations. On an average, as expected, this training yields a low test
accuracy of $94.81$. However, in 1 of the 50 runs, we find that the test
accuracy reaches $95.24$, even higher than the average accuracy of $95.1$
obtained while training at high learning rate for 100 epochs!
Figure 1: Training loss for Cifar-10 on Resnet-18. Orange plot uses a fixed
learning rate of 0.1, while in blue plot, the learning rate is reduced from
0.1 to 0.01 at epoch 50.
Table 2: Cifar-10 on Resnet-18 trained for 200 epochs with Momentum. A learning rate of 0.1 is used for the explore epochs. Half the remaining epochs are trained at 0.01 and the other half at 0.001. Reported results are average over 4 runs.

Epochs at 0.1 LR | Test Accuracy Avg. (Std. Dev.) | Train Loss Avg. (Std. Dev.)
---|---|---
0 | 94.34 (0.13) | 0.0017 (8e-5)
30 | 94.81 (0.15) | 0.0017 (8e-5)
40 | 94.91 (0.14) | 0.0018 (9e-5)
60 | 95.01 (0.14) | 0.0018 (1e-4)
80 | 95.05 (0.15) | 0.0019 (1e-4)
100 | 95.10 (0.14) | 0.0021 (1e-4)
### 3.1 Hypothesis
To explain the above observations, i.e., using a high learning rate for short
duration results in low average test accuracy with rare occurrences of high
test accuracy, while using the same high learning rate for long duration
achieves high average test accuracy and frequent occurrences of high test
accuracy, we introduce a new hypothesis. We hypothesize that, in the DNN loss
landscape, the density of narrow minima is significantly higher than that of
wide minima.
Intuitively, a large learning rate can escape narrow minima “valleys” easily
(as the optimizer can jump out of them with large steps). However, once it
reaches a wide minima “valley”, it is likely to get stuck in it (if the
“width” of the wide valley is large compared to the step size). This intuition
is backed by theoretical results from Xie et al. (2020) that show that the
time to escape a minimum using SGD is exponential in the inverse of learning
rate as well as inverse of the sharpness (measured by eigenvalue of the
Hessian at the minima). Thus, large learning rates escape narrow minima
exponentially faster than wide minima.
If wide and narrow minima were uniformly distributed, SGD with a large LR
would be able to quickly escape the narrow minima, land on a wide minima and
get stuck there. Yet, we see that we need to maintain large LR for significant
duration for landing in a wide minima with high probability. On the other
hand, if our hypothesis is true, i.e., wide minima are much fewer than narrow
minima, the probability of landing in a wide minima after escaping a narrow
minima is low, and the optimizer needs to take a lot of steps to have a high
probability of eventually landing in a wide minimum. Thus, the hypothesis is a
better explanation for the observation in Table 2, where the average accuracy
continues to improve as we increase the number of high learning rate training
steps. The hypothesis also explains why very few (just 1) of the 50 runs
trained at $0.1$ learning rate for just 30-epochs also manages to attain high
accuracy—these runs just got lucky in a probabilistic sense and landed in a
wide minimum even with a shorter duration of explore.
Figure 3: Histogram of minima sharpness (Keskar et al., 2016) for 50 random
trials of Cifar-10 on Resnet-18. Panels (a)-(d) show histograms for 0, 30, 60,
and 100 explore epochs, respectively. The distribution moves toward lower
sharpness and tightens as the number of explore epochs increase.
Figure 4: Histogram of test accuracy for 50 random trials of Cifar-10 on
Resnet-18. Panels (a)-(d) show histograms for 0, 30, 60, and 100 explore
epochs, respectively. The distribution moves toward higher test accuracy and
sharpens as the number of explore epochs increase.
To validate this hypothesis further, we run experiments similar to the one in
Table 2. Specifically, we train Cifar-10 on Resnet-18 model for 200 epochs
using a standard step schedule with learning rate of $0.1,0.01,0.001$. We vary
the number of epochs trained using the high learning rate of 0.1, called the
explore epochs, from 0 to 100 epochs, and divide up the rest of the training
equally between 0.01 and 0.001. For each experimental setting, we conduct 50
random trials and plot the distributions of final test accuracy and the minima
sharpness as defined by the metric in Keskar et al. (2016) (see section 3.2).
If our hypothesis is true, then the more you explore, the higher the
probability of landing (and getting stuck) in a wide minima region, which
should cause the distribution to tighten and move towards wider minima (lower
sharpness), as the number of explore steps increase. This is exactly what is
observed in Figure 4. Also since wide minima correlate with higher test
accuracy, we should see the test accuracy distribution move towards higher
accuracy and sharpen, as the number of explore steps increase. This is
confirmed as well in Figure 4.
Longer training with low learning rate is not sufficient. Finally, to verify
whether explore at high learning rate is essential, we train Cifar-10 for
10,000 epochs at a fixed lower learning rate of 0.001. The training loss
converged but the final test accuracy was only 93.9, compared to an accuracy
of over 95% in 200 epochs in Table 2. Thus, even training $50\times$ longer at
low learning rate is not sufficient to achieve good generalization. Again,
this observation ties in well with the theoretical results from Xie et al.
(2020) where the authors show that the time to escape a minimum using SGD is
exponential in the inverse of learning rate. Thus, this result adds further
evidence to our density hypothesis, since even training $50\times$ longer at a
low learning rate is not sufficient to land in a wide minima.
Multi-scale. Given the importance of explore at high learning rate, a natural
question that may arise is whether explore is necessary at smaller learning
rate as well. To answer this, we train the same network for a total of 200
epochs with an initial high learning rate of $0.1$ for 100 epochs, but now we
vary the number of epochs trained with the learning rate of $0.01$ (we call
this finer-scale explore), and train with learning rate of $0.001$ for the
remaining epochs. As can be seen from Table 1, although the final training
loss remains similar, we find that finer-scale explore also plays a role
similar to the initial explore in determining the final test accuracy. This
indicates that our hypothesis about density of wide/narrow regions indeed
holds at multiple scales.
Table 1: Cifar-10 on Resnet-18 trained for 200 epochs. A learning rate of 0.1 is used for the first 100 epochs. We then vary the number of epochs trained with a learning rate of $0.01$ (called finer-scale explore), and train the remaining epochs with a learning rate of $0.001$. We report average values over 3 runs.
Explore Epochs (Finer-scale) | Test Accuracy | Training Loss | Sharpness
---|---|---|---
10 | 94.78 | 0.0031 | 5.48
20 | 94.91 | 0.0026 | 4.47
30 | 95.00 | 0.0023 | 4.02
40 | 95.02 | 0.0021 | 3.91
50 | 95.10 | 0.0021 | 3.54
### 3.2 Minima Sharpness
Our hypothesis predicts that higher explore helps the optimizer land in a
wider minimum, which in turn helps generalization. We demonstrated this
empirically in Figure 3, where we plotted the distribution of the minima
sharpness, as measured by the sharpness metric introduced by Keskar et al.
(2016). In this section, we describe Keskar’s sharpness metric in detail. We
also introduce a simple projected gradient ascent scheme to compute this
metric efficiently, which scales well to large networks. Finally, we also
evaluate our hypothesis with a different metric for minima sharpness, the
Fisher Score, which is based on the Fisher information matrix.
#### 3.2.1 Keskar’s Sharpness Metric
Keskar’s sharpness metric is based on measuring the maximum jump in the
network’s output function $F$ in a small neighborhood around the minimum.
After a few simplifications, Keskar’s metric for sharpness around a point $x$
can be written as:
$S_{x,F}(\epsilon):=\frac{\left(\max_{y\in C_{\epsilon}(x)}F(x+y)\right)-F(x)}{1+F(x)}\times 100,$ (1)
where $C_{\epsilon}(x)$ is an $\epsilon$ neighborhood around $x$. Keskar et
al. (2016) mentions that under certain conditions and for small values of
$\epsilon$, $S_{x,F}$ is proportional to the largest eigenvalue of the
Hessian. Please see Keskar et al. (2016) for more details. For our
measurements we choose an $\epsilon$ of $1e^{-4}$.
For solving the maximization problem in Equation 1, Keskar et al. (2016) use
a second-order L-BFGS-B (Byrd et al., 2003) optimization scheme. However, in
our experiments we found this method to be very slow. To combat this, Keskar
et al. (2016) limited their runs to 10 iterations, but we found the results
to be suboptimal with so few iterations. Instead, we employ a projected
gradient ascent scheme to solve Equation 1. In each optimization step, we
take a small step with a learning rate of 0.001 in the gradient direction and
project the updated point back inside $C_{\epsilon}(x)$. Because of its
first-order nature, this method is much faster. We found that even 1000
iterations were fast to compute, and the results were better than those of
the second-order method in all cases we evaluated.
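To make the procedure concrete, below is a minimal PyTorch sketch of this
projected gradient ascent scheme (the function and variable names are ours,
and the box neighborhood $C_{\epsilon}(x)$ with half-widths
$\epsilon(|x_{i}|+1)$ follows Keskar et al. (2016)); treat it as an
illustration rather than our exact implementation:

```python
import torch

def keskar_sharpness(model, loss_fn, data, target, eps=1e-4, lr=1e-3, steps=1000):
    # F(x) at the minimum x (the current parameters)
    base = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        f_x = loss_fn(model(data), target).item()

    # box half-widths eps * (|x_i| + 1), and a random start inside the box
    bounds = [eps * (b.abs() + 1.0) for b in base]
    y = [torch.empty_like(b).uniform_(-1, 1) * bnd for b, bnd in zip(base, bounds)]

    for _ in range(steps):
        for p, b, yi in zip(model.parameters(), base, y):
            p.data.copy_(b + yi)  # evaluate the loss at x + y
        model.zero_grad()
        loss_fn(model(data), target).backward()
        with torch.no_grad():
            for p, yi, bnd in zip(model.parameters(), y, bounds):
                yi += lr * p.grad  # gradient ascent step on the perturbation
                yi.copy_(torch.minimum(torch.maximum(yi, -bnd), bnd))  # project

    with torch.no_grad():
        for p, b, yi in zip(model.parameters(), base, y):
            p.data.copy_(b + yi)
        f_max = loss_fn(model(data), target).item()
        for p, b in zip(model.parameters(), base):
            p.data.copy_(b)  # restore the original weights

    return (f_max - f_x) / (1.0 + f_x) * 100.0  # Equation 1
```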
Using Keskar’s sharpness metric, we showed in Figure 3 that the distribution
of minima sharpness moves towards lower values as the number of explore
epochs increases. In Table 2, we also report the average sharpness of the
minima for varying explore durations. As predicted by our hypothesis, the
average sharpness decreases as the number of explore epochs increases.
Table 2: Keskar’s sharpness metric for Cifar-10 on Resnet-18 trained for 200
epochs with Momentum. A learning rate of 0.1 is used for the explore epochs.
Half the remaining epochs are trained at 0.01 and the other half at 0.001. We
report the average sharpness over 50 different trials.
Explore Epochs | Sharpness
---|---
0 | 10.56
30 | 5.43
60 | 3.86
100 | 3.54
#### 3.2.2 Fisher Score
The maximum eigenvalue of the Fisher Information Matrix (FIM) estimates the
highest curvature at a point, and is used as another metric for minima
sharpness (Sokol and Park, 2018). We used an unbiased estimate of the true
Fisher matrix (see Kunstner et al. (2019)), drawing 10 sampled labels per
training example. Table 3 shows the average Fisher scores for the Cifar-10
experiments at varying explore durations. Again, the sharpness measured by
the Fisher score decreases as the number of explore epochs increases.
Table 3: Fisher Score for Cifar-10 on Resnet-18 trained for 200 epochs with
Momentum. A learning rate of 0.1 is used for the explore epochs. Half the
remaining epochs are trained at 0.01 and the other half at 0.001. We report
the average Fisher score over 10 different trials.
Explore Epochs | FIM score
---|---
0 | 0.051
30 | 0.046
60 | 0.043
100 | 0.042
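As a rough sketch of how such a score can be obtained, the snippet below
estimates the top FIM eigenvalue by sampling labels from the model’s own
predictive distribution (the unbiased, true-Fisher estimate of Kunstner et
al. (2019)) and running power iteration on the resulting score gradients. It
materializes per-example gradients, so it is only practical for small models
or parameter subsets; all names are ours and details may differ from our
measurement code:

```python
import math
import torch

def fisher_top_eigenvalue(model, examples, n_label_samples=10, iters=20):
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for x in examples:  # iterate over single training examples
        logits = model(x.unsqueeze(0))
        logp = torch.log_softmax(logits, dim=1)
        for _ in range(n_label_samples):  # sampled labels -> true Fisher
            y = torch.distributions.Categorical(logits=logits).sample()
            g = torch.autograd.grad(logp[0, y.item()], params, retain_graph=True)
            grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    # FIM ~ G^T G with rows g / sqrt(#samples); power-iterate without forming it
    G = torch.stack(grads) / math.sqrt(len(grads))
    v = torch.randn(G.shape[1])
    v /= v.norm()
    lam = 0.0
    for _ in range(iters):
        w = G.t() @ (G @ v)  # Fisher-vector product
        lam = w.norm().item()  # converges to the largest eigenvalue
        v = w / w.norm()
    return lam
```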
## 4 Explore-Exploit Learning Rate Schedule
Given that we need to explore at multiple scales for good generalization, how
do we go about designing a good learning rate schedule? The search space of
the varying learning rate steps and their respective explore duration is
enormous.
Fortunately, since the explore at the initial scale is searching over the
entire loss surface while explore at finer-scales is confined to exploring
only the wide-minima region identified by the initial explore, the former is
more crucial. In our experiments as well, we found that the initial portion of
training is much more sensitive to exploration and needs a substantial number
of explore steps, while after this initial phase, several decay schemes worked
equally well. This is similar to the observations in (Golatkar et al., 2019)
where the authors found that regularization such as weight-decay and data
augmentation mattered significantly only during the initial phase of training.
The above observations motivate our Explore-Exploit learning rate schedule,
where the explore phase first optimizes at a high learning rate for some
minimum time in order to land in the vicinity of a wide minimum. We give the
explore phase enough time (a hyper-parameter) so that the probability of
landing in a wide minimum is high. After the explore phase, the optimizer is,
with high probability, in the vicinity of a wide region. We then start the
exploit phase to descend to the bottom of this wide region while
progressively decreasing the learning rate. Any smoothly decaying learning
rate schedule can be thought of as doing micro explore-exploit at
progressively reduced scales. A steady descent would allow more explore
duration at all scales, while a fast descent would explore less at higher
learning rates. We experimented with multiple schedules for the exploit phase,
and found a simple linear decay to zero, that does not require any hyper-
parameter, to be effective in all the models/datasets we tried. We call our
proposed learning rate schedule which starts at a constant high learning rate
for some minimum time, followed by a linear decay to zero, the Knee schedule.
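In code, the schedule is straightforward. The following is a minimal sketch
(the function name and the LambdaLR usage pattern are ours, not part of a
released implementation):

```python
def knee_lr(step, total_steps, explore_steps, seed_lr):
    # Explore phase: constant high (seed) learning rate.
    if step < explore_steps:
        return seed_lr
    # Exploit phase: linear decay to zero, continuous at the boundary.
    remaining = (total_steps - step) / max(1, total_steps - explore_steps)
    return seed_lr * remaining

# Example usage as a per-epoch multiplier with PyTorch (base LR = seed LR):
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lambda epoch: knee_lr(epoch, 200, 100, 1.0))
```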
Note that any learning rate decay scheme incorporates an implicit explore
during the initial part, where the learning rate stays high enough. To
evaluate the benefit of an explicit explore phase, we compare Knee schedule
against several decay schemes such as linear and cosine. Interestingly, the
results depend on the length of training. For long budget experiments, simple
decay schemes perform comparably to Knee schedule in some experiments, since
the implicit explore duration is also large, helping these schemes achieve
good generalization. However for short budget experiments, these schemes
perform significantly worse than Knee schedule, since the implicit explore
duration is much shorter. See Tables 4, 5 and 6 for the comparison.
Warmup. Some optimizers such as Adam use an initial warmup phase to slowly
increase the learning rate. However, as shown in Liu et al. (2019), learning
rate warmup is needed mainly to reduce variance during initial training stages
and can be eliminated with an optimizer such as RAdam. Learning rate warmup is
also used for large-batch training (Goyal et al., 2017). Here, warmup is
necessary since the learning rate is scaled to a very large value to
compensate for the large batch size. Such warmup is complementary and can be
incorporated into the Knee schedule; for example, we do this for the
BERTLARGE pretraining experiment, where a large 16k batch size was used.
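As an illustration, a hedged sketch of how such a warmup can precede the
explore-exploit phases (the 10% warmup fraction for BERTLARGE follows You et
al. (2019); the function itself is our construction):

```python
def knee_lr_with_warmup(step, total_steps, warmup_steps, explore_steps, seed_lr):
    if step < warmup_steps:  # linear warmup up to the seed learning rate
        return seed_lr * step / max(1, warmup_steps)
    if step < warmup_steps + explore_steps:  # explore at the seed learning rate
        return seed_lr
    remaining = (total_steps - step) / max(
        1, total_steps - warmup_steps - explore_steps)
    return seed_lr * remaining  # exploit: linear decay to zero
```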
## 5 Evaluation
In this section we present extensive empirical evaluation of Knee schedule on
multiple models and datasets across various optimizers, and compare Knee
schedule against the original hand-tuned learning rate baselines. We first
provide an overview of our main results followed by detailed experimental
results. We then run further experiments to validate our wide-minima density
hypothesis, as well as run sensitivity analysis of seed learning rate on the
Knee schedule.
Note that, for completeness, we present a detailed comparison of Knee schedule
with many other learning rate schedules in the literature, such as linear
decay, cosine decay (Loshchilov and Hutter, 2016) and one-cycle (Smith,
2018), in Appendix A.
### 5.1 Experiments
We evaluate Knee schedule on multiple models and datasets spanning both vision
and NLP problems. The training of these models spanned various optimizers
including SGD Momentum, Adam (Kingma and Ba, 2014a), RAdam (Liu et al., 2019)
and LAMB (You et al., 2019). For all experiments, we used an out-of-the-box
policy, where we only change the learning rate schedule without modifying
anything else. We evaluate on multiple image datasets – ImageNet on Resnet-50
and Cifar-10 on Resnet-18 – as well as various NLP datasets – pretraining
BERTLARGE on Wikipedia+BooksCorpus and fine-tuning it on SQuAD v1.1, and
WMT’14 (EN-DE) and IWSLT’14 (DE-EN) on Transformers.
### 5.2 Results Overview
In all our experiments, we find that Knee schedule shows an improvement in
test accuracy over the original hand-tuned learning rate baseline as well as
various other learning rate schedules in the literature. Further, we also find
that Knee schedule can achieve the same accuracy as the baseline with a much
reduced training budget.
Table 4: We report the top-1 accuracy for ImageNet and Cifar-10, BLEU score for IWSLT’14 and WMT’14, and F1 score for BERT on SQuAD. All values are averaged over multiple runs for each experiment. Experiment details are given in the individual sections of the experiments.
Experiment | Training Budget (epochs) | Knee Schedule | Knee Schedule (Fixed 50% explore) | Baseline | One-Cycle | Cosine Decay | Linear Decay
---|---|---|---|---|---|---|---
ImageNet | 90 | 76.71 | 76.58 | 75.87 | 75.39 | 76.41 | 76.54
Cifar-10 | 200 | 95.26 | 95.26 | 95.10 | 94.09 | 95.23 | 95.18
IWSLT | 50 | 35.53 | 35.23 | 34.97 | 34.77 | 35.21 | 34.97
WMT’14 | 70 | 27.53 | 27.41 | 27.29 | 27.19 | 27.35 | 27.29
BERTLARGE | 31250 (iters) | 91.51 | 91.51 | 91.34 | - | - | 91.34
Table 5: Shorter budget training: test accuracy for all learning rate schedules tried in this paper, trained with a shortened budget. We report the same metrics as Table 4. Knee schedule achieves the same accuracy as the baselines using a much lower budget, saving precious GPU-hours.
Experiment | Shortened Training Budget (epochs) | Knee Schedule | One-Cycle | Cosine Decay | Linear Decay | Saving (V100 GPU hours)
---|---|---|---|---|---|---
ImageNet | 50 | 75.92 | 75.36 | 75.71 | 75.82 | 27
Cifar-10 | 150 | 95.14 | 93.84 | 95.06 | 95.02 | 0.25
IWSLT | 35 | 35.08 | 34.43 | 34.46 | 34.16 | 0.75
WMT’14 | 30 | 27.28 | 26.80 | 26.95 | 26.77 | 80
BERTLARGE | 20854 (iters) | 91.29 | - | - | 90.64 | 1002
Table 6: Epochs required by different LR schedules to reach the target accuracy. The target accuracy is chosen based on Knee schedule’s results with a reduced budget.
Experiment | Target BLEU Score | Knee schedule | Cosine Decay | Linear Decay
---|---|---|---|---
IWSLT | 35.08 | 35 | 45 | 60
WMT’14 | 27.28 | 30 | 60 | 70
Table 4 shows the test accuracies of the various experiments, when trained
with the original budget; while Table 5 shows the results when trained with a
reduced budget. As shown, for the original budget runs, Knee schedule improves
on the test accuracies in all experiments. Note that in Knee schedule, the
explore duration is a hyperparameter. To avoid tuning this hyperparameter, we
experimented with a fixed 50% explore duration for the full budget runs. Even
the fixed 50% explore Knee schedule outperforms all the other baselines. Also
noteworthy is that Knee schedule is able to achieve the same test accuracies
as the baseline’s full budget runs with a much lower training budget, saving
precious GPU cycles (Table 5).
While the differences in accuracy between the various schedules might appear
deceptively small in absolute terms, achieving these gains requires a large
amount of compute. For example, the number of epochs needed by each scheme to
reach the target BLEU score for IWSLT’14 DE-EN and WMT’14 EN-DE with the
Transformer network is shown in Table 6. One can see that Knee schedule is
significantly more efficient than, say, Cosine Decay, which takes 100% more
training time to achieve the same accuracy for WMT’14 EN-DE. Thus, the
accuracy and/or compute gains achieved by Knee schedule are significant.
A summary of our main experimental results is as follows:
1. ImageNet on Resnet-50: We show an absolute gain of 0.8% in top-1 accuracy
over the competitive step schedule baseline for this model. Also, Knee
schedule can achieve the same accuracy as the baseline in $\sim$45% fewer
training epochs.
2. BERTLARGE pre-training on the Wikipedia+BooksCorpus dataset: Compared to
the baseline of You et al. (2019), we improve the F1 score on the SQuAD v1.1
fine-tuning task by 0.2% (91.51 compared to 91.34). Also, we were able to
achieve similar accuracy as the baseline in 33% fewer training steps (a
saving of $\sim$1002 V100 GPU-hours!).
3. WMT’14 and IWSLT machine translation on Transformers: Compared to
competitive baselines, we were able to improve the BLEU scores by 0.24 and
0.56 points for the two tasks. Moreover, Knee schedule was able to achieve
the same accuracy as the baselines in 57% and 30% less training time.
4. State of the Art (SOTA) Results: We also attain state-of-the-art results
on the IWSLT’14 (DE-EN) machine translation dataset by simply replacing the
learning rate schedule of the current SOTA model (Shen et al., 2020) with
Knee. We were able to improve the BLEU score by 0.18, reaching a new SOTA
score of 37.78. Moreover, Knee can achieve the current SOTA baseline value in
30% less training time.
### 5.3 Detailed Results
We now describe each of our main experimental results in detail.
#### 5.3.1 ImageNet Image Classification on Resnet-50
We train the ImageNet dataset (Russakovsky et al., 2015) on the Resnet-50
network (He et al., 2016), which has 25 million parameters, with a batch size
of 256 and a seed learning rate of 0.1. Random cropping and random horizontal
flipping augmentations were applied to the training dataset. We use the SGD
optimizer with momentum of 0.9 and weight decay of $1e^{-4}$. For the
baseline runs, we used the standard hand-tuned step learning rate schedule of
0.1, 0.01 and 0.001 for 30 epochs each. For Knee schedule we used a seed
learning rate of 0.1 (same as the baseline). We trained with the original
budget of 90 epochs as well as with a reduced budget of 50 epochs. We used 30
explore epochs for both experiments (we used the open-source implementation
at https://github.com/cybertronai/imagenet18_old).
Table 7 shows the training loss and test accuracies for our experiments. Knee
schedule comfortably beats the test accuracy of baseline in the full budget
run (with absolute gains of 0.8% and 0.4% in top-1 and top-5 accuracy,
respectively), while meeting the baseline accuracy even with a much shorter
budget. The fact that the baseline schedule takes almost $80\%$ more training
time than Knee schedule for the same test accuracy shows the effectiveness of
our Explore-Exploit scheme. See Figure 6 in Appendix B for training curves.
Table 7: ImageNet on Resnet-50 results. We report mean (stddev) over 3 runs.
LR Schedule | Test Top 1 Acc. | Test Top 5 Acc. | Training Loss | Training Epochs
---|---|---|---|---
Baseline | 75.87 (0.035) | 92.90 (0.015) | 0.74 (1e-3) | 90
Knee | 76.71 (0.097) | 93.32 (0.031) | 0.79 (1e-3) | 90
Knee (short budget) | 75.92 (0.11) | 92.90 (0.085) | 0.90 (3e-3) | 50
#### 5.3.2 Cifar-10 Image Classification on Resnet-18
We train Cifar-10 dataset (Krizhevsky et al., 2009) on Resnet-18 network (He
et al., 2016) which has around 11 million parameters. SGD optimizer is used
with momentum of 0.9 and weight decay of $5e^{-4}$. Random cropping and random
horizontal flipping augmentations were applied to the training dataset. 333We
used the open-source implementation at: https://github.com/kuangliu/pytorch-
cifar.
For baseline, we used the hand-tuned step learning rate schedule of 0.1, 0.01
and 0.001 for 100, 50, 50 epochs, respectively. With Knee schedule, we train
the network with the original budget of 200 epochs, as well as a reduced
budget of 150 epochs. We used 100 explore epochs for both runs, and a seed
learning rate of 0.1 (same as baseline). Table 8 shows the training loss and
test accuracies for the experiments. Knee schedule beats the test accuracy of
baseline in the full budget run, while meeting the baseline test accuracy in
$25\%$ less budget. Refer to Figure 7 in Appendix B for detailed comparisons
of training loss, test accuracy, and learning rate.
Table 8: Training loss and test accuracy for Cifar-10 on Resnet-18. We report mean (stddev) over 7 runs.
LR Schedule | Test Accuracy | Training Loss | Training Epochs
---|---|---|---
Baseline | 95.10 (0.14) | 0.002 (1e-4) | 200 epochs
Knee | 95.26 (0.11) | 0.002 (1e-4) | 200 epochs
Knee (short budget) | 95.14 (0.18) | 0.004 (3e-4) | 150 epochs
#### 5.3.3 BERTLARGE Pre-training
We pretrain BERTLARGE on the Wikipedia+BooksCorpus dataset with the LAMB
optimizer (You et al., 2019). BERTLARGE has around 330 million parameters and
the pre-
training is divided into two phases with different sequence lengths. The first
phase consists of 90% steps with sequence length of 128 and the second phase
consists of the remaining 10% steps with sequence length of 512 (Devlin et al.
(2018)). We used a batch size of 16384 in both phases of training (we used
the open-source implementation at
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT).
We use the same training budget of 31250 steps mentioned in You et al.
(2019). We also train the model on a shortened training budget of two-thirds
of the original steps (20854 steps).
Since large batch training requires learning rate warmup (see Goyal et al.
(2017)), we incorporate it into the Knee schedule by first doing a warmup of
10% as suggested in (You et al., 2019) followed by the explore-exploit phases.
We used an explore of 50% of the total steps available for both phases of BERT
training. For baseline, we use the warmup (10%) + linear decay (90%) schedule
(You et al., 2019; Devlin et al., 2018). The pre-trained models are evaluated
on the SQuAD v1.1 (Rajpurkar et al., 2016) dataset by fine-tuning on the
dataset for 2 epochs. See Table 9 for the results. For the full budget run,
Knee schedule improves the baseline by 0.2%, while for the reduced budget we
achieved similar fine-tuning accuracy as baseline. The baseline schedule
achieves a much lower accuracy with shorter budget training, showing the
efficacy of Knee schedule. BERT pre-training is extremely compute expensive
and takes around 47 hours on 64 V100 GPUs (3008 V100 GPU-hrs) on cloud VMs.
The reduced budget amounts to a saving of approximately 1002 V100 GPU-hours!
Table 9: BERTLARGE results. We report the pre-training train loss, and the test F1 accuracy on SQuAD v1.1 after fine-tuning. See Figure 9 in Appendix B for training curves.
LR Schedule | F1 score on SQuAD v1.1 | Training loss | Total Training Steps
---|---|---|---
Knee | 91.51 | 1.248 | 31250
Baseline (You et al., 2019) | 91.34 | - | 31250
Baseline (short budget) | 90.64 | 1.336 | 20854
Knee (short budget) | 91.29 | 1.275 | 20854
#### 5.3.4 Machine Translation on Transformer Network with WMT’14 and IWSLT
In the second NLP task, we train the Transformer (base model) (Vaswani et al.,
2017) on the IWSLT’14 (De-En) (Cettolo et al., 2014) and WMT’14 (En-De) (Bojar
et al., 2014) datasets with the RAdam (Liu et al., 2019) optimizer.
##### WMT’14 (EN-DE):
We use the default implementation provided by the fairseq package (Ott et
al., 2019; https://github.com/pytorch/fairseq). We train the WMT’14 (EN-DE)
dataset on the TransformerBASE (Vaswani et al., 2017) model, which has around
86 million parameters, and use the RAdam (Liu et al., 2019) optimizer with
$\beta_{1}$ of 0.9 and $\beta_{2}$ of 0.999. Label-smoothed cross entropy was
used as the objective function with a label smoothing of 0.1. A dropout of
0.1, clipping norm of 25 and weight decay of $1e^{-4}$ are used. Each
training batch contains approximately 30000 tokens.
The baseline schedule uses a linear decay for 70 epochs (Liu et al., 2019).
With Knee schedule, we trained with the original budget of 70 epochs, as well
as a reduced budget of 30 epochs. We used 50 and 25 explore epochs for the two
runs, respectively, and a seed learning rate of $3e^{-4}$ for both Knee
schedule and baseline. In all cases we use the model checkpoint with least
loss on the validation set for computing BLEU scores on the test set. Table
10 shows the training and validation perplexities and the test BLEU scores,
averaged over 3 runs. Knee schedule
improves the test BLEU score of baseline in the full budget run by 0.24
points. In the shorter budget run, Knee schedule matches the test accuracy of
the baseline while taking $57\%$ less training time (a saving of 80 V100 GPU-
hours!). See Figure 10 in Appendix B for training curves.
##### IWSLT’14 (DE-EN):
For IWSLT’14 (DE-EN) we use the same configuration as WMT’14 (EN-DE), except
for a dropout of 0.3, following fairseq’s out-of-the-box implementation. Each
training batch contains approximately 4000 tokens. For Knee schedule, we
choose explore as 30 epochs for short budget runs and 40 epochs for full
budget runs.
The baseline schedule uses a linear decay for 50 epochs (Liu et al., 2019).
With Knee schedule, we trained with the original budget of 50 epochs, as well
as a reduced budget of 35 epochs. We used 40 and 30 explore epochs for the two
runs, respectively, and a seed learning rate of $3e^{-4}$ for both Knee
schedule and baseline. In all cases we use the model checkpoint with least
loss on the validation set for computing BLEU scores on the test set. Knee
schedule improves the baseline test BLEU score by 0.56 points in the full
budget run. In the shorter budget run, Knee schedule matches the test accuracy
of the baseline schedule while taking $30\%$ less training time. See Figure 11
in Appendix B for training curves.
Table 10: Results for WMT’14 (EN-DE) on Transformer networks. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report mean (stdev) over 3 runs.
LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Baseline | 27.29 (0.06) | 3.87 (0.017) | 4.89 (0.02) | 70
Knee | 27.53 (0.12) | 3.89 (0.017) | 4.87 (0.006) | 70
Knee (short budget) | 27.28 (0.17) | 4.31 (0.02) | 4.92 (0.007) | 30
Table 11: Training perplexity, validation perplexity and test BLEU scores for IWSLT on Transformer networks. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.
LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Baseline | 34.97 (0.035) | 3.36 (0.001) | 4.91 (0.035) | 50
Knee | 35.53 (0.06) | 3.00 (0.044) | 4.86 (0.02) | 50
Knee (short budget) | 35.08 (0.12) | 3.58 (0.049) | 4.90 (0.063) | 35
#### 5.3.5 SQuAD-v1.1 fine-tuning on BERTBASE
We also evaluate Knee schedule on the task of fine-tuning the BERTBASE model
(Devlin et al., 2018) on SQuAD v1.1 (Rajpurkar et al., 2016) with the Adam
(Kingma and Ba, 2014b) optimizer (we used the implementation at
https://github.com/huggingface/transformers). BERT fine-tuning is prone to
overfitting because of the huge model size compared to the small fine-tuning
dataset, and is typically run for only a few epochs. For baseline we use the
linear decay schedule mentioned in Devlin et al. (2018). We use a seed
learning rate of $3e^{-5}$ and train for 2 epochs. For Knee schedule, we train
the network with 1 explore epoch with the same seed learning rate of
$3e^{-5}$. Table 12 shows our results over 3 runs. We achieve a mean EM score
of 81.4, compared to baseline’s 80.9, a 0.5% absolute improvement. We don’t do
a short budget run for this example, as the full budget is just 2 epochs.
Please refer to Figure 14 in Appendix B for the training loss, test accuracy
and learning rate curves.
Table 12: SQuAD fine-tuning on BERTBASE. We report the average training loss and average test EM, F1 scores over 3 runs.
LR Schedule | EM | F1 | Train Loss | Training Epochs
---|---|---|---|---
Baseline | 80.89 (0.15) | 88.38 (0.032) | 1.0003 (0.004) | 2
Knee schedule | 81.38 (0.02) | 88.66 (0.045) | 1.003 (0.002) | 2
#### 5.3.6 State of the Art Result
To further demonstrate the effectiveness of Knee schedule, we took a recent
high-performing model, Cutoff (Shen et al., 2020; code at
https://github.com/dinghanshen/Cutoff), which had reported state-of-the-art
accuracy on the IWSLT’14 (DE-EN) dataset. They reported a BLEU score of 37.6
when trained with an inverse square root learning rate schedule for 100
epochs, with the first 6000 steps allocated for warmup.
the model with our Knee schedule, and achieved a new SOTA BLEU score of 37.78
(an absolute increase of 0.18). See Table 13 for the BLEU scores, training and
validation perplexities.
We also show that Knee schedule can train the model in 30% less training time
(70 epochs), while achieving a slightly better accuracy of 37.66 BLEU score
compared to the 100 epoch baseline. The baseline schedule, when run for 70
epochs, achieves a much worse accuracy of 37.31.
For both the full budget (100 epochs) and the short budget (70 epochs) Knee
runs, we choose 50% of the total training epochs as explore epochs. We also
perform warmup for the same number of steps as baseline. For all runs (Knee
and baseline), we report the BLEU score obtained by averaging the last 5
checkpoints and computing on the test set. See Figure 12 and 13 in Appendix B
for training curves.
Table 13: Training perplexity, validation perplexity and test BLEU scores for IWSLT’14 DE-EN on Cutoff. The test BLEU scores are computed by averaging the last 5 checkpoints.
LR Schedule | Test BLEU Score | Train Perplexity | Validation Perplexity | Training Epochs
---|---|---|---|---
Inv. Sqrt | 37.60 | 3.46 | 4.24 | 100
Knee | 37.78 | 3.29 | 4.13 | 100
Inv. Sqrt (short budget) | 37.31 | 3.76 | 4.29 | 70
Knee (short budget) | 37.66 | 3.48 | 4.18 | 70
### 5.4 Hypothesis Validation with Knee schedule on Language Tasks
For validating our hypothesis on the density of wide minima vs narrow minima,
we did multiple experiments on vision tasks, most of which were discussed in
Section 3. To summarize, in Figures 3 and 4, we showed that for Cifar-10 on
Resnet-18, as the number of explore steps increases, the distributions of
minima sharpness and test accuracy tighten and shift towards wider minima and
better accuracy, respectively.
Table 14: IWSLT’14 (DE-EN) on the Transformer network trained with the Knee schedule. The explore duration is varied, while keeping the total training budget fixed at 50 epochs. We report averages over 3 runs.
Explore Epochs | Test BLEU score | Training Perplexity
---|---|---
5 | 34.93 | 3.29
10 | 35.02 | 3.22
15 | 35.08 | 3.11
20 | 35.10 | 3.08
25 | 35.23 | 3.02
30 | 35.28 | 2.99
40 | 35.53 | 3.00
We now perform similar experiments on the IWSLT’14 German to English dataset
(Cettolo et al., 2014) trained on Transformer networks (Vaswani et al., 2017)
to demonstrate that our hypothesis holds even on a completely different NLP
dataset and network architecture. We train with the Knee schedule for a total
budget of 50 epochs with an explore learning rate of $3e^{-4}$, while varying
the number of explore epochs. As shown in Table 14, the test BLEU score
increases as we increase the number of explore epochs. Further, among multiple
trials, a 20 epoch explore run had a high BLEU score of 35.29, suggesting that
the run got lucky. Thus, these results on the IWSLT’14 (DE-EN) dataset add
more evidence to the wide-minima density hypothesis.
### 5.5 Learning Rate Sensitivity for Knee schedule
We performed sensitivity analysis of the starting learning rate, referred to
as the seed learning rate, for Knee schedule. We trained the Cifar-10 dataset
on Resnet-18 with the Knee schedule for a shortened budget of 150 epochs,
starting at different seed learning rates. For each experiment, we do a simple
linear search to find the best explore duration. The test accuracies and
optimal explore durations for the different seed learning rate choices are
shown in Table 15. As shown, the seed learning rate can impact the final
accuracy,
but Knee schedule is not highly sensitive to it. In fact, we can achieve the
target accuracy of 95.1 with multiple seed learning rates of 0.05, 0.075,
0.0875 and 0.115, as compared to the original seed learning rate of 0.1, by
tuning the number of explore epochs.
Another interesting observation is that the optimal explore duration varies
inversely with the seed learning rate. Since a larger learning rate has a
higher probability of escaping narrow minima than a smaller one, it will, on
average, require fewer steps to land in a wide minimum. Thus, larger learning
rates can explore faster and spend more time in the exploit phase to descend
deeper into the wide minimum. This observation is thus consistent with our
hypothesis and further corroborates it.
We also note that by tuning both the seed learning rate and the explore
duration, we can attain the twin objectives of higher accuracy and shorter
training time – e.g., here we are able to achieve an accuracy of 95.34 in 150
epochs (seed learning rate 0.075), compared to 95.1 achieved by the baseline
schedule in 200 epochs.
Table 15: Seed learning rate sensitivity analysis. Cifar-10 on Resnet-18 trained for 150 epochs with Knee schedule. We vary the seed learning rate and explore epochs to get the best test accuracy for each setting. We report averages over 3 runs.
Seed LR | Test Accuracy | Optimal Explore Epochs
---|---|---
0.03 | 95.07 | 120
0.05 | 95.12 | 120
0.0625 | 95.15 | 120
0.075 | 95.34 | 100
0.0875 | 95.22 | 100
0.1 | 95.14 | 100
0.115 | 95.20 | 60
0.125 | 95.06 | 60
0.15 | 95.04 | 30
## 6 Conclusions
In this paper, we make an observation that an initial explore phase with a
high learning rate is essential for good generalization of DNNs. Further, we
find that a minimum explore duration is required even if the training loss
stops improving much earlier. We explain this observation via our hypothesis
that in the DNN loss landscape, the density of wide minima is significantly
lower than that of narrow minima. Motivated by this hypothesis, we present an
Explore-Exploit based learning rate schedule, called the Knee schedule. We do
extensive evaluation of Knee schedule on multiple models and datasets. In all
experiments, the Knee schedule outperforms prior hand-tuned baselines,
including achieving SOTA test accuracies, when trained with the original
training budget, and achieves the same test accuracy as the baseline when
trained with a much shorter budget.
## 7 Acknowledgement
We would like to thank Sanjith Athlur for his help in setting up the VM
cluster for large training runs and Harshay Shah for helpful discussions on
minima width computation.
## References
* Arora et al. (2018) Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. _arXiv preprint arXiv:1802.05296_ , 2018.
* Baldassi et al. (2019) Carlo Baldassi, Fabrizio Pittorino, and Riccardo Zecchina. Shaping the learning landscape in neural networks around wide flat minima. _CoRR_ , abs/1905.07833, 2019. URL http://arxiv.org/abs/1905.07833.
* Baldassi et al. (2020) Carlo Baldassi, Fabrizio Pittorino, and Riccardo Zecchina. Shaping the learning landscape in neural networks around wide flat minima. _Proceedings of the National Academy of Sciences_ , 117(1):161–170, 2020.
* Bojar et al. (2014) Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleš Tamchyna. Findings of the 2014 workshop on statistical machine translation. In _Proceedings of the Ninth Workshop on Statistical Machine Translation_ , pages 12–58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W14/W14-3302.
* Byrd et al. (2003) Richard H. Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. _SIAM Journal on Scientific Computing_ , 16, 02 2003. doi: 10.1137/0916069.
* Cettolo et al. (2014) Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th iwslt evaluation campaign, iwslt 2014. In _Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam_ , page 57, 2014.
* Chaudhari et al. (2019) Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, and Riccardo Zecchina. Entropy-sgd: Biasing gradient descent into wide valleys. _Journal of Statistical Mechanics: Theory and Experiment_ , 2019(12):124018, 2019.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ , 2018.
* Dinh et al. (2017) Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In _Proceedings of the 34th International Conference on Machine Learning-Volume 70_ , pages 1019–1028. JMLR. org, 2017.
* Draxler et al. (2018) Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht. Essentially no barriers in neural network energy landscape. In _International conference on machine learning_ , pages 1309–1318. PMLR, 2018.
* Freeman and Bruna (2016) C Daniel Freeman and Joan Bruna. Topology and geometry of half-rectified network optimization. _arXiv preprint arXiv:1611.01540_ , 2016.
* Garipov et al. (2018) Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. _arXiv preprint arXiv:1802.10026_ , 2018.
* Golatkar et al. (2019) Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Time matters in regularizing deep networks: Weight decay and data augmentation affect early learning dynamics, matter little near convergence. _arXiv preprint arXiv:1905.13277_ , 2019.
* Goyal et al. (2017) Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. _arXiv preprint arXiv:1706.02677_ , 2017.
* Guiroy et al. (2019) Simon Guiroy, Vikas Verma, and Christopher Pal. Towards understanding generalization in gradient-based meta-learning. _arXiv preprint arXiv:1907.07287_ , 2019.
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 770–778, 2016.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. _Neural Computation_ , 9(1):1–42, 1997.
* Jastrzebski et al. (2017) Stanisław Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors influencing minima in sgd. _arXiv preprint arXiv:1711.04623_ , 2017.
* Jastrzebski et al. (2019) Stanisław Jastrzebski, Zachary Kenton, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amost Storkey. On the relation between the sharpest directions of DNN loss and the SGD step length. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=SkgEaj05t7.
* Kawaguchi (2016) Kenji Kawaguchi. Deep learning without poor local minima. In _Advances in neural information processing systems_ , pages 586–594, 2016.
* Keskar et al. (2016) Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. _arXiv preprint arXiv:1609.04836_ , 2016.
* Kingma and Ba (2014a) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014a.
* Kingma and Ba (2014b) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014b.
* Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* Kunstner et al. (2019) Frederik Kunstner, Lukas Balles, and Philipp Hennig. Limitations of the empirical fisher approximation. _arXiv preprint arXiv:1905.12558_ , 2019.
* Li et al. (2019) Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. _arXiv preprint arXiv:1907.04595_ , 2019.
* Liu et al. (2019) Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. _arXiv preprint arXiv:1908.03265_ , 2019.
* Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_ , 2016.
* McCandlish et al. (2018) Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. _arXiv preprint arXiv:1812.06162_ , 2018.
* Ott et al. (2019) Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In _Proceedings of NAACL-HLT 2019: Demonstrations_ , 2019.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. _arXiv preprint arXiv:1606.05250_ , 2016.
* Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. _International journal of computer vision_ , 115(3):211–252, 2015.
* Sagun et al. (2017) Levent Sagun, Utku Evci, V Ugur Güney, Yann Dauphin, and Léon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. iclr 2018 workshop contribution. _arXiv preprint arXiv:1706.04454_ , 2017.
* Shallue et al. (2018) Christopher J Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. _arXiv preprint arXiv:1811.03600_ , 2018.
* Shen et al. (2020) Dinghan Shen, Mingzhi Zheng, Yelong Shen, Yanru Qu, and Weizhu Chen. A simple but tough-to-beat data augmentation approach for natural language understanding and generation. _arXiv preprint arXiv:2009.13818_ , 2020.
* Smith (2017) Leslie N Smith. Cyclical learning rates for training neural networks. In _2017 IEEE Winter Conference on Applications of Computer Vision (WACV)_ , pages 464–472. IEEE, 2017.
* Smith (2018) Leslie N Smith. A disciplined approach to neural network hyper-parameters: Part 1–learning rate, batch size, momentum, and weight decay. _arXiv preprint arXiv:1803.09820_ , 2018.
* Sokol and Park (2018) Piotr A Sokol and Il Memming Park. Information geometry of orthogonal initializations and training. _arXiv preprint arXiv:1810.03785_ , 2018.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008, 2017.
* Wang et al. (2018) Huan Wang, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. Identifying generalization properties in neural networks. _arXiv preprint arXiv:1809.07402_ , 2018.
* Wu et al. (2018) Lei Wu, Chao Ma, and E Weinan. How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective. In _Advances in Neural Information Processing Systems_ , pages 8279–8288, 2018.
* Xie et al. (2020) Zeke Xie, Issei Sato, and Masashi Sugiyama. A diffusion theory for deep learning dynamics: Stochastic gradient descent exponentially favors flat minima. _arXiv e-prints_ , pages arXiv–2002, 2020.
* Yoshida and Miyato (2017) Yuichi Yoshida and Takeru Miyato. Spectral norm regularization for improving the generalizability of deep learning. _arXiv preprint arXiv:1705.10941_ , 2017.
* You et al. (2019) Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. In _International Conference on Learning Representations_ , 2019.
## A Comparisons with Other Baseline Learning Rate Schedules
In this section we compare Knee schedule against several other learning rate
schedules – one-cycle, linear decay and cosine decay.
One-Cycle: The one-cycle learning rate schedule was proposed in Smith (2018)
(also see Smith (2017)). This schedule first chooses a maximum learning rate
based on a learning rate range test. The learning rate range test starts from
a small learning rate and keeps increasing the learning rate until the loss
starts exploding (see Figure 5). Smith (2018) suggests that the maximum
learning rate should be chosen a bit before the minimum, in a region where
the loss is still decreasing. There is some subjectivity in making this
choice, although several blogs and libraries (see e.g.
https://towardsdatascience.com/finding-good-learning-rate-and-the-one-cycle-policy-7159fe1db5d6,
https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html,
https://docs.fast.ai/callbacks.lr_finder.html and
https://docs.fast.ai/callbacks.one_cycle.html) suggest using a learning rate
one order lower than the one at the minimum. We go with this choice for all
our runs.
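A minimal sketch of the range test itself is given below (it assumes a
PyTorch model, loss and optimizer; the exponential growth factor and all
names are our choices, and the returned (lr, loss) pairs would be plotted as
in Figure 5):

```python
def lr_range_test(model, loss_fn, loader, optimizer,
                  lr_min=1e-7, lr_max=10.0, steps=100):
    mult = (lr_max / lr_min) ** (1.0 / steps)  # exponential LR growth per step
    lr, history = lr_min, []
    data_iter = iter(loader)  # assumes the loader yields at least `steps` batches
    for _ in range(steps):
        x, y = next(data_iter)
        for group in optimizer.param_groups:
            group["lr"] = lr
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        history.append((lr, loss.item()))  # later plotted as loss vs. lr
        lr *= mult
    return history
```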
Once the maximum learning rate is chosen, the one-cycle schedule proceeds as
follows. The learning rate starts at a specified fraction of the maximum
learning rate (see div_factor in
https://docs.fast.ai/callbacks.one_cycle.html; we chose the fraction to be
0.1 in our experiments) and is increased linearly to the maximum learning
rate for 45 percent of the training budget, then decreased linearly for the
next 45 percent. For the final 10 percent, the learning rate is reduced by a
large factor (we chose a factor of 10). We used an open-source implementation
(https://github.com/nachiket273/One_Cycle_Policy) for our experiments.
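The following sketch captures the variant used here (a 45/45/10 split with a
factor-10 final reduction; parameter names are ours and details differ across
implementations):

```python
def one_cycle_lr(step, total_steps, max_lr, div_factor=10.0, final_factor=10.0):
    low = max_lr / div_factor  # starting fraction of the maximum learning rate
    up_end, down_end = 0.45 * total_steps, 0.90 * total_steps
    if step < up_end:  # linear increase to max_lr over the first 45%
        return low + (max_lr - low) * step / up_end
    if step < down_end:  # linear decrease back to `low` over the next 45%
        return max_lr - (max_lr - low) * (step - up_end) / (down_end - up_end)
    return low / final_factor  # final 10%: reduced by a large factor
```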
Figure 5: Learning rate range test for selecting the maximum learning rate.
Panels: (a) CIFAR-10, (b) IWSLT’14 DE-EN, (c) WMT’14 EN-DE, (d) ImageNet. A
good choice is a learning rate a bit before the minimum, in a region where
the loss is still decreasing.
Linear Decay: The linear decay learning rate schedule simply decays the
learning rate linearly to zero starting from a seed learning rate.
Cosine Decay: The cosine decay learning rate schedule decays the learning rate
to zero following a cosine curve, starting from a seed learning rate.
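For reference, minimal sketches of these two baselines (function names are
ours):

```python
import math

def linear_decay_lr(step, total_steps, seed_lr):
    # Linear decay from seed_lr to zero.
    return seed_lr * (1.0 - step / total_steps)

def cosine_decay_lr(step, total_steps, seed_lr):
    # Cosine decay from seed_lr to zero.
    return seed_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
```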
### A.1 Cifar-10
Figure 5(a) shows the learning rate range test for Cifar-10 with the
Resnet-18 network. The minimum occurs around a learning rate of 0.09, and we
choose $9e^{-3}$ as the maximum learning rate for the One-Cycle runs. For the
linear and cosine decay schedules we start with a seed learning rate of 0.1,
as used in
the standard baselines. The training loss and test accuracy for the various
schedules are shown in Table 16 for the full budget runs (200 epochs), and in
Table 17 for the short budget runs (150 epochs).
Table 16: Cifar-10 on Resnet-18 full budget training (200 epochs): training loss and test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.
LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 94.08 (0.07) | 0.0041 (6e-5)
Cosine Decay | 95.23 (0.11) | 0.0023 (9e-5)
Linear Decay | 95.18 (0.15) | 0.0018 (7e-5)
Knee schedule | 95.26 (0.11) | 0.0023 (1e-4)
Table 17: Cifar-10 on Resnet-18 short budget training (150 epochs): training loss and test accuracy for more learning rate schedules. We report the mean and standard deviation over 7 runs.
LR Schedule | Test Accuracy | Train Loss
---|---|---
One-Cycle | 93.84 (0.082) | 0.0052 (7e-5)
Cosine Decay | 95.06 (0.16) | 0.0030 (2e-4)
Linear Decay | 95.02 (0.10) | 0.0021 (1e-4)
Knee schedule | 95.14 (0.18) | 0.0044 (3e-4)
### A.2 ImageNet
Figure 5(d) shows the learning rate range test for ImageNet with the
Resnet-50 network. The minimum occurs around a learning rate of 2.16, and we
choose $0.216$ as the maximum learning rate for the One-Cycle runs. For the
linear and cosine decay schedules we start with a seed learning rate of 0.1,
as used in the standard
baselines. The training loss and test accuracy for the various schedules are
shown in Table 18 for the full budget runs (90 epochs), and in Table 19 for
the short budget runs (50 epochs).
Table 18: ImageNet with ResNet-50 full budget training (90 epochs): training loss, test top-1 and test top-5 accuracy for more learning rate schedules. We report the mean and standard deviation over 3 runs.
LR Schedule | Test Top-1 | Test Top-5 | Train Loss (av)
---|---|---|---
One Cycle | 75.39 (0.137) | 92.56 (0.040) | 0.96 (0.003)
Cosine Decay | 76.41 (0.212) | 93.28 (0.066) | 0.80 (0.002)
Linear decay | 76.54 (0.155) | 93.21 (0.051) | 0.75 (0.001)
Knee schedule | 76.71 (0.097) | 93.32 (0.031) | 0.79 (0.001)
Table 19: ImageNet with ResNet-50 short budget training (50 epochs): training loss, test top-1 and test top-5 accuracy for more learning rate schedules. We report the mean and standard deviation over 3 runs.
LR Schedule | Test Top-1 | Test Top-5 | Train Loss (av)
---|---|---|---
One Cycle | 75.36 (0.096) | 92.53 (0.079) | 1.033 (0.004)
Cosine Decay | 75.71 (0.116) | 92.81 (0.033) | 0.96 (0.002)
Linear decay | 75.82 (0.080) | 92.84 (0.036) | 0.91 (0.002)
Knee schedule | 75.92 (0.11) | 92.90 (0.085) | 0.90 (0.003)
### A.3 WMT’14 EN-DE
Figure 5(c) shows the learning rate range test for WMT’14 EN-DE on the
Transformer network. The minimum occurs near $1.25e^{-3}$. For the maximum
learning rate, we choose $2.5e^{-4}$ for the default one-cycle policy. For
the linear and cosine decay schedules we start with a seed learning rate of
$3e^{-4}$, as used in the standard baselines. The training perplexity,
validation perplexity and BLEU scores for the various schedules are shown in
Table 20 for the full budget runs (70 epochs), and in Table 21 for the short
budget runs (30 epochs).
Table 20: WMT’14 (EN-DE) on Transformer networks, full budget training (70 epochs): training perplexity, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.
LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 27.19 (0.081) | 3.96 (0.014) | 4.95 (0.013)
Cosine Decay | 27.35 (0.09) | 3.87 (0.011) | 4.91 (0.008)
Linear Decay | 27.29 (0.06) | 3.87 (0.017) | 4.89 (0.02)
Knee schedule | 27.53 (0.12) | 3.89 (0.017) | 4.87 (0.006)
Table 21: WMT’14 (EN-DE) on Transformer networks, short budget training (30 epochs): training perplexity, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.
LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 26.80 (0.2) | 4.38 (0.017) | 5.02 (0.007)
Cosine Decay | 26.95 (0.23) | 4.32 (0.013) | 4.99 (0.011)
Linear Decay | 26.77 (0.12) | 4.36 (0.092) | 5.02 (0.01)
Knee schedule | 27.28 (0.17) | 4.31 (0.02) | 4.92 (0.007)
### A.4 IWSLT’14 DE-EN
Figure 5(b) shows the learning rate range test for IWSLT on the Transformer
network. The minimum occurs near $2.5e^{-3}$. For the maximum learning rate,
we choose $2.5e^{-4}$ for the default one-cycle policy. For the linear and
cosine decay schedules we start with a seed learning rate of $3e^{-4}$, as
used in the standard baselines. The training perplexity, validation
perplexity and BLEU scores for the various schedules are shown in Table 22
for the full budget runs (50 epochs), and in Table 23 for the short budget
runs (35 epochs).
Table 22: IWSLT’14 (DE-EN) on Transformer networks, full budget training (50 epochs): training perplexity, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.
LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 34.77 (0.064) | 3.68 (0.009) | 4.97 (0.010)
Cosine Decay | 35.21 (0.063) | 3.08 (0.004) | 4.88 (0.014)
Linear Decay | 34.97 (0.035) | 3.36 (0.001) | 4.92 (0.035)
Knee schedule | 35.53 (0.06) | 3.00 (0.044) | 4.86 (0.02)
Table 23: IWSLT’14 (DE-EN) on Transformer networks, short budget training (35 epochs): training perplexity, validation perplexity and test BLEU scores for more learning rate schedules. The test BLEU scores are computed on the checkpoint with the best validation perplexity. We report the mean and standard deviation over 3 runs.
LR Schedule | Test BLEU Score | Train ppl | Validation ppl
---|---|---|---
One-Cycle | 34.43 (0.26) | 3.98 (0.028) | 5.09 (0.017)
Cosine Decay | 34.46 (0.33) | 3.86 (0.131) | 5.06 (0.106)
Linear Decay | 34.16 (0.28) | 4.11 (0.092) | 5.14 (0.066)
Knee schedule | 35.08 (0.12) | 3.58 (0.063) | 4.90 (0.049)
### A.5 SQuAD-v1.1 finetuning with BERTBASE
We choose $1e^{-5}$ as the maximum learning rate for the One-Cycle runs, as
the minimum occurs close to $1e^{-4}$. For the linear and cosine decays we
start with a seed learning rate of $3e^{-5}$, as used in the standard
baselines. Table 24 shows the average training loss and the average test EM
and F1 scores for the various schedules. We did not do a short budget
training for this dataset, as the full budget is just 2 epochs.
Table 24: SQuAD-v1.1 fine-tuning on BERTBASE for more learning rate schedules. We report the average training loss and average test EM, F1 scores over 3 runs.
LR Schedule | EM (av) | F1 (av) | Train Loss (av)
---|---|---|---
One Cycle | 79.9 (0.17) | 87.8 (0.091) | 1.062 (0.003)
Cosine Decay | 81.31 (0.07) | 88.61 (0.040) | 0.999 (0.003)
Linear decay | 80.89 (0.15) | 88.38 (0.042) | 1.0003 (0.004)
Knee schedule | 81.38 (0.02) | 88.66 (0.045) | 1.003 (0.002)
## B Detailed Plots
Figure 6: ImageNet on Resnet-50 trained with Momentum. Shown are the training
loss, top-1/top-5 test accuracy and learning rate as a function of epochs, for
the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is
split into 3 parts to permit higher fidelity in the y-axis range.
Figure 7: Cifar-10 on Resnet-18 trained with Momentum. Shown are the training
loss, test accuracy and learning rate as a function of epochs, for the
baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is split
into 3 parts to permit higher fidelity in the y-axis range.
Figure 8: BERTLARGE pretraining for batch size of 16k with LAMB optimizer for
the short budget runs. Shown are the training loss and learning rate as a
function of steps, for the baseline scheme short budget (orange) vs the Knee
schedule scheme short budget (blue). The plot is split into 2 parts to give a
clear picture of the two phases of training Devlin et al. (2018). Note that
even though the training loss curves look similar for the two runs, we see a
significant gap in F1 score obtained when we fine-tune the model checkpoints
on SQuAD-v1.1 Rajpurkar et al. (2016). See Table 9 for details.
LR Schedule | F1 - Trial 1 | F1 - Trial 2 | F1 - Trial 3 | F1 avg. | F1 max
---|---|---|---|---|---
Baseline (short budget) | 90.39 | 90.64 | 90.53 | 90.52 | 90.64
Knee schedule ( short budget ) | 91.22 | 91.29 | 91.18 | 91.23 | 91.29
Knee schedule ( full budget ) | 91.45 | 91.41 | 91.51 | 91.46 | 91.51
Figure 9: SQuAD fine-tuning on BERTLARGE. We report F1 scores for 3 different
trials as well as the maximum and average values.
Figure 10: WMT’14 (EN-DE) on TransformerBASE network trained with RAdam. Shown
are the training perplexity, validation perplexity and learning rate as a
function of epochs, for the baseline scheme (orange) vs the Knee schedule
scheme (blue). The plot is split into 3 parts to permit higher fidelity in the
y-axis range.
Figure 11: IWSLT’14 (DE-EN) on TransformerBASE network trained with RAdam.
Shown are the training perplexity, validation perplexity and learning rate as
a function of epochs, for the baseline scheme (orange) vs the Knee schedule
scheme (blue). The plot is split into 3 parts to permit higher fidelity in the
y-axis range.
Figure 12: IWSLT’14 (DE-EN) on the SOTA model Cutoff(Shen et al., 2020),
trained with Adam. Shown are the training perplexity, validation perplexity
and learning rate as a function of epochs, for the baseline scheme (orange) vs
the Knee schedule scheme (blue).
Figure 13: IWSLT’14 (DE-EN) on the SOTA model Cutoff(Shen et al., 2020),
trained with Adam with a reduced training budget of 70 epochs. Shown are the
training perplexity, validation perplexity and learning rate as a function of
epochs, for the baseline scheme (orange) vs the Knee schedule scheme (blue).
Figure 14: SQuAD-v1.1 fine-tuning on BERTBASE trained with Adam. Shown are the
training loss, test EM score, and learning rate as a function of epochs, for
the baseline scheme (orange) vs the Knee schedule scheme (blue). The plot is
split into 2 parts to permit higher fidelity in the y-axis range. It is clear
that with Knee schedule the network starts to overfit after the 2nd epoch:
the training loss continues to go down, but generalization suffers. We saw
similar behavior with different seeds, and thus need to train with Knee
schedule for only 2 epochs.
Original Article

Abbreviations: STDWI, Spike-Timing-Dependent Weight Inference; LIF, Leaky
Integrate and Fire; RDD, Regression Discontinuity Design.

Correspondence: Nasir Ahmad
# Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight
Inference
Nasir Ahmad, Luca Ambrogioni and Marcel van Gerven
Department of Artificial Intelligence, Donders Institute for Brain, Cognition
and Behaviour, Radboud University, Nijmegen, the Netherlands
###### Abstract
We propose a solution to the weight transport problem, which questions the
biological plausibility of the backpropagation algorithm. We derive our method
based upon a theoretical analysis of the (approximate) dynamics of leaky
integrate-and-fire neurons. Our results demonstrate that the use of spike
timing alone outcompetes existing biologically plausible methods for synaptic
weight inference in spiking neural network models. Furthermore, our proposed
method is more flexible, being applicable to any spiking neuron model, is
conservative in how many parameters are required for implementation and can be
deployed in an online-fashion with minimal computational overhead. These
features, together with its biological plausibility, make it an attractive
mechanism for weight inference at single synapses.
###### keywords:
Weight Transport Problem, Biologically Plausible Learning, Spiking Neural
Network
## 1 Introduction
Backpropagation of error is a successful approach for training rate-based
neural network models [1, 2]. However, since its inception it has been
criticised for its lack of biological plausibility [3, 4]. In particular, in
order to update individual synaptic connections weights within a network,
information is required about distant error signals and the weights of other
synaptic connections of the network – information which is not available
locally to the synapse. However, backpropagation’s flexibility, unrelenting
success in application-based research, and most significantly its capacity for
modelling and reproducing neural response statistics have contributed to a
recent re-examination of its potential role and plausibility in neural systems
[5, 6, 7, 8, 9].
A number of attempts have been made to explain mechanisms by which
backpropagation’s implausibilities can be addressed. These can be divided into
methods which propose alternative implementations of backpropagation, namely
energy-based and dynamical systems methods which converge to backpropagation
of error [10, 11, 12], for an overview see [6], and methods which show that
components which are considered implausible can be approximated using
alternative and plausible computations [13, 14, 15, 16, 17]. We focus on the
latter approaches in this study.
Figure 1: The weight transport problem in backpropagation of error. A. The
computations involved in the forward-pass of an example feedforward neural
network model. B. The backpropagation of error method. Specifically, the
derivative of the loss function can be computed with respect to each weight
matrix in our example network. Observe that the derivative of the loss
function with respect to a weight matrix ($W_{1}$) deep in the network depends
upon the weight matrices in the higher layers ($W_{2}$). C. Backpropagation of
error requires a copy of the weights of the forward network.
One particularly difficult-to-reconcile component of backpropagation is the
need to propagate error signals backwards through a network (see Fig. 1). This
requires that the backward propagating error signals between layers of neurons
are weighted according to the forward synaptic connection weights, leading to a
situation in which feedback weight matrices are copies of the feedforward
matrices. This duplication of weights has been identified as particularly
troubling in terms of a plausible biological implementation and is known as
the weight transport problem [3].
Early attempts to address the weight transport problem included proposals that
the feedback weights can converge to the values of the feedforward weights by
applying the same weight changes to both matrices during training (see [13]).
This explanation was criticised for simply shifting the problem from
transporting weights to transporting weight changes in a network. More
recently, feedback-alignment was proposed as a method to completely sidestep
the need for weight symmetry [14]. It was empirically shown that by having
fixed random feedback weight matrices between the layers of a network, the
feedforward weight matrices are modified by backpropagation such that they
come into alignment with the feedback matrices. This approach can also be
implemented with a randomly weighted direct feedback error to every layer
(direct feedback alignment, [18]), a method which has also been applied in
spiking neural networks [19]. Though such an error distribution process is
biologically plausible, the effectiveness of the approach is limited to
shallow networks and the accuracy of deep networks appears to suffer severely
under such a protocol [20]. Beyond static feedback matrices, matrices with
arbitrary magnitudes but alignment of the signs of weights (i.e. positive
feedforward weights are mirrored with positive feedback weights and vice
versa) show greatly improved performance over feedback alignment [21, 22].
However, propagating the sign of feedback weights is itself a transport
problem and performance with this method is less than optimal.
Recently, methods have been proposed by which the symmetric feedback weight
matrices could be learned by biologically plausible methods (using local only
information). Specifically, methods have emerged which carry out a process of
synaptic weight inference [15, 16]. In essence the backwards synaptic
connections (which would propagate the error) attempt to infer the feedforward
weight between two neurons by observation of their activity alone. This is a
process in which, based upon the activity patterns of a pair of neurons, a
feedback synapse can infer (and thereby copy) the strength of the feedforward
synapse. Such a method was successfully applied in a rate-based neural network
by Akrout et al. [15] (hereafter referred to as the Akrout method). This
method makes use of inference phases during which neurons are randomly
stimulated and their activation is correlated in order to infer synaptic
weights. Alternative rate-based methods are available though we do not
consider them given their non-locality [17]. A more recent proposal [16]
considers a spiking neural network model and makes use of the spiking
threshold of neurons to implement a quasi-experimental, causal inference
method known as regression discontinuity design (we hereafter refer to this
method as RDD, also see [23]). This method similarly uses inference phases in
between training epochs in order to infer the backward synaptic weight
matrices.
These inference methods have proven successful in inferring the feedforward
synaptic weights for use in the feedback weight matrices but also suffer from
a number of drawbacks. First, the Akrout method operates on firing rates and
requires a demeaning process which is carried out in batches. This demeaning
and batching process is particularly troublesome when applied to spiking
networks where the learning must therefore be carried out offline and firing
rates measured by aggregating spikes at specific intervals. In the RDD method,
weight inference requires a piece-wise linear fitting process in order to
infer the synaptic weights. This procedure requires the storage of four times
more parameters per synapse (than just the synaptic weight), a second state
variable per neuron and a high computational complexity per update. Though
these components and the calculation protocols might be possible for a neuron
to compute, they incur a significant computational cost.
To overcome these issues, we propose a spike-timing-dependent weight inference
(STDWI) mechanism for solving the weight transport problem in spiking neural
networks. Our method is motivated by analysis of the time-to-spike of various
neuron models under the influence of an incident spike. In order to estimate
this in a biologically plausible and computationally efficient manner, we make
use of local information for this computation, in particular just the spike
times of the pre- and post-synaptic neurons. We show that under a number of
conditions our method outperforms both the Akrout and RDD methods when applied
to weight estimation in spiking neural network models. We also compare our
method to an optimal Bayesian update rule for an integrate-and-fire neuron
with stochastic input. Our rule proves effective as an approximation of this
update rule. Furthermore, for networks in which the neurons emit action
potentials at random times (i.e. without a correlation structure), our
learning rule can analytically be shown to approximate a rate-based learning
rule similar to the Akrout method. Finally, the update rule we propose is
computationally cheap and can be applied in an online fashion.
## 2 Methods
To address the weight transport problem, it has been proposed that network
weights can be inferred from activity [15, 16]. We can formulate this problem
as follows: Consider two neurons, labelled “A” and “B”, embedded in a larger
network structure. Amongst other connections, there exists a ‘forward’
synaptic connection from neuron A to neuron B. Therefore, the activity of
neuron B is dependent upon some internal dynamics as well as the network
activity as a whole, including the activity of neuron A, via incoming synaptic
connections. Let us also now consider a pseudo synaptic connection from B to
A, a connection meant to carry error information backward through the network
(note that this work and prior work do not describe this synapse as having an
impact upon the network activity during inference). According to the
backpropagation of error algorithm, the optimal value of this synaptic
connection weight should be equivalent to the weight of the forward synapse
from A to B. How the forward synaptic weight can be copied to the backward
synaptic connection is the problem at hand.
Here we address how to infer the forward synaptic weight value at the
backwards synapse given knowledge of the spike times (indexed $k$) of the
neurons A and B ($t_{A}^{k}$ and $t_{B}^{k}$ respectively) and by accounting
for some average impact from all other synapses. We derive a computationally
simple and biologically plausible method which, by use of appropriate
approximations, achieves this aim and could be employed at the synaptic level
to learn feedback weights for error propagation.
### 2.1 Derivation of the weight inference method
In order to derive our proposed weight inference rule, we analyse a simplified
deterministic leaky integrate-and-fire (LIF) neuron with instantaneous
synaptic inputs from a single source and drift (where drift is a placeholder
for the unknown impact of all other incident synapses) and then consider the
impact of noise upon this model.
A deterministic LIF neuron with drift $\mu$ has voltage dynamics
$\tau_{m}\frac{dv(t)}{dt}=v_{r}-v(t)+\mu\,.$ (1)
In the absence of any input spikes, this equation can be solved, for an
arbitrary initial condition $v_{0}$, at time $t_{0}$, yielding
$v(t)=(v_{r}+\mu)\left(1-e^{-(t-t_{0})/\tau_{m}}\right)+v_{0}e^{-(t-t_{0})/\tau_{m}}\,.$
(2)
With this expression we can now consider two cases, one in which the neuron is
not stimulated by any incoming spikes from neuron $j$ and, beginning at
voltage $v_{0}$ at time $t_{0}$, it spikes with some time delay $\hat{T}$
(purely under the influence of drift). The other case is one in which the
neuron received an additional instantaneous voltage injection of magnitude $w$
at time $t_{0}$ (i.e. a spike arrives and stimulates the neuron) and it spikes
with a different time delay, $T$ (such that the second case involves
replacement of $v_{0}$ with $v_{0}+w$). These cases can be subtracted at
threshold in order to give an expression for $w$, the stimulation magnitude,
of the form
$w={e^{T/\tau_{m}}}(v_{r}+\mu-
v_{0})\left(e^{-T/\tau_{m}}-e^{-\hat{T}/\tau_{m}}\right).$ (3)
Equation (3) provides an exact solution for determining the amount of
instantaneous voltage ($w$) injected to a neuron at some time $t_{0}$ given
that its spike time was modified from an expected time $\hat{T}$ to the time
$T$. This is under the assumption that other than the instantaneous voltage
injection and a background drift, there are no other inputs to the neuron
during this time.
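As a quick numerical check of Eq. (3), the following Python sketch computes the two spike delays analytically from Eq. (2) and recovers the injected weight exactly. The parameter values here are illustrative assumptions and are not taken from our simulations.

```python
# Minimal check of Eq. (3): recover an instantaneous voltage injection w from
# the change in spike timing of a deterministic LIF neuron, Eqs. (1)-(2).
# All parameter values are illustrative assumptions.
import numpy as np

tau_m, mu, v_r = 20e-3, 1.2, 0.0     # membrane time constant (s), drift, rest voltage
theta, v0, w_true = 1.0, 0.2, 0.15   # threshold, initial voltage, true injection

def time_to_spike(v_init):
    """Time for Eq. (2) to reach threshold theta from v_init with no further input."""
    return tau_m * np.log((v_r + mu - v_init) / (v_r + mu - theta))

T_hat = time_to_spike(v0)            # expected spike delay without the incident spike
T = time_to_spike(v0 + w_true)       # spike delay with the injection w at t_0

# Eq. (3): recover the injected weight from the two spike delays.
w_est = np.exp(T / tau_m) * (v_r + mu - v0) * (np.exp(-T / tau_m) - np.exp(-T_hat / tau_m))
print(f"true w = {w_true:.4f}, recovered w = {w_est:.4f}")
```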
We wish to make use of this deterministic solution for application to noisy
conditions. In particular, when the background drift is considered as due to
input from many other neurons it would inherently be noisy (unlike our
derivation above). However, the current expression includes a number of terms
which are highly susceptible to noise. First, the exponential term,
$e^{T/\tau_{m}}$ is a strictly positive function which is exponential in the
time that the neuron took to spike. If we consider a case in which $T$ is
noisy, this term scales our noise exponentially but never changes sign.
Second, the expected time to spike, $\hat{T}$ is difficult to estimate in a
noisy neuron. However, this term is crucial for our ability to accurately
identify positive and negative weights and it must, therefore, be
approximated.
First we consider the exponential term $e^{T/\tau_{m}}$. Though this term
might (in the noiseless case) aid in producing a highly accurate weight
estimation, in the face of noise it introduces a significant error.
Furthermore, in the noiseless case (where a single estimate of the weight is
exact), its main function is to scale the estimated weight according to the
time taken to spike. This, in essence, reweighs cases in which the neuron
dynamics take excess time to reach threshold – due to beginning far from
threshold (low $v_{0}$), having a small drift, and/or when the incident spike
arrives from a synapse with a small/negative weight. This is therefore a
mechanism to ensure that, for cases in which our system setup results in
dynamics which take some time to reach threshold, the weight scale is treated
sensibly. However, in the coming derivation we intend to sample over multiple
instances of such an estimation in a noisy system, in which the ‘time to
spike’ signal is unreliable. Given that this term is heavily influenced by
noise, we therefore test the removal of this term from our weight estimation
and instead propose weight estimation without this scaling. We empirically
find this approach successful.
Thus, our approach to (approximate) weight estimation can be described as
$\tilde{w}=C(v_{r}+\mu-
v_{0})\left(e^{-T/\tau_{m}}-e^{-\hat{T}/\tau_{m}}\right)$ (4)
where $\tilde{w}$ is an approximate estimation of the weight (ignoring a
rescaling based upon time to spike) and we have introduced a general constant
$C$ to allow linear rescaling of weight estimates.
Next, we wish to approximate $\hat{T}$ in the face of noisy samples. For this
purpose, let us average our estimate of the weight over $K$ observations. In
particular, let us consider a set of samples $T^{k}$, indexed by $k$, each of
which correspond to the time to spike given that the ‘output’ neuron started
from some initial voltage $v_{0}^{k}$ at the moment of an incident spike. For
each of these samples, there exists an “expected” time from incident spike to
neuron spike, $\hat{T}^{k}$, which corresponds to when the neuron would have
spiked if not for this incident spike. Taking an average of the weight
estimate over these $K$ samples yields an estimated weight
$\tilde{w}^{K}=\frac{C}{K}\sum_{k=0}^{K}\left(v_{r}+\mu-
v_{0}^{k}\right)\left(e^{-T^{k}/\tau_{m}}-e^{-\hat{T}^{k}/\tau_{m}}\right)$
(5)
with $K$ indicating the number of observations/samples taken. If we assume
that our $K$ samples are chosen independently of the incident activity (i.e.
the incident spikes are random), then the values of the initial voltage,
$v_{0}^{k}$, and expected times to spike, $\hat{T}^{k}$, are both independent
of the sampling process (and of $w^{k}$ and $T^{k}$). Therefore, these can be
independently averaged and, hence, replaced with $\langle v_{0}\rangle$ and
$\langle\hat{T}\rangle$. Thus, we arrive at an expression
$\tilde{w}^{K}=\frac{D}{K}\sum_{k=0}^{K}\left(e^{-T^{k}/\tau_{m}}-e^{-\langle\hat{T}\rangle/\tau_{m}}\right)\,,$
(6)
where $D=C(v_{r}+\mu-\langle v_{0}\rangle)$ combines the various constants and
scales our estimate of the weights.
If we now finally consider how we ought to update our estimate of $w$ when we
receive an additional $(K+1)$-th sample, we arrive at
$\Delta\tilde{w}=\tilde{w}^{K+1}-\tilde{w}^{K}=\frac{1}{K+1}\left(D\left(e^{-T^{K+1}/\tau_{m}}-e^{-\langle\hat{T}\rangle/\tau_{m}}\right)-\tilde{w}^{K}\right).$
(7)
Inspecting our derived update rule, the first exponential term in Eq. (7) is
exponential in the time since an incident spike arrived. Given this, it is
equivalent to sampling a trace which continuously measures the (fast
exponential) instantaneous firing rate of the neuron from which the incident
spike is arriving. The second exponential term is exponential in the average
time since incident spikes ‘should’ arrive if the weight had been zero,
$\langle\hat{T}\rangle$, a measure of the incident spike rate. This term can
be approximated as a sampling of a slow exponential measure of the average
rate of the neuron from which incident spikes arrive. Finally, the constant
term $D=C(v_{r}+\mu-\langle v_{0}\rangle)$, has a factor of the drift term
$\mu$. In our model assumption, this drift is background input aside from the
synapse under inference and affects the baseline time to spike of our output
unit. This drift therefore scales up with the output neuron’s average firing
rate. With these observations, we can make appropriate replacements in order
to describe a local spike-timing-dependent weight inference rule.
### 2.2 Spike-timing-dependent weight inference
We propose a spike-timing-dependent rule for the purpose of weight inference
(STDWI) which can be deployed for parallel and online updates with minimal
computational complexity. Our method maintains multiple online estimates of
neuron firing rates through eligibility traces [24, 25] and makes use of these
for synaptic weight estimation. In particular, each neuron (indexed $j$)
maintains a fast trace $\epsilon_{j}^{f}(t)$ and a slow trace
$\epsilon_{j}^{s}(t)$. The dynamics of the fast and slow traces are calculated
for each neuron as
$\tau_{f}\frac{d\epsilon_{j}^{f}(t)}{dt}=-\epsilon_{j}^{f}(t)+S_{j}(t)\quad\text{and}\quad\tau_{s}\frac{d\epsilon_{j}^{s}(t)}{dt}=-\epsilon_{j}^{s}(t)+\frac{\tau_{f}}{\tau_{s}}S_{j}(t)\,,$
(8)
where $\tau_{f}$ and $\tau_{s}$ are the decay constants of the fast and slow
traces respectively, and $S_{j}(t)$ is the spike train of the $j$th neuron.
These traces are computed during simulation in a time-stepping method with the
analytic (exponential) solution to these traces computed at every timestep.
This spike train is computed from the set of $k$ spike times of the $j$th
neuron, $t^{k}_{j}$, such that $S_{j}(t)=\sum_{k}\delta(t-t^{k}_{j})$, where
$\delta(\cdot)$ is the Dirac delta function. Note that these two traces have
an equal area (across time) when they both start with an initial value of
zero, due to the normalization of the slow trace's spike train by the scaling
factor $\tau_{f}/\tau_{s}$. This property ensures that both eligibility traces act to
measure the firing rate of the neurons with the same scale. Having defined
these eligibility traces, we define our weight inference rule as
$\frac{dw_{ji}}{dt}=\alpha
S_{i}(t)\left(\epsilon_{i}^{s}(t)\left(\epsilon_{j}^{f}(t)-\epsilon_{j}^{s}(t)\right)-\eta
w_{ji}\right)\,,$ (9)
where this rule describes inference of the weight of the forward synapse at
the backward synapse (from neuron $i$ to neuron $j$), $w_{ji}$, with $\alpha$
as the learning rate and $\eta$ as the relative level of weight decay (both
constant hyper-parameters). This learning rule and the fast and slow measures
of the neuron’s firing rates are inspired by the synaptic inference rule
derived in Section 2.1. Note that though this rule is given as a differential
equation, since updates are gated by neuron $i$’s spike times, it is
implemented as updating the synaptic weights such that $w_{ji}\leftarrow
w_{ji}+dw_{ji}/dt$ at every timepoint where neuron $i$ spikes.
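As an illustration, the rule can be stepped in discrete time as in the following minimal Python sketch. The per-spike trace increments follow the convention in which each spike adds 1 to the fast trace and $\tau_{f}/\tau_{s}$ to the slow trace (so that the traces have equal area); the parameter values and the placeholder Poisson spike trains are illustrative assumptions only.

```python
# Minimal discrete-time sketch of the STDWI rule, Eqs. (8)-(9).
# Parameter values are illustrative assumptions, not those of the paper.
import numpy as np

rng = np.random.default_rng(0)
dt, T_total = 1e-3, 50.0
tau_f, tau_s = 20e-3, 200e-3
alpha, eta = 5e-5, 0.1

n_pre, n_post = 10, 10          # pre/post relative to the *backward* synapse
w_est = np.zeros((n_post, n_pre))
eps_f = np.zeros(n_post)        # fast traces of post-synaptic neurons
eps_s_post = np.zeros(n_post)   # slow traces of post-synaptic neurons
eps_s_pre = np.zeros(n_pre)     # slow traces of pre-synaptic neurons

decay_f, decay_s = np.exp(-dt / tau_f), np.exp(-dt / tau_s)

for step in range(int(T_total / dt)):
    # Placeholder spike trains; in practice these come from the simulated network.
    s_post = rng.random(n_post) < 10.0 * dt   # ~10 Hz Poisson
    s_pre = rng.random(n_pre) < 10.0 * dt

    # Exponential trace updates: analytic decay plus per-spike increments.
    eps_f = eps_f * decay_f + s_post
    eps_s_post = eps_s_post * decay_s + (tau_f / tau_s) * s_post
    eps_s_pre = eps_s_pre * decay_s + (tau_f / tau_s) * s_pre

    # Eq. (9): update w_ji only at time steps where pre-synaptic neuron i spikes.
    for i in np.where(s_pre)[0]:
        w_est[:, i] += alpha * (eps_s_pre[i] * (eps_f - eps_s_post) - eta * w_est[:, i])
```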
Figure 2: A) Illustration of the difference between our derived method for
weight inference by analysis of a deterministic LIF neuron (left) versus our
proposed STDWI method (right) which uses a fast trace to measure the
instantaneous firing rate (the first exponential term in Eq. (7)) and a slow
trace to measure the average firing rate (second exponential term in Eq. (7)).
B) Assuming regular neuron firing conditions, our method can be interpreted as
an STDP rule of the form shown inset, where $T$ is the post minus pre-synaptic
neuron spike time. Note pre and post-synaptic are termed relative to the
backward synaptic connection.
The formulation for the weight update given in Eq. (7) and our proposed STDWI
rule given in Eq. (9) have corresponding terms, see Figure 2A. Both of these
formulations include updates which occur only upon our pre-synaptic neuron
spikes. Note we use the terms post-synaptic/pre-synaptic relative to the
backward synaptic connection for application to the weight transport problem.
In our approximation, we replace the first exponential term of Eq. (7) (an
exponential measure of the time since the post-synaptic neuron’s last spike)
with a fast timescale measure of the post-synaptic neuron’s firing rate (the
fast trace) and we use a slow timescale measure of the post-synaptic neuron’s
firing rate (the slow trace) to approximate the second exponential term (which
computes a trace tracking the average post-synaptic neuron’s firing rate).
Finally, we include a slow measure of the pre-synaptic neuron’s firing rate as
a multiplicative factor, which is intended to capture the dependence of the
weight estimate upon the pre-synaptic neuron drift. Figure 2A depicts how
updates are calculated upon pre-synaptic neuron spikes for both the
deterministic LIF and STDWI update, highlighting both the similarities and key
differences between these implementations.
Note that the learning rule being proposed here relates in a curious form to
traditional Spike-Timing Dependent Plasticity (STDP) rules. In particular, the
sign of the weight update is determined by the spike-timings and firing-rates
of the pre and post-synaptic units. In general, if we assume some fixed
regular firing rate of the post-synaptic neuron, then depending upon the
spike-time of the pre-synaptic neuron relative to this regular firing, we
encounter positive or negative weight estimation. This rule therefore appears
in a mirrored form to the commonly cited STDP observations [26]; see Figure
2B.
### 2.3 Spiking neuron model
For simulations in this study, we consider neurons with membrane leakage and
conductance-based synaptic kernels whose membrane voltage dynamics can be
described by
$\tau_{m}\frac{dv_{i}(t)}{dt}=(v_{r}-v_{i}(t))+\frac{g_{D}}{g_{L}}\bigg{(}\sum_{j}w_{ij}\kappa_{j}(t)-v_{i}(t)\bigg{)}\,,$
(10)
where $\tau_{m}$ is the leakage time constant, $v_{r}$ is the rest voltage,
$g_{D}$ and $g_{L}$ are the dendritic and somatic leakage conductances,
respectively, $w_{ij}$ is the weighting of the forward synaptic connection
from the $j$th neuron to the $i$th neuron and $\kappa_{j}$ describes a
filtered form of the $j$th neuron’s spike train. The form of the synaptic
filtering kernel is taken as a double exponential with a fast rise and slow
decay, such that
$\kappa_{j}(t)=\frac{1}{\tau_{2}-\tau_{1}}\sum_{k}H(t-t^{k}_{j})\left(e^{-\frac{t-t^{k}_{j}}{\tau_{2}}}-e^{-\frac{t-t^{k}_{j}}{\tau_{1}}}\right)\,,$
(11)
where $\tau_{1}$ and $\tau_{2}$ are the timescales of the fast rise and slow
decay, taken to be 3ms and 10ms respectively, and $H(\cdot)$ is the Heaviside
step function.
When the membrane voltage, $v_{i}(t)$, reaches a threshold, $\theta$, an
action potential is recorded and propagated. The membrane voltage is
thereafter reset to a reset voltage $v_{\text{reset}}$. For the simulations in
this study, we do not implement a refractory period explicitly. This is not
expected to cause much deviation from the analysis in our low firing-rate
regime.
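A minimal sketch of one forward-Euler step of this neuron model is given below; the threshold, reset and conductance-ratio values are illustrative assumptions, not the values used in our simulations.

```python
# Sketch of one forward-Euler step of the neuron model in Eqs. (10)-(11).
# The threshold, reset and conductance-ratio values are illustrative assumptions.
import numpy as np

dt = 1e-4
tau_m, v_r = 20e-3, 0.0
gD_over_gL = 1.0                 # ratio g_D / g_L
theta, v_reset = 1.0, 0.0
tau_1, tau_2 = 3e-3, 10e-3       # kernel rise and decay timescales (as in the text)

def kappa(t_since_spikes):
    """Eq. (11): double-exponential kernel summed over past spikes (t - t_j^k >= 0)."""
    return np.sum(np.exp(-t_since_spikes / tau_2)
                  - np.exp(-t_since_spikes / tau_1)) / (tau_2 - tau_1)

def euler_step(v, syn_input):
    """One Euler step of Eq. (10), where syn_input = sum_j w_ij * kappa_j(t)."""
    dv = (v_r - v) + gD_over_gL * (syn_input - v)
    v = v + dt * dv / tau_m
    if v >= theta:               # spike and reset; no explicit refractory period
        return v_reset, True
    return v, False
```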
### 2.4 Comparison against alternative weight inference methods
In Section 3.2, we compare our method (STDWI) to alternative methods (RDD and
Akrout methods) proposed for the local inference of weights in a network. The
inference is carried out in networks composed of a group of spiking input
neurons connected via a single forward weight matrix to a group of spiking
output neurons. The spiking neuron dynamics are equivalent to those used in the
work which introduced RDD as a causal weight inference method [16], see
Section 2.3. The network is stimulated by selectively exciting input neurons,
collecting the responses of input and output neurons, and applying the set of
techniques to these neural data.
During simulation, some percent of the input neurons are randomly sampled
every 100ms and these are excited with background random Poisson distributed
spike trains (with a fixed positive synaptic connection weight from
stimulation nodes to the input neurons). Every 100ms the input neurons being
stimulated are re-sampled. During this stimulation, non-selected neurons are
left unstimulated with zero input. The STDWI and RDD methods are applied in a
continuous form, paying no attention to the 100ms stimulation periods. The
Akrout method was proposed for rate-based neural networks and makes use of a
batch-wise de-meaning of firing rates. Therefore, the 100ms stimulation
periods are considered as individual ‘stimuli’ for the Akrout method, and the
firing rates computed for each of these ‘stimuli’. These individual stimuli
are then grouped into batches (batch-size chosen by grid search) and used to
update the inferred weight according to the Akrout method.
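For concreteness, the stimulation protocol can be sketched as follows; the stimulated fraction and background Poisson rate shown here are illustrative assumptions.

```python
# Sketch of the stimulation protocol: every 100 ms a random subset of the input
# neurons is selected and driven by Poisson spike trains. The stimulated
# fraction and background rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt, n_inputs, frac = 1e-3, 100, 0.2
stim_rate = 50.0                        # Hz, assumed Poisson stimulation rate
steps_per_period = int(0.1 / dt)        # 100 ms stimulation periods

def stimulation_spikes(n_steps):
    spikes = np.zeros((n_steps, n_inputs), dtype=bool)
    mask = np.zeros(n_inputs, dtype=bool)
    for t in range(n_steps):
        if t % steps_per_period == 0:   # re-sample the stimulated subset
            mask[:] = False
            mask[rng.choice(n_inputs, int(frac * n_inputs), replace=False)] = True
        spikes[t] = mask & (rng.random(n_inputs) < stim_rate * dt)
    return spikes
```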
The spiking dynamics with stimulation, as described above, were simulated for
2500s. During weight inference, these 2500s of dynamics were then looped ten
times in order to produce a total of 25,000s of training time. This looping was
necessary due to memory and storage constraints and can be interpreted as ten
epochs of training.
All methods were trained with a learning rate of $5\times 10^{-5}$. This
learning rate was chosen by iteratively reducing the learning rate until
stable weight inference was clear for all methods. Conclusions are (and should
be) drawn based upon asymptotic performance and not speed given that this
hyperparameter was not tuned on a method-by-method basis. Free parameters were
optimized by measurement of sign accuracy and Pearson correlation (best
average performance) using a grid search carried out with a single seed of the
network simulation. Selected parameters were then applied for inference to
five other network seeds, and results collected. See Appendix B for the grid
search results.
## 3 Results
To validate our approach, we compare it against a Bayes-optimal method for a
simple neuron model that affords an analytical solution. Furthermore, we
compare it to two state-of-the-art synaptic weight inference methods for
estimation of the connectivity of simulated spiking neural networks (see
models described in Section 2.3). Code to reproduce results is available at
https://github.com/nasiryahm/STDWI.
### 3.1 Comparison of STDWI to a Bayesian optimal method
To verify the validity of our proposed STDWI rule and demonstrate its
flexibility, we compare it against a Bayes-optimal method for inferring
synaptic inputs to a neuron with internal state modelled by a Wiener process
(Figure 3). Unlike a stochastic LIF neuron model, this model has a tractable
hitting-time analysis and thereby we can form a Bayesian update rule for
estimating the size of a synaptic input given a subsequent output neuron spike
time. A detailed derivation of the Bayes-optimal method is provided in
Appendix A.
Figure 3: Weight inference accuracy of Bayesian and STDWI approaches applied
to a pure Wiener process with jumps. Panels A and B show scatter plots of the
true and inferred weights for the Bayesian and STDWI approach, respectively,
at the end of the training time ($t=50s$). Panels C and D show how Pearson
correlation and sign alignment between the true and inferred weights evolve
through the training process. The standard deviation of the measures across 10
random network seeds are shown as envelopes about the curves.
Since the Bayesian update rule occurs upon every pre-synaptic neuron spike and
is based upon knowledge of when the last post-synaptic spike occurred (rather
than knowledge of all past post-synaptic spikes), it would be an improper
comparison to test the optimal Bayesian method against our full STDWI rule
(which makes use of all previous spikes in its eligibility traces). Therefore,
to ensure a fair comparison we modify our STDWI rule (Eq. 9) to use only
single spikes. To do this we replaced the slow eligibility traces,
$\epsilon_{j}^{s}(t)$, with a constant (optimally set as the average of the
fast traces), and replaced the fast trace, $\epsilon_{j}^{f}(t)$, with a term
which is exponential in the time since the last spike alone (rather than a
decaying trace of all past post-synaptic spikes). This modification is
equivalent to Eq. 7 if we treat the second exponential terms as a constant and
use an arbitrary learning rate.
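A minimal sketch of this single-spike variant is given below; the function name, time constant and learning rate are illustrative assumptions.

```python
# Sketch of the single-spike STDWI variant used for the Bayesian comparison:
# the slow trace is replaced by a constant c (set to the average of the fast
# trace) and the fast trace by an exponential in the time since the last spike
# of the other neuron. Names and the learning rate are illustrative assumptions.
import numpy as np

def single_spike_update(w, t_spike, t_last_other, c, tau_m=20e-3, lr=1e-2):
    """Update on a pre-synaptic spike at t_spike, given the other neuron's last
    spike time t_last_other (pre/post are relative to the backward synapse)."""
    fast = np.exp(-(t_spike - t_last_other) / tau_m) if t_last_other <= t_spike else 0.0
    return w + lr * ((fast - c) - w)
```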
We repeatedly simulated stochastic neurons, each with a single forward
synaptic input connection but with varying synaptic connection strengths
across simulations. We simulated the systems for 50s and thereafter used the
network activity in this time period for synaptic weight inference. We
repeated this analysis for synaptic weight strengths over a wide range to
attempt inference of many different synaptic strengths. Figure 3 shows various
measures of the similarity between the true and inferred jump widths for this
simulation when using either the Bayesian or our derived method for weight
inference. Both the scatter plots and learning curves show that the STDWI
method closely matches the Bayes-optimal results, supporting the theoretical
soundness of our approach.
### 3.2 Comparison of STDWI to alternative weight inference methods
Figure 4: Weight inference accuracy comparison between the RDD, Akrout, and
STDWI approaches for a network of LIF neurons with conductance-based synapses.
Panels A, B and C show scatter plots of the true and inferred weights for each
method at the end of training for a single network. Panels D and E show
Pearson correlation and sign alignment between the inferred and true weights.
Solid lines show the mean of these measures across ten randomly seeded
networks and the shaded areas show the standard deviation across these
networks. Panel F shows the convergence of the inferred weights for each
method. The 75% largest inferred weights (by magnitude) were collected,
individually normalized to their final values, and their average plotted with
the standard deviation shown, as before, by the shaded area.
We also compared our proposed STDWI approach to two existing methods for
synaptic weight inference. In particular, we compare against the RDD and
Akrout methods. Details of both methods are provided in Appendix B.
To simulate a neural network model which is amenable to all of these weight
inference methods, we use the same neural network models and setup as that
described in [16]. This network is composed of LIF neurons with kernel-
filtered, conductance-based synaptic inputs. We simulate two-layer network
models with an input layer of 100 LIF neurons fully connected to an output
layer of 10 LIF neurons. The synaptic weight matrix connecting these is drawn
from a normal distribution with a small but positive mean. It is this weight
matrix which must be inferred by the range of methods.
The network is stimulated by selectively exciting input neurons.
Some percentage of the input neurons are randomly sampled every 100ms and
these are excited with background Poisson distributed input spike trains (with
a fixed positive synaptic connection weight from stimulation nodes to the
neurons). Every 100ms the input neurons being stimulated are re-sampled.
During this stimulation process, non-selected neurons are left unstimulated
with zero input.
Figure 4 shows the result of weight inference with the range of methods
discussed above for networks in which 20% of input neurons are simultaneously
stimulated (sparse random stimulation). Scatter plots of the inferred vs true
weights (see Panels 4A-C) show the strength of the STDWI method, which
produces a tighter distribution of weights than competitors. Note that the
scale of the synaptic weights inferred differs from the true weights for all
methods, relating to the approximate nature of the methods. In practice, a
rescaling could be applied to improve the correspondence to the scale of the
true weights, though none of our measures were sensitive to the inferred weight
scale and therefore this was disregarded.
Panels 4D and E show the evolution of the Pearson correlation and sign
alignment between the true and inferred weights through training for the range
of methods. As can be seen, our proposed STDWI method outperforms both the RDD
and Akrout methods, though the difference in the Pearson correlation of all
three methods is small. Note that RDD outperforms Akrout in terms of sign
accuracy.
Finally, Panel 4F shows the successful convergence of inferred weights for all
three methods. This plot shows the normalized weights (normalized through a
division by the final converged weight value) of the top 75% largest magnitude
network weights. These weights had a relatively unambiguous sign, hence their
selection. This plot is provided to support the argument that the
hyperparameter selections made were sufficient for stable inference by these
methods.
### 3.3 The impact of stimulation protocol on weight inference
It is also instructive to investigate how different stimulation protocols
affect weight inference. To this end, and in contrast to the sparse
stimulation in the previous section, we assume that all input neurons are
stimulated (dense stimulation). Furthermore, we investigate how input timing
correlations affect weight inference. Since input neurons are stimulated by
random Poisson spike trains, we can create correlation between individual
Poisson spike trains by a thinning process (via a Single Process Interaction
Model, see [27]). Figure 5 shows results for this dense stimulation regime.
Figure 5: Weight inference accuracy comparison between the RDD, Akrout, and
STDWI approaches for a network of LIF neurons with conductance-based synapses
when all input neurons are stimulated. Panels A, B and C show scatter plots of
the true and inferred weights for each method at the end of training for a
single network. Panels D and E show Pearson correlation and sign alignment
between the inferred and true weights in the uncorrelated spiking case during
training. Panels F and G show the final results (post-training) under varying
input spike-time correlation. Solid lines (points) show the mean of these
measures across five randomly seeded networks and the shaded areas (error
bars) show the standard deviation across these networks.
Scatter plots of the true vs inferred weights (see Panels 5A-C) again show
that STDWI produces a tighter distribution of weights than its competitors.
This highlights the smaller impact of stimulation density upon the STDWI
inference method compared with the Akrout or RDD methods. These scatter plots
show inferred weights for the dense stimulation case with zero correlation in
timing between the various input stimulation spike-trains.
Panels 5D and E show that the STDWI method remains most successful (as
measured by the Pearson correlation and sign alignment mean) when compared
with RDD and Akrout methods under dense stimulation. However, the Akrout
method benefits significantly from dense stimulation (whereas the RDD method
appears to suffer somewhat). Thus, the RDD method does not systematically
outperform the Akrout method as previously reported (cf. Panels 4D and 5E).
Panels 5F and G demonstrate how weight inference is affected by input timing
correlations. STDWI remains largely successful, however as input spike timing
correlation increases, the RDD method performs favourably. This may be
expected as unlike the STDWI and Akrout methods, the RDD method compares only
events which are near-threshold to establish synaptic weights. This filtering
of events by which inference is done may be favourable in the regime of high
input spike timing correlation, though the benefit only exists for some
parameter range.
## 4 Discussion
Our results demonstrate the efficacy of STDWI for synaptic weight inference
across a range of network models and stimulation protocols. We have shown that
our approach successfully approximates Bayes-optimal results in a simple
neuron model and outperforms existing methods for weight inference. Our
results also highlight the attention that must be paid to the employed
stimulation protocols since the efficacy of different synaptic weight
inference methods has been shown to crucially depend on these.
Existing methods cannot be so indiscriminately applied to arbitrary neuron
models. For example, the RDD method requires a neuron model which has a second
state variable mimicking the membrane voltage. This state variable should
relax to the same value as the membrane voltage when the neuron is not spiking
and otherwise should reflect how “driven” the neuron is when it spikes.
However, such a state variable is not necessarily constructable for an
arbitrary neuron model. In contrast, STDWI makes use of spike timing alone and
is therefore agnostic to the neuron dynamics being simulated.
In our analyses, we reported both on Pearson correlation and sign accuracy.
STDWI systematically outperformed the alternative approaches on both measures
for a range of parameters. One exception is in the investigation of timing-
based correlations (Figure 5F and G), in which RDD outperformed the other
methods for inference in the case of the medium correlation regime. This
suggests a particular regime might favour the RDD method; however, its failure
in other regimes suggests that the current formulation of RDD for analysis
about a spike-threshold may not be most effective.
It is also important to realize that the number of variables stored per
synaptic connection is greater for RDD compared to either the STDWI or Akrout
methods. RDD requires a fitting process using data-points corresponding to
events within which the neuron’s membrane voltage was near the spiking
threshold. Aside from the selection of events close to threshold, the RDD
method also uses four variables per synaptic connection to characterise a
piece-wise linear function (with linear functions fit above and below the
spiking threshold). By comparison, STDWI uses two variables for the fast and
slow eligibility traces of each neuron and the Akrout method uses two
variables storing the firing rate and mean firing rate of each unit within a
batch.
To derive our learning rule, we made use of a deterministic analysis of a LIF
neuron and considered the spike times of a single input neuron. Our
deterministic analyses later required approximations in order to remove terms
which are highly affected by noise. Ideally we would instead have carried out
a stochastic process analysis for a LIF neuron. The particular stochastic
process to which our leaky neuron model corresponds is known as an Ornstein-
Uhlenbeck (OU) process. Unfortunately a general analysis of the OU process
that describes when we ought to expect such a neuron to spike (the hitting
time) is non-trivial [28]. Nonetheless, the efficacy of our assumptions is
validated by the quality of our results. Furthermore, under a rate-based
analysis of our proposed STDWI rule, we can show a correspondence to the
Akrout rule (see Appendix C).
A limitation of our approach is that the inference process considers the spike
times of a single input neuron. Instead, a multivariate approach which would
take into account the spike-times of all input neurons to infer synaptic
weights could prove even more powerful and accurate for weight inference.
Indeed multivariate analyses, often making use of such multi-neuron spiking
information along with cross-correlation measures and statistical significance
testing, have been applied previously in approaches which aim to infer neural
circuit connectivity from neural data [29, 30, 31, 32]. These approaches,
however, make use of globally available network information and are not
concerned with whether this information is locally available at the synapse.
Instead, we took a simplified but powerful approach which could plausibly be
implemented at the single synapse level, providing a candidate solution to the
weight transport problem.
Concluding, we have shown that STDWI outperforms existing approaches for
solving the weight transport problem. Moreover, it is more flexible, being
capable of application to any spiking network data, while requiring minimal
computational overhead. The benefits of data efficiency and online computation
along with its computational simplicity and accuracy make STDWI a promising
biologically plausible mechanism for gradient-based learning in spiking neural
networks.
## acknowledgements
We thank Blake Richards and Jordan Guerguiev for their correspondence and for
providing us with the code they used for the RDD method.
## References
* LeCun et al. [2015] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May;521(7553):436–444.
* Schmidhuber [2014] Schmidhuber J. Deep Learning in Neural Networks: An Overview. ArXiv 2014 Apr;1404.7828.
* Grossberg [1987] Grossberg S. Competitive learning: From interactive activation to adaptive resonance. Cogn Sci 1987 Jan;11(1):23–63.
* Crick [1989] Crick F. The recent excitement about neural networks. Nature 1989 Jan;337(6203):129–132.
* Richards et al. [2019] Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, et al. A deep learning framework for neuroscience. Nat Neurosci 2019 Nov;22(11):1761–1770.
* Whittington and Bogacz [2019] Whittington JCR, Bogacz R. Theories of Error Back-Propagation in the Brain. Trends Cogn Sci 2019 Mar;23(3):235–250.
* Lillicrap and Santoro [2019] Lillicrap TP, Santoro A. Backpropagation through time and the brain. Curr Opin Neurobiol 2019 Apr;55:82–89.
* Yamins and DiCarlo [2016] Yamins DLK, DiCarlo JJ. Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 2016 Mar;19(3):356–365.
* Güçlü and van Gerven [2015] Güçlü U, van Gerven MAJ. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream. J Neurosci 2015 Jul;35(27):10005–10014.
* Whittington and Bogacz [2017] Whittington JCR, Bogacz R. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity. Neural Comput 2017 May;29(5):1229–1262.
* Guerguiev et al. [2017] Guerguiev J, Lillicrap TP, Richards BA. Towards deep learning with segregated dendrites. Elife 2017 Dec;6.
* Sacramento et al. [2018] Sacramento J, Ponte Costa R, Bengio Y, Senn W. Dendritic cortical microcircuits approximate the backpropagation algorithm. In: Bengio S, Wallach H, Larochelle H, Grauman K, Cesa-Bianchi N, Garnett R, editors. Advances in Neural Information Processing Systems 31 Curran Associates, Inc.; 2018.p. 8721–8732.
* Kolen and Pollack [1994] Kolen JF, Pollack JB. Backpropagation without weight transport. In: Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), vol. 3 IEEE; 1994. p. 1375–1380.
* Lillicrap et al. [2016] Lillicrap TP, Cownden D, Tweed DB, Akerman CJ. Random synaptic feedback weights support error backpropagation for deep learning. Nat Commun 2016 Nov;7:13276.
* Akrout et al. [2019] Akrout M, Wilson C, Humphreys P, Lillicrap T, Tweed DB. Deep learning without weight transport. In: Advances in Neural Information Processing Systems; 2019. p. 974–982.
* Guerguiev et al. [2019] Guerguiev J, Kording KP, Richards BA. Spike-based causal inference for weight alignment. ArXiv 2019 Oct;1910.01689.
* Kunin et al. [2020] Kunin D, Nayebi A, Sagastuy-Brena J, Ganguli S, Bloom J, Yamins DLK. Two Routes to Scalable Credit Assignment without Weight Symmetry. ArXiv 2020 Feb;2003.01513.
* Nøkland [2016] Nøkland A. Direct Feedback Alignment Provides Learning in Deep Neural Networks. ArXiv 2016 Sep;1609.01596.
* Samadi et al. [2017] Samadi A, Lillicrap TP, Tweed DB. Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights. Neural Comput 2017 Mar;29(3):578–602.
* Bartunov et al. [2018] Bartunov S, Santoro A, Richards B, Marris L, Hinton GE, Lillicrap T. Assessing the scalability of biologically-motivated deep learning algorithms and architectures. In: Advances in Neural Information Processing Systems; 2018. p. 9368–9378.
* Moskovitz et al. [2018] Moskovitz TH, Litwin-Kumar A, Abbott LF. Feedback alignment in deep convolutional networks. ArXiv 2018 Dec;1812.06488.
* Xiao et al. [2018] Xiao W, Chen H, Liao Q, Poggio T. Biologically-plausible learning algorithms can scale to large datasets. ArXiv 2018 Nov;1811.03567.
* Lansdell and Kording [2019] Lansdell BJ, Kording KP. Neural spiking for causal inference. BioRxiv 2019 Oct;p. 253351.
* Izhikevich [2007] Izhikevich EM. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb Cortex 2007 Oct;17(10):2443–2452.
* Gerstner et al. [2018] Gerstner W, Lehmann M, Liakoni V, Corneil D, Brea J. Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules. Front Neural Circuits 2018 Jul;12:53.
* Bi and Poo [1998] Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 1998 Dec;18(24):10464–10472.
* Kuhn et al. [2003] Kuhn A, Aertsen A, Rotter S. Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput 2003 Jan;15(1):67–101.
* Lipton and Kaushansky [2018] Lipton A, Kaushansky V. On the First Hitting Time Density of an Ornstein-Uhlenbeck Process. ArXiv 2018 Oct;1810.02390.
* Van Bussel et al. [2011] Van Bussel F, Kriener B, Timme M. Inferring synaptic connectivity from spatio-temporal spike patterns. Front Comput Neurosci 2011 Feb;5:3.
* Timme and Casadiego [2014] Timme M, Casadiego J. Revealing networks from dynamics: an introduction. J Phys A: Math Theor 2014 Aug;47(34):343001.
* Kobayashi et al. [2019] Kobayashi R, Kurita S, Kurth A, Kitano K, Mizuseki K, Diesmann M, et al. Reconstructing neuronal circuitry from parallel spike trains. Nat Commun 2019 Oct;10(1):4468.
* Gerhard et al. [2013] Gerhard F, Kispersky T, Gutierrez GJ, Marder E, Kramer M, Eden U. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone. PLoS Comput Biol 2013 Jul;9(7):e1003138.
* Morrison et al. [2008] Morrison A, Diesmann M, Gerstner W. Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 2008 Jun;98(6):459–478.
## Appendix A Bayesian weight estimation for a stochastic neuron model
As a method of verification of our proposed STDWI rule and an exhibition of
its flexibility, we compare it against an optimal Bayesian method for
inferring a single synaptic input to a neuron with internal state modelled by
Brownian motion with drift and diffusion (a Wiener process). Unlike a
stochastic leaky integrate and fire neuron model, this model has a tractable
hitting-time analysis and thereby we can form an optimal Bayesian update rule
for estimating the size of a synaptic input given a subsequent output neuron
spike time. This synaptic weight inference analysis for this simple neuron
model and its similarity to our STDWI rule is described in the following
section.
### A.1 Bayesian estimation of synaptic weights
We wish to estimate the weight of a synaptic connection given local-only
information. In particular, this involves estimating the weight of a synaptic
connection given the spike times of an input and output neuron (input and
output relative to the forward synaptic connection) as well as the output
neuron’s membrane voltages.
Constraining this further, let us estimate a synaptic connection weight, $w$,
between two neurons given a single input spike time, $t_{\text{in}}$, and the
first output spike time which follows this input spike, $t_{\text{out}}$ where
$t_{\text{out}}>t_{\text{in}}$. If we carry out all analysis relative to the
input spike time, $t_{\text{in}}$, we can define the key dependent factors.
First, the output neuron’s time to spike (the hitting time), following the
input neuron spike, is a key measure which we define as
$T=t_{\text{out}}-t_{\text{in}}$. The initial state of the output neuron is
also a determining factor in this analysis as it defines the distance to
threshold $\Delta$, which we elaborate on below. Given this setup and by
Bayes’ rule, we aim to compute
$p(w\mid T,\Delta)\propto p(T\mid w,\Delta)p(w)\,.$ (12)
The likelihood term $p(T\mid w,\Delta)$ can be computed through analysis of
the neural dynamics. To compute it, we must account for the impact of spikes
from all other input neurons. In general this is non-trivial. To simplify this
analysis, we consider the case of a non-leaky integrate-and-fire neuron driven
by random input.
### A.2 Stochastic neuron model
We consider a spiking neural network of neurons with membrane voltage under
the effect of Brownian motion. As such, changes in the membrane voltage,
$v(t)$, can be described by
$\frac{dv(t)}{dt}=I(t)\,,$ (13)
where $I(t)$ is the total input to the cell at time $t$. Notably, this change
in membrane voltage is agnostic to the current voltage $v(t)$ (meaning there
is no leakage effect). When this membrane voltage meets the threshold,
$\theta$, an action potential is emitted and the membrane voltage is directly
reset to the reset voltage $v_{\text{reset}}$.
Let us consider the input, $I(t)$, as composed of input from the single
synaptic connection and some background stochastic process. The synaptic
connection is modelled as producing instantaneous voltage injections which
occur upon the spike times of the input neurons. The amplitudes of the
instantaneous voltage injections induced by input spikes are equal to the
weight of the synaptic connection from input to output neuron, $w$.
Aside from these synaptic inputs, we also consider some background input which
is a stochastic process. Assuming that there are a large number of randomly
spiking input neurons, we can approximate their impact as a random Gaussian
input with some mean and variance. This describes a stochastic process, known
as a Wiener process, with some drift (mean input) and a diffusion constant
(variance). This approximation for a neuron’s membrane voltage is valid in the
limit of a large number of synaptic connections with small synaptic weight
magnitudes.
The above details are all approximations but provide us with a simple
description of the neural dynamics such that
$dv(t)=w\delta(t-t_{\text{in}})dt+\sqrt{D}dX(t)\,,$ (14)
where $\delta(\cdot)$ is the Dirac-delta function and $X(t)$ is a Wiener
process with drift $\mu$ and variance scaled by $D$.
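This jump-diffusion system can be simulated directly with an Euler-Maruyama scheme, as in the following sketch; the parameter values are illustrative assumptions.

```python
# Sketch of simulating the jump-diffusion dynamics of Eq. (14) up to the first
# threshold crossing (Euler-Maruyama); parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, mu, D, theta = 1e-4, 15.0, 1.0, 1.0

def first_hitting_time(w, t_in=0.0, v0=0.0, t_max=1.0):
    """Return the first time v(t) >= theta after an input spike of size w at t_in."""
    v, t = v0, 0.0
    while t < t_max:
        if abs(t - t_in) < dt / 2:
            v += w                                    # instantaneous injection
        v += mu * dt + np.sqrt(D * dt) * rng.normal() # drift + diffusion
        t += dt
        if v >= theta:
            return t
    return np.nan
```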
### A.3 The hitting time of a non-leaky neuron
We can now attempt to determine the “hitting time” of this system, i.e., the
time $T$ at which it makes contact with our neuron membrane voltage threshold.
The hitting-time density for a Wiener process with drift (by which we are
approximating our non-leaky neuron) can be calculated as
$f(T\mid\Delta)=\frac{\Delta}{\sqrt{2D\pi T^{3}}}\exp\left(-\frac{(\Delta-\mu
T)^{2}}{2DT}\right)\,,$ (15)
where $\Delta=\theta-v_{0}$ is the membrane voltage distance to threshold
(where $v_{0}=v(t_{\text{in}})$), $T=t_{\text{out}}-t_{\text{in}}$ is defined
as above, $\mu$ is the drift of our Wiener process, and $D$ is the variance of
our Wiener process. In our neuron model, $\Delta$ corresponds to the
difference between some initial membrane voltage $v_{0}$ and the threshold
$\theta$, whereas $\mu$ corresponds to the average input to the output neuron
from all input synapses in volts.
The description assumes that the membrane voltage starts at some value $v_{0}$
and is under constant drift. However, instead we wish to assume that at the
initial time, $t_{0}=t_{\text{in}}$, our input neuron fired and added some
unknown voltage $w$ to the membrane voltage. Furthermore, rather than
computing a probability distribution over the possible times at which the
output neuron might spike, we instead know the next spike time of the output
neuron, $t_{\textrm{out}}$, and wish to use it to infer the weight, $w$.
We can therefore assume that for a given pair of input and output spikes, we
have a fixed hitting time, $T$, as described above. Furthermore, under our
synapse description for the non-leaky neuron (where synaptic inputs cause an
instantaneous change in output neuron membrane voltage of size proportional to
the synaptic weight) our initial membrane voltage, $v_{0}$, can be represented
as the membrane voltage just prior to the input spike, plus the synaptic
weight. That is, we take the limit of $v_{0}$ from below, i.e.,
$v_{0}=\lim_{t\to t_{0}^{-}}v(t)+w.$ This allows us to augment our first-
passage density in terms of $w$ such that
$f(T\mid w,\Delta)=\frac{(\Delta-w)}{\sqrt{2D\pi
T^{3}}}\exp\bigg{(}-\frac{(\Delta-w-\mu T)^{2}}{2DT}\bigg{)}\,,$ (16)
where we now define $\Delta=\theta-\lim_{t\to t_{0}^{-}}\>v(t)$. With this
formulation of the hitting-time density, we can compute an estimate of the
weight $w$ given a particular set of input and output neuron spike times.
Thereafter we can update our estimate of the synaptic weight of interest
through Eq. (12).
To make our inference of $w$ tractable, we first take a Laplace approximation
of Eq. (16). This produces a Gaussian with mean weight
$\hat{w}=\Delta-\frac{\mu T+\sqrt{(\mu T)^{2}+4DT}}{2}\,,$ (17)
calculated as the maximum of our likelihood $f(T\mid w,\Delta)$, and a
variance
$\hat{\sigma}^{2}=\left((\Delta-\hat{w})^{-2}+(DT)^{-1}\right)^{-1}\,.$ (18)
Since we have a Gaussian approximation of our likelihood, we can take a
Gaussian conjugate prior with mean $w_{0}$ and variance $\sigma^{2}_{0}$ and
obtain a closed-form solution to our posterior weight when given a single
input-output spike pair as
${w}_{p}=\frac{1}{\sigma_{0}^{-2}+\hat{\sigma}^{-2}}\bigg{(}\frac{w_{0}}{\sigma_{0}^{2}}+\frac{\hat{w}}{\hat{\sigma}^{2}}\bigg{)}\,.$
(19)
Similarly, we can compute the posterior variance as
${\sigma}_{p}^{2}=\left(\sigma_{0}^{-2}+\hat{\sigma}^{-2}\right)^{-1}\,.$ (20)
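These closed-form updates can be applied once per input-output spike pair, as in the following sketch; the prior and the numerical values in the example call are illustrative assumptions.

```python
# Sketch of the closed-form Bayesian update, Eqs. (17)-(20), applied once per
# input/output spike pair. All numerical values are illustrative assumptions.
import numpy as np

def bayes_update(w0, sigma0_sq, Delta, T, mu, D):
    """Laplace-approximate likelihood (Eqs. 17-18) combined with a Gaussian prior."""
    w_hat = Delta - (mu * T + np.sqrt((mu * T) ** 2 + 4 * D * T)) / 2   # Eq. (17)
    sigma_hat_sq = 1.0 / ((Delta - w_hat) ** -2 + (D * T) ** -1)        # Eq. (18)
    sigma_p_sq = 1.0 / (1 / sigma0_sq + 1 / sigma_hat_sq)               # Eq. (20)
    w_p = sigma_p_sq * (w0 / sigma0_sq + w_hat / sigma_hat_sq)          # Eq. (19)
    return w_p, sigma_p_sq

# Example: prior N(0, 0.5^2), then one observed hitting time T after an input spike.
w, s2 = 0.0, 0.25
w, s2 = bayes_update(w, s2, Delta=1.0, T=0.05, mu=15.0, D=1.0)
print(w, s2)
```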
### A.4 Weight estimation under negligible drift
Let us assume that the diffusion term, $D$, is sufficiently small compared to
the drift $\mu$ (such that $\mu\gg D$). This allows us to ignore the diffusion
term under the square root in Eq. (17). Having assumed this small diffusion scale,
we can then describe the maximum likelihood estimate of the weight as
$\hat{w}\approx\Delta-\mu T\,.$ (21)
Furthermore, recall that $\Delta$ is the distance to threshold when the input
neuron spikes, $\Delta=\theta-v(t_{\textrm{in}})$. By dividing this distance,
$\Delta$, by the drift, $\mu$, we can calculate the expected time of the
output spike under drift alone, $\hat{T}$, such that
$\frac{\Delta}{\mu}=\hat{T}\implies\Delta=\mu\hat{T}\,.$ (22)
Given these assumptions, we can approximate Eq. (17) as
$\hat{w}\approx\mu\hat{T}-\mu T=\mu(\hat{T}-T).$ (23)
This formulation can be understood well if we consider a non-leaky neuron
under the effect of drift alone (without any stochastic input) and a single
input neuron providing instantaneous voltage injections. In such a case, with
knowledge of the initial membrane voltage and drift of the output neuron, we
have a deterministic system which will spike at a specific time, $\hat{T}$. If
we perturb this system with a spike from an input neuron (which causes a jump
in the membrane voltage), we can decode the synaptic weight by simply
measuring the shift in the output neuron's spike time. The
induced change in the output spike time is linearly proportional to the
synaptic weight.
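As a worked example of Eq. (23), with illustrative numbers of our own choosing:

```python
# Worked example of Eq. (23): under pure drift mu = 10 V/s the output neuron
# would spike 40 ms after the input spike, but it actually spikes after 25 ms,
# implying a positive injected weight. All numbers are illustrative assumptions.
mu, T_hat, T = 10.0, 0.040, 0.025
w_hat = mu * (T_hat - T)   # = 0.15 V: the spike was advanced by the injection
print(w_hat)
```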
## Appendix B Details on baseline methods
The STDWI method is compared to existing methods for synaptic weight
inference. We provide more details on these methods below.
### B.1 The Akrout method
In our simulations of LIF neurons, we compare against the Akrout method [15].
This rate-based method makes use of an inference phase in which neurons are
stimulated (with mean zero) and then the levels of activity of input and
output neurons are correlated to form a weight estimate. This approach was
shown to be highly successful for weight inference and thereby training of
rate-based neural network models. However, since we simulate spiking neurons,
which cannot have a negative firing rate, we instead demean the neuron firing
rates and randomly stimulate the input neurons (post-synaptic from the
perspective of the backward synapse). In particular, we use an update rule of
the form
$\Delta w_{ji}=\eta(r_{i}-\langle r_{i}\rangle)(r_{j}-\langle
r_{j}\rangle)-\eta\lambda w_{ji}\,,$ (24)
where $\Delta w_{ji}$ is the update to the backward synaptic weight from a neuron
indexed $j$ to a neuron indexed $i$, which is attempting to estimate the
weight of the forward synaptic connection, $w_{ij}$. $r_{i}$ and $r_{j}$
denote the firing rates of the $i$th and $j$th neurons, and
$\langle\cdot\rangle$ indicates an average of these over time. Parameters
$\eta$ and $\lambda$ are the learning rate and the weight decay respectively.
The learning rate is fixed at $\eta=0.0001$ and the weight decay is determined
by grid search (see below). The firing rates $r_{i}$ and $r_{j}$ are
calculated by computing the firing rates within the non-overlapping 100ms
stimulation periods of the network. These stimulation periods are then grouped
into batches (of size again determined by grid search) for calculation of the
mean firing rates for this batch ($\langle r_{j}\rangle$ and $\langle
r_{i}\rangle$ respectively) according to the weight-mirror gradient descent
method described in [15].
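A minimal sketch of one such batch update (Eq. 24) is given below; the stimulation protocol and the full weight-mirror schedule of [15] are abstracted away, and the names are ours:

```python
import numpy as np

def akrout_update(W_ji, rates_i, rates_j, eta=1e-4, lam=0.01):
    """One batch update of the backward weights W_ji via Eq. (24).
    rates_i, rates_j: arrays of shape (batch, n_i) and (batch, n_j) holding
    firing rates measured over the 100 ms stimulation periods of one batch;
    the batch mean serves as <r> for demeaning."""
    d_i = rates_i - rates_i.mean(axis=0)
    d_j = rates_j - rates_j.mean(axis=0)
    dW = eta * d_j.T @ d_i / len(d_i) - eta * lam * W_ji
    return W_ji + dW
```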
### B.2 Regression discontinuity design
We also compare against the regression discontinuity design (RDD) method,
which was proposed for application in spiking neural networks [16]. It makes
use of all times at which a neuron spiked or almost spiked (i.e. its membrane
voltage came within some margin of the spiking threshold but never reached
it). It thereafter separately fits the almost-spiked and spiked events
linearly against the membrane voltage. Notably, for the spiking events, a non-
reset version of the membrane voltage is used for the linear fitting.
Following a fitting process, the discontinuity of these linear fits at the
spiking threshold is used as a measure of the synaptic weight. For full
details of the RDD implementation, see [16].
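Schematically, RDD reduces to two linear fits and a gap evaluated at the threshold. In the sketch below, y is a placeholder for whatever per-event outcome the method regresses against voltage, and the margin defining almost-spiked events is assumed to have been applied upstream (see [16] for the full procedure):

```python
import numpy as np

def rdd_weight_estimate(v, y, spiked, theta):
    """Regression-discontinuity sketch: fit the event outcome y linearly
    against the (non-reset) membrane voltage v, separately for spiked and
    almost-spiked events, and return the gap between the fits at theta."""
    v, y = np.asarray(v, float), np.asarray(y, float)
    spiked = np.asarray(spiked, bool)
    fit_spike = np.polyfit(v[spiked], y[spiked], 1)   # (slope, intercept)
    fit_near = np.polyfit(v[~spiked], y[~spiked], 1)
    return np.polyval(fit_spike, theta) - np.polyval(fit_near, theta)
```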
### B.3 Grid-based optimization of free parameters
The methods compared have a number of free parameters that can be optimized
for. In the case of STDWI, these are the time constants of the fast ($\tau_{f}$)
and slow ($\tau_{s}$) traces. In the case of RDD, these are the distance to
threshold at which samples are initiated and the window duration of a sample.
For the Akrout method, the weight decay scaling and the batch-size are
hyperparameters.
These parameters are chosen from a grid-search using a single test network’s
spike trains. The parameters producing highest average sign accuracy and
Pearson correlation between the inferred and true weights are then chosen for
analysis of a further four networks (each with a different random seed for
input stimulation and the synaptic weight matrix).
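A sketch of this selection loop follows; infer is a placeholder for running the inference method on the test network's spike trains, and the score shown is one reading of the selection criterion (the average of sign accuracy and Pearson correlation):

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

def select_params(infer, w_true, tau_f_grid, tau_s_grid):
    """Pick the (tau_f, tau_s) pair maximizing the combined score on the
    single held-out test network; infer(tau_f, tau_s) returns the
    inferred weights for that parameter setting."""
    def score(params):
        w_est = infer(*params)
        sign_acc = np.mean(np.sign(w_est) == np.sign(w_true))
        return 0.5 * (sign_acc + pearsonr(w_est, w_true)[0])
    return max(itertools.product(tau_f_grid, tau_s_grid), key=score)
```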
Figure 6: Variation in the performance of the STDWI, RDD, and Akrout methods
with changes in the method parameters. The best parameter sets are highlighted
with a black box. These were the parameters used to analyse all other seeded
networks and produce the main results.
Figure 6 shows the parameters maps for the grid searches carried out to select
parameters for Figure 4. The same grid search parameter sweeps were repeated
in order to choose parameters for Figure 5.
## Appendix C Rate-based analysis of the STDWI rule
To appreciate the effect of the STDWI rule, we can consider its approximate rate-
based form under the assumption of random Poisson process sampled neuron
spikes (for a review of the rationale of such methods see [33]). This produces
an update rule based upon the firing rates of the neurons. Note that below, as
in Section 2.2, we refer to pre/post-synaptic relative to a ‘backward’
synapse.
In our case, the dependence upon the post-synaptic firing rate has two forms
which correspond to a quickly-adapting exponential average,
${\lambda}_{\textrm{j}}^{\textrm{f}}$, and a slowly-adapting exponential
average, ${\lambda}_{\textrm{j}}^{\text{s}}$. Similarly there is a dependence
upon the pre-synaptic firing rate as a slowly-adapting exponential average,
${\lambda}_{\textrm{i}}^{\text{s}}$. Taking the assumption of Poisson random
spiking, we can describe our weight update in a rate-based form as
$\frac{d\hat{w}_{ji}}{dt}=\alpha S_{i}(t)\left(\lambda_{\textrm{i}}^{\textrm{s}}({\lambda}_{\textrm{j}}^{\textrm{f}}-{\lambda}_{\textrm{j}}^{\textrm{s}})-\eta\hat{w}_{ji}\right).$ (25)
We can solve this equation for its fixed point ($\frac{d\hat{w}_{ji}}{dt}=0$),
producing an expression for the fixed-point weight as
$\hat{w}_{ji}^{*}=\frac{1}{\eta}{\lambda}_{\textrm{i}}^{\textrm{s}}\left({\lambda}_{\textrm{j}}^{\textrm{f}}-{\lambda}_{\textrm{j}}^{\textrm{s}}\right)$ (26)
when $S_{i}(t)$ is non-zero.
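This fixed point can be checked by integrating Eq. (25) directly. In the sketch below, the rate estimates are held constant and the pre-synaptic spikes are drawn as a Poisson process (all parameter values are illustrative):

```python
import numpy as np

def stdwi_rate_fixed_point(lam_i_s, lam_j_f, lam_j_s, eta=0.1, alpha=1.0,
                           rate_i=5.0, dt=1e-3, T=50.0, seed=0):
    """Integrate Eq. (25) with constant rate estimates and Poisson
    pre-synaptic spikes S_i(t); the weight estimate should settle at the
    fixed point of Eq. (26)."""
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < rate_i * dt:   # a pre-synaptic spike arrives
            w += alpha * (lam_i_s * (lam_j_f - lam_j_s) - eta * w)
    return w, lam_i_s * (lam_j_f - lam_j_s) / eta  # simulated vs Eq. (26)

w_sim, w_fix = stdwi_rate_fixed_point(lam_i_s=5.0, lam_j_f=6.0, lam_j_s=4.0)
```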
For networks with solely positive firing rates, Akrout et al. [15] proposed
correlating the demeaned firing rates of pre- and post-synaptic neurons in
order to estimate synaptic weights. If we here interpret the slow firing rate
measure of the input neuron activity as an approximation of its average value,
then our method similarly correlates pre-synaptic firing rate with the
demeaned post-synaptic neuron firing rate. Though this rate-based analysis
shows similarities to the Akrout method, our spike timing implementation is
unique in that it makes use of asymmetric causal kernels and has a demeaning
process which slowly tracks the firing rates of neurons (rather than making
use of batches). We attribute our performance gains to these features.
Furthermore, given the spike-timing-dependent feature of the rule, weight
updates can be computed in an event-driven fashion and with minimal
communication between neurons (weight updates requiring communication only
upon spike times).
If we compare Eqs. (26) and (23) then we can also appreciate the
correspondence of the STDWI rule and the Bayesian estimate. The STDWI update,
instead of making use of an estimate of the drift, $\mu$, makes use of the
pre-synaptic neuron firing rate as a proxy. This is appropriate given the
linear relationship between drift and firing rate for a non-leaky neuron.
Furthermore, rather than directly comparing the expected and true time to
spike, $\hat{T}$ and $T$ respectively, the STDWI rule keeps track of a slow
and fast estimate of the post-synaptic neuron firing rate, through
$\lambda_{\textrm{j}}^{\text{s}}$ and $\lambda_{\textrm{j}}^{\text{f}}$
respectively. The subtraction of these firing rate estimates in Eq. (26)
provides a measure with a similar form to the subtraction of expected and true
spike times ($\hat{T}-T$). Specifically, an earlier than average spike time
induces a positive weight estimate and a later than average spike time induces
a negative weight estimate.
# Airline Crew Pairing Optimization Framework for Large Networks with Multiple
Crew Bases and Hub-and-Spoke Subnetworks
Divyam Aggarwal and Dhish Kumar Saxena, Department of Mechanical & Industrial
Engineering (MIED), Indian Institute of Technology Roorkee, Roorkee,
Uttarakhand-247667, India. Thomas Bäck and Michael Emmerich, Leiden Institute
of Advanced Computer Science (LIACS), Leiden University, Niels Bohrweg 1,
2333 CA Leiden, the Netherlands.
###### Abstract
Crew Pairing Optimization aims at generating a set of flight sequences (crew
pairings), covering all flights in an airlines’ flight schedule, at minimum
cost, while satisfying several legality constraints. CPO is critically
important for airlines’ business viability considering that the crew operating
cost is second only to the fuel cost. It poses an NP-hard combinatorial
optimization problem, to tackle which, the state-of-the-art relies on relaxing
the underlying Integer Programming Problem (IPP) into a Linear Programming
Problem (LPP), solving the latter through Column Generation (CG) technique,
and integerization of the resulting LPP solution. However, with the growing
scale and complexity of the airlines’ networks (those with large number of
flights, multiple crew bases and/or multiple hub-and-spoke subnetworks), the
efficacy of the conventionally used exact CG-implementations is severely
marred, and their utility has become questionable. This paper proposes an
Airline Crew Pairing Optimization Framework, $AirCROP$, whose constitutive
modules include the Legal Crew Pairing Generator, Initial Feasible Solution
Generator, and an Optimization Engine built on heuristic-based CG-
implementation. $AirCROP$’s novelty lies in not just the design of its
constitutive modules but also in how these modules interact. In that, insights
into several important questions, which the literature is otherwise silent on,
have been shared. These relate to sensitivity analysis of $AirCROP$’s
performance in terms of final solutions’ cost quality and run-time, with
respect to - sources of variability over multiple runs for a given problem;
cost quality of the initial solution and the run-time spent to obtain it; and
termination parameters for LPP-solutioning and IPP-solutioning. In addition,
the efficacy of the $AirCROP$ has been: (a) demonstrated on real-world airline
flight networks with an unprecedented conjunct scale-and-complexity, marked by
over 4200 flights, 15 crew bases, and billion-plus pairings, and (b) validated
by the research consortium’s industrial sponsor. It is hoped that with the
emergent trend of conjunct scale and complexity of airline networks, this
paper shall serve as an important milestone for affiliated research and
applications.
###### keywords:
Airline Crew Scheduling Crew Pairing Combinatorial Optimization Column
Generation Mathematical Programming Heuristics
## 1 Introduction
Airline scheduling poses some of the most challenging optimization problems
encountered in the entire Operations Research (OR) domain. For a large-scale
airline, the crew operating cost constitutes the second-largest cost
component, next to the fuel cost, and even its marginal improvements may
translate to annual savings worth millions of dollars. Given the potential for
huge cost-savings, Airline Crew Scheduling is recognized as a critical
planning activity. It has received an unprecedented attention from the
researchers of the OR community over the last three decades. Conventionally,
it is tackled by solving two problems, namely, Crew Pairing Optimization
Problem (CPOP) and Crew Assignment Problem, in a sequential manner. The former
problem is aimed at generating a set of flight sequences (each called a crew
pairing) that covers all flights from an airlines’ flight schedule, at minimum
cost, while satisfying several legality constraints linked to federations’
rules, labor laws, airline-specific regulations, etc. These optimally-derived
crew pairings are then fed as input to the latter problem, which is aimed to
generate a set of pairing sequences (each sequence is a schedule for an
individual crew member), while satisfying the corresponding crew requirements.
Being the foremost step of the airline crew scheduling, CPOP is the main focus
of this paper, and interested readers are referred to Barnhart et al. (2003) for a
comprehensive review of the airline crew scheduling.
CPOP is an NP-hard combinatorial optimization problem (Garey & Johnson, 1979);
for NP-hard (NP-complete) problems, no polynomial-time algorithms on sequential
computers are known up to now, though verification of a solution might be (can
be) accomplished efficiently, i.e., in polynomial time. It is modeled
as either a set partitioning problem (SPP) in which each flight is allowed to
be covered by only one pairing, or a set covering problem (SCP) in which each
flight is allowed to be covered by more than one pairing. In CPOP, a crew
pairing has to satisfy hundreds of legality constraints (Section 2.2) to be
classified as legal, and it is imperative to generate legal pairings in a
time-efficient manner to assist optimization search. Several legal pairing
generation approaches, based on either a flight- or a duty-network, have been
proposed in the literature (Aggarwal et al., 2018). Depending upon how the legal
pairing generation module is invoked, two CPOP solution-architectures are
possible. In the first architecture, all possible legal pairings are
enumerated a priori the CPOP-solutioning. However, this is computationally-
tractable only for small-scale CPOPs (with $\approx$<1000 flights).
Alternatively, legal pairings are generated during each iteration of the CPOP-
solutioning, but only for a subset of flights, so the CPOP solution could be
partially improved before triggering the next iteration. Such an architecture
mostly suits medium- to large-scale CPOPs (with $\approx\geq$1000 flights)
involving millions/billions of legal pairings, whose complete-enumeration is
computationally-intractable.
In terms of solution-methodologies, heuristic-based optimization techniques
and mathematical programming techniques, are commonly employed (Section 2.3).
In the former category, Genetic Algorithms (GAs) which are population-based
randomized-search heuristics (Goldberg, 2006) are most commonly used. However,
they are found to be efficient only for tackling very small-scale CPOPs
(Ozdemir & Mohan, 2001). Alternatively, several mathematical programming based
approaches do exist to solve CPOPs of varying-scales. CPOP is inherently an
Integer Programming Problem (IPP), and some approaches have used standard
Integer Programming (IP) techniques to find a best-cost pairing subset from a
pre-enumerated pairings’ set (Hoffman & Padberg, 1993). However, these
approaches have proven effective only with small-scale CPOPs with up to a
million pairings. This perhaps explains the prevalence of an altogether
different strategy, in which the original CPOP/IPP is relaxed into a Linear
Programming Problem (LPP); the LPP is solved iteratively by invoking a LP
solver and relying on Column Generation (CG) technique to generate new
pairings as part of the pricing sub-problem; and finally, the resulting LPP
solution is integerized using IP techniques and/or some special connection-
fixing heuristics. The challenge associated with this strategy is that even
though the LPP solver may lead to a near-optimal LPP solution, the scope of
finding a good-cost IPP solution is limited to the pairings available in the
LPP solution. To counter this challenge, heuristic implementations of branch-
and-price framework (Barnhart et al., 1998) in which CG is utilized during the
integerization phase too, have been employed to generate new legal pairings at
nodes of the IP-search tree. However, the efficacy of such heuristic
implementations depends on a significant number of algorithmic-design choices
(say, which branching scheme to adopt, or how many CG-iterations to perform at
the nodes). Furthermore, it is noteworthy that the scale and complexity of
flight networks have grown alarmingly over the past decades. As a result, an
inestimably large number of new pairings are possible under the pricing sub-
problem, given which most existing solution methodologies are rendered
computationally-inefficient. Recognition of such challenges have paved the way
towards domain-knowledge driven CG strategies to generate a manageable, yet
crucial part of the overall pairings’ space under the pricing sub-problem
(Zeren & Özkol, 2016). Though rich in promise, the efficacy of this approach is
yet to be explored vis-à-vis the emergent large-scale and complex
flight networks characterized by multiple crew bases and/or multiple hub-and-
spoke subnetworks where billions of legal pairings are possible.
In an endeavor to address airline networks with conjunct scale and complexity,
this paper proposes an Airline Crew Pairing Optimization Framework ($AirCROP$)
based on domain-knowledge driven CG strategies, and:
* •
presents not just the design of its constitutive modules (including Legal Crew
Pairing Generator, Initial Feasible Solution Generator, and Optimization
Engine powered by CG-driven LPP-solutioning and IPP-solutioning), but also how
these modules interact
* •
discusses how sensitive its performance is to - sources of variability over
multiple runs for a given problem; cost quality of the initial solution and
the run-time spent to obtain it; and termination parameters for LPP-
solutioning and IPP-solutioning. Such an investigation promises important
insights for researchers and practitioners on critical issues which are
otherwise not discussed in the existing literature.
* •
presents empirical results for real-world, large-scale (over 4200 flights),
complex flight network (over 15 crew bases and multiple hub-and-spoke
subnetworks) for a US-based airline, the data for which has been provided by
the research consortium’s industrial partner.
The outline of the remaining paper is as follows. Section 2 discusses the
underlying concepts, related work, and problem formulation; Section 3 entails
the details of the proposed $AirCROP$; Section 4 presents the results of the
computational experiments along with the corresponding observations; and
Section 5 concludes the paper as well as briefly describes the potential
future directions.
## 2 Crew Pairing Optimization: Preliminaries, Related Work and Problem
Formulation
This section first describes the preliminaries, including the associated
terminology, pairings’ legality constraints, and pairings’ costing criterion.
Subsequently, the related work is presented in which the existing CPOP
solution approaches are discussed. Lastly, the airline CPOP formulation is
presented.
### 2.1 Associated Terminology
In airline crew operations, each crew member is assigned a fixed (home)
airport, called a crew base. A crew pairing (or a pairing) is a flight
sequence operated by a crew, that begins and ends at the same crew base, and
satisfies the given pairing legality constraints (detailed in Section 2.2). An
example of a crew pairing with the Dallas (DAL) airport as the crew base is
illustrated in Figure 1. In a crew pairing, the legal sequence of flights
operated by a crew in a single working day (not necessarily equivalent to a
calendar day) is called a crew duty or a duty. A sit-time or a connection-time
is a small rest-period, provided between any two consecutive flights within a
duty for facilitating operational requirements such as aircraft changes by the
crew, turn-around operation for the aircraft, etc. An overnight-rest is a
longer rest-period, provided between any two consecutive duties within a
pairing. Moreover, two short-periods, provided in the beginning and ending of
any duty within a pairing, are called briefing and de-briefing time,
respectively. The total time elapsed in a crew pairing, i.e., the time for
which a crew is away from its crew base is called the time away from base
(TAFB). Sometimes, it is required for a crew to be transported at an airport
to fly their next flight. For this, the crew travels as passenger in another
flight, flown by another crew, to arrive at the required airport. Such a
flight is called a deadhead flight or a deadhead for the crew traveling as
passenger. It is desired by an airline to minimize the number of deadheads
(ideally zero), as it affects the airline’s profit in two ways. Firstly, the
airline loses the revenue from the passenger seat being occupied by the
deadheading crew, and secondly, the airline has to pay the hourly wages to the
deadheading crew even when it is not operating the flight.
Figure 1: An example of a crew pairing starting from Dallas (DAL) crew base
### 2.2 Crew Pairing: Legality Constraints and Costing Criterion
To govern the safety of crew members, airline federations such as European
Aviation Safety Agency, Federal Aviation Administration, and others, have laid
down several rules and regulations, which in addition to the airline-specific
regulations, labor laws, etc. are required to be satisfied by a pairing to be
“legal”. These legality constraints could be broadly categorized as follows:
* •
Connection-city constraint ($\mathcal{C}_{connect}$): this constraint requires
the arrival airport of a flight (or the last flight of a duty) within a
pairing to be same as the departure airport of its next flight (or the first
flight of its next duty).
* •
Sit-time ($\mathcal{C}_{sit}$) and Overnight-rest ($\mathcal{C}_{night}$) constraints:
these constraints impose the respective maximum and minimum limits on the
durations of sit-times and overnight-rests, where these limits are governed by
the airlines’ and federations’ regulations.
* •
Duty constraints ($\mathcal{C}_{duty}$): these constraints govern the
regulations linked to the crew duties. For instance, they impose maximum
limits on the– number of flights allowed in a duty of a pairing; duty elapsed-
time and the corresponding flying-time; number of duties allowed in a pairing,
etc.
* •
Start- and end-city constraint ($\mathcal{C}_{base}$): this constraint
requires the beginning airport (departure airport of the first flight) and
ending airport (arrival airport of the last flight) of a pairing, to be the
same crew base.
* •
Other constraints ($\mathcal{C}_{other}$): Airlines formulate some specific
constraints, according to their operational requirements, so as to maximize
their crew utilization. For example, a pairing is refrained from involving
overnight-rests at the airports that belong to the same city as the crew base
from which the pairing started, etc.
Considering the multiplicity of the above constraints, it is critical to
develop a time-efficient legal crew pairing generation approach, so that legal
pairings are promptly available whenever they are required during the
optimization.
In general, a pairing’s cost could be split into the flying cost and non-
flying (variable) cost. The flying cost is the cost incurred in actually
flying all the given flights, and is computed on hourly-basis. The variable
cost is the cost incurred during the non-flying hours of the pairing, and is
made up of two sub-components, namely, hard cost and soft cost. The hard cost
involves the pairing’s hotel cost, meal cost, and excess pay– the cost
associated with the difference between the guaranteed hours of pay and the
actual flying hours. Here, the pairing’s hotel cost is the lodging cost
incurred during its overnight-rests, and its meal cost is computed as a
fraction of its TAFB. The soft cost is the undesirable cost associated with
the number of aircraft changes (during flight-connections) in the pairing,
etc.
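For illustration only, the cost components described above can be assembled as in the sketch below; all rates, the pay guarantee, and the function signature are hypothetical placeholders, since the actual values and rules are airline-specific and not given here:

```python
def pairing_cost(flying_hrs, tafb_hrs, num_overnight_rests, num_duties,
                 num_aircraft_changes, fly_rate=50.0, hotel_rate=100.0,
                 meal_rate=2.0, guaranteed_hrs_per_duty=5.0,
                 soft_cost_per_change=20.0):
    """Hypothetical pairing-cost assembly: flying cost plus hard cost
    (hotel, meal as a fraction of TAFB, excess pay) plus soft cost."""
    flying_cost = fly_rate * flying_hrs
    excess_pay = fly_rate * max(0.0, guaranteed_hrs_per_duty * num_duties
                                - flying_hrs)
    hard_cost = (hotel_rate * num_overnight_rests
                 + meal_rate * tafb_hrs + excess_pay)
    soft_cost = soft_cost_per_change * num_aircraft_changes
    return flying_cost + hard_cost + soft_cost
```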
### 2.3 Related Work
As mentioned in Section 1, the existing CPOP solution approaches are based on
either heuristic or mathematical programming techniques. Among the heuristic-
based approaches, GA is the most widely adopted technique, and Beasley & Chu
(1996) is the first instance to customize a GA (using guided GA-operators) for
solving a general class of SCPs. In that, the authors validated their proposed
approach on small-scale synthetic test cases (with over 1,000 rows and just
10,000 columns). The important details of the GA-based CPOP solution
approaches, available in the literature, are reported in Table 1.
Table 1: Key facts around the GA-based CPOP solution approaches, available in
the literature
Literature Studies | Modeling | Timetable | Airline Test Cases* | Airlines
---|---|---|---|---
Levine (1996) | Set Partitioning | - | 40R; 823F; 43,749P | -
Ozdemir & Mohan (2001) | Set Covering | Daily | 28R; 380F; 21,308P | Multiple Airlines
Kornilakis & Stamatopoulos (2002) | Set Covering | Monthly | 1R; 2,100F; 11,981P | Olympic Airways
Zeren & Özkol (2012) | Set Covering | Monthly | 1R; 710F; 3,308P | Turkish Airlines
Deveci & Demirel (2018a) | Set Covering | - | 12R; 714F; 43,091P | Turkish Airlines
*R represents the number of real-world test cases considered; F and P
represent the maximum number of flights and pairings covered therein.
Notably, the utility of the studies reported in the table has been
demonstrated on CPOPs with a reasonably small number of flights, leading to
relatively smaller numbers of pairings. Though CPOPs with 2,100 and 710
flights have been tackled by Kornilakis & Stamatopoulos (2002) and Zeren &
Özkol (2012) respectively, only a subset of all possible legal pairings has
been considered by them for finding the reported solutions. Zeren & Özkol
(2012) proposed a GA with highly-customized operators, which efficiently
solved small-scale CPOPs but failed to solve large-scale CPOPs with the same
search-efficiency. Furthermore, Aggarwal, Saxena et al. (2020b) tackled a small-scale CPOP
(with 839 flights and multiple hub-and-spoke sub-networks) using a customized-
GA (with guided operators) as well as mathematical programming techniques. The
authors concluded that customized-GAs are inefficient in solving complex
versions of even small-scale flight networks, compared to a mathematical
programming-based solution approach.
Several mathematical programming-based CPOP solution approaches have been
proposed in the literature over past few decades, and based on the size and
characteristics of the flight network being tackled, these approaches have
been categorized into either of the three general classes. In the first class
of approaches, all legal pairings or a subset of good pairings are enumerated
prior to the CPOP-solutioning, and the corresponding CPOP/IPP model is solved
using standard IP techniques (such as the branch-and-bound algorithm (Land &
Doig, 1960)). Gershkoff (1989) proposed an iterative solution approach, which is
initialized using a set of artificial pairings (each covering a single flight
at a high pseudo-cost). In that, each iteration involves selection of very few
pairings (5 to 10); enumeration of all legal pairings using the flights
covered in the selected pairings; optimization of the resulting SPP to find
the optimal pairings; and lastly, replacement of the originally selected
pairings with the optimal pairings, only if the latter offers a better cost.
The search-efficiency of such an approach is highly dependent on the sub-
problem-size (handled up to 100 flights and 5,000 pairings), as the length and
breadth of the branching tree increases drastically with an increase in sub-
problem-size. Hoffman & Padberg (1993) proposed an alternative approach to
tackle SPPs with up to 825 flights and 1.05 million pairings in which all
possible pairings are enumerated a priori, and the resulting SPP is solved to
optimality using a branch-and-cut algorithm222The branch-and-cut algorithm was
first proposed by Padberg Rinaldi (1991) to solve Mixed Integer Programs
(MIP), by integrating the standard branch-and-bound and cutting-plane
algorithms. For comprehensive details of the MIP solvers, interested readers
are referred to Lodi (2009); Linderoth Lodi (2011); Achterberg Wunderling
(2013).. Such approaches are efficient only in tackling small-scale CPOPs,
that too with up to a million pairings. However, even small-scale CPOPs may
involve large number of pairings (an instance reported in Vance . (1997) had
250 flights and over five million pairings), rendering it computationally-
intractable to use such approaches.
The second class of approaches relies on relaxing the integer constraints in
the original CPOP/IPP to form an LPP, which is then solved iteratively by
invoking an LP solver and generating new pairings using CG, and then
integerizing the resulting LPP solution. In any iteration of the LPP-solutioning (referred
to as an LPP iteration), an LP solver (based on either a simplex method or an
interior-point method; the class of interior-point methods was first
introduced by Karmarkar (1984), whose polynomial-time algorithm, in contrast
to the simplex method, searches for the best solution by traversing the
interior of the feasible region of the search space) is invoked on the input pairing set to find the
LPP solution and its corresponding dual information (shadow price
corresponding to each flight-coverage constraint), which are then utilized to
generate new pairings as part of the pricing sub-problem, promising the
corresponding cost-improvements. For the first LPP iteration, any set of
pairings covering all the flights becomes the input to the LP solver, and for
any subsequent LPP iteration, the current LPP solution and the set of new
pairings (from the pricing sub-problem) constitute the new input. For more
details on how new pairings are generated under the pricing sub-problem in the
CG technique, interested readers are referred to Vance et al. (1997) and
Lübbecke & Desrosiers (2005). As cited in Zeren & Özkol (2016), the CG technique has
several limitations, of which the prominent ones are: the heading-in effect
(poor dual information in initial LPP iteration leads to generation of
irrelevant columns), bang-bang effect (dual variables oscillate from one
extreme point to another, leading to poor or slower convergence), and tailing-
off effect (the cost-improvements in the later LPP iterations taper-off).
While, different stabilization techniques are available for CG in the
literature Du Merle . (1999); Lübbecke (2010), the use of interior point
methods is gaining prominence. Anbil . (1991) presented the advancements at
the American Airlines, and enhanced the approach proposed by Gershkoff (1989)
(discussed above), by leveraging the knowledge of dual variables to screen-
out/price-out the pairings from the enumerated set at each iteration, enabling
it to solve larger sub-problems (up to 25 flights and 100,000 pairings). As an
outcome of a collaboration between IBM and American Airlines, Anbil et al. (1992)
proposed an iterative global solution approach (though falling short of global
optimization) in which an exhaustive set of pairings ($\approx$5.5 million) is
enumerated a priori. Several thousands of these pairings are used to
initialize the iterative procedure, and in each of its iterations, a sub-
problem is solved to obtain optimal dual variables, which are then used to
price-out all 5.5 million pairings to select a sufficiently-sized set of good
pairings ($\approx$5,000 pairings). For integerization of the LPP solution,
the literature points to two prominent strategies. The first strategy is based
on utilizing either a branch-and-bound, or a branch-and-cut algorithm. The
other strategy utilizes some special “connection-fixing” heuristics either
solely for integerization (Anbil et al., 1992; Marsten, 1994), or during the
iterations of the LPP-solutioning (Zeren & Özkol, 2016) to boost the performance
of the subsequent MIP solver (in some cases, may even get integer solution
without using the MIP solver). These heuristics eliminate some irrelevant
pairings by exploiting the knowledge of their linear variables and fixing some
specific flight-connections. The limitation in this class of approaches is
that even though, a good IPP solution to the original CPOP may exist and the
LPP-solutioning leads to a near-optimal LPP solution, the pairings available
in it may not fit well together to constitute a good-cost IPP solution.
The third class of approaches shares a similar solution-architecture with the
preceding class, but differs in terms of the integerization of the LPP
solution. In that, a heuristic branch-and-price framework is adopted (the
branch-and-price algorithm was originally proposed by Barnhart et al. (1998)
as an exact algorithm to solve then-known large-scale IPPs, and has been
utilized to solve a variety of combinatorial optimization problems in
transportation, such as Desrosiers et al. (1984); Desrochers & Soumis (1989);
Desrochers et al. (1992)), wherein CG is utilized during the integerization
phase too, to generate new legal pairings at nodes of the MIP-search tree.
Desrosiers et al. (1991) is the first instance that solved CPOP using a branch-and-price
framework. However, given the inestimable number of legal pairings possible
for even medium-scale CPOPs, numerous branch-and-price based heuristic-
approaches have been proposed over the last three decades (Desaulniers et al.,
1997; Vance et al., 1997; Anbil et al., 1998; Desaulniers & Soumis, 2010). Notably, the
development of these approaches, being heuristic in nature, require a
significant number of algorithmic-design choices to be taken empirically,
which may vary with the characteristics of the flight networks being solved
for. To name a few such decisions, which branching scheme should be employed
(branching on linear variables, branching on flight-connections, or others),
should CG be performed on each node of the MIP-search tree, how many CG
iterations to be performed each time, etc. Furthermore, the commercial LP and
MIP solvers are not much open to modifications, making it difficult for the
new researchers to implement a computationally- and time-efficient branch-and-
price framework from scratch. For further details of the existing CPOP
solution approaches, interested readers are referred to recent survey
articles: Kasirzadeh et al. (2017) and Deveci & Demirel (2018b).
In addition to the above classification of solution approaches, the literature
differs on the notion of how the pricing sub-problem is modeled and solved to
generate new legal pairings during the LPP iterations. However, the focus of
this paper is not on the solution to the pricing sub-problem step, but on the
interactions between different modules of a CG-based CPOP solution approach.
Hence, for details on the existing work related to the pricing sub-problem
step, interested readers are referred to Vance et al. (1997) and Aggarwal,
Saxena et al. (2020a).
### 2.4 Integer Programming Problem Formulation
As mentioned earlier, a CPOP is intrinsically an IPP, modeled either as a SCP
or a SPP. Notably, the SCP formulation provides higher flexibility during its
solutioning compared to the SPP formulation by accommodating deadhead flights
in the model, possibly resulting in faster convergence (Gustafsson, 1999). For
a given flight set $\mathcal{F}$ (including $F$ flights) that could be covered
in numerous ways by a set of legal pairings $\mathcal{P}$ (including $P$
pairings), the set covering problem is aimed to find a subset of pairings
($\in\mathcal{P}$), say $\mathcal{P}_{IP}^{*}$, which not only covers each
flight ($\in\mathcal{F}$) at least once, but does it at a cost lower than any
alternative subset of pairings in $\mathcal{P}$. In that, while finding
$\mathcal{P}_{IP}^{*}$ ($\subseteq\mathcal{P}$), each pairing
$p_{j}\in\mathcal{P}$ corresponds to a binary variable $x_{j}$, which
represents whether the pairing $p_{j}$ is included in $\mathcal{P}_{IP}^{*}$
(marked by $x_{j}=1$) or not ($x_{j}=0$). Here, $p_{j}$ is a $F$-dimensional
vector, whose each element, say $a_{ij}$, represents whether the flight
$f_{i}$ is covered by pairing $p_{j}$ (marked by $a_{ij}=1$) or not
($a_{ij}=0$). In this background, the IPP formulation, as used in this paper,
is as follows.
$\text{Minimize}~{}Z_{IP}=\sum_{j=1}^{P}c_{j}x_{j}+\psi_{D}\cdot\left(\sum_{i=1}^{F}\left(\sum_{j=1}^{P}a_{ij}x_{j}-1\right)\right),$ (1)
$\text{subject to}\quad\sum_{j=1}^{P}a_{ij}x_{j}\geq 1,\quad\forall i\in\\{1,2,...,F\\}$ (2)
$\qquad\qquad\quad x_{j}\in\mathbb{Z}=\\{0,1\\},\quad\forall j\in\\{1,2,...,P\\}$ (3)
where $c_{j}$ is the cost of a legal pairing $p_{j}$; $\psi_{D}$ is an
airline-defined penalty cost against each deadhead in the solution;
$a_{ij}=1$ if flight $f_{i}$ is covered in pairing $p_{j}$, else $0$; and
$x_{j}=1$ if pairing $p_{j}$ contributes to the minimum $Z_{IP}$, else $0$.
In the objective function (Equation 1), the first component gives the sum of
the individual costs of the pairings selected in the solution, while the other
component gives the penalty cost for the deadheads incurred in the solution
(note, $(\sum_{j=1}^{P}a_{ij}x_{j}-1)$ gives the number of deadheads,
corresponding to the flight $f_{i}$). Notably, in the above formulation, it is
assumed that the set of all possible legal pairings, namely, $\mathcal{P}$,
are available a priori, and the task is to determine $\mathcal{P}_{IP}^{*}$.
However, the generation of $\mathcal{P}$ a priori is computationally-
intractable for large-scale CPOPs, as mentioned in Section 2.3. Hence, the
solution to the CPOP/IPP is pursued in conjunction with the corresponding LPP
(formulation deferred till Section 3.3.1) assisted by the CG technique.
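For illustration, the sketch below builds this set-covering model for a toy pairing set using the open-source PuLP modeling library; this is our example of Equations 1 to 3, not the solver setup used in $AirCROP$ (which, as discussed later, relies on commercial off-the-shelf solvers):

```python
import pulp

def solve_ipp(costs, a, psi_d=1000.0):
    """Build and solve the set-covering IPP of Eqs. (1)-(3).
    costs[j]: pairing cost c_j; a[i][j] = 1 iff flight i is in pairing j."""
    F, P = len(a), len(costs)
    prob = pulp.LpProblem("CPOP_IPP", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(P)]
    # Objective (1), rearranged; the constant term -F*psi_d is dropped.
    prob += pulp.lpSum((costs[j] + psi_d * sum(a[i][j] for i in range(F))) * x[j]
                       for j in range(P))
    for i in range(F):  # flight-coverage constraints (2)
        prob += pulp.lpSum(a[i][j] * x[j] for j in range(P)) >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(P) if x[j].value() > 0.5]

# Toy instance: 3 flights, 4 candidate pairings (pairing 2 covers two flights).
a = [[1, 0, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
print(solve_ipp(costs=[10.0, 12.0, 18.0, 7.0], a=a))
```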
## 3 Proposed Airline Crew Pairing Optimization Framework (AirCROP)
This section presents the constitutive modules of the proposed optimization
framework - $AirCROP$, their working, and their interactions. As per the
schematic in Figure 2, $AirCROP$ accepts a set of given flights $\mathcal{F}$
along with the pairings’ legality constraints and costing criterion as input,
and outputs a minimal-cost set of legal pairings $\mathcal{P}_{IP}^{\star}$,
that covers all given flights. This transition from the input to output is
enabled by the constitutive modules, namely, the Legal Crew Pairing Generator,
the Initial Feasible Solution Generator, and an Optimization Engine in turn
enabled by CG-driven LPP-solutioning and IPP-solutioning submodules and their
intermittent interactions. While parts of these modules have been presented
elsewhere (Aggarwal et al., 2018; Aggarwal, Saxena et al., 2020a) in isolation, these
are being detailed below towards a holistic view on the experimental results
presented later.
Figure 2: A schematic of $AirCROP$ illustrating the interactions between its
constitutive modules– Legal Crew Pairing Generator, Initial Feasible Solution
Generator, Optimization Engine (CG-driven LPP-solutioning interacting with
IPP-solutioning). The CG heuristic in LPP-solutioning generates a set of fresh
pairings $\mathcal{P}_{CG}^{t}$ at any LPP iteration $t$ using the following
CG strategies: Deadhead reduction ($CGD$, generating $\mathcal{P}_{CGD}^{t}$),
Crew Utilization enhancement ($CGU$, generating $\mathcal{P}_{CGU}^{t}$),
Archiving ($CGA$, generating $\mathcal{P}_{CGA}^{t}$), and Random exploration
($CGR$, generating $\mathcal{P}_{CGR}^{t}$). The interactions between LPP-
solutioning and IPP-solutioning are tracked by the counter $T$.
### 3.1 Legal Crew Pairing Generator
This module enables generation of the legal pairings in a time-efficient
manner, so that they can be fed in real-time to the other modules - the Initial Feasible
Solution Generator and the optimization engine. For time-efficiency, it
employs a parallel, duty-network based legal pairing generation approach,
whose distinctive contributions are two-folds. Firstly, a crew base centric
parallel architecture is adopted considering that several duty- and pairing-
constitutive constraints do vary with crew bases. In that, for an input flight
set, the legal pairing generation process is decomposed into independent sub-
processes (one for each crew base), running in parallel on idle-cores of the
central processing unit (CPU). This leads to a significant reduction in the
pairing generation time ($\approx$10-fold for a CPOP with 15 crew bases, as
demonstrated in Aggarwal et al. (2018)). Secondly, the set of all possible legal
duties and the corresponding duty overnight-connection graph with-respect-to
each crew base are enumerated and stored a priori the CPOP-solutioning. In a
duty overnight-connection graph, a node represents a legal duty, and an edge
between any two nodes represents a legal overnight-rest connection between the
respective duties. Such a preprocessing ensures that all the connection-city,
sit-time, duty, and overnight-rest constraints get naturally satisfied,
eliminating the need for their re-evaluation during the generation of legal
pairings, and leading to a significant reduction in the legal pairing
generation time.
The implementation of this module, formalized in Algorithms 1 & 2, is
elaborated below.
Input: $\mathcal{F}$; $\mathcal{B}$; and constraints:
$\mathcal{C}_{connect},~{}\mathcal{C}_{sit},~{}\mathcal{C}_{duty}~{}\&~{}\mathcal{C}_{night}$
Output: $\mathcal{D}_{b}$ & $\mathcal{G}^{d}_{b}~{}\forall b\in\mathcal{B}$
$\mathcal{G}^{f}\leftarrow$ Generate the flight-connection graph by evaluating
$\mathcal{C}_{connect}~{}\&~{}\mathcal{C}_{sit}$ between each pair of flights
$\in\mathcal{F}$
$\triangleright$
$\mathcal{G}^{f}\equiv\left(\mathcal{F},\mathcal{E}^{f}\right)$
1 for _each crew base $b\in\mathcal{B}$ in parallel_ do
2 for _each flight $f\in\mathcal{F}$_ do
3 Push $f$ into an empty $duty$
4 if _updated flight-sequence in $duty$ satisfies constraints in
$\mathcal{C}_{duty}$_ then
5 Add $duty$ to $\mathcal{D}_{b}$
6 if _$f$ has at least one flight-connection in $\mathcal{G}^{f}$_ then
7 $\texttt{DFS(}duty,f,\mathcal{G}^{f},\mathcal{C}_{duty}\texttt{)}$, and add
the enumerated duties to $\mathcal{D}_{b}$
8
9 end if
10
11 end if
12 Pop out $f$ from $duty$
13
14 end for
15 $\mathcal{G}^{d}_{b}\leftarrow$ Generate the duty overnight-connection
graph by evaluating $\mathcal{C}_{night}$ between each pair of duties
$\in\mathcal{D}_{b}$
16 end for
17return $\mathcal{D}_{b}$ & $\mathcal{G}^{d}_{b}~{}\forall b\in\mathcal{B}$
$\triangleright$ DFS($duty,parent,\mathcal{G}^{f},\mathcal{C}_{duty}$)
18 for _each $child$ of $parent$ in $\mathcal{G}^{f}$_ do
19 Push $child$ into $duty$
20 if _updated flight-sequence in $duty$ satisfies $\mathcal{C}_{duty}$_ then
21 yield $duty$ to $\mathcal{D}_{b}$
22 if _$child$ has at least one connection in $\mathcal{G}^{f}$_ then
23 DFS($duty,child,\mathcal{G}^{f},\mathcal{C}_{duty}$)
24
25 end if
26
27 end if
28 Pop out $child$ from $duty$
29
30 end for
Algorithm 1 Procedure for enumeration of legal duties and duty overnight-
connection graphs
Input: $\mathcal{F}_{*}\text{ or
}\mathcal{D}_{*};~{}\mathcal{B};~{}\mathcal{D}_{b}~{}\&~{}\mathcal{G}^{d}_{b}~{}\forall
b\in\mathcal{B}$; and constraints:
$\mathcal{C}_{base}~{}\&~{}\mathcal{C}_{other}$
Output: $\mathcal{P}_{*}$
1 for _each crew base $b\in\mathcal{B}$ in parallel_ do
2 Update $\mathcal{D}_{b}~{}\&~{}\mathcal{G}^{d}_{b}$ by removing duties
$\notin\mathcal{D}_{*}$ if $\mathcal{D}_{*}$ is input, or by removing those
duties which cover flights $\notin\mathcal{F}_{*}$ if $\mathcal{F}_{*}$ is
input
3 for _each $duty\in\mathcal{D}_{b}$_ do
4 if _departure airport of $duty$ is $b$_ then
5 Push $duty$ into an empty $pairing$
6 if _updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{other}$_
then
7 if _updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{base}$_ then
8 Add $pairing$ to $\mathcal{P}_{*}$
9
10 else if _$duty$ has at least one duty overnight-connection in
$\mathcal{G}^{d}_{b}$_ then
11
$\texttt{DFS(}pairing,duty,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}\texttt{)}$,
and add enumerated pairings to $\mathcal{P}_{*}$
12
13 end if
14
15 end if
16 Pop out $duty$ from $pairing$
17
18 end if
19
20 end for
21
22 end for
23return $\mathcal{P}_{*}$
$\triangleright$
DFS($pairing,parent,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}$)
24 for _each $child$ of $parent$ in $\mathcal{G}^{d}_{b}$_ do
25 Push $child$ into $pairing$
26 if _updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{other}$_
then
27 if _updated duty-sequence in $pairing$ satisfies $\mathcal{C}_{base}$_
then
28 yield $pairing$ to $\mathcal{P}_{*}$
29
30 else if _$child$ has at least one duty overnight-connection in
$\mathcal{G}^{d}_{b}$_ then
31
$\texttt{DFS(}pairing,child,\mathcal{G}^{d}_{b},\mathcal{C}_{base}\cup\mathcal{C}_{other}\texttt{)}$
32
33 end if
34
35 end if
36 Pop out $child$ from $pairing$
37
38 end for
Algorithm 2 Procedure for enumeration of legal pairings from an input flight
set $\mathcal{F}_{*}\text{ or a duty set }\mathcal{D}_{*}$
For solving any CPOP, the foremost step of the $AirCROP$ is to preprocess the
entire duty-connection network– set of legal duties $\mathcal{D}_{b}$ and duty
overnight-connection graph
$\mathcal{G}^{d}_{b}\left(\equiv\left(\mathcal{D}_{b},~{}\mathcal{E}^{d}_{b}\right)\right)$
for each crew base $b$ in the given set of crew bases $\mathcal{B}$, where
$\mathcal{E}^{d}_{b}$ is the set of legal overnight-rest connections between
duty-pairs $\in\mathcal{D}_{b}$. The procedure for the above preprocessing is
presented in Algorithm 1. In that, the first step is the generation of a
flight-connection graph (denoted by $\mathcal{G}^{f}$) by evaluating the
legality of connection-city ($\mathcal{C}_{connect}$) and sit-time
($\mathcal{C}_{sit}$) constraints between every flight-pair in the given
flight schedule $\mathcal{F}$ (line 1). Here, in
$\mathcal{G}^{f}~{}\left(\equiv\left(\mathcal{F},\mathcal{E}^{f}\right)\right)$,
$\mathcal{F}$ is the set of nodes (flights) and $\mathcal{E}^{f}$ is the set
of edges (legal flight connections). Subsequently, $\mathcal{G}^{f}$ is used
for legal duty enumeration, by decomposing the process into independent sub-
processes, one for each crew base $b\in\mathcal{B}$, and executing them in
parallel (lines 2-12). In each of these sub-processes, enumeration of legal
duties, starting from each flight $f\in\mathcal{F}$, is explored. In that:
* •
flight $f$ is added to an empty candidate duty stack, given by $duty$ (line
4).
* •
the flight-sequence in $duty$ is checked for satisfaction of duty constraints
$\mathcal{C}_{duty}$, and if satisfied, $duty$ is added to the desired legal
duty set $\mathcal{D}_{b}$ (lines 5-6). Notably, if $f$ has at least one
connection with another flight in $\mathcal{G}^{f}$, and if the duty
constraints permit, then more flights could be accommodated in $duty$, leading
to enumeration of other legal duties (lines 7-9).
* •
a Depth-first Search (DFS) algorithm (Tarjan, 1972) is adapted, which is
called recursively to enumerate legal duties, starting from a parent flight
node ($parent$), by exploring its all successive paths in $\mathcal{G}^{f}$ in
a depth-first manner (lines 16-25). In each recursion, a child flight node
($child$) is pushed into $duty$, the updated flight-sequence is checked for
satisfaction of $\mathcal{C}_{duty}$, and if satisfied, $duty$ is yielded to
$\mathcal{D}_{b}$, followed by another recursion of DFS() with $child$ as the
new $parent$. In this way, all legal duties, starting from flight $f$, are
enumerated.
Subsequently, $f$ is popped out from $duty$, and duty enumeration using other
flights in $\mathcal{F}$ is explored (lines 3 & 11). The resulting set
$\mathcal{D}_{b}$ is then used to generate the duty overnight-connection graph
$\mathcal{G}^{d}_{b}$), by evaluating the legality of connection-city
($\mathcal{C}_{connect}$) and overnight-rest ($\mathcal{C}_{night}$)
constraints between every duty-pair $\in\mathcal{D}_{b}$ (line 13). Here, in
$\mathcal{G}^{d}_{b}~{}\left(\equiv\left(\mathcal{D}_{b},\mathcal{E}^{d}_{b}\right)\right)$,
$\mathcal{D}_{b}$ is the set of nodes (legal duties), and
$\mathcal{E}^{d}_{b}$ is the set of edges (legal overnight-rest connections).
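A compact (single crew base, sequential) Python rendering of this enumeration logic might look as follows; the legality checks of Algorithm 1 are abstracted into a duty_ok predicate, which is an assumption of this sketch:

```python
def enumerate_duties(flights, graph_f, duty_ok):
    """DFS-based legal duty enumeration mirroring Algorithm 1 for one crew base.
    graph_f: dict mapping a flight to its legally connectable next flights;
    duty_ok(seq): True iff the flight sequence satisfies the duty constraints."""
    duties = []

    def dfs(duty, parent):
        for child in graph_f.get(parent, []):
            duty.append(child)
            if duty_ok(duty):
                duties.append(list(duty))   # yield the legal duty
                dfs(duty, child)            # try to extend it further
            duty.pop()

    for f in flights:
        if duty_ok([f]):
            duties.append([f])
            dfs([f], f)
    return duties
```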
The preprocessed sets of legal duties and the corresponding duty overnight-
connection graphs are utilized to enumerate legal pairings for any input
flight set (say $\mathcal{F}_{*}$) or a duty set (say $\mathcal{D}_{*}$), when
required in real-time in other modules of the $AirCROP$. Its procedure,
formalized in Algorithm 2, is elaborated below. For legal pairing enumeration,
the same crew base driven parallel architecture is utilized in which the
process is decomposed into independent sub-processes, one for each crew base
$b\in\mathcal{B}$, running in parallel on idle-cores of the CPU (line 1). In
each of these sub-processes, the first step is to update $\mathcal{D}_{b}$ and
$\mathcal{G}^{d}_{b}$, by removing duties $\notin\mathcal{D}_{*}$ if
$\mathcal{D}_{*}$ is input, or those duties that cover flights
$\notin\mathcal{F}_{*}$ if $\mathcal{F}_{*}$ is input (line 2). Subsequently,
the enumeration of legal pairings, starting from each duty ($duty$)
$\in\mathcal{D}_{b}$, is explored (line 3). In that:
* •
the $duty$ is pushed into an empty candidate pairing stack, given by
$pairing$, only if the departure airport of $duty$ is same as the crew base
$b$ (lines 4-5).
* •
the $pairing$ is checked for satisfaction of pairing constraints
$\mathcal{C}_{other}$, and if satisfied, $pairing$ is further checked for
satisfaction of end-city constraint $\mathcal{C}_{base}$, which ensures that
the arrival airport of the $pairing$’s last duty is same as the crew base $b$.
* –
If $pairing$ satisfies $\mathcal{C}_{base}$, it is classified as legal, and is
added to the desired pairing set $\mathcal{P}_{*}$ (lines 7-8).
* –
If $pairing$ does not satisfy $\mathcal{C}_{base}$, it is not complete, and
more duties are required to be covered in it to complete the legal duty-
sequence. This is only possible if $duty$ has at least one overnight-rest
connection in $\mathcal{G}^{d}_{b}$. And if it does, the DFS() sub-routine,
similar to the one used in legal duty enumeration, is called recursively to
enumerate legal pairings, starting from a parent duty node ($parent$), by
exploring its all successive paths in $\mathcal{G}^{d}_{b}$ in a depth-first
manner (lines 18-28). In each recursion:
* $\circ$
a child duty node ($child$) is pushed into the $pairing$ (line 19).
* $\circ$
the updated duty-sequence in $pairing$ is checked for satisfaction of first
$\mathcal{C}_{other}$ and then $\mathcal{C}_{base}$ (lines 20-21).
* $\circ$
if it satisfies both constraints, then $pairing$ is complete (legal), and is
yielded to the desired pairing set $\mathcal{P}_{*}$ (line 22).
* $\circ$
if it satisfies $\mathcal{C}_{other}$ but not $\mathcal{C}_{base}$, then
another recursion of DFS() with $child$ as new $parent$ is called, only if
$child$ has at least one duty overnight-rest connection in
$\mathcal{G}^{d}_{b}$ (lines 23-25).
In the above way, all legal pairings, starting from $duty$, are enumerated
using the DFS() sub-routine. Subsequently, $duty$ is popped out of $pairing$
(line 13), and the legal pairing enumeration using other duties
$\in\mathcal{D}_{b}$ is explored (line 3). Once, all the sub-processes are
complete, the desired pairing set $\mathcal{P}_{*}$ is returned (line 17).
### 3.2 Initial Feasible Solution Generator
An initial feasible solution (IFS) is any set of pairings, covering all
flights in the given flight schedule, which is used to initialize a CPOP
solution approach. For large-scale CPOPs, generation of an IFS standalone is a
computationally-challenging task. This module is designed to generate a
reasonably-sized IFS in a time-efficient manner for large and complex flight
networks, which is then used to initialize the Optimization Engine of
$AirCROP$. For this, it employs a novel Integer Programming based Divide-and-
cover Heuristic (IPDCH), which relies on: (a) a divide-and-cover strategy to
decompose the input flight schedule into sufficiently-small flight subsets,
and (b) integer programming to find a lowest-cost pairing set, covering the
maximum possible flights for each of the decomposed flight subsets.
The procedure of the proposed IPDCH, formalized in Algorithm 3, is elaborated
below.
Input: $\mathcal{F},~{}K,~{}\texttt{Pairing\\_Gen()}$
Output: $\mathcal{P}_{IFS}$
1 while _all flights $\in\mathcal{F}$ are not covered in $\mathcal{P}_{IFS}$_
do
$\mathcal{F}_{K}\leftarrow$ Select $K$ random flights from $\mathcal{F}$
without replacement
$\triangleright$ $K<F$
2
$\mathcal{P}_{K}\leftarrow~{}\texttt{Pairing\\_Gen(}\mathcal{F}_{K}\texttt{)}$
$\mathcal{F}_{K^{\prime}}\leftarrow$ Flights covered in $\mathcal{P}_{K}$
$\triangleright$ $K^{\prime}\leq K$
3 Add remaining flights
$\left(\mathcal{F}_{K}\backslash\mathcal{F}_{K^{\prime}}\right)$ back to
$\mathcal{F}_{K}$
4 Formulate the IPP using flights in $\mathcal{F}_{K^{\prime}}$ and pairings
in $\mathcal{P}_{K}$
5 $\mathcal{P}_{IP}\leftarrow$ Solve the IPP using an MIP solver, and select
pairings corresponding to non-zero variables
6 Add pairings from $\mathcal{P}_{IP}$ to $\mathcal{P}_{IFS}$
7 Replace flights in $\mathcal{F}$ if it becomes empty
8
9 end while
10return $\mathcal{P}_{IFS}$
Algorithm 3 Procedure for IFS generation using the proposed IPDCH
Being an iterative heuristic, IPDCH terminates when all flights in the input
set are covered by pairings in the desired IFS, notated as $\mathcal{P}_{IFS}$
(line 1). The input to the heuristic involves the given flight schedule
$\mathcal{F}$ (with $F$ number of flights), the pairing generation sub-routine
Pairing_Gen() (presented in Section 3.1), and a pre-defined decomposition
parameter $K$, which regulates the number of flights to be selected from
$\mathcal{F}$ in each IPDCH-iteration. The setting of $K$ largely depends upon
the available computational resources, and the characteristics of the input
flight dataset (as highlighted in Section 4.3.3). In each IPDCH-iteration,
first a flight subset, say $\mathcal{F}_{K}$ $\left(K<F\right)$, is formed by
randomly selecting $K$ number of flights from $\mathcal{F}$ without
replacement (line 2). Subsequently, $\mathcal{F}_{K}$ is fed as input to the
Pairing_Gen() sub-routine to enumerate the set of all possible legal pairings,
say $\mathcal{P}_{K}$ (line 3). Notably, all flights in $\mathcal{F}_{K}$ may
not get covered by pairings in $\mathcal{P}_{K}$, as random selection of
flights does not guarantee legal connections for all selected flights. Let
$\mathcal{F}_{K^{\prime}}$ $\left(K^{\prime}\leq K\right)$ be the set of
flights covered in $\mathcal{P}_{K}$ (line 4). The remaining flights, given by
$\mathcal{F}_{K}\backslash\mathcal{F}_{K^{\prime}}$, are added back to
$\mathcal{F}$ (line 5). Subsequently, $\mathcal{F}_{K^{\prime}}$ and
$\mathcal{P}_{K}$ are used to formulate the corresponding IPP (line 6), which
is then solved using a commercial off-the-shelf MIP solver to find the optimal
IPP solution, say $\mathcal{P}_{IP}$, constituted by pairings corresponding to
only non-zero variables (line 7). The pairings in $\mathcal{P}_{IP}$ are then
added to the desired set $\mathcal{P}_{IFS}$ (line 8). Lastly, the flights in
$\mathcal{F}$ are replaced if it becomes empty (line 9). As soon as
$\mathcal{P}_{IFS}$ covers all the required flights, IPDCH is terminated, and
$\mathcal{P}_{IFS}$ is passed over to the Optimization Engine for its
initialization.
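The loop of Algorithm 3 can be paraphrased as in the sketch below; here pairing_gen and solve_ip are placeholders for the Legal Crew Pairing Generator and the MIP step, respectively, and flight identifiers are assumed hashable:

```python
import random

def ipdch(flights, K, pairing_gen, solve_ip, seed=0):
    """Sketch of the IPDCH loop (Algorithm 3). pairing_gen(batch) enumerates
    the legal pairings of a flight subset; solve_ip(pairings) returns a
    minimum-cost subset of those pairings (e.g., via a MIP solver)."""
    rng = random.Random(seed)
    pool = list(flights)            # flights not yet drawn in this pass
    ifs, covered = [], set()
    while covered != set(flights):
        batch = rng.sample(pool, min(K, len(pool)))
        pool = [f for f in pool if f not in batch]
        pairings = pairing_gen(batch)
        reachable = {f for p in pairings for f in p}
        pool += [f for f in batch if f not in reachable]  # return unused flights
        if pairings:
            chosen = solve_ip(pairings)   # IP over F_K' and P_K
            ifs.extend(chosen)
            covered |= {f for p in chosen for f in p}
        if not pool:                      # replace flights in F if it empties
            pool = list(flights)
    return ifs
```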
### 3.3 Optimization Engine: Interactions between CG-driven LPP-solutioning
and IPP-solutioning
The search for minimal cost, full flight-coverage CPOP solution is enabled by
an optimization engine. It tackles the underlying LPP and IPP through
intermittent interactions of two submodules, namely, CG-driven LPP-solutioning
and IPP-solutioning, tracked by a counter $T$. These submodules are presented
below.
#### 3.3.1 CG-driven LPP-solutioning
As illustrated in Figure 2, this submodule entails several iterations (each
referred to as an LPP iteration, and is tracked by $t$) in each of which: (a)
an LP solver is invoked on the input pairing set, leading to the current LPP
solution $\mathcal{P}_{LP}^{t}$, (b) the corresponding dual od the LPP is
formulated using $\mathcal{P}_{LP}^{t}$, which is then solved to fetch dual
variables (given by vector $Y^{t}$), and (c) a fresh set of pairings
$\mathcal{P}_{CG}^{t}$, that promises associated cost-improvement, is
generated using a domain-knowledge driven CG heuristic. For the first LPP
iteration ($t=1$), the input to the LP solver is either $\mathcal{P}_{IFS}$ if
$T=1$, or $\mathcal{P}_{IP}^{T-1}$ if $T>1$. For any subsequent LPP iteration
($t>1$), the input comprises the current $\mathcal{P}_{CG}^{t}$ and
$\mathcal{P}_{LP}^{t}$. In this background, each of these LPP iterations is
implemented in the following three phases (for ease of reference, the
notations introduced in these phases are kept independent of the LPP iteration
counter $t$; however, these notations are superscripted by $t$ in the
corresponding discussions and pseudocodes with reference to a particular LPP
iteration):
* •
In the first phase, a primal of the LPP (Equations 4 to 6) is formulated from
the input pairing set, and is solved using an interior-point method based
commercial off-the-shelf LP solver (Gurobi Optimization, 2019). In the
resulting LPP solution, a primal variable $x_{j}$, varying from $0$ to $1$, is
assigned to each pairing $p_{j}$ in the input pairing set. These $x_{j}$s
together constitute the primal vector, notated as
$X~{}\left(=[x_{1}~{}x_{2}~{}x_{3}~{}...~{}x_{P}]^{\mathsf{T}}\right)$. The
set of $x_{j}$s with non-zero values ($x_{j}\neq 0$) and the set of
corresponding pairings are notated as $X_{LP}$ and $\mathcal{P}_{LP}$,
respectively.
$\text{Minimize}~{}Z_{LP}^{p}=\sum_{j=1}^{P}c_{j}x_{j}+\psi_{D}\cdot\left(\sum_{i=1}^{F}\left(\sum_{j=1}^{P}a_{ij}x_{j}-1\right)\right)=\sum_{j=1}^{P}\left(c_{j}+\psi_{D}\cdot\sum_{i=1}^{F}a_{ij}\right)x_{j}-F\cdot\psi_{D},$ (4)
$\text{subject to}\quad\sum_{j=1}^{P}a_{ij}x_{j}\geq 1,\quad\forall i\in\\{1,2,...,F\\}$ (5)
$\qquad\qquad\quad x_{j}\in\mathbb{R}=[0,1],\quad\forall j\in\\{1,2,...,P\\}$ (6)
It is to be noted that the minimization of $Z_{LP}^{p}$ always leads to a
solution with all primal variables $x_{j}\leq 1$, even without explicitly
imposing the corresponding constraint, Equation 6 (Vazirani, 2003). Hence,
the contribution of each pairing in the LPP solution, given by its $x_{j}$,
can effectively be treated as $x_{j}\in\mathbb{R}_{\geq 0}$ in place of
Equation 6.
* •
In the second phase, dual variables are extracted from the current LPP
solution. For this, the dual of the LPP (Equations 7 to 9) is formulated using
the pairing set $\mathcal{P}_{LP}$, and is solved using an interior-point
method (Andersen & Andersen, 2000) based non-commercial LP solver (Virtanen et
al., 2020), to fetch the optimal dual solution. In that, a dual variable
$y_{i}$ represents the shadow price corresponding to the $i^{th}$
flight-coverage constraint in the primal. The optimal dual vector, constituted
by all $y_{i}$s in the optimal dual solution, is notated as
$Y=[y_{1}\ y_{2}\ y_{3}\ \ldots\ y_{F}]^{\mathsf{T}}$, whose dimension is
equal to $F$.
$$\text{Maximize}\quad Z_{LP}^{d}=\sum_{i=1}^{F}y_{i}-F\cdot\psi_{D},\qquad(7)$$

$$\text{subject to}\quad\sum_{i=1}^{F}a_{ij}y_{i}\leq c_{j}+\psi_{D}\cdot\sum_{i=1}^{F}a_{ij},\qquad\forall j\in\{1,2,\ldots,P_{LP}\},\qquad(8)$$

$$y_{i}\in\mathbb{R}_{\geq 0},\qquad\forall i\in\{1,2,\ldots,F\},\qquad(9)$$

where $P_{LP}$ is the number of pairings in the set $\mathcal{P}_{LP}$, and
$y_{i}$ is the dual variable corresponding to the $i^{th}$ flight-coverage
constraint.
Notably, in a conventional approach, the optimal $Y$ is directly computed from
the optimal basis of the primal solution (obtained in the first phase), using
the principles of duality theory, particularly the theorem of complementary
slackness (Bertsimas & Tsitsiklis, 1997), without explicitly solving the
corresponding dual. However, in the second phase, solving the dual explicitly
using the interior-point method (Andersen & Andersen, 2000) helps stabilize
the oscillating behavior of the dual variables over successive LPP iterations
(the bang-bang effect, as discussed in Section 2.3). Moreover, this
interior-point method is available only via a non-commercial LP solver
(Virtanen et al., 2020), and to ensure a time-efficient search, the above dual
is formulated using the pairings $\in\mathcal{P}_{LP}$, instead of pairings
from the large-sized input pairing set.
* •
In the last phase, the availability of the dual variables from the second
phase paves the way for the solution to the pricing sub-problem. It aims to
generate those (non-basic) legal pairings which, if included as part of the
input to the next LPP iteration, promise a better-cost (or at least a
similar-cost) LPP solution compared to the current one. Such non-basic
pairings are identified using a reduced cost metric, given by $\mu_{j}$
(Equation 10), which if negative (as CPOP is a minimization problem) indicates
the potential of the pairing to further reduce the cost of the current LPP
solution $Z_{LP}^{p}$, when included in the current basis (Bertsimas &
Tsitsiklis, 1997). Moreover, the potential of such a pairing to further reduce
the current $Z_{LP}^{p}$ is proportional to the magnitude of its $\mu_{j}$
value.
$$\mu_{j}=c_{j}-\mu d_{j},\quad\text{where}\quad\mu d_{j}=\sum_{i=1}^{F}\left(a_{ij}\cdot y_{i}\right)=\sum_{f_{i}\in p_{j}}y_{i}\quad(\text{the dual cost component of }\mu_{j}).\qquad(10)$$
As mentioned in Section 2.3, the standard CG practices generate a complete
pricing network and solve it as a resource-constrained shortest-path
optimization problem, to identify only the pairing(s) with negative reduced
cost(s). However, generation of a complete pricing network for CPOPs with
large-scale and complex flight networks is computationally intractable. To
overcome this challenge, a domain-knowledge driven CG heuristic (Aggarwal,
Saxena, et al., 2020) is employed here to generate a set of promising pairings
(of pre-defined size, the criterion for which is discussed in Section 4.2).
Notably, the merit of this CG heuristic lies in the fact that from within the
larger pool of pairings with negative $\mu_{j}$, besides selecting pairings
randomly, it also selects pairings in a guided manner. In that, the selection
of such pairings is guided by optimal solution features at a set level and an
individual-pairing level, and by re-utilization of past computational
efforts. These optimal solution features relate to the minimization of
deadheads and the maximization of crew utilization, respectively. In essence,
while the standard CG practices present an equal opportunity for any pairing
with a negative $\mu_{j}$ to qualify as an input for the next LPP iteration,
this CG heuristic, besides ensuring that the pairings have negative
$\mu_{j}$, prioritizes some pairings over others via its two-pronged strategy:
exploration of the new pairings’ space and re-utilization of pairings from the
past LPP iterations. In that:
* –
the exploration of the new pairings’ space is guided by three CG strategies,
which are elaborated below.
* $\circ$
Deadhead Reduction strategy ($CGD$): this strategy prioritizes a set of legal
pairings that is characterized by low deadheads, a feature which domain
knowledge recommends for optimality at a set level. To exploit this optimality
feature, $CGD$ generates a new pairing set $\mathcal{P}_{CGD}$, which not only
provides an alternative way to cover the flights involved in a subset of the
current $\mathcal{P}_{LP}$, but also ensures that some of these flights get
covered with zero deadheads. It promises propagation of the zero-deadhead
feature over successive LPP iterations, as: (a) $\mathcal{P}_{CGD}$ alongside
the current $\mathcal{P}_{LP}$ forms a part of the input for the next LPP
iteration; (b) $\mathcal{P}_{CGD}$ provides scope for better (zero-deadhead)
coverage of some flights, compared to the current $\mathcal{P}_{LP}$; and (c)
$\mathcal{P}_{CGD}$ may focus on zero-deadhead coverage of different flights
in different LPP iterations.
* $\circ$
Crew Utilization enhancement strategy ($CGU$): this strategy prioritizes a set
of legal pairings, each member of which is characterized by high crew
utilization, a feature which domain knowledge recommends for optimality at an
individual-pairing level. To exploit this optimality feature, $CGU$: (a)
introduces a new measure, namely, the crew utilization ratio, given by
$\gamma_{j}$ (Equation 11), to quantify the degree of crew utilization in a
pairing $p_{j}$ at any instant; (b) identifies pairings from the current
$\mathcal{P}_{LP}$ that are characterized by a high dual cost component ($\mu
d_{j}$, Equation 10), reflecting in turn those constitutive flights that
have high values of the dual variables $y_{i}$, and hence the potential of
these flights to generate new pairings with more negative $\mu_{j}$; and (c)
utilizes these flights to generate promising pairings, from which only the
ones with high $\gamma_{j}$ are picked to constitute the new pairing set
$\mathcal{P}_{CGU}$.
$$\gamma_{j}=\frac{1}{\text{number of duties in }p_{j}}\cdot\sum_{d\in p_{j}}\frac{\text{working hours in duty }d}{\text{permissible hours of duty }d}\qquad(11)$$
In doing so, $CGU$ promises propagation of the higher crew utilization ratio
over successive LPP iterations, given that in each LPP iteration,
$\mathcal{P}_{CGU}$ alongside the current $\mathcal{P}_{LP}$ forms a part of
the input for the next LPP iteration.
* $\circ$
Random exploration strategy ($CGR$): this strategy, unlike $CGU$ and $CGD$
which are guided by optimal solution features, pursues a random and unbiased
exploration of the new pairings’ space, independent of the current LPP
solution. It involves the generation of new pairings for a randomly selected
set of legal duties, from which only the pairings with negative reduced cost
are selected to constitute the new pairing set $\mathcal{P}_{CGR}$. Here, a
random set of legal duties is used instead of a random set of flights, as the
former has a higher probability of generating legal pairings, given that a
majority of the pairing legality constraints get satisfied during the
preprocessing of legal duties.
* –
the re-utilization of pairings from the past LPP iterations is guided by an
Archiving strategy ($CGA$), which prioritizes a set of legal pairings
comprising those flight-pairs that, as per the existing LPP solution, bear
better potential for improvement in the objective function. Such a pairing
set, originating from flight-pair level information, is extracted from an
archive (denoted by $\mathcal{A}$) of the previously generated pairings. In
doing so, $CGA$ facilitates re-utilization of past computational efforts, by
providing an opportunity for a previously generated pairing to be re-inducted
in the current pairing pool. For this, $CGA$:
* $\circ$
updates the archive $\mathcal{A}$ in each LPP iteration such that any pairing
is stored/retrieved with reference to a unique index $(f_{m},f_{n})$ reserved
for any legal flight-pair in that pairing.
* $\circ$
introduces a new measure, namely, reduced cost estimator, given by $\eta_{mn}$
(Equation 12), for a flight-pair $(f_{m},f_{n})$ in $\mathcal{A}$. In each LPP
iteration, this estimator is computed for all the flight-pairs present in
$\mathcal{A}$, by fetching $f_{m}$, $f_{n}$, $y_{m}$ and $y_{n}$.
$$\eta_{mn}=\texttt{flying\_cost}(f_{m})+\texttt{flying\_cost}(f_{n})-y_{m}-y_{n}=\sum_{i\in\{m,n\}}\left(\texttt{flying\_cost}(f_{i})-y_{i}\right)\qquad(12)$$
Notably, this formulation is analogous to Equation 10, except that instead of
the complete cost of a pairing, only the flying costs corresponding to the
flights in a legal flight-pair are accounted for. Given this, $\eta_{mn}$ may
be seen as an indicator of $\mu_{j}$ at the flight-pair level (a small
computational sketch of $\gamma_{j}$ and $\eta_{mn}$ follows this list).
* $\circ$
recognizes that, towards further improvement in the current LPP solution, it
may be prudent to include as a part of the input for the next LPP iteration
the new pairing set $\mathcal{P}_{CGA}$, constituted by preferentially picking
pairings from $\mathcal{A}$ that cover flight-pairs with lower $\eta_{mn}$
values.
In doing so, $CGA$ pursues the goal of continual improvement in the objective
function, while relying on the flight-pair level information embedded in the
LPP solution of the current LPP iteration, and re-utilizing the computational
efforts spent up to that iteration.
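As referenced above, the two scalar metrics used by $CGU$ and $CGA$ are straightforward to compute. The following is a minimal sketch of Equations 11 and 12; the `Duty` fields and all numeric values are illustrative assumptions, not the paper's data model.

```python
from dataclasses import dataclass

@dataclass
class Duty:
    working_hours: float       # illustrative fields, not the paper's data model
    permissible_hours: float

def crew_utilization_ratio(duties):
    """gamma_j of Equation 11: mean working/permissible ratio over a pairing's duties."""
    return sum(d.working_hours / d.permissible_hours for d in duties) / len(duties)

def reduced_cost_estimator(flying_cost_m, flying_cost_n, y_m, y_n):
    """eta_mn of Equation 12: the flight-pair analogue of the reduced cost mu_j."""
    return (flying_cost_m - y_m) + (flying_cost_n - y_n)

print(crew_utilization_ratio([Duty(7.5, 8.0), Duty(6.0, 8.0)]))     # 0.84375
print(reduced_cost_estimator(900.0, 700.0, y_m=1100.0, y_n=650.0))  # -150.0
```

A negative $\eta_{mn}$, like a negative $\mu_{j}$, flags the archived pairings covering the flight-pair $(f_{m},f_{n})$ as worth re-inducting.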
For further details and the associated nitty-gritty of the above
domain-knowledge driven CG heuristic, interested readers are referred to the
authors’ previous work, Aggarwal, Saxena, et al. (2020). Once this CG
heuristic generates a set of promising pairings $\mathcal{P}_{CG}$ of
pre-defined size, it is merged with the current $\mathcal{P}_{LP}$, and fed as
the input to the next LPP iteration ($t\mathrel{+}=1$).
These LPP iterations are repeated until the termination criterion of this
submodule is met: if the cost-improvement falls below a pre-specified
cost-threshold, say $Th_{cost}$ (in USD), over a pre-specified number of
successive LPP iterations, say $Th_{t}$, the submodule is terminated. The
settings of these pre-specified limits, $Th_{cost}$ and $Th_{t}$, are
highlighted in Section 4.2. After termination, the final LPP solution
$\mathcal{P}_{LP}^{T}$ is passed over to the IPP-solutioning submodule for its
integerization.
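Before moving on, the following minimal, self-contained sketch illustrates one LPP iteration (the three phases above) together with this termination check on made-up toy data. SciPy's `linprog` is used for both the primal and the dual purely to keep the sketch self-contained (the paper itself solves the primal with Gurobi's interior-point method), and the incidence matrix, costs, and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 flights (rows) x 4 candidate pairings (columns).
A = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])          # a_ij = 1 if pairing j covers flight i
c = np.array([10., 8., 15., 6.])      # pairing costs c_j
psi_D = 100.                          # deadhead penalty weight of Equation 4
F, P = A.shape
c_aug = c + psi_D * A.sum(axis=0)     # c_j + psi_D * sum_i a_ij

# Phase 1 -- primal (Equations 4 to 6): A x >= 1 written in linprog's "<=" form.
primal = linprog(c_aug, A_ub=-A, b_ub=-np.ones(F),
                 bounds=[(0, 1)] * P, method="highs")
Z_lp = primal.fun - F * psi_D         # recover Z_LP^p of Equation 4

# Phase 2 -- dual (Equations 7 to 9), solved explicitly; negate to maximize.
dual = linprog(-np.ones(F), A_ub=A.T, b_ub=c_aug,
               bounds=[(0, None)] * F, method="highs")
y = dual.x                            # shadow prices y_i, one per flight

# Phase 3 -- reduced costs of Equation 10 for candidate pairings.
mu = c - A.T @ y                      # mu_j = c_j - sum_{f_i in p_j} y_i
promising = np.flatnonzero(mu < 0)    # pairings worth feeding to the next iteration

# Termination check: every improvement over the last th_t iterations <= th_cost.
def should_terminate(costs, th_cost=100.0, th_t=10):
    if len(costs) <= th_t:
        return False
    gains = [costs[k] - costs[k + 1] for k in range(len(costs) - th_t - 1,
                                                    len(costs) - 1)]
    return all(g <= th_cost for g in gains)

print(Z_lp, y, mu, promising, should_terminate([Z_lp]))
```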
#### 3.3.2 IPP-solutioning
This submodule receives as input the LPP solution $\mathcal{P}^{T}_{LP}$, and
aims to find therein a full-coverage integer solution, notated as
$\mathcal{P}^{T}_{IP}$. Towards it, an IPP (Equations 1 to 3) is formulated
using $\mathcal{P}^{T}_{LP}$ and $\mathcal{F}$, and solved using a
branch-and-cut algorithm based off-the-shelf commercial MIP solver (Gurobi
Optimization, 2019). At each node of the MIP-search tree, this solver
maintains a valid lower bound (cost of the LPP solution) and a best upper
bound (cost of the IPP solution), and it self-terminates if the gap between
these two bounds becomes zero, or all branches in the MIP-search tree have
been explored. Considering that the MIP-search for large-scale CPOPs is
extremely time-consuming, a pre-defined time limit, notated as $Th_{ipt}$
(setting highlighted in Section 4.2), is used to terminate this MIP solver if
it does not terminate by itself beforehand. Once $\mathcal{P}^{T}_{IP}$ is
obtained, it is passed back to the previous submodule for the next LPP-IPP
interaction ($T\mathrel{+}=1$), provided the termination criterion of the
Optimization Engine is not satisfied.
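As a hedged illustration of this step, the sketch below formulates a toy full flight-coverage IPP in the spirit of Equations 1 to 3 using the Gurobi Python API (gurobipy, which requires a Gurobi installation and license) and caps the branch-and-cut search with a $Th_{ipt}$-style time limit; the toy costs and the `cover` encoding are assumptions, not the paper's data.

```python
import gurobipy as gp
from gurobipy import GRB

def solve_ipp(costs, cover, n_flights, th_ipt_seconds=1200):
    """Sketch: pick a min-cost subset of binary pairings covering every flight."""
    m = gp.Model("ipp")
    m.Params.TimeLimit = th_ipt_seconds          # Th_ipt cap on the MIP-search
    x = m.addVars(len(costs), vtype=GRB.BINARY)  # one binary variable per pairing
    m.setObjective(gp.quicksum(costs[j] * x[j] for j in range(len(costs))),
                   GRB.MINIMIZE)
    for i in range(n_flights):                   # flight-coverage constraints
        m.addConstr(gp.quicksum(x[j] for j in cover[i]) >= 1)
    m.optimize()                                 # stops at gap 0 or the time limit
    return [j for j in range(len(costs)) if x[j].X > 0.5]

# cover[i] lists the indices of the pairings that include flight i.
print(solve_ipp([10, 8, 15, 6], {0: [0, 2], 1: [1, 2], 2: [3]}, n_flights=3))
```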
Overarching Optimization Engine
In the wake of the above, the procedure of the overarching Optimization
Engine, formalized in Algorithm 4, is elaborated below.
Input: $\mathcal{F}$, $\mathcal{P}_{IFS}$, $Th_{cost}$, $Th_{t}$, $Th_{ipt}$,
$\texttt{Pairing\_Gen()}$, $\texttt{CGD()}$, $\texttt{CGU()}$,
$\texttt{CGR()}$, $\texttt{CGA()}$
Output: $\mathcal{P}^{\star}_{IP}$
1 $T\leftarrow 1$ while _termination criterion of Optimization Engine is not
met_ do
2 $\triangleright$ CG-driven LPP-solutioning: $t\leftarrow 1$ while
_termination criterion of CG-driven LPP-solutioning is not met_ do
3 if _$t=1$ and $T=1$_ then
4 Formulate the primal of the LPP using $\mathcal{P}_{IFS}$ and $\mathcal{F}$
5 else if _$t=1$ and $T>1$_ then
6 Formulate the primal of the LPP using $\mathcal{P}^{T-1}_{IP}$ and
$\mathcal{F}$
7 else
8 Formulate the primal of the LPP using
$\mathcal{P}^{t-1}_{CG}\cup\mathcal{P}^{t-1}_{LP}$ and $\mathcal{F}$
9 end if
10 $\mathcal{P}^{t}_{LP},~{}X^{t}_{LP}\leftarrow$ Solve the primal using the
interior-point method based LP solver $\triangleright$ Termination of the CG-
driven LPP-solutioning: if _cost-improvements $\leq Th_{cost}$ over last
$Th_{t}$ number of successive LPP iterations_ then
11 $\mathcal{P}^{T}_{LP}\leftarrow\mathcal{P}^{t}_{LP}$ Break
12 end if
13 Formulate the dual of the LPP using $\mathcal{F}$ and
$\mathcal{P}^{t}_{LP}$ $Y^{t}\leftarrow$ Solve the dual using the interior-
point method based LP solver $\triangleright$ Solution to pricing sub-problem
using the CG heuristic:
$\mathcal{P}^{t}_{CGD}\leftarrow\texttt{CGD($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$
$\mathcal{P}^{t}_{CGU}\leftarrow\texttt{CGU($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$
$\mathcal{P}^{t}_{CGR}\leftarrow\texttt{CGR($Y^{t},\ldots$)}$
$\mathcal{P}^{t}_{CGA}\leftarrow\texttt{CGA($\mathcal{P}^{t}_{LP},X^{t}_{LP},Y^{t},\ldots$)}$
$\mathcal{P}^{t}_{CG}\leftarrow\mathcal{P}^{t}_{CGD}\cup\mathcal{P}^{t}_{CGU}\cup\mathcal{P}^{t}_{CGR}\cup\mathcal{P}^{t}_{CGA}$
$t\mathrel{+}=1$
14 end while
15 $\triangleright$ IPP-solutioning: Formulate the IPP using
$\mathcal{P}^{T}_{LP}$ and $\mathcal{F}$ $\mathcal{P}^{T}_{IP}\leftarrow$
Solve the IPP using a branch-and-cut algorithm based MIP solver until its run-
time becomes $\geq Th_{ipt}$ $\triangleright$ Termination of the Optimization
Engine: if _$Z^{T}_{IP}\left(\text{cost of
}\mathcal{P}^{T}_{IP}\right)=Z^{T}_{LP}\left(\text{cost of
}\mathcal{P}^{T}_{LP}\right)$_ then
16 $\mathcal{P}^{\star}_{IP}\leftarrow\mathcal{P}^{T}_{IP}$ Break
17 end if
18 $T\mathrel{+}=1$
19 end while
return $\mathcal{P}_{IP}^{\star}$
Algorithm 4 Procedure for the Optimization Engine
Its input involves the given flight set $\mathcal{F}$; the generated IFS
$\mathcal{P}_{IFS}$; the pre-defined termination parameters $Th_{cost}$ &
$Th_{t}$ (for CG-driven LPP-solutioning) and $Th_{ipt}$ (for IPP-solutioning);
and the sub-routines for the Legal Crew Pairing Generator
($\texttt{Pairing\_Gen()}$) and the four CG strategies ($\texttt{CGD()}$,
$\texttt{CGU()}$, $\texttt{CGR()}$ and $\texttt{CGA()}$) of the proposed CG
heuristic. In each LPP-IPP interaction of the Optimization Engine, first, the
CG-driven LPP-solutioning is executed (lines 3-25). It entails several LPP
iterations (tracked by $t$), in each of which the first step is to formulate
the primal using $\mathcal{F}$ and the respective input pairing set. This
input pairing set is:
* •
$\mathcal{P}_{IFS}$, if the first LPP iteration ($t=1$) of the first LPP-IPP
interaction ($T=1$) is being executed (lines 5-6).
* •
$\mathcal{P}^{T-1}_{IP}$, if the first LPP iteration ($t=1$) of any subsequent
LPP-IPP interaction ($T>1$) is being executed (lines 7-8).
* •
$\mathcal{P}^{t-1}_{CG}\cup\mathcal{P}^{t-1}_{LP}$, if any subsequent LPP
iteration ($t>1$) of any LPP-IPP interaction ($T\geq 1$) is being executed
(lines 9-11).
Once the primal is formulated, it is solved using the corresponding LP solver
to obtain the current optimal LPP solution, constituted by
$\mathcal{P}^{t}_{LP}$ and $X^{t}_{LP}$ (line 12). Subsequently, the
termination criterion of CG-driven LPP-solutioning is checked (lines 13-16).
If it is terminated, then the current LPP solution $\mathcal{P}^{t}_{LP}$ is
fetched as the final LPP solution $\mathcal{P}^{T}_{LP}$ of this LPP-IPP
interaction. If not, then a dual is formulated using $\mathcal{P}^{t}_{LP}$
and $\mathcal{F}$ (line 17), which is then solved using the corresponding LP
solver to obtain the current optimal dual vector $Y^{t}$ (line 18). Using the
current $\mathcal{P}^{t}_{LP}$, $X^{t}_{LP}$ and $Y^{t}$, a fresh set of
pairings $\mathcal{P}^{t}_{CG}$ is obtained using the CG heuristic, which is
constituted by the new pairing sets from the four underlying CG strategies
(lines 19-23). At the end of the LPP iteration $t$, the fresh set of pairings
$\mathcal{P}^{t}_{CG}$ is combined with the current $\mathcal{P}^{t}_{LP}$ to
serve as input pairing set for the subsequent LPP iteration
($t\mathrel{+}=1$). Once this submodule is terminated, the resulting
$\mathcal{P}^{T}_{LP}$ is passed over to the IPP-solutioning for its
integerization, wherein, the MIP solver is used to obtain the IPP solution
$\mathcal{P}^{T}_{IP}$ (lines 26 and 27). In that, the pre-defined $Th_{ipt}$
time-limit is used to terminate the MIP-search, if it does not self-terminate
a priori. Subsequently, the resulting $\mathcal{P}^{T}_{IP}$ is passed back to
the CG-driven LPP-solutioning for the next LPP-IPP interaction
($T\mathrel{+}=1$), or returned as the final integer solution
$\mathcal{P}^{\star}_{IP}$, depending upon the termination condition of the
Optimization Engine (lines 28-32). In that, if the cost of
$\mathcal{P}^{T}_{IP}$, i.e., $Z_{IP}^{T}$, matches the cost of
$\mathcal{P}^{T}_{LP}$, i.e., $Z_{LP}^{p,T}$, then the Optimization Engine is
terminated.
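A compact restatement of this outer loop, with `lpp_solutioning` and `ipp_solutioning` as hypothetical stand-ins for the two submodules, is sketched below; the 30-interaction cap mirrors the practical limit stated in Section 4.1.

```python
def optimization_engine(F, p_ifs, lpp_solutioning, ipp_solutioning,
                        max_interactions=30):
    """Sketch of Algorithm 4's outer loop: alternate the two submodules until
    the IPP cost matches its LPP lower bound (or a practical cap is hit)."""
    p_in = p_ifs
    for T in range(1, max_interactions + 1):
        p_lp, z_lp = lpp_solutioning(p_in, F)   # lines 3-25 of Algorithm 4
        p_ip, z_ip = ipp_solutioning(p_lp, F)   # lines 26-27: integerization
        if z_ip == z_lp:                        # lines 28-32 (equality up to a
            break                               # tolerance in practice)
        p_in = p_ip                             # warm-start the next interaction
    return p_ip                                 # final solution P*_IP
```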
## 4 Computational Experiments
This section first presents the test cases and the computational setup, used
to investigate the utility of $AirCROP$, its modules, and their interactions.
Subsequently, the settings of parameters involved in different modules of
$AirCROP$ are presented. Lastly, the experimental results are discussed.
### 4.1 Test Cases and Computational Setup
The real-world airline test cases, used for experimentation, are detailed in
Table 2. Each of these test cases involves a weekly flight schedule, and have
been provided by the research consortium’s industrial sponsor (from the
networks of US-based airlines).
Table 2: Real-world airline test cases used in this research work

Test Cases | #Flights | #Crew Bases | #Airports | #Legal Duties
---|---|---|---|---
TC-1 | 3202 | 15 | 88 | 454205
TC-2 | 3228 | 15 | 88 | 464092
TC-3 | 3229 | 15 | 88 | 506272
TC-4 | 3265 | 15 | 90 | 446937
TC-5 | 4212 | 15 | 88 | 737184
Figure 3: (a) Geographical representation of TC-5 flight network, where the
red nodes, green edges and yellow nodes represent the airports, scheduled
flights and crew bases, respectively, and (b) legal flight-connections, each
represented by a point in the plot, where for a flight marked on the y-axis,
the connecting flight is marked on the x-axis.
The columns in Table 2, in order of their occurrence, highlight the notation
for each test case; the number of its constituent flights; the number of
constituent crew bases; the number of airports; and the total number of legal
duties involved. It is critical to recognize that the challenge associated
with solutioning these test cases depends not just on the number of flights
involved, but also on the fact that these flights are part of complex flight
networks, characterized by a multiplicity of hubs as opposed to a single hub,
and a multiplicity of crew bases as opposed to a single crew base. In that,
the number of possible legal pairings grows exponentially with the number of
hubs and crew bases. As a sample instance, the geographical representation of
the flight network associated with TC-5, and the legal flight connections
involved in it, are portrayed in Figure 3. Notably, in Figure 3(a), the
presence of multiple hub-and-spoke subnetworks and multiple crew bases
(highlighted in yellow) is evident. Furthermore, the pattern visible in
Figure 3(b) could be attributed to the (minimum and maximum) limits of the
sit-time and overnight-rest constraints. For instance, a flight, say
$f_{500}$, has legal connections only with those flights that depart from the
arrival airport of $f_{500}$, and whose departure-time gap (the difference
between their departure time and the arrival time of $f_{500}$) lies within
the minimum and maximum allowable limits of the sit-time or the
overnight-rest.
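The legality test sketched below illustrates this connection rule; the numeric sit-time and overnight-rest windows are hypothetical placeholders, since the actual limits are airline-specific.

```python
from datetime import datetime, timedelta

# Hypothetical legality windows; real values come from airline regulations.
SIT_MIN, SIT_MAX = timedelta(minutes=30), timedelta(hours=4)
REST_MIN, REST_MAX = timedelta(hours=10), timedelta(hours=24)

def is_legal_connection(arr_airport, arr_time, dep_airport, dep_time):
    """f_m -> f_n is legal if f_n departs from f_m's arrival airport and the
    departure-time gap fits the sit-time or the overnight-rest window."""
    if arr_airport != dep_airport:
        return False
    gap = dep_time - arr_time
    return SIT_MIN <= gap <= SIT_MAX or REST_MIN <= gap <= REST_MAX

print(is_legal_connection("ORD", datetime(2020, 1, 6, 9, 0),
                          "ORD", datetime(2020, 1, 6, 10, 15)))  # True (sit-time)
```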
All the experiments in this research have been performed on an HP Z640
Workstation, which is powered by two Intel® Xeon® E5-2630v3 processors, each
with 16 cores at 2.40 GHz, and 96 GB of RAM. All codes related to $AirCROP$
have been developed using the Python scripting language, in alignment with the
industrial sponsor’s larger vision and preference. Furthermore:
* •
the interior-point method from Gurobi Optimizer 8.1.1 (Gurobi Optimization,
2019) is used to solve the primal in the CG- driven LPP-solutioning submodule.
* •
the interior-point method (Andersen & Andersen, 2000) from SciPy’s linprog
library (Virtanen et al., 2020) is used to solve the dual in the CG-driven
LPP-solutioning submodule.
* •
the branch-and-cut algorithm based MIP solver from Gurobi Optimizer 8.1.1 is
used to solve the IPP in the Initial Feasible Solution Generator and the IPP-
solutioning submodule.
* •
an $AirCROP$-run, in principle, terminates when the cost of the IPP solution
matches the cost of its input LPP solution in a particular LPP-IPP
interaction. However, for practical considerations on the time limit, an
$AirCROP$-run is also allowed to terminate if the IPP and LPP costs do not
conform with each other even after 30 LPP-IPP interactions are over, or after
30 hours of total run-time have elapsed.
### 4.2 Parameter Settings
The settings of the parameters associated with different modules and
submodules of the $AirCROP$ are, as highlighted below.
* •
Initial Feasible Solution Generator: here, the proposed IPDCH involves the
decomposition parameter $K$, which regulates the size of the flight subsets
formed in each IPDCH-iteration. As mentioned before, the setting of $K$
depends on the characteristics of the input flight dataset and the
configuration of the available computational resources. Here, the aim is to
cover all given flights in a time-efficient manner. Hence, it is important to
understand the effect of the setting of $K$ on the time-performance of IPDCH,
as highlighted below.
* –
For a relatively lower value of $K$, smaller flight subsets with fewer legal
flight-connections would be formed in each IPDCH-iteration, leading to the
coverage of relatively fewer unique flights in each of them. Though this by
itself is not a challenge, it would necessitate a significant number of
additional IPDCH-iterations (and the respective run-time), since the number of
unique flights covered per IPDCH-iteration, which by construct reduces over
the iterations, would get further reduced with relatively smaller flight
subsets.
* –
On the flip side, for a relatively higher value of $K$, bigger flight subsets
would be formed, leading to the coverage of a higher number of unique flights
per IPDCH-iteration. Though this may reduce the total number of
IPDCH-iterations required to generate the desired IFS, the overall run-time of
the IPDCH may increase drastically. The rationale is that with a bigger flight
subset in each IPDCH-iteration, the number of possible legal pairings
increases drastically, leading to a huge run-time for their generation as well
as for the subsequent MIP-search.
The above considerations suggest that $K$ should be reasonably sized.
Considering the given computational resources and the results of an initial
exploration of the possible number of pairings for differently-sized flight
sets, the value of $K$ in each IPDCH-iteration is guided by a random integer
between one-eighth and one-fourth of the size of the input flight set
$\mathcal{F}$ (a one-line sketch of this rule is given at the end of this
subsection). It may be noted that this setting of $K$ has been selected
considering the scale and complexity of the given test cases, and it needs to
be revisited if the scale and complexity of the flight network change
drastically.
* •
CG-driven LPP-solutioning: the parameters involved in the termination
criterion of this submodule, $Th_{cost}$ & $Th_{t}$, are set as 100 USD & 10
iterations, respectively, to achieve an LPP solution with a sufficiently good
cost in a reasonably good time. The sensitivity of these parameters towards
$AirCROP$’s performance is discussed in Section 4.3.4. Moreover, the effect of
the size of $\mathcal{P}^{t}_{CG}$ on the performance of this submodule (the
final LPP solution’s cost and the required run-time), and on the demand for
computational resources (dominantly, RAM), is highlighted below.
* –
for a relatively small-sized $\mathcal{P}^{t}_{CG}$, the alternative pairings
available to foster further cost improvement would be quite limited, amounting
to smaller cost benefits in each phase of the CG-driven LPP-solutioning. This
would necessitate far more LPP-IPP interactions to reach the near-optimal
cost. This per se is not a challenge; however, a significant amount of
additional run-time may be required, since: (a) each call of CG-driven
LPP-solutioning demands a minimum of 10 LPP iterations before it can be
terminated, and (b) such calls, when invoked repeatedly, may consume
significant run-time, yet without reasonable cost benefit.
* –
On the other hand, for a very large-sized $\mathcal{P}_{CG}^{t}$, though the
potential for significant cost benefits may exist, the demand on the RAM may
become overwhelming for any CG-driven LPP-solutioning phase to proceed.
The above considerations suggest that the size of $\mathcal{P}_{CG}^{t}$
should be neither too small nor too large. Factoring this in, the experiments
here aim at a $\mathcal{P}_{CG}^{t}$ of approximately a million pairings (a
significant size, yet not overwhelming for 96 GB RAM). Furthermore, for a
search that is not biased in favor of any particular CG strategy, the numbers
of pairings contributed by each CG strategy towards the overall CG heuristic
are kept equable.
* •
IPP-solutioning: as mentioned before, the MIP-search on a large-scale IPP is
time-intensive. Hence, the termination parameter $Th_{ipt}$, which restricts
the run-time of any IPP-solutioning phase if it is not self-terminated a
priori, is reasonably set as 20 minutes; its sensitivity on $AirCROP$’s
performance is discussed in Section 4.3.4.
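As referenced in the first bullet above, the $K$-setting rule admits a one-line sketch; the use of integer division and of a fixed seed are assumptions made for illustration.

```python
import random

rng = random.Random(0)   # fixed seed, cf. the reproducibility notes in Section 4.3.2

def choose_k(n_flights):
    """K drawn uniformly between one-eighth and one-fourth of |F| per IPDCH-iteration."""
    return rng.randint(n_flights // 8, n_flights // 4)

print([choose_k(4212) for _ in range(3)])   # three draws from [526, 1053] for TC-5
```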
### 4.3 Results & Observations
This section presents the experimental results and associated inferences, in
the order highlighted below.
1.
The performance of the proposed $AirCROP$ on the given test cases with the
aforementioned parameter settings is discussed.
2.
The phenomenon referred to as performance variability (Lodi & Tramontani,
2013) is discussed in the context of $AirCROP$. This aspect is pertinent since
some variability in performance (even for the same random seed) is inevitable,
owing to $AirCROP$’s reliance on mathematical programming solvers, which over
different runs may pick different permutations of the rows (flight-coverage
constraints) or columns (pairings).
3.
The impact of the initialization methods: (a) the proposed IPDCH, (b) an
Enhanced-DFS heuristic, earlier proposed by the authors (Aggarwal et al.,
2018), and (c) a commonly adopted Artificial Pairings method (Hoffman &
Padberg, 1993; Vance et al., 1997), on the final performance of $AirCROP$ is
investigated.
4.
The sensitivity of $AirCROP$’s performance to the termination parameters in
the Optimization Engine’s submodules (CG-driven LPP-solutioning and IPP-
solutioning) has been discussed.
#### 4.3.1 AirCROP’s Performance
The results of the $AirCROP$-runs on the given test cases (TC-1 to TC-5) with
the aforementioned parameter settings are reported in Table 3. In that, for
each test case:
* •
the first row marked by “$\mathcal{P}_{IFS}$” highlights the cost associated
with the IFS that initializes the $AirCROP$-run and the run-time consumed in
its generation.
* •
the subsequent rows present the results of the LPP-IPP interactions (marked by
the counter $T$). In that, for a particular $T$, the cost of the LP-solution
passed on for its integerization and the associated time are highlighted. Also
the cost of the IP-solution returned and the associated time are highlighted.
Here, the unit of cost is USD, and the time corresponds to the HH:MM format.
* •
the final crew pairing solution ($\mathcal{P}^{\star}_{IP}$) is highlighted in
the last row (emboldened) marked by “Final Solution”.
It may be noted that the experimental results in the subsequent sections are
presented in the same format, unless any digression is specifically
highlighted.
Table 3: $AirCROP$’s performance∗ on the given test cases

LPP-IPP Interactions | TC-1 | TC-2 | TC-3 | TC-4 | TC-5
---|---|---|---|---|---
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
$\mathcal{P}_{IFS}$ | 85893202 | 00:05 | 81950079 | 00:05 | 51552744 | 00:03 | 131716653 | 00:08 | 89690776 | 00:06
1 | $\mathcal{P}_{LP}^{1}$ | 3468349 | 03:56 | 3493986 | 03:56 | 3483057 | 05:18 | 3595565 | 03:27 | 4583484 | 07:48
$\mathcal{P}_{IP}^{1}$ | 3689420 | 00:20 | 3715798 | 00:20 | 3697204 | 00:20 | 3807233 | 00:20 | 4930789 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3467837 | 02:18 | 3494675 | 01:19 | 3484645 | 02:42 | 3600195 | 01:17 | 4588740 | 02:49
$\mathcal{P}_{IP}^{2}$ | 3557615 | 00:20 | 3587139 | 00:20 | 3590336 | 00:20 | 3679138 | 00:20 | 4734553 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3469591 | 00:47 | 3495254 | 01:22 | 3486614 | 01:59 | 3600813 | 01:16 | 4592143 | 01:46
$\mathcal{P}_{IP}^{3}$ | 3518161 | 00:02 | 3546777 | 00:02 | 3523538 | 00:02 | 3639313 | 00:01 | 4654258 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 3471619 | 01:13 | 3496797 | 00:57 | 3491000 | 01:13 | 3601168 | 01:27 | 4593422 | 02:17
$\mathcal{P}_{IP}^{4}$ | 3489534 | 00:01 | 3505941 | 00:01 | 3496142 | 00:01 | 3621723 | 00:01 | 4634187 | 00:01
5 | $\mathcal{P}_{LP}^{5}$ | 3472403 | 00:31 | 3497106 | 00:23 | 3490420 | 00:56 | 3604082 | 00:37 | 4594282 | 02:14
$\mathcal{P}_{IP}^{5}$ | 3484783 | 00:01 | 3497106 | 00:01 | 3490420 | 00:01 | 3612845 | 00:01 | 4617838 | 00:01
6 | $\mathcal{P}_{LP}^{6}$ | 3473238 | 00:30 | | | | | 3604753 | 00:28 | 4595481 | 01:53
$\mathcal{P}_{IP}^{6}$ | 3473238 | 00:01 | | | | | 3604753 | 00:01 | 4615272 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | | | | | | | | | 4596466 | 01:12
$\mathcal{P}_{IP}^{7}$ | | | | | | | | | 4600428 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | | | | | | | | | 4595613 | 01:42
$\mathcal{P}_{IP}^{8}$ | | | | | | | | | 4595613 | 00:01
Final Solution | 3473238 | 10:05 | 3497106 | 08:46 | 3490420 | 12:55 | 3604753 | 09:24 | 4595613 | 22:52
∗All values in the “Cost” columns are in USD, and all corresponding real
values are rounded-off to the next integer values. All values in the “Time”
columns are in HH:MM format, and all corresponding seconds’ values are
rounded-off to the next minute values.
The above results have been tested by the research consortium’s industrial
sponsor, and verified to be highly competitive compared to the best-practice
solutions known for the different test cases. In general, the obtained
solutions have been found to be superior by about 1.5 to 3.0% in terms of the
hard cost, which reportedly is one of the most important solution-quality
indicators. For reference, a comparison of the obtained solution vis-à-vis the
best known solution is drawn for TC-5 in Table 4, where a significant
difference in terms of the sizes of the pairings can be observed. Notably, the
key features contributing to the lower hard cost relate to the presence of
pairings with relatively lower TAFB, fewer overnight rests, and lower meal
cost. However, the obtained solution also entails more crew changes, some of
which (those involving an aircraft change) negatively impact the soft cost.
Hence, there appears to be a trade-off between the hard cost and the soft cost
(a quick arithmetic check of the TC-5 hard-cost margin follows Table 4).
Table 4: Salient features of $\mathcal{P}^{\star}_{IP}$ for TC-5: $AirCROP$’s solution vis-à-vis the best practice solution

Features | $\bm{AirCROP}$’s solution | Best practice solution
---|---|---
# pairings | 926 | 783
# unique flights covered | 4,212 | 4,212
# deadhead flights | 3 | 3
# overnight-rests | 1,203 | 1,279
# crew changes | 1,002 | 825
# average crew changes per pairing | 1.082 | 1.054
Total TAFB (HH:MM) | 37444:54 | 38189:39
# pairings covering 2 flights | 303 | 205
# pairings covering 3 flights | 17 | 31
# pairings covering 4 flights | 170 | 95
# pairings covering 5 flights | 63 | 37
# pairings covering 6 flights | 202 | 153
# pairings covering 7 flights | 59 | 62
# pairings covering 8 flights | 83 | 90
# pairings covering 9 flights | 19 | 49
# pairings covering 10 flights | 8 | 45
# pairings covering 11 flights | 1 | 10
# pairings covering 12 flights | 1 | 5
# pairings covering 13 flights | 0 | 0
# pairings covering 14 flights | 0 | 1
Hotel cost (USD) | 166,240 | 176,170
Meal cost (USD) | 157,269 | 160,397
Hard cost (USD) | 340,671 | 350,818
Soft cost (USD) | 51,600 | 42,750
Actual flying cost (USD) | 4,203,342 | 4,203,342
Total cost (USD) | 4,595,613 | 4,596,910
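As a quick arithmetic check of the reported hard-cost improvement, using the TC-5 figures from Table 4:

```python
# Hard costs for TC-5 from Table 4 (USD).
aircrop_hard, best_practice_hard = 340_671, 350_818
saving_pct = 100 * (best_practice_hard - aircrop_hard) / best_practice_hard
print(f"{saving_pct:.1f}% lower hard cost")  # ~2.9%, within the reported 1.5-3.0%
```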
#### 4.3.2 Performance Variability in AirCROP
This section investigates the sensitivity of $AirCROP$ with respect to the
sources of variability over multiple runs, even for the same problem. This
study assumes importance considering that performance variability is rather
inevitable when mathematical programming based solution approaches are
employed (Koch et al., 2011). As noted by Lodi & Tramontani (2013),
variability in the performance of LP & MIP solvers may be observed upon
changing the computing platform (which may change the floating-point
arithmetic), permuting the constraints/variables of the respective
mathematical models, or changing the pseudo-random numbers’ seed. These
changes/permutations may lead to an entirely different outcome of the
respective search algorithms (LP & MIP), as highlighted below.
* •
The root source of the performance variability in MIP is imperfect
tie-breaking. A majority of the decisions to be taken during an MIP-search
depend on the ordering of the candidates according to an interim score, as
well as the selection of the best candidate (the one with the best score
value). A perfect score that could fully distinguish between the candidates is
mostly not known, due to the lack of theoretical knowledge, and even if it is
known, it may be too expensive to compute (for instance, in a strong branching
scheme, the best variable to branch on at each node is decided after
simulating one level of branching for each fractional variable; however, this
is performed heuristically to make it a computationally-affordable task for
MIP solvers (Linderoth & Lodi, 2011)). Furthermore, additional ties or
tie-breaks could be induced by changing the floating-point operations, which
inherently change when the computing platform is changed. Amidst such
imperfect tie-breaking, the permutation of the variables/constraints changes
the path within the MIP-search tree, leading to a completely different
evolution of the algorithm, with rather severe consequences.
* •
Depending upon the floating-point arithmetic or the sequence of variables
loaded in an LPP, the performance of the simplex and interior-point methods
may vary.
* •
The performance of the LP and MIP solvers is also affected by the choice of
pseudo-random numbers’ seed, wherever the decisions are made heuristically.
For instance, an interior-point method in the LP solvers performs a (random)
crossover to one of the vertices of the optimal face when the search reaches
its (unique) center.
Table 5: Performance variability assessment for $AirCROP$ on two test
instances∗ (TC-2 and TC-5)
Test Case | LPP-IPP Interactions | Runs with performance variability | Runs without performance variability
---|---|---|---
 | | Run-1 | Run-2 | Run (Seed-$\alpha$) | Run (Seed-$\beta$) | Run (Seed-$\gamma$)
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
TC-2 | $\mathcal{P}_{IFS}$ | 81950079 | 00:05 | 74533686 | 00:04 | 129221508 | 00:08 | 114054265 | 00:07 | 52515476 | 00:04
1 | $\mathcal{P}_{LP}^{1}$ | 3493986 | 03:56 | 3494580 | 03:57 | 3495054 | 03:51 | 3493757 | 03:52 | 3493909 | 03:52
$\mathcal{P}_{IP}^{1}$ | 3715798 | 00:20 | 3746847 | 00:20 | 3769811 | 00:20 | 3711267 | 00:20 | 3722248 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3494675 | 01:19 | 3494540 | 02:18 | 3495311 | 01:57 | 3496733 | 01:57 | 3494657 | 03:02
$\mathcal{P}_{IP}^{2}$ | 3587139 | 00:20 | 3621066 | 00:20 | 3628514 | 00:20 | 3581978 | 00:20 | 3620745 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3495254 | 01:22 | 3496475 | 01:41 | 3497558 | 00:52 | 3499651 | 00:54 | 3496398 | 01:23
$\mathcal{P}_{IP}^{3}$ | 3546777 | 00:02 | 3555152 | 00:06 | 3566092 | 00:11 | 3536050 | 00:01 | 3551149 | 00:03
4 | $\mathcal{P}_{LP}^{4}$ | 3496797 | 00:57 | 3497750 | 01:36 | 3499237 | 01:26 | 3500818 | 01:03 | 3496069 | 01:37
$\mathcal{P}_{IP}^{4}$ | 3505941 | 00:01 | 3525600 | 00:01 | 3516807 | 00:01 | 3520552 | 00:01 | 3543236 | 00:02
5 | $\mathcal{P}_{LP}^{5}$ | 3497106 | 00:23 | 3498588 | 01:40 | 3500169 | 00:42 | 3500504 | 01:02 | 3496706 | 01:01
$\mathcal{P}_{IP}^{5}$ | 3497106 | 00:01 | 3498588 | 00:01 | 3517585 | 00:01 | 3500504 | 00:01 | 3501210 | 00:01
6 | $\mathcal{P}_{LP}^{6}$ | | | | | 3501523 | 00:43 | | | 3499063 | 00:41
$\mathcal{P}_{IP}^{6}$ | | | | | 3504085 | 00:01 | | | 3499063 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | | | | | 3502118 | 00:31 | | | |
$\mathcal{P}_{IP}^{7}$ | | | | | 3502118 | 00:01 | | | |
Final Solution | 3497106 | 08:46 | 3498588 | 12:05 | 3502118 | 11:05 | 3500504 | 09:38 | 3499063 | 12:27
TC-5 | $\mathcal{P}_{IFS}$ | 89690776 | 00:06 | 92080420 | 00:05 | 131443284 | 00:09 | 847887053 | 00:56 | 470430395 | 00:29
1 | $\mathcal{P}_{LP}^{1}$ | 4583484 | 07:48 | 4583476 | 08:00 | 4584525 | 07:28 | 4581988 | 08:47 | 4580130 | 07:36
$\mathcal{P}_{IP}^{1}$ | 4930789 | 00:20 | 4973580 | 00:20 | 4974341 | 00:20 | 4925863 | 00:20 | 4949616 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 4588740 | 02:49 | 4588938 | 05:59 | 4589091 | 02:25 | 4584956 | 04:51 | 4584273 | 03:22
$\mathcal{P}_{IP}^{2}$ | 4734553 | 00:20 | 4765453 | 00:20 | 4782657 | 00:20 | 4749664 | 00:20 | 4753133 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 4592143 | 01:46 | 4591571 | 02:35 | 4589952 | 02:14 | 4587812 | 03:02 | 4585046 | 03:40
$\mathcal{P}_{IP}^{3}$ | 4654258 | 00:20 | 4661078 | 00:20 | 4736313 | 00:20 | 4653279 | 00:20 | 4666390 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 4593422 | 02:17 | 4595741 | 01:49 | 4591145 | 02:36 | 4589247 | 02:00 | 4588952 | 02:56
$\mathcal{P}_{IP}^{4}$ | 4634187 | 00:01 | 4624039 | 00:01 | 4654627 | 00:20 | 4614651 | 00:01 | 4628239 | 00:01
5 | $\mathcal{P}_{LP}^{5}$ | 4594282 | 02:14 | 4599006 | 01:14 | 4592463 | 02:03 | 4590573 | 01:05 | 4589577 | 02:02
$\mathcal{P}_{IP}^{5}$ | 4617838 | 00:01 | 4613385 | 00:01 | 4632708 | 00:02 | 4603938 | 00:01 | 4618710 | 00:01
6 | $\mathcal{P}_{LP}^{6}$ | 4595481 | 01:53 | 4598727 | 01:11 | 4593094 | 02:00 | 4591176 | 01:15 | 4589874 | 01:48
$\mathcal{P}_{IP}^{6}$ | 4615272 | 00:01 | 4605126 | 00:01 | 4625993 | 00:01 | 4591176 | 00:01 | 4607590 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | 4596466 | 01:12 | 4598412 | 01:39 | 4593431 | 01:04 | | | 4590674 | 01:24
$\mathcal{P}_{IP}^{7}$ | 4600428 | 00:01 | 4598412 | 00:01 | 4619643 | 00:01 | | | 4605058 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | 4595613 | 01:42 | | | 4594146 | 01:03 | | | 4591065 | 02:10
$\mathcal{P}_{IP}^{8}$ | 4595613 | 00:01 | | | 4594146 | 00:01 | | | 4591065 | 00:01
Final Solution | 4595613 | 22:52 | 4598412 | 23:37 | 4594146 | 22:27 | 4591176 | 22:59 | 4591065 | 26:32
∗All values in the “Cost” columns are in USD, and all the corresponding real
values are rounded-off to the next integer values. All values in the “Time”
columns are in HH:MM, and all the corresponding seconds’ values are rounded-
off to the next minute values.
In the above background, the plausible reasons for variability in $AirCROP$’s
performance are elaborated below.
* •
Generation of new legal pairings using a parallel architecture: in any LPP
iteration $t$, new legal pairings are generated in parallel, by allocating
sub-processes to the idle cores of the CPU. These sub-processes return their
respective pairing sets as soon as they terminate. This by itself is not a
challenge; however, when $AirCROP$ is re-run, the order in which these
sub-processes terminate may not be the same as before (as it depends on the
state of the CPU), permuting the pairings in the cumulative pairing set
$\mathcal{P}_{CG}^{t}$. This permuted pairing set, when fed as part of the
input to the LP solver in the next LPP iteration, may lead to a different LPP
solution, and hence to a different outcome of the subsequent $AirCROP$ search.
To curb this, the pairings in the set that triggers the LP solver are sorted
in the lexicographical order of their representative strings. These strings
are constructed from the indices of the flights covered in the corresponding
pairings. For instance, the string corresponding to a pairing that covers
flights $f_{1}$, $f_{10}$, $f_{100}$ & $f_{200}$ is $1\_10\_100\_200$.
Given that the pairings are distinct, the resulting strings are distinct too,
allowing for a crisp sorting criterion and ensuring a fixed pairing sequence
in each $AirCROP$-run (both determinism fixes are sketched in code at the end
of this list).
* •
Numerical seed for the generation of pseudo-random numbers: variability may
also be introduced if the numerical seed employed to generate the
pseudo-random numbers used in the proposed modules, or in the utilized LP &
MIP solvers, varies. For instance, use of the default seed method of Python
(i.e., the current time of the computing system) across different $AirCROP$
runs may lead to different pseudo-random numbers each time. This in turn would
trigger variability in the IFS generated by IPDCH (since the random selection
of flights in each of its iterations is impacted), and in the pairing set
resulting from the CG heuristic (since each of the underlying CG strategies is
impacted). Such variability can be negated by the use of a fixed numerical
seed instead of a time-dependent one, as sketched below.
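A minimal sketch of the two determinism fixes described above follows; the tuple encoding of pairings and the assumption that all random draws go through Python's `random` (and, if used, NumPy) are illustrative.

```python
import random

import numpy as np

SEED = 0                  # e.g., Seed-alpha of Table 5
random.seed(SEED)         # replaces Python's default time-based seeding
np.random.seed(SEED)      # assumption: NumPy-based draws are seeded likewise

def pairing_key(pairing):
    """Deterministic key: flights f1, f10, f100, f200 -> '1_10_100_200'."""
    return "_".join(str(i) for i in pairing)

# Distinct pairings give distinct keys, so sorting fixes the order fed to the
# LP solver regardless of which parallel sub-process finished first.
pairings = [(2, 7, 30), (1, 10, 100, 200), (1, 2, 5)]
print(sorted(pairings, key=pairing_key))
# [(1, 10, 100, 200), (1, 2, 5), (2, 7, 30)] -- string order, not numeric
```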
The intriguing questions for researchers relate to the impact that the
presence or absence of these causes of variability may have on the quality of
$AirCROP$’s solutions, in terms of both cost and run-time. Table 5 attempts to
shed light on these questions through empirical evidence for two test cases,
involving 3228 flights (TC-2) and 4212 flights (TC-5), respectively. In each
of these test cases, the effect of variability is revealed through:
* •
two independent runs (Run-1 and Run-2), in each of which the causes of
variability exist, that is: (a) permutations of the pairings generated using
the parallel architecture are possible, and (b) the default seed method of
Python, based on the time of the computing system, applies.
* •
three independent runs, in each of which the causes of variability have been
eliminated, that is: (a) the lexicographical order of the pairings is imposed,
and (b) a fixed numerical seed has been fed for random number generation. For
these runs, the numerical seeds are given by $\alpha=0$, $\beta=1$, and
$\gamma=2$, respectively.
The key observations and inferences that could be drawn from each test case in
Table 5 are highlighted below.
* •
understandably, Run-1 and Run-2 (both subject to the same default, time-based
seeding method) yield different-cost solutions over different run-times.
Importantly, the variation in cost (despite the presence of the causes of
variability) is not alarming, though significantly different run-times may be
required.
* •
each run (corresponding to Seed-$\alpha$, Seed-$\beta$, and Seed-$\gamma$,
respectively) in which the causes of variability have been eliminated, if
repeated, yields the same cost solution in the same run-time, though this is
not shown in the table for paucity of space.
* •
the runs corresponding to the numerical seeds $\alpha$, $\beta$, and $\gamma$,
respectively, differ solely due to the difference in the corresponding random
numbers generated and subsequently utilized. It can be observed that a change
in the numerical seed does not significantly affect the cost quality of the
final $AirCROP$ solution, though the associated run-time may vary
significantly.
The fact that $AirCROP$ offers final solutions of comparable cost quality,
regardless of the presence or absence of the causes of variability, endorses
the robustness of its constitutive modules. The variation in run-time could be
attributed to the different search trajectories corresponding to different
permutations of variables or different random numbers. It may be noted that
for all subsequent runs, the lexicographical ordering of the pairings and a
fixed numerical seed (Seed-$\alpha=0$) have been utilized.
#### 4.3.3 Impact of Initialization on AirCROP’s Performance
This section investigates the sensitivity of $AirCROP$ to the cost quality of
the initial solution and to the run-time spent to obtain it. Towards it, the
initial solution is obtained using three different methods (offering three
input alternatives with varying cost and run-time), and the cost quality of
$AirCROP$’s final solution, alongside the necessary run-time, is noted.
Notably, in an initial attempt to generate an IFS for large-scale CPOPs, the
authors proposed a DFS algorithm based heuristic, namely, the Enhanced-DFS
heuristic (Aggarwal et al., 2018). Its performance across the five test cases
is highlighted in Table 6. In that, TC-1 emerges as an outlier, owing to its
alarmingly high run-time compared to all the other test cases.
Table 6: Performance of the Enhanced-DFS heuristic (Aggarwal et al., 2018) for IFS generation. Here, the real-valued “Cost” is rounded off to the next integer value, and the seconds in the “Time” column are rounded off to the next minute.

Test Cases | Time (HH:MM) | Cost (USD) | # Pairings
---|---|---|---
TC-1 | 01:48 | 3863669070 | 477617
TC-2 | 00:02 | 167405376 | 26678
TC-3 | 00:03 | 167967482 | 26871
TC-4 | 00:13 | 1072078483 | 135269
TC-5 | 00:04 | 325922318 | 51920
A plausible explanation for this aberration is that TC-1 involves some flights
with very few legal flight connections, and a DFS based algorithm may have to
exhaustively explore several flight connections to be able to generate an IFS
with full flight coverage. The need to do away with the reliance on DFS, so as
to achieve an equable run-time across different datasets, explains the
motivation for:
* •
the proposition of IPDCH in this paper, which, as highlighted in Section 3.2,
relies on: (a) a divide-and-cover strategy to decompose the input flight
schedule into sufficiently small flight subsets, and (b) IP to find a
lowest-cost pairing set that covers the maximum-possible flights of each
decomposed flight subset.
* •
consideration of a commonly adopted Artificial Pairings method (Vance et al.,
1997), which constructs a pairing set covering all the flights, though some or
all of these pairings may not be legal. Hence, for this method, the initial
solution is referred to as $\mathcal{P}_{IS}$ instead of $\mathcal{P}_{IFS}$.
Table 7: Performance assessment of $AirCROP$ on TC-1 and TC-5 when initialized
using the proposed IPDCH, the Artificial Pairings method, and the Enhanced-DFS
heuristic.
LPP-IPP Interactions | TC-1 | TC-5
---|---|---
 | Enhanced-DFS | IPDCH | Artificial Pairings | Enhanced-DFS | IPDCH | Artificial Pairings
$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
$\mathcal{P}_{IFS}/\mathcal{P}_{IS}$ | 3863669070 | 01:48 | 74945982 | 00:05 | 14604919138 | $\approx$00:00 | 325922318 | 00:04 | 131443284 | 00:09 | 25409939785 | $\approx$00:00
1 | $\mathcal{P}_{LP}^{1}$ | 3463560 | 04:14 | 3465379 | 04:19 | 17589714 | 03:10 | 4583664 | 06:57 | 4584525 | 07:28 | 4585380 | 07:29
$\mathcal{P}_{IP}^{1}$ | 3650828 | 00:20 | 3689312 | 00:20 | 17833718 | 00:20 | 4943531 | 00:20 | 4974341 | 00:20 | 4960813 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3464678 | 01:51 | 3466567 | 01:32 | 17589851 | 01:29 | 4586675 | 03:41 | 4589091 | 02:25 | 4589470 | 03:34
$\mathcal{P}_{IP}^{2}$ | 17731125 | 00:20 | 3566415 | 00:20 | 3578030 | 00:20 | 4773342 | 00:20 | 4809348 | 00:20 | 4782657 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3466217 | 01:38 | 3467848 | 01:33 | 3466868 | 02:02 | 4586581 | 04:54 | 4589952 | 02:14 | 4593117 | 02:05
$\mathcal{P}_{IP}^{3}$ | 3531694 | 00:13 | 3556499 | 00:20 | 3553432 | 00:20 | 4701607 | 00:20 | 4736313 | 00:20 | 4672696 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 3467672 | 01:19 | 3468777 | 01:26 | 3467935 | 00:47 | 4589568 | 01:51 | 4591145 | 02:36 | 4593938 | 02:18
$\mathcal{P}_{IP}^{4}$ | 3507987 | 00:01 | 3517901 | 00:01 | 3516376 | 00:02 | 4651824 | 00:20 | 4654627 | 00:20 | 4650449 | 00:06
5 | $\mathcal{P}_{LP}^{5}$ | 3469533 | 00:44 | 3468894 | 00:49 | 3468332 | 00:40 | 4591698 | 02:03 | 4592463 | 02:03 | 4596256 | 02:24
$\mathcal{P}_{IP}^{5}$ | 3483690 | 00:01 | 3499531 | 00:01 | 3496156 | 00:01 | 4616605 | 00:01 | 4632708 | 00:02 | 4620903 | 00:01
6 | $\mathcal{P}_{LP}^{6}$ | 3469276 | 00:52 | 3469352 | 00:48 | 3469095 | 00:47 | 4591969 | 01:05 | 4593094 | 02:00 | 4597203 | 00:49
$\mathcal{P}_{IP}^{6}$ | 3469276 | 00:01 | 3477354 | 00:01 | 3491947 | 00:01 | 4606253 | 00:01 | 4625993 | 00:01 | 4612164 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | | | 3469950 | 00:42 | 3469543 | 00:52 | 4592860 | 01:15 | 4593431 | 01:04 | 4597913 | 01:17
$\mathcal{P}_{IP}^{7}$ | | | 3469950 | 00:01 | 3487562 | 00:01 | 4592860 | 00:01 | 4619643 | 00:01 | 4606368 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | | | | | 3470100 | 00:38 | | | 4594146 | 01:03 | 4597730 | 02:00
$\mathcal{P}_{IP}^{8}$ | | | | | 3478057 | 00:01 | | | 4594146 | 00:01 | 4604551 | 00:01
9 | $\mathcal{P}_{LP}^{9}$ | | | | | 3470355 | 00:28 | | | | | 4597929 | 00:50
$\mathcal{P}_{IP}^{9}$ | | | | | 3470355 | 00:01 | | | | | 4597929 | 00:01
Final Solution | 3469276 | 13:22 | 3469950 | 12:18 | 3470355 | 12:00 | 4592860 | 23:13 | 4594146 | 22:27 | 4597929 | 23:57
∗All values in the “Cost” columns are in USD, where the real values are
rounded-off to the next integer values. All values in the “Time” columns are
in HH:MM, where the seconds’ values are rounded-off to the next minute values.
A comparison of the above three methods is drawn in Table 7, for TC-1 (which
poses a challenge to Enhanced-DFS) and TC-5 (the largest flight set). In that,
besides the cost and run-time of the initial solution for each test case, the
results of all the iterations of $AirCROP$ leading up to the final solution
are presented. The latter is done to shed light on whether the cost quality of
$AirCROP$’s final solution strongly depends on the cost of the initial
solution. The prominent observations from Table 7 include:
* •
In terms of run-time: IPDCH outperforms the Enhanced-DFS, as its run-time
remained under ten minutes in both test cases. The Artificial Pairings method
even outperforms IPDCH, since its run-time is of the order of milliseconds
(shown as $\approx$00:00 in the table).
* •
In terms of initial cost: IPDCH again outperforms the Enhanced-DFS. This could
be attributed to the use of IP to find a lowest-cost pairing set covering the
maximum-possible flights of each decomposed flight subset. In contrast, the
cost associated with the Artificial Pairings method is the worst, owing to the
very high pseudo-cost attached to its pairings to offset their non-legality.
Critically, regardless of the significantly varying run-times and initial
costs associated with the three methods, the variation in the cost of the
final solution offered by $AirCROP$ is not significant. This endorses the
robustness of its constitutive modules.
#### 4.3.4 Impact of Termination Settings of Optimization Engine’s Submodules
on AirCROP’s Performance
This section investigates the sensitivity of $AirCROP$ to the termination
parameter settings of the Optimization Engine’s submodules, namely, CG-driven
LPP-solutioning and IPP-solutioning. The parameters involved in
LPP-solutioning are $Th_{cost}$ and $Th_{t}$, while $Th_{ipt}$ is involved in
IPP-solutioning. To assess their impact on $AirCROP$’s performance,
experiments are performed with three different sets of parameter settings for
each of the two submodules.
Impact of Termination Settings of CG-driven LPP-solutioning:
As mentioned earlier, the CG-driven LPP-solutioning is terminated if the
cost-improvement per LPP iteration falls below the pre-specified threshold
$Th_{cost}$ (in USD) over $Th_{t}$ successive LPP iterations. To achieve a
reasonable balance between $AirCROP$’s run-time on the one hand and the cost
reduction of the crew pairing solution on the other, three different sets of
parameter settings are chosen and experimented with. These settings of
$\{Th_{cost},Th_{t}\}$, namely $\{500,5\}$, $\{100,10\}$, and $\{50,15\}$,
symbolize relaxed, moderate and strict settings, respectively, since the
termination criterion becomes more and more difficult to meet as the settings
change from $\{500,5\}$ to $\{50,15\}$. The results of the $AirCROP$-runs
corresponding to these termination settings are reported in Table 8, and the
key observations are highlighted below.
* •
As the termination settings transition through the relaxed, moderate and
strict settings, the run-time to obtain the final solution increases, while
the cost of the final solution decreases. An apparent exception to this trend
is observed in TC-5 with the strict setting, but this could be explained by
the fact that the upper limit of 30 hours set on $AirCROP$’s run-time under
practical considerations was exceeded during the fourth LPP-IPP interaction
($T=4$). It implies that, due to the enforced termination in this particular
case, $AirCROP$ could not fully utilize the potential for cost reduction.
* •
Despite the variation in the termination settings, the cost quality of
$AirCROP$’s final solution does not vary as drastically as its run-time. For
instance, as the settings switched from relaxed to moderate, an additional
saving of 6384 USD could be achieved at the expense of an additional 5:20 of
run-time in the case of TC-2, while these indicators stand at 13388 USD and
10:25, respectively, in the case of TC-5. It can also be inferred that
$\{Th_{cost},Th_{t}\}$ set as $\{100,10\}$ possibly offers a fair balance
between the solution’s cost quality and the run-time, which explains why these
settings have been used as the base settings for the experimental results
presented in this paper, beginning with Table 3 and ending with Table 9 (the
TC-2 deltas quoted here are verified in the small arithmetic check below).
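For instance, the TC-2 trade-off quoted above can be read directly off the final-solution rows of Table 8:

```python
# Relaxed vs. moderate settings for TC-2 (final-solution rows of Table 8).
relaxed_cost, moderate_cost = 3_508_502, 3_502_118     # USD
relaxed_min, moderate_min = 5 * 60 + 45, 11 * 60 + 5   # run-times 05:45 and 11:05
saving = relaxed_cost - moderate_cost                  # 6384 USD
extra = moderate_min - relaxed_min                     # 320 minutes
print(saving, divmod(extra, 60))                       # 6384 (5, 20) -> 5:20
```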
It is important to recognize that as the termination settings for
LPP-solutioning are made stricter, its run-time is bound to increase. It is
also fair to expect that the cost quality of the final solution may be better,
though this cannot be guaranteed. Any such departures from the expected trend
may be due to the dependence of the quality of the final solution on the
quality of the IPP solution for each $T$. In that, if an IPP solution for a
particular $T$ largely fails to approach the lower bound set by the
corresponding LPP solution, it may negatively influence the cost quality
obtained in the subsequent LPP- and IPP-solutioning phases. While such a
possibility remains, it did not surface in the experiments above.
Table 8: Performance assessment of $AirCROP$ on TC-2 and TC-5, against three
different termination settings (Relaxed, Moderate and Strict Settings) of the
CG-driven LPP-solutioning∗
In each half of the table (TC-2, then TC-5), the three Cost/Time column pairs
correspond, left to right, to the Relaxed ($Th_{cost}=500$, $Th_{t}=5$),
Moderate ($Th_{cost}=100$, $Th_{t}=10$) and Strict ($Th_{cost}=50$,
$Th_{t}=15$) settings.

$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | $\mathcal{P}_{IFS}$ | 129221508 | 00:08 | 129221508 | 00:08 | 129221508 | 00:08 | 131443284 | 00:09 | 131443284 | 00:09 | 131443284 | 00:09
1 | $\mathcal{P}_{LP}^{1}$ | 3510231 | 01:12 | 3495054 | 03:51 | 3489337 | 08:21 | 4603984 | 02:43 | 4584525 | 07:28 | 4581711 | 10:05
 | $\mathcal{P}_{IP}^{1}$ | 3844119 | 00:20 | 3769811 | 00:20 | 3698316 | 00:20 | 5165821 | 00:20 | 4974341 | 00:20 | 4946510 | 00:20
2 | $\mathcal{P}_{LP}^{2}$ | 3510105 | 00:26 | 3495311 | 01:57 | 3491725 | 03:36 | 4605049 | 01:12 | 4589091 | 02:25 | 4582977 | 09:32
 | $\mathcal{P}_{IP}^{2}$ | 3729820 | 00:20 | 3628514 | 00:20 | 3607470 | 00:20 | 4962643 | 00:20 | 4782657 | 00:20 | 4780005 | 00:20
3 | $\mathcal{P}_{LP}^{3}$ | 3506864 | 00:35 | 3497558 | 00:52 | 3491685 | 04:15 | 4602283 | 01:09 | 4589952 | 02:14 | 4585457 | 05:56
 | $\mathcal{P}_{IP}^{3}$ | 3659201 | 00:20 | 3566092 | 00:11 | 3578774 | 00:20 | 4818918 | 00:20 | 4736313 | 00:20 | 4678596 | 00:20
4 | $\mathcal{P}_{LP}^{4}$ | 3506644 | 00:32 | 3499237 | 01:26 | 3494201 | 02:29 | 4604535 | 00:52 | 4591145 | 02:36 | 4595692 | 03:34
 | $\mathcal{P}_{IP}^{4}$ | 3606381 | 00:20 | 3516807 | 00:01 | 3540972 | 00:01 | 4727106 | 00:20 | 4654627 | 00:20 | 4624747 | 00:01
5 | $\mathcal{P}_{LP}^{5}$ | 3507647 | 00:29 | 3500169 | 00:42 | 3494409 | 02:36 | 4603253 | 00:47 | 4592463 | 02:03 | |
 | $\mathcal{P}_{IP}^{5}$ | 3559484 | 00:04 | 3517585 | 00:01 | 3527254 | 00:01 | 4683130 | 00:20 | 4632708 | 00:02 | |
6 | $\mathcal{P}_{LP}^{6}$ | 3507101 | 00:20 | 3501523 | 00:43 | 3496498 | 01:00 | 4603093 | 00:45 | 4593094 | 02:00 | |
 | $\mathcal{P}_{IP}^{6}$ | 3547304 | 00:02 | 3504085 | 00:01 | 3496498 | 00:01 | 4681335 | 00:20 | 4625993 | 00:01 | |
7 | $\mathcal{P}_{LP}^{7}$ | 3508166 | 00:18 | 3502118 | 00:31 | | | 4603638 | 00:46 | 4593431 | 01:04 | |
 | $\mathcal{P}_{IP}^{7}$ | 3517436 | 00:01 | 3502118 | 00:01 | | | 4651002 | 00:06 | 4619643 | 00:01 | |
8 | $\mathcal{P}_{LP}^{8}$ | 3508502 | 00:17 | | | | | 4604073 | 00:44 | 4594146 | 01:03 | |
 | $\mathcal{P}_{IP}^{8}$ | 3508502 | 00:01 | | | | | 4634316 | 00:02 | 4594146 | 00:01 | |
9 | $\mathcal{P}_{LP}^{9}$ | | | | | | | 4606250 | 00:28 | | | |
 | $\mathcal{P}_{IP}^{9}$ | | | | | | | 4614420 | 00:01 | | | |
10 | $\mathcal{P}_{LP}^{10}$ | | | | | | | 4607534 | 00:17 | | | |
 | $\mathcal{P}_{IP}^{10}$ | | | | | | | 4607534 | 00:01 | | | |
Final solution | | 3508502 | 05:45 | 3502118 | 11:05 | 3496498 | 23:28 | 4607534 | 12:02 | 4594146 | 22:27 | 4624747 | 30:17
∗All values in the “Cost” columns are in USD, with real values rounded off to
the next integer. All values in the “Time” columns are in HH:MM, with seconds
rounded off to the next minute.
Impact of Termination Settings of IPP-solutioning:
As mentioned before, integerization of an LPP solution using an MIP solver is
extremely time-consuming, particularly for large-scale CPOPs, and more so for
those involving complex flight networks. Hence, from a practical perspective,
the $AirCROP$ framework imposes an upper time limit on IPP-solutioning (for
any given $T$), namely $Th_{ipt}$, in case it does not terminate on its own
first; a minimal sketch of such a time cap is given after this discussion. To
investigate the impact of $Th_{ipt}$ on $AirCROP$'s performance, experiments
are performed with three different settings: 00:20 (a third of an hour),
00:40 (two thirds of an hour), and 01:00 (an hour). The results are presented
in Table 9, and the key observations are as follows. In the case of TC-2, as
$Th_{ipt}$ is raised, the run time to obtain the final solution increases,
while the cost of the final solution decreases. However, there are exceptions
to this trend in the case of TC-5. Notably, the cost quality of the final
solution corresponding to $Th_{ipt}=$ 00:20 remains superior to that obtained
for both $Th_{ipt}=$ 00:40 and 01:00. For these two settings, the quality of
the LPP-solution at $T=8$ turned out worse than in the case of $Th_{ipt}=$
00:20, and the gap could not be bridged even in the subsequent LPP-IPP
interaction ($T=9$). The worsening of the LPP-solution can be attributed to
the fact that LPP-solutioning relies on random-number-based heuristics, and
the resulting pairing combinations may not necessarily offer a lower cost
within the pre-specified termination settings.
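A minimal sketch of such a time cap is given below, assuming a Gurobi-like
MIP interface (Gurobi Optimization, 2019); the construction of the
set-partitioning model itself is elided, and the function name is
hypothetical rather than part of the actual $AirCROP$ code.

```python
import gurobipy as gp

def ipp_solutioning(model: gp.Model, th_ipt_minutes: float = 20.0):
    """Integerize an LPP solution by solving the set-partitioning MIP,
    capped at th_ipt_minutes of wall time (Th_ipt in the text) in case
    the solver does not terminate on its own first."""
    model.Params.TimeLimit = th_ipt_minutes * 60.0  # Gurobi expects seconds
    model.optimize()
    # Whether optimality was proven or the time limit was hit, return
    # the best incumbent found, if any.
    return model.ObjVal if model.SolCount > 0 else None
```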
Based on the above, it may be inferred that despite the changes in the
termination parameter settings, $AirCROP$ is able to offer solutions of
reasonably close cost quality, though significant variations in run time may
be observed. It is also evident that even the lowest setting of $Th_{ipt}=$
00:20, which is desirable from a practical perspective, offers a good balance
between the solution's cost quality and run time, and this explains why it has
been used as the base setting for the experimental results presented in this
paper.
Table 9: Performance assessment of $AirCROP$ on TC-2 and TC-5, against three
different termination settings ($Th_{ipt}=$ 00:20, 00:40 & 01:00) of the IPP-
solutioning∗
In each half of the table (TC-2, then TC-5), the three Cost/Time column pairs
correspond, left to right, to $Th_{ipt}=$ 00:20, 00:40 and 01:00.

$T$ | $\mathcal{P}_{LP}^{T}/\mathcal{P}_{IP}^{T}$ | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time | Cost | Time
---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | $\mathcal{P}_{IFS}$ | 129221508 | 00:08 | 129221508 | 00:08 | 129221508 | 00:08 | 131443284 | 00:09 | 131443284 | 00:09 | 131443284 | 00:09
1 | $\mathcal{P}_{LP}^{1}$ | 3495054 | 03:51 | 3495054 | 04:21 | 3495054 | 03:57 | 4584525 | 07:28 | 4584525 | 07:46 | 4584525 | 07:49
 | $\mathcal{P}_{IP}^{1}$ | 3769811 | 00:20 | 3744301 | 00:40 | 3760028 | 01:00 | 4974341 | 00:20 | 4958532 | 00:40 | 4987497 | 01:00
2 | $\mathcal{P}_{LP}^{2}$ | 3495311 | 01:57 | 3497483 | 01:49 | 3495562 | 02:19 | 4589091 | 02:25 | 4585347 | 05:00 | 4588371 | 03:56
 | $\mathcal{P}_{IP}^{2}$ | 3628514 | 00:20 | 3629401 | 00:40 | 3632875 | 01:00 | 4782657 | 00:20 | 4778465 | 00:40 | 4766924 | 01:00
3 | $\mathcal{P}_{LP}^{3}$ | 3497558 | 00:52 | 3497473 | 01:33 | 3494305 | 01:50 | 4589952 | 02:14 | 4589481 | 02:56 | 4588911 | 04:25
 | $\mathcal{P}_{IP}^{3}$ | 3566092 | 00:11 | 3566247 | 00:40 | 3579899 | 01:00 | 4736313 | 00:20 | 4699845 | 00:40 | 4713402 | 01:00
4 | $\mathcal{P}_{LP}^{4}$ | 3499237 | 01:26 | 3500607 | 01:06 | 3495273 | 01:13 | 4591145 | 02:36 | 4590618 | 01:56 | 4591028 | 02:09
 | $\mathcal{P}_{IP}^{4}$ | 3516807 | 00:01 | 3524672 | 00:01 | 3551863 | 00:05 | 4654627 | 00:20 | 4656611 | 00:40 | 4681015 | 01:00
5 | $\mathcal{P}_{LP}^{5}$ | 3500169 | 00:42 | 3501809 | 00:49 | 3496754 | 00:52 | 4592463 | 02:03 | 4591826 | 01:18 | 4591448 | 02:03
 | $\mathcal{P}_{IP}^{5}$ | 3517585 | 00:01 | 3501809 | 00:01 | 3528564 | 00:01 | 4632708 | 00:02 | 4644467 | 00:15 | 4639287 | 00:29
6 | $\mathcal{P}_{LP}^{6}$ | 3501523 | 00:43 | | | 3496342 | 00:53 | 4593094 | 02:00 | 4592492 | 02:21 | 4591372 | 02:06
 | $\mathcal{P}_{IP}^{6}$ | 3504085 | 00:01 | | | 3512692 | 00:01 | 4625993 | 00:01 | 4617694 | 00:01 | 4616944 | 00:01
7 | $\mathcal{P}_{LP}^{7}$ | 3502118 | 00:31 | | | 3497967 | 00:59 | 4593431 | 01:04 | 4594599 | 01:30 | 4594479 | 01:23
 | $\mathcal{P}_{IP}^{7}$ | 3502118 | 00:01 | | | 3519996 | 00:01 | 4619643 | 00:01 | 4607261 | 00:01 | 4608085 | 00:01
8 | $\mathcal{P}_{LP}^{8}$ | | | | | 3498726 | 01:24 | 4594146 | 01:03 | 4595739 | 01:08 | 4595424 | 01:03
 | $\mathcal{P}_{IP}^{8}$ | | | | | 3518299 | 00:01 | 4594146 | 00:01 | 4598624 | 00:01 | 4603634 | 00:01
9 | $\mathcal{P}_{LP}^{9}$ | | | | | 3499104 | 00:40 | | | 4595703 | 00:45 | 4596929 | 00:59
 | $\mathcal{P}_{IP}^{9}$ | | | | | 3504258 | 00:01 | | | 4595703 | 00:01 | 4596929 | 00:01
10 | $\mathcal{P}_{LP}^{10}$ | | | | | 3499117 | 01:10 | | | | | |
 | $\mathcal{P}_{IP}^{10}$ | | | | | 3509608 | 00:01 | | | | | |
11 | $\mathcal{P}_{LP}^{11}$ | | | | | 3499609 | 00:45 | | | | | |
 | $\mathcal{P}_{IP}^{11}$ | | | | | 3499609 | 00:01 | | | | | |
Final solution | | 3502118 | 11:05 | 3501809 | 12:24 | 3499609 | 19:22 | 4594146 | 22:27 | 4595703 | 27:55 | 4596929 | 30:35
∗All values in the “Cost” columns are in USD, with real values rounded off to
the next integer. All values in the “Time” columns are in HH:MM, with seconds
rounded off to the next minute.
## 5 Conclusion and Future Research
For an airline, crew operating cost is the second-largest expense after fuel
cost, making crew pairing optimization critical for business viability. Over
the last three decades, CPOP has received unprecedented attention from the OR
community, as a result of which numerous CPOP solution approaches have been
proposed. Yet, emergent flight networks of conjunct scale and complexity
remain largely unaddressed in the available literature. Such a scenario is all
the more alarming considering that air traffic is expected to double over the
next 20 years, by when most airlines may need to cater to multiple crew bases
and multiple hub-and-spoke subnetworks. This research has proposed an Airline
Crew Pairing Optimization Framework ($AirCROP$) based on domain-knowledge-
driven CG strategies for efficiently tackling real-world, large-scale and
complex flight networks. This paper has presented not just the design of
$AirCROP$'s constitutive modules, but has also shared insights on how these
modules interact and how sensitive $AirCROP$'s performance is to sources of
variability, the choice of different methods, and parameter settings.
Given a CPOP, $AirCROP$ first preprocesses the entire duty overnight-
connection network via its Legal Crew Pairing Generator (this module is
utilized again whenever legal crew pairings are required in real time by other
modules of $AirCROP$). Subsequently, $AirCROP$ is initialized using an IFS
generated by the proposed method (IPDCH). Next, $AirCROP$'s Optimization
Engine attempts to find a good-quality CPOP solution via intermittent
interactions of its submodules, namely, CG-driven LPP-solutioning and IPP-
solutioning. The efficacy of $AirCROP$ has been demonstrated on a real-world
airline flight network characterized by an unprecedented (with reference to
the available literature) conjunct scale and complexity, marked by over 4200
flights, 15 crew bases, multiple hub-and-spoke subnetworks, and billion-plus
pairings. The distinctive contribution of this paper is also embedded in its
empirical investigation of critically important questions relating to
variability and sensitivity, on which the literature is otherwise silent. In
that:
* •
first, the sensitivity analysis of $AirCROP$ is performed in the presence and
absence of sources of variability. It is empirically highlighted that
$AirCROP$ is capable of offering solutions of comparable cost in both cases,
which endorses the robustness of its constitutive modules.
* •
second, the sensitivity of $AirCROP$ with respect to the cost quality of the
initial solution and the associated run time is investigated vis-à-vis three
different initialization methods. Again, the robustness of $AirCROP$ is
endorsed, considering that it is found to be capable of offering similar-cost
solutions despite the significantly varying cost and run time of the initial
solutions.
* •
last, the sensitivity of $AirCROP$ to the termination parameter settings
associated with the Optimization Engine's submodules is investigated. The fact
that $AirCROP$'s performance strongly aligns with the logically expected
trends under variation of the termination settings of both LPP-solutioning and
IPP-solutioning (independently of each other) is a testimony to the robustness
of its constitutive modules.
Notably, $AirCROP$ has been implemented using the Python scripting language,
in line with the industrial sponsor's preferences. However, a significant
reduction in run time could be achieved by using a compiled programming
language such as C++ or Java. Moreover, employing the domain-knowledge-driven
CG strategies during the IPP-solutioning phase too may augment the overall
cost- and time-efficiency of $AirCROP$. Furthermore, the emerging trend of
utilizing machine learning capabilities to assist combinatorial optimization
tasks may also hold promise for airline crew pairing optimization, towards
which an exploratory attempt has been made by the authors (Aggarwal, Singh
Saxena, 2020). Despite the scope for improvement, the authors hope that, with
the evolving scale and complexity of airline flight networks, this paper shall
serve as an important milestone for affiliated research and applications.
## Acknowledgment
This research work is a part of an Indo-Dutch joint research project,
supported by the Ministry of Electronics and Information Technology (MEITY),
India [grant number 13(4)/2015-CC&BT]; Netherlands Organization for Scientific
Research (NWO), the Netherlands; and General Electric (GE) Aviation, India.
The authors thank GE Aviation, particularly, Saaju Paulose (Senior Manager),
Arioli Arumugam (Senior Director- Data & Analytics), and Alla Rajesh (Senior
Staff Data & Analytics Scientist) for providing real-world test cases, and
sharing their domain knowledge which has helped the authors significantly in
successfully completing this research work.
## References
* Achterberg, T., & Wunderling, R. (2013). Mixed integer programming: Analyzing 12 years of progress. In Facets of Combinatorial Optimization (pp. 449–481). Springer.
* Aggarwal, D., Saxena, D.K., Bäck, T., & Emmerich, M. (2020a). A novel column generation heuristic for airline crew pairing optimization with large-scale complex flight networks. arXiv preprint arXiv:2005.08636. https://arxiv.org/abs/2005.08636v3
* Aggarwal, D., Saxena, D.K., Bäck, T., & Emmerich, M. (2020b). Real-world airline crew pairing optimization: Customized genetic algorithm versus column generation method. arXiv preprint arXiv:2003.03792. http://arxiv.org/abs/2003.03792
* Aggarwal, D., Saxena, D.K., Emmerich, M., & Paulose, S. (2018, November). On large-scale airline crew pairing generation. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 593–600).
* Aggarwal, D., Singh, Y.K., & Saxena, D.K. (2020). On learning combinatorial patterns to assist large-scale airline crew pairing optimization. arXiv preprint arXiv:2004.13714. https://arxiv.org/abs/2004.13714v3
* Anbil, R., Forrest, J.J., & Pulleyblank, W.R. (1998). Column generation and the airline crew pairing problem. Documenta Mathematica, 3, 677.
* Anbil, R., Gelman, E., Patty, B., & Tanga, R. (1991). Recent advances in crew-pairing optimization at American Airlines. Interfaces, 21(1), 62–74.
* Anbil, R., Tanga, R., & Johnson, E.L. (1992). A global approach to crew-pairing optimization. IBM Systems Journal, 31(1), 71–78.
* Andersen, E.D., & Andersen, K.D. (2000). The MOSEK interior point optimizer for linear programming: An implementation of the homogeneous algorithm. In High Performance Optimization (pp. 197–232). Springer.
* Barnhart, C., Cohn, A.M., Johnson, E.L., Klabjan, D., Nemhauser, G.L., & Vance, P.H. (2003). Airline crew scheduling. In Handbook of Transportation Science (pp. 517–560). Springer.
* Barnhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W., & Vance, P.H. (1998). Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46(3), 316–329.
* Beasley, J.E., & Chu, P.C. (1996). A genetic algorithm for the set covering problem. European Journal of Operational Research, 94(2), 392–404.
* Bertsimas, D., & Tsitsiklis, J.N. (1997). Introduction to Linear Optimization (Vol. 6). Athena Scientific, Belmont, MA.
* Desaulniers, G., Desrosiers, J., Dumas, Y., Marc, S., Rioux, B., Solomon, M.M., & Soumis, F. (1997). Crew pairing at Air France. European Journal of Operational Research, 97(2), 245–259.
* Desaulniers, G., & Soumis, F. (2010). Airline crew scheduling by column generation. CIRRELT Spring School, Montréal, Canada.
* Desrochers, M., Desrosiers, J., & Solomon, M. (1992). A new optimization algorithm for the vehicle routing problem with time windows. Operations Research, 40(2), 342–354.
* Desrochers, M., & Soumis, F. (1989). A column generation approach to the urban transit crew scheduling problem. Transportation Science, 23(1), 1–13.
* Desrosiers, J., Dumas, Y., Desrochers, M., Soumis, F., Sanso, B., & Trudeau, P. (1991). A breakthrough in airline crew scheduling. Technical Report G-91-11, Cahiers du GERAD, Montreal.
* Desrosiers, J., Soumis, F., & Desrochers, M. (1984). Routing with time windows by column generation. Networks, 14(4), 545–565.
* Deveci, M., & Demirel, N.Ç. (2018a). Evolutionary algorithms for solving the airline crew pairing problem. Computers & Industrial Engineering, 115, 389–406.
* Deveci, M., & Demirel, N.C. (2018b). A survey of the literature on airline crew scheduling. Engineering Applications of Artificial Intelligence, 74, 54–69.
* Du Merle, O., Villeneuve, D., Desrosiers, J., & Hansen, P. (1999). Stabilized column generation. Discrete Mathematics, 194(1-3), 229–237.
* Garey, M.R., & Johnson, D.S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Company, New York.
* Gershkoff, I. (1989). Optimizing flight crew schedules. Interfaces, 19(4), 29–43.
* Goldberg, D.E. (2006). Genetic Algorithms. Pearson Education India.
* Gurobi Optimization, LLC (2019). Gurobi Optimizer Reference Manual. http://www.gurobi.com
* Gustafsson, T. (1999). A heuristic approach to column generation for airline crew scheduling. Department of Mathematics, Chalmers University of Technology.
* Hoffman, K.L., & Padberg, M. (1993). Solving airline crew scheduling problems by branch-and-cut. Management Science, 39(6), 657–682.
* Karmarkar, N. (1984). A new polynomial-time algorithm for linear programming. In Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (pp. 302–311).
* Kasirzadeh, A., Saddoune, M., & Soumis, F. (2017). Airline crew scheduling: Models, algorithms, and data sets. EURO Journal on Transportation and Logistics, 6(2), 111–137.
* Koch, T., Achterberg, T., Andersen, E., Bastert, O., Berthold, T., Bixby, R.E., et al. (2011). MIPLIB 2010. Mathematical Programming Computation, 3(2), 103.
* Kornilakis, H., & Stamatopoulos, P. (2002). Crew pairing optimization with genetic algorithms. In Hellenic Conference on Artificial Intelligence (pp. 109–120).
* Land, A.H., & Doig, A.G. (1960). An automatic method of solving discrete programming problems. Econometrica, 28(3), 497–520.
* Levine, D. (1996). Application of a hybrid genetic algorithm to airline crew scheduling. Computers & Operations Research, 23(6), 547–558.
* Linderoth, J.T., & Lodi, A. (2011). MILP software. In J.J. Cochran (Ed.), Wiley Encyclopedia of Operations Research and Management Science. John Wiley & Sons.
* Lodi, A. (2009). Mixed integer programming computation. In M. Jünger et al. (Eds.). Springer-Verlag.
* Lodi, A., & Tramontani, A. (2013). Performance variability in mixed-integer programming. In Theory Driven by Influential Applications (pp. 1–12). INFORMS.
* Lübbecke, M.E. (2010). Column generation. In Wiley Encyclopedia of Operations Research and Management Science.
* Lübbecke, M.E., & Desrosiers, J. (2005). Selected topics in column generation. Operations Research, 53(6), 1007–1023.
* Marsten, R. (1994). Crew planning at Delta Airlines. Presentation at the XV Mathematical Programming Symposium, Ann Arbor, MI, USA.
* Ozdemir, H.T., & Mohan, C.K. (2001). Flight graph based genetic algorithm for crew scheduling in airlines. Information Sciences, 133(3-4), 165–173.
* Padberg, M., & Rinaldi, G. (1991). A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review, 33(1), 60–100.
* Tarjan, R. (1972). Depth-first search and linear graph algorithms. SIAM Journal on Computing, 1(2), 146–160.
* Vance, P.H., Barnhart, C., Gelman, E., Johnson, E.L., Krishna, A., Mahidhara, D., & Rebello, R. (1997). A heuristic branch-and-price approach for the airline crew pairing problem. Technical Report LEC-97-06, Georgia Institute of Technology, Atlanta.
* Vazirani, V.V. (2003). Approximation Algorithms (Chapter 13). Springer, Berlin, Heidelberg.
* Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., et al. (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17, 261–272. https://doi.org/10.1038/s41592-019-0686-2
* Zeren, B., & Özkol, İ. (2012). An improved genetic algorithm for crew pairing optimization. Journal of Intelligent Learning Systems and Applications, 4(1), 70.
* Zeren, B., & Özkol, I. (2016). A novel column generation strategy for large scale airline crew pairing problems. Expert Systems with Applications, 55, 133–144.
CERN-EP-2020-023 LHCb-PAPER-2020-001 May 28, 2020
Search for the rare decays $B^{0}_{s}\rightarrow e^{+}e^{-}$ and
$B^{0}\rightarrow e^{+}e^{-}$
LHCb collaboration†††Authors are listed at the end of this Letter.
A search for the decays $B^{0}_{s}\rightarrow e^{+}e^{-}$ and
$B^{0}\rightarrow e^{+}e^{-}$ is performed using data collected with the LHCb
experiment in proton-proton collisions at center-of-mass energies of $7$, $8$
and $13\,\text{TeV}$, corresponding to integrated luminosities of $1$, $2$ and
$2\,\text{fb}^{-1}$, respectively. No signal is observed. Assuming no
contribution from $B^{0}\rightarrow e^{+}e^{-}$ decays, an upper limit on the
branching fraction $\mathcal{B}(B^{0}_{s}\rightarrow
e^{+}e^{-})<9.4\,(11.2)\times 10^{-9}$ is obtained at $90\,(95)\,\%$
confidence level. If no $B^{0}_{s}\rightarrow e^{+}e^{-}$ contribution is
assumed, a limit of $\mathcal{B}(B^{0}\rightarrow e^{+}e^{-})<2.5\,(3.0)\times
10^{-9}$ is determined at $90\,(95)\,\%$ confidence level. These upper limits
are more than one order of magnitude lower than the previous values.
Published in Phys. Rev. Lett. 124 (2020) 211802
© 2020 CERN for the benefit of the LHCb collaboration. CC BY 4.0 licence.
Searches for rare particle decays provide ideal probes for contributions from
physics processes beyond the Standard Model (SM). Recent measurements of
decays involving $b\to s\ell^{+}\ell^{-}$ transitions (the inclusion of
charge-conjugated processes is implied throughout this Letter) hint at
deviations from SM predictions in lepton-flavor universality tests [1, 2, 3,
4, 5, 6] and thus motivate measurements of decay rates into final states
involving leptons. Following the observation of the decay
$B^{0}_{s}\to\mu^{+}\mu^{-}$ [7, 8], the search for $B^{0}_{s}\to e^{+}e^{-}$
and $B^{0}\to e^{+}e^{-}$ decays provides an independent test of lepton-flavor
universality. According to SM predictions (calculated from Ref. [9],
neglecting QED corrections that are expected to be at the percent level),
$B^{0}_{(s)}\to e^{+}e^{-}$ decays have branching fractions of
$\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})=(8.60\pm 0.36)\times 10^{-14}$ and
$\mathcal{B}(B^{0}\to e^{+}e^{-})=(2.41\pm 0.13)\times 10^{-15}$. With
contributions beyond the SM, these branching fractions could be significantly
larger, reaching values of $\mathcal{O}(10^{-8})$ for
$\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})$ and $\mathcal{O}(10^{-10})$ for
$\mathcal{B}(B^{0}\to e^{+}e^{-})$ [10]. These values are close to the current
experimental bounds of $\mathcal{B}(B^{0}_{s}\to e^{+}e^{-})<2.8\times
10^{-7}$ and $\mathcal{B}(B^{0}\to e^{+}e^{-})<8.3\times 10^{-8}$ at $90\,\%$
confidence level (CL) [11], set by the CDF collaboration.
In this Letter a search for $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to
e^{+}e^{-}$ decays is presented using data collected with the LHCb experiment
in proton-proton collisions at center-of-mass energies of $7\,\mathrm{TeV}$ in
2011, $8\,\mathrm{TeV}$ in 2012 and $13\,\mathrm{TeV}$ in 2015 and 2016,
corresponding to integrated luminosities of $1$, $2$ and $2\,\mathrm{fb}^{-1}$,
respectively. The signal yields are determined from a fit to the data and
normalized to those of the $B^{+}\to J/\psi K^{+}$ decay, where the $J/\psi$
meson decays to $e^{+}e^{-}$, which has a precisely measured branching
fraction [12] and a similar dielectron signature in the detector.
The LHCb detector [13, 14] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high-precision tracking
system consisting of a silicon-strip vertex detector surrounding the $pp$
interaction region, a large-area silicon-strip detector located upstream of a
dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three
stations of silicon-strip detectors and straw drift tubes placed downstream of
the magnet. Different types of charged hadrons are distinguished using
information from two ring-imaging Cherenkov detectors. Photons, electrons and
hadrons are identified by a calorimeter system consisting of scintillating-pad
and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons
are identified by a system composed of alternating layers of iron and
multiwire proportional chambers.
The online event selection is performed by a trigger [15], which consists of a
hardware stage, based on information from the calorimeter and muon systems,
followed by a software stage, which applies a full event reconstruction. At
the hardware trigger stage, events are required to have a high-energy deposit
in the calorimeters associated with a signal electron candidate, or a muon
candidate with high transverse momentum $p_{\mathrm{T}}$, or a photon,
electron or hadron candidate with high transverse energy from the decays of
other particles from the $pp$ collision. The software trigger requires a two-
track secondary vertex with a significant displacement from any primary $pp$
interaction vertex (PV). At least one charged particle must have high
$p_{\mathrm{T}}$ and be inconsistent with originating from a PV. A
multivariate algorithm [16, 17] is used in the trigger for the identification
of secondary vertices consistent with the decay of a $b$ hadron. Simulated
samples are used to optimize the candidate selection, estimate selection
efficiencies and describe the expected invariant-mass shapes of the signal
candidates and background decays. In the simulation, $pp$ collisions are
generated using Pythia [18, 19] with a specific LHCb configuration [20].
Decays of unstable particles are described by EvtGen [21], in which
final-state radiation is generated using Photos [22]. The interaction of the
generated particles with the detector, and its response, are implemented using
the Geant4 toolkit [23, 24] as described in Ref. [25]. The simulation is
corrected for data-simulation differences in $B$-meson production kinematics,
detector occupancy and isolation criteria [26] using $B^{+}\to J/\psi K^{+}$
and $B^{0}_{s}\to J/\psi\phi$ decays, with $J/\psi\to e^{+}e^{-}$ and
$\phi\to K^{+}K^{-}$. Particle identification variables are calibrated using
data from $B^{+}\to J/\psi K^{+}$ and $D^{0}\to K^{-}\pi^{+}$ decays [27]. The
calibration data are binned in momentum and pseudorapidity of the particle as
well as detector occupancy to account for possible differences in kinematics
between the investigated decay and the calibration data. A per-bin efficiency
lookup of this kind is sketched below.
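A minimal sketch of such a binned efficiency lookup follows; the input arrays
and binning are hypothetical, and only the idea of tabulating a per-(momentum,
pseudorapidity, occupancy)-bin efficiency from a calibration sample follows
the text.

```python
import numpy as np

def pid_efficiency_map(p, eta, occ, passed, bins):
    """Tabulate a particle-identification efficiency in bins of
    momentum, pseudorapidity and detector occupancy from a calibration
    sample. `p`, `eta`, `occ` are per-candidate arrays and `passed` is
    a boolean array flagging candidates that satisfy the PID selection
    (all hypothetical inputs)."""
    kept = np.histogramdd((p, eta, occ), bins=bins,
                          weights=passed.astype(float))[0]
    total = np.histogramdd((p, eta, occ), bins=bins)[0]
    with np.errstate(invalid="ignore", divide="ignore"):
        return kept / total  # NaN where a bin holds no calibration data
```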
The $B^{0}_{(s)}\to e^{+}e^{-}$ candidates are selected in events passing the
trigger requirements by combining two tracks that are inconsistent with
originating from any PV in the event and which form a good-quality secondary
vertex. The tracks are also required to have a momentum larger than
$3\,\mathrm{GeV}/c$ and $p_{\mathrm{T}}$ greater than $500\,\mathrm{MeV}/c$,
and must be identified as electrons using information from the Cherenkov
detectors and calorimeters. The dielectron candidate's momentum must be
aligned with the vector pointing from a PV (the associated PV) to the
two-track vertex and have a considerable transverse component. The candidate
must also have an invariant mass in the range $[4166,6566]\,\mathrm{MeV}/c^{2}$.
The measured electron momenta are corrected for losses due to bremsstrahlung
radiation by adding the momentum of photons consistent with being emitted
upstream of the magnet [28]. Candidates in data and simulation are separated
into three categories with either zero, one, or both electrons having a
bremsstrahlung correction applied. To avoid experimenters' bias, the narrowest
dielectron invariant-mass region containing $90\,\%$ of simulated
$B^{0}_{s}\to e^{+}e^{-}$ decays, corresponding to a range of
$[4689,5588]\,\mathrm{MeV}/c^{2}$, was removed from the data set until the
analysis procedure was finalized.
Candidates for the normalization mode, $B^{+}\to J/\psi K^{+}$, are
constructed similarly, but require an additional track consistent with being a
kaon and originating from the same vertex as the dielectron candidate. The
dielectron candidate must have an invariant mass in the range
$[2450,3176]\,\mathrm{MeV}/c^{2}$, consistent with arising from a $J/\psi$
meson decay. In addition, the reconstructed $B^{+}$ candidate mass, when the
dielectron candidate is constrained to the known $J/\psi$ mass [12], must be
above $5175\,\mathrm{MeV}/c^{2}$, suppressing partially reconstructed decays.
A boosted decision tree (BDT) algorithm [29, 30, 31] is used to separate
$B^{0}_{(s)}\to e^{+}e^{-}$ signal from random combinations of two electrons
(combinatorial background). The BDT is trained separately for the data-taking
periods 2011–2012 (Run 1) and 2015–2016 (Run 2), with simulated
$B^{0}_{s}\to e^{+}e^{-}$ decays as signal proxy and dielectron candidates
from data with a mass above $5588\,\mathrm{MeV}/c^{2}$ as background proxy.
The split between the data-taking periods accounts for changes in the
center-of-mass energies and trigger strategies, which significantly impact the
data distributions, and for improvements of the BDT and the particle
identification algorithms in Run 2. It is checked that the data behave
consistently across the data-taking periods. The BDT input variables comprise
the following: kinematic information on the electron tracks and $B$ candidate,
information on the displacement of the electrons and $B$ candidate from the
associated PV, and isolation variables that quantify the compatibility of
other tracks in the event with originating from the same decay as the $B$
candidate [26, 32]. Candidates with a BDT response compatible with that of the
background are discarded, with the threshold chosen by maximizing the figure
of merit ${\epsilon_{\text{signal}}}/{(\sqrt{N_{\text{background}}}+3/2)}$
[33], where $\epsilon_{\text{signal}}$ is the signal efficiency and
$N_{\text{background}}$ is the expected background yield in the signal region.
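As an illustration, such a threshold scan can be written in a few lines of
Python; the input arrays are hypothetical stand-ins for the BDT responses of
the signal and background proxies, and the constant $a=3/2$ follows the figure
of merit quoted above.

```python
import numpy as np

def best_bdt_threshold(bdt_sig, bdt_bkg, a=1.5):
    """Choose the BDT cut maximizing eps_signal / (sqrt(N_background) + a),
    with a = 3/2 as in the text. `bdt_sig` holds responses of simulated
    signal decays, `bdt_bkg` those of background candidates expected in
    the signal region (both hypothetical inputs)."""
    cuts = np.linspace(min(bdt_bkg.min(), bdt_sig.min()), bdt_sig.max(), 1000)
    eps_sig = np.array([(bdt_sig > c).mean() for c in cuts])  # signal efficiency
    n_bkg = np.array([(bdt_bkg > c).sum() for c in cuts])     # surviving background
    fom = eps_sig / (np.sqrt(n_bkg) + a)
    return cuts[np.argmax(fom)]
```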
The final selected data set is separated by data-taking period and by category
of bremsstrahlung correction. The branching fraction
$\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})$ is measured relative to that of the
normalization channel via

$\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})=N(B^{0}_{(s)}\to e^{+}e^{-})\times\alpha\times\mathcal{B}(B^{+}\to J/\psi K^{+})\times\left(\frac{f_{d(s)}}{f_{u}}\right)^{-1},\qquad(1)$

where

$\alpha\equiv\frac{\varepsilon(B^{+}\to J/\psi K^{+})}{\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})}\times\frac{1}{N(B^{+}\to J/\psi K^{+})},\qquad(2)$

$\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})$ and $\varepsilon(B^{+}\to J/\psi
K^{+})$ denote the efficiencies of the signal and normalization modes, and
$N(B^{0}_{(s)}\to e^{+}e^{-})$ and $N(B^{+}\to J/\psi K^{+})$ their yields.
The normalization mode branching fraction (including that for the decay
$J/\psi\to e^{+}e^{-}$) is $\mathcal{B}(B^{+}\to J/\psi K^{+})=(6.03\pm
0.17)\times 10^{-5}$, taken from Ref. [12]. The $b$-hadron fragmentation
fraction ratio $f_{d}/f_{u}$ is assumed to be unity, while
$f_{s}/f_{u}=0.259\pm 0.015$ [34] is used for the Run 1 data and is scaled by
$1.068\pm 0.016$ for the Run 2 data, according to Ref. [35], to account for
center-of-mass energy differences. A measurement of $f_{s}/f_{u}$ from Run 2
yields a consistent, but less precise, result [36].
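As a numerical illustration of Eqs. (1) and (2), the sketch below folds a
made-up signal yield with the central values quoted above and in Table 1; no
uncertainty propagation or combination over the six categories is attempted.

```python
# Central values only; the normalization factor ALPHA is taken from
# Table 1 (no bremsstrahlung correction, 2015-2016) and the signal
# yield N_SIG is a hypothetical placeholder.
ALPHA = 1.84e-5          # alpha of Eq. (2), from Table 1
BR_NORM = 6.03e-5        # B(B+ -> J/psi(-> e+e-) K+), Ref. [12]
FS_FU = 0.259 * 1.068    # f_s/f_u for Run 2 (Run 1 value times scale factor)
N_SIG = 5                # hypothetical fitted Bs -> e+e- yield

br_bs = N_SIG * ALPHA * BR_NORM / FS_FU           # Eq. (1)
print(f"B(Bs -> e+e-) = {br_bs:.2e}")             # ~2.0e-8 for this toy yield
```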
The yield of the normalization mode is determined using an unbinned maximum-
likelihood fit to the $K^{+}e^{+}e^{-}$ invariant mass, separately for each
year of data taking and bremsstrahlung category. The fit model comprises a
Gaussian function with power-law tails [37] for the signal component, where
the tail parameters are fixed from simulation, and an exponential function to
describe combinatorial background. Summed over the bremsstrahlung categories,
the yield of the normalization mode is $20\,480\pm 140$ in the Run 1 data and
$33\,080\pm 180$ in the Run 2 data.
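For illustration, a minimal unbinned maximum-likelihood fit of this type can
be set up with SciPy as sketched below; scipy's one-sided `crystalball`
distribution (Gaussian core with a low-mass power-law tail) stands in for the
paper's shape, and all starting values and tail parameters are placeholders,
not the analysis values.

```python
import numpy as np
from scipy import stats, optimize

def fit_normalization_yield(mass, m_lo=5175.0, m_hi=5600.0):
    """Unbinned ML fit of a signal-plus-combinatorial model to the
    K+ e+ e- mass (MeV/c^2). Tail parameters would be fixed from
    simulation, as in the text; here they are placeholders."""
    beta, m_tail = 1.5, 3.0  # placeholder tail parameters

    def nll(p):
        f_sig, mu, sigma, lam = p
        sig = stats.crystalball.pdf(mass, beta, m_tail, loc=mu, scale=sigma)
        # exponential background normalized on the fit window
        bkg = lam * np.exp(-lam * (mass - m_lo)) \
              / (1.0 - np.exp(-lam * (m_hi - m_lo)))
        return -np.sum(np.log(f_sig * sig + (1.0 - f_sig) * bkg))

    start = [0.8, 5279.0, 40.0, 1e-3]  # fraction, peak, width, slope
    return optimize.minimize(nll, start, method="Nelder-Mead").x
```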
The selection efficiencies $\varepsilon(B^{0}_{(s)}\to e^{+}e^{-})$ and
$\varepsilon(B^{+}\to J/\psi K^{+})$ are determined separately for each year
of data taking and bremsstrahlung category using simulated decays that are
weighted to better represent the data. Calibration data are used to evaluate
particle-identification efficiencies [27]. Trigger efficiencies are also
estimated from data, using the technique described in Ref. [38]. For simulated
$B^{0}_{s}\to e^{+}e^{-}$ decays, the mean $B^{0}_{s}$ lifetime [39] is
assumed. The selection efficiency is assumed to be the same for both
$B^{0}\to e^{+}e^{-}$ and $B^{0}_{s}\to e^{+}e^{-}$ decays, which is
consistent with results from simulation. The normalization factors, $\alpha$,
are combined across the data-taking periods and given in Table 1, split by
bremsstrahlung category (for the selection efficiency ratio between
normalization and signal mode, see the Supplemental Material [40]).
Table 1: Normalization factors $\alpha$ for $B^{0}_{(s)}\to e^{+}e^{-}$. The
bremsstrahlung category denotes whether zero, one or both electrons are
corrected for bremsstrahlung losses. The uncertainties include statistical
uncertainties and uncertainties due to the limited size of the simulated
samples.

Bremsstrahlung category | 2011–2012 $[10^{-5}]$ | 2015–2016 $[10^{-5}]$
---|---|---
No correction | $2.85\pm 0.24$ | $1.84\pm 0.08$
One electron corrected | $1.13\pm 0.08$ | $0.70\pm 0.03$
Both electrons corrected | $1.73\pm 0.20$ | $1.04\pm 0.06$
In addition to the combinatorial background, backgrounds due to
misidentification and partial reconstruction are present in the data. These
backgrounds differ significantly between the categories of bremsstrahlung
correction. Their invariant-mass shapes and relative contributions are
evaluated using simulation. In the lower mass region, partially reconstructed
backgrounds of the types $B\to Xe^{+}e^{-}$ and
$B^{+}\to\overline{D}{}^{0}(\to Y^{+}e^{-}\overline{\nu}_{e})e^{+}\nu_{e}$
dominate, where $X$ and $Y$ represent hadronic systems. The main source of
background in the $B$-mass region, however, stems from misidentified particles
in the decays $B^{0}\to\pi^{-}e^{+}\nu_{e}$ and $B\to h^{+}h^{\prime-}$, where
$h$ and $h^{\prime}$ are hadrons. The latter has a peaking structure in the
$B$-mass region. Backgrounds involving misidentified particles contribute
mostly to categories in which at most one of the electrons has a
bremsstrahlung correction applied. The contribution from combinatorial
background is evaluated from same-sign lepton pairs in data and found to be
small. The yields of the backgrounds are Gaussian-constrained to their
expected values, estimated from simulation using their known branching
fractions [12].
The shape of the invariant mass of the $B^{0}_{s}\to e^{+}e^{-}$ and
$B^{0}\to e^{+}e^{-}$ components is modeled using a Gaussian function with
power-law tails, where the parameters are obtained from simulation and differ
between each bremsstrahlung category and year of data taking. The peak values
and the widths of the functions are corrected for data-simulation differences
by a factor determined from the normalization mode. The parameters of the
$B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ line shapes are fixed to
the same values with the exception of the peak value, which is shifted
according to the known $B^{0}_{s}$–$B^{0}$ mass difference [12]. Due to the
limited mass resolution, arising from imperfect bremsstrahlung recovery, the
line shapes from $B^{0}_{s}\to e^{+}e^{-}$ and $B^{0}\to e^{+}e^{-}$ are
highly overlapping. Therefore the branching fraction of
$B^{0}_{s}\to e^{+}e^{-}$ is obtained by performing a simultaneous fit to the
dielectron invariant-mass distribution of all six data sets while neglecting
the contribution from $B^{0}\to e^{+}e^{-}$, and vice versa. In these fits,
the only shared parameters between categories are the branching fractions
$\mathcal{B}(B^{0}_{(s)}\to e^{+}e^{-})$ and
$\mathcal{B}(B^{+}\to J/\psi K^{+})$, and the ratio of the fragmentation
fractions $f_{s}/f_{u}$.
Systematic uncertainties are estimated separately for each data set. Dominant
sources of systematic uncertainties in the normalization arise from the
uncertainty on the fragmentation fraction ratio, the technique used to
evaluate the trigger efficiencies, and the determination of particle-
identification efficiencies; the systematic uncertainties from these sources
extend to $5.8\,\%$, $5.3\,\%$, and $5.3\,\%$ on the branching fractions, respectively. The uncertainty on $\mathcal{B}({{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}})$ of $2.8\,\%$ [12] is taken into account. A difference of up to $4.1\,\%$ is found between the efficiency of the BDT selection on simulated ${{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}}$ decays and ${{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}}$ decays in data, which is assigned as a systematic uncertainty. The fraction of candidates in each bremsstrahlung-correction category of the signal modes is taken from simulation. The difference between simulation and data is investigated using ${{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}}$ decays and its effect on the normalization, up to $4.0\,\%$, is taken as a systematic uncertainty.
Systematic uncertainties on the invariant-mass resolution corrections are
determined by repeating the correction procedure with pseudoexperiments
obtained with the bootstrapping method [41], yielding up to
$1.1\,\%$. A difference between the total selection efficiencies in the ${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ and ${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ channels of up to $2.5\,\%$ is assigned as a systematic uncertainty on the ${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ normalization factor. Due to the presence of an additional kaon in the final state of the normalization mode, the track-reconstruction efficiency is different between the signal and normalization modes. An uncertainty of $1.1\,\%$ is assigned to the branching fraction as a systematic uncertainty on the kaon reconstruction efficiency arising from the limited knowledge of the interactions in the detector material [42]. Finally, an uncertainty of $1.0\,\%$ is assigned to account for small differences in detector occupancy between the signal and normalization modes arising from the trigger selection. The dominant sources of systematic uncertainties on the background composition are due to the imprecise knowledge of the branching fractions of the background components. The largest uncertainty of this type on the expected background yield in the $B$-mass region is $14\,\%$, determined from refitting the
mass sidebands while varying the background components according to their
uncertainties. Taking all correlations into account, overall single event
sensitivities of $[4.71\pm 0.12\text{(stat.)}\pm 0.33\text{(syst.)}]\times
10^{-10}$ for ${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ and $[1.271\pm
0.034\text{(stat.)}\pm 0.063\text{(syst.)}]\times 10^{-10}$ for
${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ are obtained.
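These single event sensitivities are the constants of proportionality between a fitted signal yield and the corresponding branching fraction. Schematically, and as an illustration only — this is the standard normalization relation for such searches, and the exact arrangement of factors here is an assumption rather than a quotation from the analysis:

$$\mathcal{B}({{B}^{0}_{s}}\rightarrow{e^{+}e^{-}})=\alpha\,N_{\mathrm{sig}},\qquad \alpha=\frac{\varepsilon_{\mathrm{norm}}}{\varepsilon_{\mathrm{sig}}}\,\frac{f_{u}}{f_{s}}\,\frac{\mathcal{B}({{B}^{+}}\rightarrow{{J/\psi}}{{K}^{+}})\,\mathcal{B}({{J/\psi}}\rightarrow{e^{+}e^{-}})}{N({{B}^{+}}\rightarrow{{J/\psi}}{{K}^{+}})},$$

with the $f_{u}/f_{s}$ factor replaced by unity for the ${{B}^{0}}$ mode. A smaller $\alpha$ therefore corresponds to a more sensitive search.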
The dielectron invariant-mass spectrum, summed over bremsstrahlung categories,
is shown in Fig. 1, with the result of the
${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ fit. The individual categories are
shown in the Supplemental Material [40], as well as the distributions with the
result of the ${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ fit. The measured
branching fractions are
$\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})=(2.4\pm 4.4)\times 10^{-9}$ and
$\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})=(0.30\pm 1.29)\times 10^{-9}$, where the uncertainties include both
statistical and systematic components. The results are in agreement with the
background-only hypothesis.
Figure 1: Simultaneous fit to the dielectron invariant-mass distribution, with
$\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})$ fixed to zero. The sum of
bremsstrahlung categories is shown for (left) Run 1 and (right) Run 2. The
relative proportions of background contributions change between Run 1 and Run
2 due to different performances of the particle identification algorithms and
BDT selections.
Upper limits on the branching fractions are set using the CLs method [43], as implemented in the GammaCombo framework [44, 45] with a one-sided profile likelihood ratio [46] as the test statistic. The likelihoods are computed from
fits to the invariant-mass distributions. In the fits, the normalization
factor, normalization mode branching fraction, fragmentation fraction ratio,
and background yields are Gaussian constrained to their expected values within
statistical and systematic uncertainties. Pseudoexperiments, in which the
nuisance parameters are set to their fitted values from data, are used for the
evaluation of the test statistic.
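As an illustration of the CLs construction itself (a minimal sketch, not the GammaCombo implementation used here; the toy distributions and names below are hypothetical):

```python
import numpy as np

def cls_value(q_obs, q_toys_sb, q_toys_b):
    """CLs = CL_{s+b} / CL_b for a one-sided test statistic q,
    where larger q means greater incompatibility with the signal hypothesis."""
    cl_sb = np.mean(q_toys_sb >= q_obs)  # p-value under signal + background
    cl_b = np.mean(q_toys_b >= q_obs)    # p-value under background only
    return cl_sb / cl_b

# toy example: a branching fraction is excluded at 95% CL when CLs < 0.05
rng = np.random.default_rng(0)
q_sb = rng.chisquare(1, size=100_000)                 # q if s+b were true
q_b = rng.noncentral_chisquare(1, 4.0, size=100_000)  # q if background only
print(cls_value(q_obs=3.0, q_toys_sb=q_sb, q_toys_b=q_b))
```

Dividing by CL_b protects against excluding a signal to which the analysis has no real sensitivity, at the cost of somewhat conservative (over-covering) limits.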
The expected and observed CLs distributions are shown in Fig. 2. The upper
observed limits are
$\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})<9.4\,(11.2)\times
10^{-9}$ and
$\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})<2.5\,(3.0)\times 10^{-9}$ at
$90\,(95)\,\%$ confidence level. These are consistent with the expected upper
limits of
$\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})<7.0\,(8.6)\times
10^{-9}$ and
$\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})<2.0\,(2.5)\times 10^{-9}$ at
$90\,(95)\,\%$ confidence level, obtained as the median of limits determined
on background-only pseudoexperiments.
Figure 2: CLs values as a function of the branching fractions of the decays
(left) ${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ and (right)
${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$. The red solid line (black solid line
with data points) corresponds to the distribution of the expected (observed)
upper limits, and the light blue (dark blue) band contains the $1\sigma$
$(2\sigma)$ uncertainties on the expected upper limits. Thresholds
corresponding to $90\,\%$ and $95\,\%$ confidence level are indicated with
dashed lines. The observed values are plotted for branching fractions greater
than the measured branching fraction in the data; the test statistic is
defined to be nonzero only in that region.
In conclusion, a search for the rare decays
${{B}_{({s})}^{0}}\\!\rightarrow{e^{+}e^{-}}$ is performed using data from
proton-proton collisions recorded with the LHCb experiment, corresponding to a
total integrated luminosity of $5\,\text{fb}^{-1}$. No excess of
events is observed over the background. The resulting limits on the branching
fractions are
$\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})<9.4\,(11.2)\times
10^{-9}$ and
$\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})<2.5\,(3.0)\times 10^{-9}$ at
$90\,(95)\,\%$ confidence level, when neglecting the contribution from the
other decay. The mean ${B}^{0}_{s}$ lifetime is assumed for
${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ decays. Assuming SM-like $C\\!P$-odd
($C\\!P$-even) ${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ decays, an increase
(decrease) of $2.4\,\%$ with respect to the quoted
limit is found. The results improve the limits on these branching fractions
[11] by more than one order of magnitude and constrain contributions beyond
the SM, for example from scalar and pseudoscalar currents [10].
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST
and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN
(Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE
(Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC
(United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources
that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN
(Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and
Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-
GRID (Poland) and OSC (USA). We are indebted to the communities behind the
multiple open-source software packages on which we depend. Individual groups
or members have received support from AvH Foundation (Germany); EPLANET, Marie
Skłodowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU,
and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier
Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF
and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society
and the Leverhulme Trust (United Kingdom).
## Supplemental Material for LHCb-PAPER-2020-001
The individual categories of the simultaneous fit to the dielectron invariant-
mass using the ${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ hypothesis are
presented in Fig. 3. The fit to the dielectron invariant mass including the
${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ hypothesis instead of the
${{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}}$ hypothesis is shown in Fig. 4, where
the bremsstrahlung categories are summed. The individual categories of the
simultaneous fit to the dielectron invariant-mass using the
${{B}^{0}}\\!\rightarrow{e^{+}e^{-}}$ hypothesis are presented in Fig. 5.
Table 2 lists the inputs to the normalization factors: the ratio of
normalization and signal efficiencies and the normalization yield. The
efficiency of the normalization mode differs from that of the signal and causes the
efficiency ratio to decrease with bremsstrahlung category due to the slightly
different reconstruction and preselection and a different impact of the BDT
selection, where the differences mainly originate from the additional track in
the normalization mode.
Figure 3: Simultaneous fit to the dielectron invariant-mass distribution in
all categories, with $\mathcal{B}({{B}^{0}}\\!\rightarrow{e^{+}e^{-}})$ fixed
to zero. The top figures show the three bremsstrahlung categories in the Run 1
data set and the bottom figures show the Run 2 data set. From left to right,
the data sets correspond to the bremsstrahlung correction category with no
correction, correcting one electron and correcting both electrons. The
relative proportions of background contributions change between Run 1 and Run
2 due to different performances of the particle-identification algorithms and
BDT selections. Their relative fractions between bremsstrahlung categories
follow the expectation from simulation.
Figure 4: Simultaneous fit to the dielectron invariant-mass distribution, with
$\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})$ fixed to zero. The
bremsstrahlung categories are summed over the (left) Run 1 and (right) Run 2
data sets. The relative proportions of background contributions change between
Run 1 and Run 2 due to different performances of the particle-identification
algorithms and BDT selections.
Figure 5: Simultaneous fit to the dielectron invariant-mass distribution in all categories, with $\mathcal{B}({{B}^{0}_{s}}\\!\rightarrow{e^{+}e^{-}})$ fixed to zero. The top figures show the three bremsstrahlung categories in the Run 1 data set and the bottom figures show the Run 2 data set. From left to right, the data sets correspond to the bremsstrahlung correction category with no correction, correcting one electron and correcting both electrons. The relative proportions of background contributions change between Run 1 and Run 2 due to different performances of the particle-identification algorithms and BDT selections. Their relative fractions between bremsstrahlung categories follow the expectation from simulation.

Table 2: Inputs for the normalization factors: the efficiency ratio $\varepsilon({{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}})/\varepsilon({{B}_{({s})}^{0}}\\!\rightarrow{e^{+}e^{-}})$ and the normalization yield $N({{B}^{+}}\\!\rightarrow{{J/\psi}}{{K}^{+}})$. The bremsstrahlung category (Brem. cat.) denotes whether zero, one or both electrons are corrected for bremsstrahlung losses. The uncertainties on the efficiency ratios include statistical uncertainties from the calibration and uncertainties due to the limited size of the simulated samples.

Brem. cat. | Efficiency ratio (2011–2012) | Norm. yield $[10^{3}]$ (2011–2012) | Efficiency ratio (2015–2016) | Norm. yield $[10^{3}]$ (2015–2016)
---|---|---|---|---
Brem. 0 | $0.144\pm 0.012$ | $5.05\pm 0.07$ | $0.148\pm 0.118$ | $7.96\pm 0.09$
Brem. 1 | $0.119\pm 0.008$ | $10.43\pm 0.11$ | $0.118\pm 0.005$ | $12.75\pm 0.05$
Brem. 2 | $0.086\pm 0.010$ | $4.95\pm 0.07$ | $0.085\pm 0.005$ | $8.306\pm 0.032$
## References
* [1] LHCb collaboration, R. Aaij et al., _Test of lepton universality with ${{B}^{0}}\\!\rightarrow{{K}^{*0}}\ell^{+}\ell^{-}$ decays_, JHEP 08 (2017) 055, arXiv:1705.05802
* [2] LHCb collaboration, R. Aaij et al., _Search for lepton-universality violation in ${{{B}^{+}}}\\!\rightarrow{{K}^{+}}\ell^{+}\ell^{-}$ decays_, Phys. Rev. Lett. 122 (2019) 191801, arXiv:1903.09252
* [3] LHCb collaboration, R. Aaij et al., _Test of lepton universality using ${{\mathchar 28931\relax}^{0}_{b}}\\!\rightarrow p{{K}^{-}}\ell^{+}\ell^{-}$ decays_, JHEP 05 (2020) 040, arXiv:1912.08139
* [4] BaBar collaboration, J. P. Lees et al., _Measurement of branching fractions and rate asymmetries in the rare decays $B\rightarrow K^{(*)}l^{+}l^{-}$_, Phys. Rev. D86 (2012) 032012, arXiv:1204.3933
* [5] Belle collaboration, A. Abdesselam et al., _Test of lepton flavor universality in ${B\rightarrow K\ell^{+}\ell^{-}}$ decays_, arXiv:1908.01848
* [6] Belle collaboration, A. Abdesselam et al., _Test of lepton flavor universality in ${B\rightarrow K^{\ast}\ell^{+}\ell^{-}}$ decays at Belle_, arXiv:1904.02440
* [7] CMS and LHCb collaborations, V. Khachatryan et al., _Observation of the rare ${{B}^{0}_{s}}\\!\rightarrow{\mu^{+}\mu^{-}}$ decay from the combined analysis of CMS and LHCb data_, Nature 522 (2015) 68, arXiv:1411.4413
* [8] LHCb collaboration, R. Aaij et al., _Measurement of the ${{B}^{0}_{s}}\\!\rightarrow{\mu^{+}\mu^{-}}$ branching fraction and effective lifetime and search for ${{B}^{0}}\\!\rightarrow{\mu^{+}\mu^{-}}$ decays_, Phys. Rev. Lett. 118 (2017) 191801, arXiv:1703.05747
* [9] M. Beneke, C. Bobeth, and R. Szafron, _Power-enhanced leading-logarithmic QED corrections to $B_{q}\rightarrow\mu^{+}\mu^{-}$_, JHEP 10 (2019) 232, arXiv:1908.07011
* [10] R. Fleischer, R. Jaarsma, and G. Tetlalmatzi-Xolocotzi, _In pursuit of New Physics with $B^{0}_{s,d}\rightarrow\ell^{+}\ell^{-}$_, JHEP 05 (2017) 156, arXiv:1703.10160
* [11] CDF collaboration, T. Aaltonen et al., _Search for the decays $B^{0}_{s}\rightarrow e^{+}\mu^{-}$ and $B^{0}_{s}\rightarrow e^{+}e^{-}$ in CDF Run II_, Phys. Rev. Lett. 102 (2009) 201801, arXiv:0901.3803
* [12] Particle Data Group, M. Tanabashi et al., _Review of particle physics_ , Phys. Rev. D98 (2018) 030001, and 2019 update
* [13] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005
* [14] LHCb collaboration, R. Aaij et al., _LHCb detector performance_ , Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352
* [15] R. Aaij et al., _The LHCb trigger and its performance in 2011_, JINST 8 (2013) P04022, arXiv:1211.3055
* [16] V. V. Gligorov and M. Williams, _Efficient, reliable and fast high-level triggering using a bonsai boosted decision tree_ , JINST 8 (2013) P02013, arXiv:1210.6861
* [17] T. Likhomanenko et al., _LHCb topological trigger reoptimization_ , J. Phys. Conf. Ser. 664 (2015) 082025
* [18] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_ , JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [19] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_ , Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820
* [20] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_ , J. Phys. Conf. Ser. 331 (2011) 032047
* [21] D. J. Lange, _The EvtGen particle decay simulation package_ , Nucl. Instrum. Meth. A462 (2001) 152
* [22] P. Golonka and Z. Was, _PHOTOS Monte Carlo: A precision tool for QED corrections in $Z$ and $W$ decays_, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026
* [23] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_ , IEEE Trans. Nucl. Sci. 53 (2006) 270
* [24] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_ , Nucl. Instrum. Meth. A506 (2003) 250
* [25] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023
* [26] LHCb collaboration, R. Aaij et al., _Search for the rare decays ${{B}^{0}_{s}}\\!\rightarrow{\mu^{+}\mu^{-}}$ and ${{B}^{0}}\\!\rightarrow{\mu^{+}\mu^{-}}$_, Phys. Lett. B708 (2012) 55, arXiv:1112.1600
* [27] L. Anderlini et al., _The PIDCalib package_ , LHCb-PUB-2016-021, 2016
* [28] LHCb collaboration, R. Aaij et al., _Measurement of the ${{B}^{0}}\\!\rightarrow{{K}^{*0}}{e^{+}e^{-}}$ branching fraction at low dilepton mass_, JHEP 05 (2013) 159, arXiv:1304.3035
* [29] Y. Freund and R. E. Schapire, _A decision-theoretic generalization of on-line learning and an application to boosting_ , J. Comput. Syst. Sci. 55 (1997) 119
* [30] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, Classification and regression trees, Wadsworth international group, Belmont, California, USA, 1984
* [31] F. Pedregosa et al., _Scikit-learn: Machine learning in Python_ , J. Machine Learning Res. 12 (2011) 2825, arXiv:1201.0490, and online at http://scikit-learn.org/stable/
* [32] L. Gavardi, _Search for lepton flavour violation in $\tau$ decays at the LHCb experiment_, Master’s thesis, Università degli studi di Milano-Bicocca, 2013, presented 28 Nov 2013, CERN-THESIS-2013-259
* [33] G. Punzi, _Sensitivity of searches for new signals and its optimization_ , eConf C030908 (2003) MODT002, arXiv:physics/0308063
* [34] LHCb collaboration, R. Aaij et al., _Measurement of the fragmentation fraction ratio $f_{s}/f_{d}$ and its dependence on $B$ meson kinematics_, JHEP 04 (2013) 001, arXiv:1301.5286, $f_{s}/f_{d}$ value updated in LHCb-CONF-2013-011
* [35] LHCb collaboration, R. Aaij et al., _Measurement of $f_{s}/f_{u}$ variation with proton-proton collision energy and $B$-meson kinematics_, Phys. Rev. Lett. 124 (2020) 122002, arXiv:1910.09934
* [36] LHCb collaboration, R. Aaij et al., _Measurement of $b$-hadron fractions in 13 TeV $p$ $p$ collisions_, Phys. Rev. D100 (2019) 031102(R), arXiv:1902.06794
* [37] T. Skwarnicki, A study of the radiative cascade transitions between the Upsilon-prime and Upsilon resonances, PhD thesis, Institute of Nuclear Physics, Krakow, 1986, DESY-F31-86-02
* [38] S. Tolk, J. Albrecht, F. Dettori, and A. Pellegrino, _Data driven trigger efficiency determination at LHCb_ , LHCb-PUB-2014-039, 2014
* [39] Particle Data Group, K. A. Olive et al., _Review of particle physics_ , Chin. Phys. C38 (2014) 090001
* [40] See Supplemental Material for the selection efficiency ratio of normalization and signal mode, the individual categories of the $B^{0}_{s}\rightarrow e^{+}e^{-}$ fit and the distributions with the result of the $B^{0}\rightarrow e^{+}e^{-}$ fit
* [41] B. Efron and R. J. Tibshirani, An introduction to the bootstrap, Mono. Stat. Appl. Probab., Chapman and Hall, London, 1993
* [42] LHCb collaboration, R. Aaij et al., _Measurement of the track reconstruction efficiency at LHCb_ , JINST 10 (2015) P02007, arXiv:1408.1251
* [43] A. L. Read, _Presentation of search results: The CLs technique_, J. Phys. G28 (2002) 2693
* [44] LHCb collaboration, R. Aaij et al., _Measurement of the CKM angle $\gamma$ from a combination of LHCb results_, JHEP 12 (2016) 087, arXiv:1611.03076
* [45] M. Kenzie et al., _GammaCombo: A statistical analysis framework for combining measurements, fitting datasets and producing confidence intervals_ , doi: 10.5281/zenodo.3371421
* [46] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, _Asymptotic formulae for likelihood-based tests of new physics_ , Eur. Phys. J. C71 (2011) 1554, Erratum ibid. C73 (2013) 2501, arXiv:1007.1727
LHCb collaboration
R. Aaij31, C. Abellán Beteta49, T. Ackernley59, B. Adeva45, M. Adinolfi53, H.
Afsharnia9, C.A. Aidala81, S. Aiola25, Z. Ajaltouni9, S. Akar66, P.
Albicocco22, J. Albrecht14, F. Alessio47, M. Alexander58, A. Alfonso Albero44,
G. Alkhazov37, P. Alvarez Cartelle60, A.A. Alves Jr45, S. Amato2, Y. Amhis11,
L. An21, L. Anderlini21, G. Andreassi48, M. Andreotti20, F. Archilli16, A.
Artamonov43, M. Artuso67, K. Arzymatov41, E. Aslanides10, M. Atzeni49, B.
Audurier11, S. Bachmann16, J.J. Back55, S. Baker60, V. Balagura11,b, W.
Baldini20, A. Baranov41, R.J. Barlow61, S. Barsuk11, W. Barter60, M.
Bartolini23,47,h, F. Baryshnikov78, J.M. Basels13, G. Bassi28, V.
Batozskaya35, B. Batsukh67, A. Battig14, A. Bay48, M. Becker14, F. Bedeschi28,
I. Bediaga1, A. Beiter67, V. Belavin41, S. Belin26, V. Bellee48, K. Belous43,
I. Belyaev38, G. Bencivenni22, E. Ben-Haim12, S. Benson31, A. Berezhnoy39, R.
Bernet49, D. Berninghoff16, H.C. Bernstein67, C. Bertella47, E. Bertholet12,
A. Bertolin27, C. Betancourt49, F. Betti19,e, M.O. Bettler54, Ia. Bezshyiko49,
S. Bhasin53, J. Bhom33, M.S. Bieker14, S. Bifani52, P. Billoir12, A.
Bizzeti21,t, M. Bjørn62, M.P. Blago47, T. Blake55, F. Blanc48, S. Blusk67, D.
Bobulska58, V. Bocci30, O. Boente Garcia45, T. Boettcher63, A. Boldyrev79, A.
Bondar42,w, N. Bondar37,47, S. Borghi61, M. Borisyak41, M. Borsato16, J.T.
Borsuk33, T.J.V. Bowcock59, C. Bozzi20, M.J. Bradley60, S. Braun65, A. Brea
Rodriguez45, M. Brodski47, J. Brodzicka33, A. Brossa Gonzalo55, D. Brundu26,
E. Buchanan53, A. Büchler-Germann49, A. Buonaura49, C. Burr47, A. Bursche26,
A. Butkevich40, J.S. Butter31, J. Buytaert47, W. Byczynski47, S. Cadeddu26, H.
Cai72, R. Calabrese20,g, L. Calero Diaz22, S. Cali22, R. Calladine52, M.
Calvi24,i, M. Calvo Gomez44,l, P. Camargo Magalhaes53, A. Camboni44,l, P.
Campana22, D.H. Campora Perez31, A.F. Campoverde Quezada5, L. Capriotti19,e,
A. Carbone19,e, G. Carboni29, R. Cardinale23,h, A. Cardini26, I. Carli6, P.
Carniti24,i, K. Carvalho Akiba31, A. Casais Vidal45, G. Casse59, M.
Cattaneo47, G. Cavallero47, S. Celani48, R. Cenci28,o, J. Cerasoli10, M.G.
Chapman53, M. Charles12, Ph. Charpentier47, G. Chatzikonstantinidis52, M.
Chefdeville8, V. Chekalina41, C. Chen3, S. Chen26, A. Chernov33, S.-G.
Chitic47, V. Chobanova45, S. Cholak48, M. Chrzaszcz33, A. Chubykin37, V.
Chulikov37, P. Ciambrone22, M.F. Cicala55, X. Cid Vidal45, G. Ciezarek47, F.
Cindolo19, P.E.L. Clarke57, M. Clemencic47, H.V. Cliff54, J. Closier47, J.L.
Cobbledick61, V. Coco47, J.A.B. Coelho11, J. Cogan10, E. Cogneras9, L.
Cojocariu36, P. Collins47, T. Colombo47, A. Contu26, N. Cooke52, G. Coombs58,
S. Coquereau44, G. Corti47, C.M. Costa Sobral55, B. Couturier47, D.C. Craik63,
J. Crkovská66, A. Crocombe55, M. Cruz Torres1,z, R. Currie57, C.L. Da Silva66,
E. Dall’Occo14, J. Dalseno45,53, C. D’Ambrosio47, A. Danilina38, P.
d’Argent47, A. Davis61, O. De Aguiar Francisco47, K. De Bruyn47, S. De
Capua61, M. De Cian48, J.M. De Miranda1, L. De Paula2, M. De Serio18,d, P. De
Simone22, J.A. de Vries76, C.T. Dean66, W. Dean81, D. Decamp8, L. Del Buono12,
B. Delaney54, H.-P. Dembinski14, A. Dendek34, V. Denysenko49, D. Derkach79, O.
Deschamps9, F. Desse11, F. Dettori26,f, B. Dey7, A. Di Canto47, P. Di Nezza22,
S. Didenko78, H. Dijkstra47, V. Dobishuk51, F. Dordei26, M. Dorigo28,x, A.C.
dos Reis1, L. Douglas58, A. Dovbnya50, K. Dreimanis59, M.W. Dudek33, L.
Dufour47, P. Durante47, J.M. Durham66, D. Dutta61, M. Dziewiecki16, A.
Dziurda33, A. Dzyuba37, S. Easo56, U. Egede69, V. Egorychev38, S.
Eidelman42,w, S. Eisenhardt57, S. Ek-In48, L. Eklund58, S. Ely67, A. Ene36, E.
Epple66, S. Escher13, J. Eschle49, S. Esen31, T. Evans47, A. Falabella19, J.
Fan3, Y. Fan5, N. Farley52, S. Farry59, D. Fazzini11, P. Fedin38, M. Féo47, P.
Fernandez Declara47, A. Fernandez Prieto45, F. Ferrari19,e, L. Ferreira
Lopes48, F. Ferreira Rodrigues2, S. Ferreres Sole31, M. Ferrillo49, M. Ferro-
Luzzi47, S. Filippov40, R.A. Fini18, M. Fiorini20,g, M. Firlej34, K.M.
Fischer62, C. Fitzpatrick47, T. Fiutowski34, F. Fleuret11,b, M. Fontana47, F.
Fontanelli23,h, R. Forty47, V. Franco Lima59, M. Franco Sevilla65, M. Frank47,
C. Frei47, D.A. Friday58, J. Fu25,p, Q. Fuehring14, W. Funk47, E. Gabriel57,
A. Gallas Torreira45, D. Galli19,e, S. Gallorini27, S. Gambetta57, Y. Gan3, M.
Gandelman2, P. Gandini25, Y. Gao4, L.M. Garcia Martin46, J. García Pardiñas49,
B. Garcia Plana45, F.A. Garcia Rosales11, L. Garrido44, D. Gascon44, C.
Gaspar47, D. Gerick16, E. Gersabeck61, M. Gersabeck61, T. Gershon55, D.
Gerstel10, Ph. Ghez8, V. Gibson54, A. Gioventù45, P. Gironella Gironell44, L.
Giubega36, C. Giugliano20, K. Gizdov57, V.V. Gligorov12, C. Göbel70, E.
Golobardes44,l, D. Golubkov38, A. Golutvin60,78, A. Gomes1,a, P. Gorbounov38,
I.V. Gorelov39, C. Gotti24,i, E. Govorkova31, J.P. Grabowski16, R. Graciani
Diaz44, T. Grammatico12, L.A. Granado Cardoso47, E. Graugés44, E. Graverini48,
G. Graziani21, A. Grecu36, R. Greim31, P. Griffith20, L. Grillo61, L.
Gruber47, B.R. Gruberg Cazon62, C. Gu3, E. Gushchin40, A. Guth13, Yu.
Guz43,47, T. Gys47, P. A. Günther16, T. Hadavizadeh62, G. Haefeli48, C.
Haen47, S.C. Haines54, P.M. Hamilton65, Q. Han7, X. Han16, T.H. Hancock62, S.
Hansmann-Menzemer16, N. Harnew62, T. Harrison59, R. Hart31, C. Hasse14, M.
Hatch47, J. He5, M. Hecker60, K. Heijhoff31, K. Heinicke14, A.M. Hennequin47,
K. Hennessy59, L. Henry46, J. Heuel13, A. Hicheur68, D. Hill62, M. Hilton61,
P.H. Hopchev48, J. Hu16, J. Hu71, W. Hu7, W. Huang5, W. Hulsbergen31, T.
Humair60, R.J. Hunter55, M. Hushchyn79, D. Hutchcroft59, D. Hynds31, P.
Ibis14, M. Idzik34, P. Ilten52, A. Inglessi37, K. Ivshin37, R. Jacobsson47, S.
Jakobsen47, E. Jans31, B.K. Jashal46, A. Jawahery65, V. Jevtic14, F. Jiang3,
M. John62, D. Johnson47, C.R. Jones54, B. Jost47, N. Jurik62, S. Kandybei50,
M. Karacson47, J.M. Kariuki53, N. Kazeev79, M. Kecke16, F. Keizer54,47, M.
Kelsey67, M. Kenzie55, T. Ketel32, B. Khanji47, A. Kharisova80, K.E. Kim67, T.
Kirn13, V.S. Kirsebom48, S. Klaver22, K. Klimaszewski35, S. Koliiev51, A.
Kondybayeva78, A. Konoplyannikov38, P. Kopciewicz34, R. Kopecna16, P.
Koppenburg31, M. Korolev39, I. Kostiuk31,51, O. Kot51, S. Kotriakhova37, L.
Kravchuk40, R.D. Krawczyk47, M. Kreps55, F. Kress60, S. Kretzschmar13, P.
Krokovny42,w, W. Krupa34, W. Krzemien35, W. Kucewicz33,k, M. Kucharczyk33, V.
Kudryavtsev42,w, H.S. Kuindersma31, G.J. Kunde66, T. Kvaratskheliya38, D.
Lacarrere47, G. Lafferty61, A. Lai26, D. Lancierini49, J.J. Lane61, G.
Lanfranchi22, C. Langenbruch13, O. Lantwin49, T. Latham55, F. Lazzari28,u, R.
Le Gac10, S.H. Lee81, R. Lefèvre9, A. Leflat39,47, O. Leroy10, T. Lesiak33, B.
Leverington16, H. Li71, L. Li62, X. Li66, Y. Li6, Z. Li67, X. Liang67, T.
Lin60, R. Lindner47, V. Lisovskyi14, G. Liu71, X. Liu3, D. Loh55, A. Loi26, J.
Lomba Castro45, I. Longstaff58, J.H. Lopes2, G. Loustau49, G.H. Lovell54, Y.
Lu6, D. Lucchesi27,n, M. Lucio Martinez31, Y. Luo3, A. Lupato27, E. Luppi20,g,
O. Lupton55, A. Lusiani28,s, X. Lyu5, S. Maccolini19,e, F. Machefert11, F.
Maciuc36, V. Macko48, P. Mackowiak14, S. Maddrell-Mander53, L.R. Madhan
Mohan53, O. Maev37, A. Maevskiy79, D. Maisuzenko37, M.W. Majewski34, S.
Malde62, B. Malecki47, A. Malinin77, T. Maltsev42,w, H. Malygina16, G.
Manca26,f, G. Mancinelli10, R. Manera Escalero44, D. Manuzzi19,e, D.
Marangotto25,p, J. Maratas9,v, J.F. Marchand8, U. Marconi19, S.
Mariani21,21,47, C. Marin Benito11, M. Marinangeli48, P. Marino48, J. Marks16,
P.J. Marshall59, G. Martellotti30, L. Martinazzoli47, M. Martinelli24,i, D.
Martinez Santos45, F. Martinez Vidal46, A. Massafferri1, M. Materok13, R.
Matev47, A. Mathad49, Z. Mathe47, V. Matiunin38, C. Matteuzzi24, K.R.
Mattioli81, A. Mauri49, E. Maurice11,b, M. McCann60, L. Mcconnell17, A.
McNab61, R. McNulty17, J.V. Mead59, B. Meadows64, C. Meaux10, G. Meier14, N.
Meinert74, D. Melnychuk35, S. Meloni24,i, M. Merk31, A. Merli25, M.
Mikhasenko47, D.A. Milanes73, E. Millard55, M.-N. Minard8, O. Mineev38, L.
Minzoni20, S.E. Mitchell57, B. Mitreska61, D.S. Mitzel47, A. Mödden14, A.
Mogini12, R.D. Moise60, T. Mombächer14, I.A. Monroy73, S. Monteil9, M.
Morandin27, G. Morello22, M.J. Morello28,s, J. Moron34, A.B. Morris10, A.G.
Morris55, R. Mountain67, H. Mu3, F. Muheim57, M. Mukherjee7, M. Mulder47, D.
Müller47, K. Müller49, C.H. Murphy62, D. Murray61, P. Muzzetto26, P. Naik53,
T. Nakada48, R. Nandakumar56, T. Nanut48, I. Nasteva2, M. Needham57, N.
Neri25,p, S. Neubert16, N. Neufeld47, R. Newcombe60, T.D. Nguyen48, C. Nguyen-
Mau48,m, E.M. Niel11, S. Nieswand13, N. Nikitin39, N.S. Nolte47, C. Nunez81,
A. Oblakowska-Mucha34, V. Obraztsov43, S. Ogilvy58, D.P. O’Hanlon53, R.
Oldeman26,f, C.J.G. Onderwater75, J. D. Osborn81, A. Ossowska33, J.M. Otalora
Goicochea2, T. Ovsiannikova38, P. Owen49, A. Oyanguren46, P.R. Pais48, T.
Pajero28,28,47,s, A. Palano18, M. Palutan22, G. Panshin80, A. Papanestis56, M.
Pappagallo57, L.L. Pappalardo20, C. Pappenheimer64, W. Parker65, C. Parkes61,
G. Passaleva21,47, A. Pastore18, M. Patel60, C. Patrignani19,e, A. Pearce47,
A. Pellegrino31, M. Pepe Altarelli47, S. Perazzini19, D. Pereima38, P.
Perret9, L. Pescatore48, K. Petridis53, A. Petrolini23,h, A. Petrov77, S.
Petrucci57, M. Petruzzo25,p, B. Pietrzyk8, G. Pietrzyk48, M. Pili62, D.
Pinci30, J. Pinzino47, F. Pisani19, A. Piucci16, V. Placinta36, S. Playfer57,
J. Plews52, M. Plo Casasus45, F. Polci12, M. Poli Lener22, M. Poliakova67, A.
Poluektov10, N. Polukhina78,c, I. Polyakov67, E. Polycarpo2, G.J. Pomery53, S.
Ponce47, A. Popov43, D. Popov52, S. Poslavskii43, K. Prasanth33, L.
Promberger47, C. Prouve45, V. Pugatch51, A. Puig Navarro49, H. Pullen62, G.
Punzi28,o, W. Qian5, J. Qin5, R. Quagliani12, B. Quintana8, N.V. Raab17, R.I.
Rabadan Trejo10, B. Rachwal34, J.H. Rademacker53, M. Rama28, M. Ramos
Pernas45, M.S. Rangel2, F. Ratnikov41,79, G. Raven32, M. Reboud8, F. Redi48,
F. Reiss12, C. Remon Alepuz46, Z. Ren3, V. Renaudin62, S. Ricciardi56, D.S.
Richards56, S. Richards53, K. Rinnert59, P. Robbe11, A. Robert12, A.B.
Rodrigues48, E. Rodrigues59, J.A. Rodriguez Lopez73, M. Roehrken47, A.
Rollings62, V. Romanovskiy43, M. Romero Lamas45, A. Romero Vidal45, J.D.
Roth81, M. Rotondo22, M.S. Rudolph67, T. Ruf47, J. Ruiz Vidal46, A.
Ryzhikov79, J. Ryzka34, J.J. Saborido Silva45, N. Sagidova37, N. Sahoo55, B.
Saitta26,f, C. Sanchez Gras31, C. Sanchez Mayordomo46, R. Santacesaria30, C.
Santamarina Rios45, M. Santimaria22, E. Santovetti29,j, G. Sarpis61, M.
Sarpis16, A. Sarti30, C. Satriano30,r, A. Satta29, M. Saur5, D. Savrina38,39,
L.G. Scantlebury Smead62, S. Schael13, M. Schellenberg14, M. Schiller58, H.
Schindler47, M. Schmelling15, T. Schmelzer14, B. Schmidt47, O. Schneider48, A.
Schopper47, H.F. Schreiner64, M. Schubiger31, S. Schulte48, M.H. Schune11, R.
Schwemmer47, B. Sciascia22, A. Sciubba22, S. Sellam68, A. Semennikov38, A.
Sergi52,47, N. Serra49, J. Serrano10, L. Sestini27, A. Seuthe14, P. Seyfert47,
D.M. Shangase81, M. Shapkin43, L. Shchutska48, T. Shears59, L. Shekhtman42,w,
V. Shevchenko77, E. Shmanin78, J.D. Shupperd67, B.G. Siddi20, R. Silva
Coutinho49, L. Silva de Oliveira2, G. Simi27,n, S. Simone18,d, I. Skiba20, N.
Skidmore16, T. Skwarnicki67, M.W. Slater52, J.G. Smeaton54, A. Smetkina38, E.
Smith13, I.T. Smith57, M. Smith60, A. Snoch31, M. Soares19, L. Soares Lavra9,
M.D. Sokoloff64, F.J.P. Soler58, B. Souza De Paula2, B. Spaan14, E. Spadaro
Norella25,p, P. Spradlin58, F. Stagni47, M. Stahl64, S. Stahl47, P. Stefko48,
O. Steinkamp49,78, S. Stemmle16, O. Stenyakin43, M. Stepanova37, H. Stevens14,
S. Stone67, S. Stracka28, M.E. Stramaglia48, M. Straticiuc36, S. Strokov80, J.
Sun26, L. Sun72, Y. Sun65, P. Svihra61, K. Swientek34, A. Szabelski35, T.
Szumlak34, M. Szymanski47, S. Taneja61, Z. Tang3, T. Tekampe14, F. Teubert47,
E. Thomas47, K.A. Thomson59, M.J. Tilley60, V. Tisserand9, S. T’Jampens8, M.
Tobin6, S. Tolk47, L. Tomassetti20,g, D. Torres Machado1, D.Y. Tou12, E.
Tournefier8, M. Traill58, M.T. Tran48, E. Trifonova78, C. Trippl48, A.
Tsaregorodtsev10, G. Tuci28,o, A. Tully48, N. Tuning31, A. Ukleja35, A.
Usachov31, A. Ustyuzhanin41,79, U. Uwer16, A. Vagner80, V. Vagnoni19, A.
Valassi47, G. Valenti19, M. van Beuzekom31, H. Van Hecke66, E. van
Herwijnen47, C.B. Van Hulse17, M. van Veghel75, R. Vazquez Gomez44, P. Vazquez
Regueiro45, C. Vázquez Sierra31, S. Vecchi20, J.J. Velthuis53, M. Veltri21,q,
A. Venkateswaran67, M. Veronesi31, M. Vesterinen55, J.V. Viana Barbosa47, D.
Vieira64, M. Vieites Diaz48, H. Viemann74, X. Vilasis-Cardona44,l, G.
Vitali28, A. Vitkovskiy31, A. Vollhardt49, D. Vom Bruch12, A. Vorobyev37, V.
Vorobyev42,w, N. Voropaev37, R. Waldi74, J. Walsh28, J. Wang3, J. Wang72, J.
Wang6, M. Wang3, Y. Wang7, Z. Wang49, D.R. Ward54, H.M. Wark59, N.K. Watson52,
D. Websdale60, A. Weiden49, C. Weisser63, B.D.C. Westhenry53, D.J. White61, M.
Whitehead13, D. Wiedner14, G. Wilkinson62, M. Wilkinson67, I. Williams54, M.
Williams63, M.R.J. Williams61, T. Williams52, F.F. Wilson56, W. Wislicki35, M.
Witek33, L. Witola16, G. Wormser11, S.A. Wotton54, H. Wu67, K. Wyllie47, Z.
Xiang5, D. Xiao7, Y. Xie7, H. Xing71, A. Xu4, J. Xu5, L. Xu3, M. Xu7, Q. Xu5,
Z. Xu4, Z. Yang3, Z. Yang65, Y. Yao67, L.E. Yeomans59, H. Yin7, J. Yu7, X.
Yuan67, O. Yushchenko43, K.A. Zarebski52, M. Zavertyaev15,c, M. Zdybal33, M.
Zeng3, D. Zhang7, L. Zhang3, S. Zhang4, W.C. Zhang3,y, Y. Zhang47, A.
Zhelezov16, Y. Zheng5, X. Zhou5, Y. Zhou5, X. Zhu3, V. Zhukov13,39, J.B.
Zonneveld57, S. Zucchelli19,e.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4School of Physics State Key Laboratory of Nuclear Physics and Technology,
Peking University, Beijing, China
5University of Chinese Academy of Sciences, Beijing, China
6Institute Of High Energy Physics (IHEP), Beijing, China
7Institute of Particle Physics, Central China Normal University, Wuhan, Hubei,
China
8Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy,
France
9Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France
10Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France
11Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
12LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3,
Paris, France
13I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany
14Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
15Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
16Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
17School of Physics, University College Dublin, Dublin, Ireland
18INFN Sezione di Bari, Bari, Italy
19INFN Sezione di Bologna, Bologna, Italy
20INFN Sezione di Ferrara, Ferrara, Italy
21INFN Sezione di Firenze, Firenze, Italy
22INFN Laboratori Nazionali di Frascati, Frascati, Italy
23INFN Sezione di Genova, Genova, Italy
24INFN Sezione di Milano-Bicocca, Milano, Italy
25INFN Sezione di Milano, Milano, Italy
26INFN Sezione di Cagliari, Monserrato, Italy
27INFN Sezione di Padova, Padova, Italy
28INFN Sezione di Pisa, Pisa, Italy
29INFN Sezione di Roma Tor Vergata, Roma, Italy
30INFN Sezione di Roma La Sapienza, Roma, Italy
31Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands
32Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, Netherlands
33Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
34AGH - University of Science and Technology, Faculty of Physics and Applied
Computer Science, Kraków, Poland
35National Center for Nuclear Research (NCBJ), Warsaw, Poland
36Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
37Petersburg Nuclear Physics Institute NRC Kurchatov Institute (PNPI NRC KI),
Gatchina, Russia
38Institute of Theoretical and Experimental Physics NRC Kurchatov Institute (ITEP NRC KI), Moscow, Russia
39Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
40Institute for Nuclear Research of the Russian Academy of Sciences (INR RAS),
Moscow, Russia
41Yandex School of Data Analysis, Moscow, Russia
42Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia
43Institute for High Energy Physics NRC Kurchatov Institute (IHEP NRC KI), Protvino, Russia
44ICCUB, Universitat de Barcelona, Barcelona, Spain
45Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de
Santiago de Compostela, Santiago de Compostela, Spain
46Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia -
CSIC, Valencia, Spain
47European Organization for Nuclear Research (CERN), Geneva, Switzerland
48Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL),
Lausanne, Switzerland
49Physik-Institut, Universität Zürich, Zürich, Switzerland
50NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
51Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
52University of Birmingham, Birmingham, United Kingdom
53H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
54Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
55Department of Physics, University of Warwick, Coventry, United Kingdom
56STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
57School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
58School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
59Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
60Imperial College London, London, United Kingdom
61Department of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
62Department of Physics, University of Oxford, Oxford, United Kingdom
63Massachusetts Institute of Technology, Cambridge, MA, United States
64University of Cincinnati, Cincinnati, OH, United States
65University of Maryland, College Park, MD, United States
66Los Alamos National Laboratory (LANL), Los Alamos, United States
67Syracuse University, Syracuse, NY, United States
68Laboratory of Mathematical and Subatomic Physics, Constantine, Algeria, associated to 2
69School of Physics and Astronomy, Monash University, Melbourne, Australia,
associated to 55
70Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
71Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to 3
72School of Physics and Technology, Wuhan University, Wuhan, China, associated
to 3
73Departamento de Fisica, Universidad Nacional de Colombia, Bogota, Colombia, associated to 12
74Institut für Physik, Universität Rostock, Rostock, Germany, associated to 16
75Van Swinderen Institute, University of Groningen, Groningen, Netherlands,
associated to 31
76Universiteit Maastricht, Maastricht, Netherlands, associated to 31
77National Research Centre Kurchatov Institute, Moscow, Russia, associated to
38
78National University of Science and Technology “MISIS”, Moscow, Russia,
associated to 38
79National Research University Higher School of Economics, Moscow, Russia,
associated to 41
80National Research Tomsk Polytechnic University, Tomsk, Russia, associated to
38
81University of Michigan, Ann Arbor, United States, associated to 67
aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil
bLaboratoire Leprince-Ringuet, Palaiseau, France
cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
dUniversità di Bari, Bari, Italy
eUniversità di Bologna, Bologna, Italy
fUniversità di Cagliari, Cagliari, Italy
gUniversità di Ferrara, Ferrara, Italy
hUniversità di Genova, Genova, Italy
iUniversità di Milano Bicocca, Milano, Italy
jUniversità di Roma Tor Vergata, Roma, Italy
kAGH - University of Science and Technology, Faculty of Computer Science,
Electronics and Telecommunications, Kraków, Poland
lDS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain
mHanoi University of Science, Hanoi, Vietnam
nUniversità di Padova, Padova, Italy
oUniversità di Pisa, Pisa, Italy
pUniversità degli Studi di Milano, Milano, Italy
qUniversità di Urbino, Urbino, Italy
rUniversità della Basilicata, Potenza, Italy
sScuola Normale Superiore, Pisa, Italy
tUniversità di Modena e Reggio Emilia, Modena, Italy
uUniversità di Siena, Siena, Italy
vMSU - Iligan Institute of Technology (MSU-IIT), Iligan, Philippines
wNovosibirsk State University, Novosibirsk, Russia
xINFN Sezione di Trieste, Trieste, Italy
ySchool of Physics and Information Technology, Shaanxi Normal University
(SNNU), Xi’an, China
zUniversidad Nacional Autonoma de Honduras, Tegucigalpa, Honduras
11institutetext: ASTRON - the Netherlands Institute for Radio Astronomy, Oude
Hoogeveensedijk 4, 7991 PD Dwingeloo, the Netherlands 22institutetext:
Department of Electronic and Electrical Engineering, University of Bath,
Claverton Down, Bath, BA2 7AY, UK 33institutetext: School of Science and
Technology, Nottingham Trent University, Clifton Lane, Nottingham, NG11 8NS,
UK 44institutetext: Space Environment and Radio Engineering, School of
Engineering, The University of
Birmingham, Edgbaston, Birmingham, B15 2TT, UK 55institutetext: Space Research
Centre, Polish Academy of Sciences, Bartycka 18A, 00-716 Warsaw, Poland
66institutetext: Space Radio-Diagnostics Research Centre, University of Warmia
and Mazury, ul. Romana Prawocheskiego 9, 10-719 Olsztyn, Poland
77institutetext: Technische Universität Berlin, Institut für Geodäsie und
Geoinformationstechnik, Fakultät VI, Sekr. H 12, Hauptgebäude Raum H 5121,
Straße des 17. Juni 135, 10623 Berlin, Germany 88institutetext: GFZ German
Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam, Germany
99institutetext: Shell Technology Center, Bangalore, India 1010institutetext:
Science and Technology B.V., Delft, the Netherlands 1111institutetext: RAL
Space, UKRI STFC, Rutherford Appleton Laboratory, Harwell Campus, Oxfordshire,
OX11 0QX, UK
1212institutetext: Mt Stromlo Observatory, Research School of Astronomy and
Astrophysics, Australian National University, Cotter Road, Weston Creek, ACT
2611, Australia 1313institutetext: Max Planck Institute for Astrophysics,
Karl-Schwarzschild-Str. 1, 85748 Garching, Germany 1414institutetext:
Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029, Hamburg,
Germany 1515institutetext: Thüringer Landessternwarte, Sternwarte 4, D-07778
Tautenburg, Germany 1616institutetext: Jodrell Bank Centre for Astrophysics
(JBCA), Department of Physics & Astronomy, Alan Turing Building, Oxford Road,
University of Manchester, Manchester M139PL, UK 1717institutetext: Leiden
Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The
Netherlands 1818institutetext: LPC2E - Université d’Orléans / CNRS, 45071
Orléans cedex 2, France 1919institutetext: Station de Radioastronomie de
Nançay, Observatoire de Paris, PSL Research University, CNRS, Univ. Orléans,
OSUC, 18330 Nançay, France 2020institutetext: Radboud University, Department
of Astrophysics/IMAPP, P.O. Box 9010, 6500 GL Nijmegen, The Netherlands
2121institutetext: Nikhef, Science Park 105, 1098 XG Amsterdam, The
Netherlands 2222institutetext: Vrije Universiteit Brussel, Astronomy and
Astrophysics Research Group, Pleinlaan 2, 1050 Brussel, Belgium
2323institutetext: Kapteyn Astronomical Institute, University of Groningen,
P.O.Box 800, 9700AV Groningen, the Netherlands 2424institutetext: Leibniz-
Institut für Astrophysik Potsdam, An der Sternwarte 16, D-14482 Potsdam,
Germany 2525institutetext: ECAP, Friedrich-Alexander-Universität Erlangen-
Nürnberg, Erwin-Rommel-Str. 1,
91054 Erlangen, Germany 2626institutetext: DESY, Platanenallee 6, 15738
Zeuthen, Germany 2727institutetext: CIT, Rijksuniversiteit Groningen,
Nettelbosje 1, 9747 AJ Groningen, The Netherlands 2828institutetext: Max-
Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
2929institutetext: Anton Pannekoek Institute, University of Amsterdam, Postbus
94249, 1090 GE Amsterdam, The Netherlands 3030institutetext: Fakultät für
Physik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld, Germany
3131institutetext: South African Radio Astronomy Observatory, 2 Fir Street,
Black River Park, Observatory, Cape Town, 7925, South Africa
3232institutetext: Department of Physics and Astronomy, University of the
Western Cape, Cape Town 7535, South Africa 3333institutetext: Department of
Physics and Electronics, Rhodes University, PO Box 94, Makhanda, 6140, South
Africa 3434institutetext: Jagiellonian University in Kraków, Astronomical
Observatory, ul. Orla 171, PL 30-244 Kraków, Poland 3535institutetext:
Department of Physics, Khalifa University, PO Box 127788, Abu Dhabi, United
Arab Emirates 3636institutetext: Department of Space, Earth and Environment,
Chalmers University of Technology, Onsala Space Observatory, SE-439 92 Onsala,
Sweden 3737institutetext: JIVE, Joint Institute for VLBI-ERIC, Oude
Hoogeveensedijk 4, 7991 PD Dwingeloo, the Netherlands 3838institutetext: LESIA
& USN, Observatoire de Paris, CNRS, PSL, SU/UP/UO, 92195 Meudon, France
# A LOFAR Observation of Ionospheric Scintillation from Two Simultaneous
Travelling Ionospheric Disturbances
R.A. Fallows 11 (corresponding author,<EMAIL_ADDRESS>) B. Forte 22 I. Astin 22 T. Allbrook 22 (now at BAE Systems (operation) Ltd.) A. Arnold 22 (now an independent researcher) A. Wood 33 G. Dorrian 44 M. Mevius 11 H. Rothkaehl
55 B. Matyjasiak 55 A. Krankowski 66 J.M. Anderson 7788 A. Asgekar 99 I.M.
Avruch 1010 M.J.Bentum 11 M.M. Bisi 1111 H.R. Butcher 1212 B. Ciardi 1313 B.
Dabrowski 66 S. Damstra 11 F. de Gasperin 1414 S. Duscha 11 J. Eislöffel 1515
T.M.O Franzen 11 M.A. Garrett 16161717 J.-M. Grießmeier 18181919 A.W. Gunst 11
M. Hoeft 1515 J.R. Hörandel 202021212222 M. Iacobelli 11 H.T. Intema 1717
L.V.E. Koopmans 2323 P. Maat 11 G. Mann 2424 A. Nelles 25252626 H. Paas 2727
V.N. Pandey 112323 W. Reich 2828 A. Rowlinson 112929 M. Ruiter 11 D.J. Schwarz
3030 M. Serylak 31313232 A. Shulevski 2929 O.M. Smirnov 33333131 M. Soida 3434
M. Steinmetz 2424 S. Thoudam 3535 M.C. Toribio 3636 A. van Ardenne 11 I.M. van
Bemmel 3737 M.H.D. van der Wiel 11 M.P. van Haarlem 11 R.C. Vermeulen 11 C.
Vocks 2424 R.A.M.J. Wijers 2929 O. Wucknitz 2828 P. Zarka 3838 P. Zucca 11
(Received November 30, 2019)
###### Abstract
This paper presents the results from one of the first observations of
ionospheric scintillation taken using the Low-Frequency Array (LOFAR). The
observation was of the strong natural radio source Cassiopeia A, taken
overnight on 18-19 August 2013, and exhibited moderately strong scattering
effects in dynamic spectra of intensity received across an observing bandwidth
of 10-80 MHz. Delay-Doppler spectra (the 2-D FFT of the dynamic spectrum) from
the first hour of observation showed two discrete parabolic arcs, one with a
steep curvature and the other shallow, which can be used to provide estimates
of the distance to, and velocity of, the scattering plasma. A cross-
correlation analysis of data received by the dense array of stations in the
LOFAR “core” reveals two different velocities in the scintillation pattern: a
primary velocity of $\sim$20-40 m s-1 with a north-west to south-east
direction, associated with the steep parabolic arc and a scattering altitude
in the F-region or higher, and a secondary velocity of $\sim$110 m s-1 with a
north-east to south-west direction, associated with the shallow arc and a
scattering altitude in the D-region. Geomagnetic activity was low in the mid-
latitudes at the time, but a weak sub-storm at high latitudes reached its peak
at the start of the observation. An analysis of Global Navigation Satellite
Systems (GNSS) and ionosonde data from the time reveals a larger–scale
travelling ionospheric disturbance (TID), possibly the result of the
high–latitude activity, travelling in the north-west to south-east direction,
and, simultaneously, a smaller–scale TID travelling in a north-east to south-
west direction, which could be associated with atmospheric gravity wave
activity. The LOFAR observation shows scattering from both TIDs, at different
altitudes and propagating in different directions. To the best of our
knowledge this is the first time that such a phenomenon has been reported.
###### keywords:
ionospheric scintillation – travelling ionospheric disturbances – instability
mechanisms
## 1 Introduction
Radio waves from compact sources can be strongly affected by any ionised
medium through which they pass. Refraction through large-scale density
structures in the medium leads to strong lensing effects where the radio
source appears, if imaged, to focus, de-focus and change shape as the density
structures in the line of sight themselves move and change. Diffraction of the
wavefront by small-scale density structures leads to variations building up in
the intensity of the wavefront with distance from the scattering medium, due
to interference between the scattered waves, an effect known as scintillation.
Observations of all these effects thus contain a great deal of information on
the medium through which the radio waves have passed, including the large-
scale density, turbulence, and the movement of the medium across the line of
sight. Since the second world war, a large number of studies have shown the
effect of ionospheric density variations on radio signals, as reviewed by
Aarons (1982), and this can lead to disruption for applications using Global
Navigation Satellite Systems (GNSS, e.g., GPS), as thoroughly reviewed by,
e.g., Hapgood (2017). The Low-Frequency Array (LOFAR - van Haarlem et al.
(2013)) is Europe’s largest low-frequency radio telescope, operating across
the frequency band 10–250 MHz, and with a dense array of stations in the
Netherlands and, at the time of writing, 13 stations internationally from
Ireland to Poland. It was conceived and designed for radio astronomy but, at
these frequencies, the ionosphere can also have a strong effect on the radio
astronomy measurement (de Gasperin et al., 2018). Ionospheric scintillation,
which is rarely seen over the mid-latitudes on the high-frequency signals of
GNSS, is seen almost continually in observations of strong natural radio
sources by LOFAR.
The wide bandwidth available with LOFAR enables an easy and direct assessment
of scattering conditions and how they change in a given observation, including
whether scattering is weak or strong, or refractive effects dominate, and
enables further information to be gleaned from delay-Doppler spectra (the 2-D
FFT of a dynamic spectrum, termed variously as the “scattering function”,
“generalised power spectrum”, or “secondary spectrum” - here we use the term
“delay-Doppler” spectrum as this clearly describes what the spectrum shows).
In observations of interstellar scintillation these spectra can exhibit
discrete parabolic arcs which can be modelled to give information on the
distance to the scattering “screen” giving rise to the scintillation and its
velocity across the line of sight (Stinebring et al., 2001; Cordes et al.,
2006). Broadband observations of ionospheric scintillation are not common, but
such arcs have been observed using the Kilpisjärvi Atmospheric Imaging
Receiver Array (KAIRA, McKay-Bukowski et al. (2014) – an independent station
built using LOFAR hardware in arctic Finland) in a study by Fallows et al.
(2014). Model spectra produced by Knepp and Nickisch (2009) have also
illustrated parabolic arc structures, particularly in the case of scattering
from a thin scattering screen.
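To make the construction concrete, a delay-Doppler spectrum is simply the squared magnitude of the 2-D FFT of the dynamic spectrum. A minimal sketch follows (not the analysis code used in this paper; the white-noise input is a stand-in for real data):

```python
import numpy as np

def delay_doppler(dyn_spec, dt, df):
    """Delay-Doppler (secondary) spectrum of a dynamic spectrum.

    dyn_spec: 2-D intensity array of shape (n_freq, n_time)
    dt: time resolution in s; df: channel width in Hz
    """
    # remove the mean so the DC component does not dominate the spectrum
    fluct = dyn_spec - dyn_spec.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(fluct))) ** 2
    delay = np.fft.fftshift(np.fft.fftfreq(dyn_spec.shape[0], d=df))    # s
    doppler = np.fft.fftshift(np.fft.fftfreq(dyn_spec.shape[1], d=dt))  # Hz
    return delay, doppler, power

# e.g. 488 subbands of 195.3125 kHz sampled every 0.083 s, matching the
# observation described in Section 2
rng = np.random.default_rng(0)
delay, doppler, power = delay_doppler(
    rng.standard_normal((488, 3600)), dt=0.083, df=195312.5)
```

Parabolic arcs appear in such spectra because, for a thin scattering screen, the delay of a scattered ray grows quadratically with its Doppler shift; the arc curvature then encodes the screen distance and velocity.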
The wide spatial distribution of LOFAR stations also enables scintillation
conditions at these observing frequencies to be sampled over a large part of
western Europe. A dense “core” of 24 stations, situated near Exloo in the
north-east of the Netherlands, over an area with a diameter of $\sim$3.5 km
further provides a more detailed spatial view of the scintillation pattern in
its field of view.
LOFAR thus enables detailed studies of ionospheric scintillation to be
undertaken which can both reveal details which would be unavailable to
discrete-frequency observations such as those taken using GNSS receivers, and
act as a low-frequency complement to these observations to probe potentially
different scattering scales.
A number of different phenomena can lead to scattering effects in radio wave
propagation through the mid-latitude ionosphere: Ionisation structures due to
gradients in the spatial distribution of the plasma density can arise from a
southward expansion of the auroral oval or from large- to small- scale
travelling ionospheric disturbances (TIDs). Large-scale TIDs (LSTIDs) with
wavelengths of about 200 km typically propagate southward after forming in the
high-latitude ionosphere in response to magnetic disturbances (e.g. storms or
sub-storms, Tsugawa et al. (2004)). On the other hand, medium-scale TIDs
(MSTIDs) seem to form in response to phenomena occurring in the neutral
atmosphere triggering atmospheric gravity waves (AGWs), which then propagate
upwards to generate TIDs at ionospheric heights (Kelley, 2009). The morphology
of MSTIDs varies with local time, season, and magnetic longitude. Their
propagation shows irregular patterns that vary on a case-by-case basis,
although they commonly seem to propagate mainly equatorward during winter
daytime and westward during summer night-time (Hernández-Pajares et al., 2006,
2012; Tsugawa et al., 2007; Saito and Fukao, 1998; Emardson et al., 2013).
Smaller-scale ionisation gradients, likely associated with the Perkins
instability (Kelley, 2009, 2011), can then form as a consequence of the
presence of MSTIDs, potentially leading to scintillation at LOFAR frequencies.
In this paper, we perform an in-depth analysis of ionospheric scintillation
seen in an observation of the strong natural radio source Cassiopeia A (Cas A)
overnight on 18-19 August 2013. This observation was amongst the first of its
kind taken with LOFAR and exhibited quite strong scattering effects across the
10-80 MHz band. The purpose of this paper is both technical and scientific: We
first describe the observation itself, and then demonstrate several techniques
to analyse LOFAR data and show how these can bring out the details of
ionospheric structures. Finally, we use supporting data from GNSS and
ionosondes to get a broader picture of conditions in the ionosphere at the
time and how these give rise to the scintillation seen by LOFAR.
## 2 The LOFAR Observation
LOFAR observed Cas A (Right Ascension 23h23m24s, Declination +58°48’54”)
between 21:05 UT on 18 August 2013 and 04:05 UT on 19 August 2013, recording
dynamic spectra from each available station with a sampling time of 0.083 s over the band 2.24-97.55 MHz. The observing band
was sampled with 7808 channels of 12.207 kHz each, but averaged over each
successive 16-channel block to 488 subbands of 195.3125 kHz for the analyses
described in this paper. At the time of observation the available stations
were the 24 stations of the LOFAR “core”, 13 “remote” stations across the
north-east of the Netherlands, and the international stations at Effelsburg,
Unterweilenbach, Tautenburg, Potsdam, and Jülich (Germany), Nançay (France),
Onsala (Sweden), and Chilbolton (UK). The reader is referred to van Haarlem et
al. (2013) for full details of the LOFAR receiving system. The raw data for
this observation can be obtained from the LOFAR long-term archive
(lta.lofar.eu); observation ID L169059 under project “IPS”.
We first illustrate the data in a more traditional sense. Figure 1 shows time series at three discrete observing frequencies of the data taken by LOFAR
station CS002, at the centre of the core, and their associated power spectra.
The power spectra show a fairly typical shape for intensity scintillation: An
initial flat section at the lowest spectral frequencies represents scattering
from larger-scale density structures which are close enough to the observer
that the scattered waves have not had the space to fully interfere to develop
a full intensity scintillation pattern; the turnover (often termed the
“Fresnel Knee”) indicates the largest density scales for which the intensity
scintillation pattern has fully formed; this is followed by a power-law in the
spectra illustrating the cascade from larger to smaller density scales, which
is cut off in these spectra by white noise due to the receiving system (the
flat section covering high spectral frequencies).
Figure 1: (a) Time series of intensity received at three discrete frequencies
by LOFAR station CS002 during the observation of Cas A on 18-19 August 2013,
plus (b) and (c) power spectra of two 10-minute periods within these time
series.
However, the advantage of observing a natural radio source with LOFAR is that
full dynamic spectra can be produced covering the full observed band. Dynamic
spectra of data taken by LOFAR station CS002 are presented in Figure 2, which
includes a dynamic spectrum of the full observation, alongside more detailed
views of three different single hours of the observation to illustrate the
range of scattering conditions seen. The strength of the scattering can be
seen much more clearly in this view, compared to time series from discrete
observing frequencies. In general, scattering appears weak in this observation
at the highest observing frequencies (where intensity remains highly
correlated across the observing band) with a transition to strong scattering
conditions as the observing frequency decreases. The frequency range displayed
in these dynamic spectra is restricted to exclude the radio–frequency
interference (RFI) which dominates below about 20 MHz and a fade in signal
strength at the higher frequencies due to the imposition of a hard filter to
exclude the FM waveband.
Figure 2: Dynamic spectra of normalised intensity data taken by LOFAR station
CS002 during the observation of Cas A on 18-19 August 2013. The dynamic
spectrum of the entire observing period is given at the top, with zooms into
three different hours of observation below to illustrate the range of
conditions seen. White areas within the plots indicate where RFI was
identified.
RFI is still visible as white areas within the plots. These were identified by
applying a median filter to the data using a window of (19.5 MHz $\times$ 4.2
s) to flatten out the scintillation pattern and then applying a threshold to
identify the RFI. This method appears to be quite successful at identifying
the RFI without also falsely identifying strong peaks in the scintillation as
RFI. For subsequent analysis the RFI data points are replaced by an
interpolation from nearby data, using the Python Astropy (Astropy
Collaboration et al., 2013; Price-Whelan et al., 2018) library routine,
“interpolate_replace_nans”. Normalisation of the data, to correct for long-
period temporal variations in the system (e.g., gain variations resulting from
the varying sensitivity of the receiving antenna array with source elevation),
is carried out after RFI excision by dividing the intensities for each single
frequency subband by a fitted 3rd-order polynomial.
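As an illustration, the RFI excision and normalisation steps can be sketched in Python. This is a minimal outline: the window sizes, threshold, and interpolation kernel are our assumptions rather than the exact pipeline values, and `dyn` is a hypothetical (subbands × time samples) array holding the dynamic spectrum.

```python
import numpy as np
from scipy.ndimage import median_filter
from astropy.convolution import Gaussian2DKernel, interpolate_replace_nans

def excise_rfi(dyn, win=(100, 50), threshold=1.3):
    """Flag RFI against a median-filtered background, then interpolate.

    For 195.3125 kHz subbands and 0.083 s samples, win=(100, 50)
    approximates the (19.5 MHz x 4.2 s) window quoted in the text;
    the threshold is illustrative.
    """
    background = median_filter(dyn, size=win)   # flattens the scintillation
    cleaned = dyn.astype(float)
    cleaned[dyn > threshold * background] = np.nan
    # Astropy routine named in the text for filling the flagged samples
    return interpolate_replace_nans(cleaned, Gaussian2DKernel(x_stddev=2))

def normalise(dyn):
    """Divide each subband by a fitted 3rd-order polynomial in time."""
    t = np.arange(dyn.shape[1])
    out = np.empty_like(dyn, dtype=float)
    for i, row in enumerate(dyn):
        out[i] = row / np.polyval(np.polyfit(t, row, 3), t)
    return out
```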
A variety of scattering conditions are observed during the course of the
observation, as indicated in Figure 2. Different
conditions also naturally occurred over the various international stations
compared to those observed over the Dutch part of LOFAR. In this paper we
therefore focus our analysis on only the first hour of observation and the
measurements taken by the 24 core stations. This allows us to demonstrate the
analysis techniques and to investigate the reason for the scintillation seen
over this interval. Observations from later in this dataset undoubtedly show
other effects and may be discussed in a subsequent publication.
## 3 LOFAR Data Analysis Methods and Results
### 3.1 Delay-Doppler Spectra
The first stage of analysis was the calculation of delay–Doppler spectra:
These were created from the dynamic spectra using five-minute time slices,
advancing every minute through the observation, following the methods
described in Fallows et al. (2014). To avoid regions more heavily contaminated
by RFI, the frequency band used was restricted to 28.5–64.1 MHz. Example
spectra from the first hour are presented in Figure 3.
Figure 3: Example delay-Doppler spectra from the first hour of observation,
taken using five-minute chunks of the dynamic spectrum from CS103 over the
frequency band 28.5-64.1 MHz.
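For reference, a delay-Doppler (secondary) spectrum is obtained as the squared magnitude of the two-dimensional Fourier transform of a dynamic-spectrum slice; the minimal sketch below assumes a cleaned, normalised array as above and omits the windowing and resampling details of Fallows et al. (2014).

```python
import numpy as np

def secondary_spectrum(dyn, dt=0.083, df=195.3125e3):
    """Delay-Doppler spectrum of a dynamic-spectrum slice.

    dyn: 2-D array (n_subbands, n_samples) of normalised intensity.
    Returns the spectrum in dB (zero delay/Doppler at the centre),
    plus the delay (us) and Doppler-frequency (Hz) axes.
    """
    s = np.fft.fftshift(np.abs(np.fft.fft2(dyn - dyn.mean())) ** 2)
    delay = np.fft.fftshift(np.fft.fftfreq(dyn.shape[0], d=df)) * 1e6
    doppler = np.fft.fftshift(np.fft.fftfreq(dyn.shape[1], d=dt))
    return 10 * np.log10(s / s.max() + 1e-12), delay, doppler
```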
The spectra show two clear arcs: the first is a steeper arc which varies in
curvature throughout the first hour (henceforth labelled for convenience as
the “primary arc”); the second is a very shallow arc (henceforth labelled as
the “secondary arc”) which remains stable for the first 40 minutes of the
observation before fading away. By the end of the first hour of observation
the primary arc also becomes less distinctive for a short while before the
delay–Doppler spectra again show distinctive structure, including a return of
the secondary arc.
The variability of the curvature of the primary arc appears to follow a
wave–like pattern during this part of the observation, as displayed in Figure
4. Here, simple parabolas involving only the square term ($y=Cx^{2}$ where $C$
is the curvature) were plotted with various curvatures until a reasonable
eyeball fit was achieved, and the resulting curvatures plotted for every
minute of observation for the first hour. It proved impossible to achieve
reasonable fits using least-squares methods due to confusion from non–arc
structure in the spectra: Fitting curvatures to these scintillation arcs is a
well–known problem in the interstellar scintillation field and new methods of
attempting this were presented at a recent workshop, but they are not easily
described and have yet to be published. Hence, we do not attempt their
application here.
Figure 4: Curvatures of the steeper arc seen in delay-Doppler spectra
calculated using data from CS103, from simple parabolas fitted by eye. The
grey bounds represent an estimated error.
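The eyeball fitting amounts to overlaying trial parabolas on the secondary spectrum until one traces the arc; a hypothetical sketch, reusing `secondary_spectrum` from above (the curvature `C` must be in the same delay units as the y-axis):

```python
import matplotlib.pyplot as plt

def overlay_parabolas(spec, delay, doppler, curvatures):
    """Overlay trial arcs delay = C * doppler**2 for eyeball fitting."""
    plt.pcolormesh(doppler, delay, spec, cmap="viridis")
    for C in curvatures:                 # trial curvatures, tuned by eye
        plt.plot(doppler, C * doppler ** 2, lw=0.8, label=f"C = {C}")
    plt.ylim(delay.min(), delay.max())
    plt.xlabel("Doppler frequency (Hz)")
    plt.ylabel("Delay ($\\mu$s)")
    plt.legend()
    plt.show()
```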
The presence of two scintillation arcs likely indicates that scattering is
dominated by two distinct layers in the ionosphere. A simple analysis, as
described in Fallows et al. (2014), can be used to estimate the altitude of
the scattering region with a basic formula relating arc curvature $C$ to
velocity $V$ and distance $L$ along the line of sight to the scattering region
(Cordes et al., 2006):
$L=2CV^{2}$ (1)
The square term for the velocity illustrates the importance of gaining a good
estimate of velocity to be able to accurately estimate the altitude of the
scattering region via this method.
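As a worked example of Equation 1 (values illustrative, drawn from the secondary arc discussed in Section 3.4), the conversion from curvature and velocity to a scattering altitude, including the source-elevation projection applied later, might look like:

```python
import numpy as np

def scattering_altitude(curvature, velocity, elevation_deg):
    """Altitude from Equation 1: L = 2*C*V**2 along the line of sight,
    projected to the vertical using the source elevation.

    curvature is in units such that L comes out in metres for V in m/s.
    """
    L = 2.0 * curvature * velocity ** 2
    return L * np.sin(np.radians(elevation_deg))

# Secondary arc: C = 3.2, V ~ 110 m/s, elevation 64 deg -> roughly 70 km
print(scattering_altitude(3.2, 110.0, 64.0) / 1e3, "km")
```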
### 3.2 Scintillation Pattern Flow
The core area of LOFAR contains 24 stations within an area with a diameter of
$\sim$3.5 km. When viewing dynamic spectra from each of these stations it is
clear that the scintillation pattern is mobile over the core (i.e., temporal
shifts in the scintillation pattern are clear between stations) but does not
necessarily evolve significantly. Therefore, the flow of the scintillation
pattern over the core stations may be viewed directly by simply plotting the
intensity received, for a single subband, by each station on a map of
geographical station locations, for data from successive time steps. A movie
(CasA_20130818_NL.mp4) of the scintillation pattern flow through the
observation is published as an online supplement to this article. The result,
for 12 example time steps, is displayed in Figure 5, where a band of higher
intensities can be seen to progress from north-west to south-east over the
core. It should be noted that the data were integrated in time to 0.92 s for
this purpose, to reduce both flicker due to noise and the duration of the
movie. This does not average over any scintillation structure in this
observation; structure with periodicities shorter than one second would be
obvious in the delay–Doppler spectra as an extension of the arc(s) to greater
than 0.5 Hz along the Doppler frequency axis.
Figure 5: Normalised intensities received by all core stations at an observing
frequency of 44.13 MHz, plotted on a geographical map of the stations. The
intensities are colour-coded using a colour scale from yellow to purple with a
range of 0.8 to 1.3 respectively. Times are at $\sim$10 s intervals from
21:22:25 UT at top left to 21:24:15 UT at bottom right, and each plot uses
data samples with an integration time of 0.92 s. Plot diameter is $\sim$4.5
km.
However, this is not the entire picture because the lines of sight from radio
source to receivers are moving through the ionosphere as the Earth rotates,
meaning that the scintillation pattern flow observed is a combination of flow
due to the movement of density variations in the ionosphere and the movement
of the lines of sight themselves through the ionosphere. Since the speed with
which any single point on a line of sight moves through the ionosphere depends
on the altitude of that point (the so-called ionosphere “pierce point”), this
altitude needs to be either assumed or calculated to estimate a correction to
the overall flow speed and so obtain the natural ionospheric
contribution. This introduces a natural uncertainty into estimates of
velocity. Figure 6 shows the track of an ionospheric pierce-point at an
assumed altitude of 200 km (an altitude chosen as representative of a typical
F-region altitude where large-scale plasma structures are commonly observed)
for the line of sight from core station CS002 to the radio source Cassiopeia A
through the 7-hour course of the observation to illustrate this movement.
Although not the subject of this paper, it is worth noting that an east to
west flow seen later in the observation appears to be solely due to the lines
of sight moving across a mostly static ionospheric structure (see the online
movie), if the 200 km pierce point is assumed, further illustrating the
necessity to account accurately for the contribution from line-of-sight
movement when assessing ionospheric speeds.
Figure 6: Map showing the track of the 200 km pierce point of the line of
sight from core station CS002 to Cassiopeia A from 2013-08-18T21:05:00 to
2013-08-19T04:05:00 UT. The thicker orange part of the track enhances the
first hour of the observation. The black line winding a path across the centre
of the image is the location of the border between the Netherlands and
Germany. The location of CS002 is marked with a black star.
The movie of the scintillation pattern flow, assuming a 200 km pierce point,
shows a clear general north-west to south-east flow during the first hour of
the observation, but also indicates some short (minutes) periods of confusion
in which a north-east to south-west component might be just about discernible.
Any second flow is likely to be associated with a second ionospheric layer and
so warrants further investigation.
### 3.3 Estimating Velocities
The representation of the scintillation pattern flow in movie form gives a
direct and broad picture of the flow pattern and is very helpful in
discovering short time-scale changes in speed and direction. However, a
cross-correlation analysis is still necessary to assess the actual velocities.
Correlation functions are calculated as follows:
* •
Time series of intensity received by each station are calculated by averaging
over the frequency band 55–65 MHz, with these frequencies chosen as the
scintillation pattern remains highly correlated over this band;
* •
For each three-minute data slice, advancing the start time of each successive
slice by 10 s:
* –
Calculate auto- and cross- power spectra using intensities from every station
pair within the LOFAR core;
* –
Apply low- and high-pass filters to exclude the DC-component and any slow
system variation unlikely to be due to ionospheric effects, and white noise at
the high spectral frequencies. The white noise is also subtracted using an
average of spectral power over the high frequencies;
* –
Inverse–FFT the power spectra back to the time domain to give auto- and cross-
correlation functions.
In the analysis the high- and low-pass filter values were set to 0.01 Hz and
0.5 Hz respectively. This process results in a large set of cross-correlation
functions for each time slice, each of which has an associated station-station
baseline and a primary peak at, typically, a non-zero time delay from which a
velocity can be calculated. However, the direction of the scintillation
pattern flow still needs to be found for calculation of the actual velocity.
For this, directions were assumed for each degree in the full 360–degree range
of possible azimuth directions and the velocities re-calculated using the
components of all baselines aligned with each assumed direction. This results,
for each time slice, in 360 sets of velocities and from each set a median
velocity and standard deviation about the median can be calculated (the median
is used as this is less susceptible to rogue data points than the mean). The
actual flow direction corresponds to the azimuth with the maximum median
velocity and minimum standard deviation, as illustrated in Figure 7.
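A condensed sketch of this procedure is given below; per-station time series are assumed to have been band-averaged as above, and windowing, the white-noise subtraction, and sign conventions are simplified.

```python
import numpy as np
from itertools import combinations

def cross_correlation(ts1, ts2, dt, f_lo=0.01, f_hi=0.5):
    """Band-limited cross-correlation via the cross-power spectrum."""
    n = len(ts1)
    f = np.fft.rfftfreq(n, d=dt)
    X = np.fft.rfft(ts1 - ts1.mean()) * np.conj(np.fft.rfft(ts2 - ts2.mean()))
    X[(f < f_lo) | (f > f_hi)] = 0.0         # high-/low-pass filtering
    ccf = np.fft.fftshift(np.fft.irfft(X, n))
    lags = np.arange(-(n // 2), n - n // 2)  # in samples
    return ccf, lags

def azimuth_scan(series, positions, dt):
    """Median speed and scatter versus assumed flow azimuth (deg from N).

    series: dict station -> 3-minute intensity time series;
    positions: dict station -> (east, north) offsets in metres.
    """
    medians, stds = [], []
    for az in range(360):
        u = np.array([np.sin(np.radians(az)), np.cos(np.radians(az))])
        v = []
        for s1, s2 in combinations(series, 2):
            ccf, lags = cross_correlation(series[s1], series[s2], dt)
            lag = lags[np.argmax(ccf)] * dt
            if lag != 0.0:
                b = np.dot(np.subtract(positions[s2], positions[s1]), u)
                v.append(b / lag)
        medians.append(np.median(v))
        stds.append(np.std(v))
    return np.array(medians), np.array(stds)
```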
From this analysis the primary velocity of $\sim$20–40 m s-1 travelling from
north-west to south-east is found, illustrated in Figure 7, corresponding to
the obvious scintillation pattern flow seen in the movie. However, the
presence of a second flow is still not obvious, although a hint of it can be
seen in, for example, the second peak in the median velocity seen in Figure 7.
Figure 7: Plots for single 3-minute time slices of the median velocity and
standard deviation of velocities about the median versus azimuth direction,
calculated from the range of velocities found from all cross-correlation
functions with the baselines within each station pair re-calculated for each
assumed azimuth direction, in the usual form, counting clockwise from north.
(a) Time slice commencing 21:05:00 UT using cross-correlations calculated after
applying a high-pass filter at 0.01 Hz; (b) Time slice commencing 21:15:00 UT
using cross-correlations calculated after applying a high-pass filter at 0.07
Hz. Note that the same y-axis is used for both velocity and standard
deviation.
A closer look at the auto- power spectra yielded the key to finding the second
flow. Many spectra show a “bump” which can be viewed as being a second
spectrum superposed on the main one. This is illustrated in Figure 8. To
isolate this part of the spectrum, the spectra were re-filtered with a high-
pass filter value of 0.07 Hz (the low-pass filter value remained the same),
and correlation functions re-calculated. After following the same analysis as
above to find median velocities and standard deviations, the second flow was
found, as illustrated in Figure 7.
Figure 8: Example power spectrum calculated from three minutes of intensity
data received by CS003. The black curve is the raw spectrum, the blue curve is
the filtered and noise-subtracted spectrum. The locations of the low-pass
filter and both high-pass filters used are illustrated.
The analysis, using both high-pass filter values, has been carried out for the
full data set. The velocities and associated directions in degrees azimuth for
the first hour of the observation are given in Figure 9. Error bounds in the
velocities are calculated as the standard deviation about the median of all
velocity values available for the calculated azimuth direction.
Figure 9: Top: Velocities calculated for the first hour of observation from
cross-correlations created after filtering using the two different high-pass
filter values. Bottom: Directions of these velocities, in degrees azimuth.
The higher velocity (henceforth labelled as the “secondary velocity”) shows
some scatter: Periods where the secondary velocity drops to around the primary
velocity values are due to the secondary velocity not being detected at these
times; in these cases, it can still be detected in short-duration drops of
velocity if correlation functions are re-calculated using an even higher high-
pass filter value (the bump in these spectra appears shifted to slightly
higher spectral frequencies). Values which decrease/increase towards/away from
the primary velocity values likely represent a mix between the two velocities.
The larger error bars seen in velocities may also be indicative in some
instances of the standard deviation being broadened by some velocity values
being more dominated by the other flow. The more extended period of scatter
around 21:40 to 22:00 UT is a period where the secondary velocity is less
apparent and the secondary scintillation arc fades from the delay-Doppler
spectra. This indicates that the secondary structure is restricted in either
space or time, either moving out of the field of view of the observation or
ceasing for a period around 21:40 UT. It gives a first indication that the
secondary velocity is associated with the secondary scintillation arc.
### 3.4 Estimating Scattering Altitudes
The velocities can now be used to estimate scattering altitudes, using the
curvatures of the scintillation arcs and the simple formula given in Equation
1. Initially the movement of the line of sight through the ionosphere is not
accounted for, since this correction also requires an estimate of the pierce-
point altitude to be reasonably calculated. Therefore an initial calculation
of the scattering altitudes is made based on velocity values which are not
corrected for this movement.
Using the primary velocities and combining these with the curvatures of the
primary arc (Figure 4) in Equation 1, a range of distances, L, along the line
of sight to the scattering region are found. These distances are converted to
altitudes by accounting for source elevation (Cas A increased in elevation
from 55∘ to 64∘ during the first hour of observation). This process resulted
in a range of altitudes to the scattering region of 200 to 900 km. Doing the
same for the secondary velocities and applying an arc curvature of 3.2$\pm$0.3
for the secondary scintillation arc gives estimated scattering altitudes of
only $\sim$70 km. If the primary/secondary velocities are combined vice-versa
with the secondary/primary arc curvatures respectively, then the resulting
scattering altitudes are clearly unreasonable (the secondary arc, primary
velocity combination gives estimated altitudes of only $\sim$10 km for
example), lending further credence to the secondary velocity being associated
with the secondary arc.
Velocity contributions from the line of sight movement are calculated as
follows: For each time slice, t, the geographical locations beneath the pierce
point of the line of sight through the ionosphere corresponding to the
estimated scattering altitude at t are calculated, for both t and t +
$\delta$t, where $\delta$t is taken as 3 minutes (the actual value is
unimportant for this calculation). A velocity and its direction are found from
the horizontal distance between these two locations and the direction of
travel from one to the other. The general direction of the movement of the
line of sight through the ionosphere is indicated by the orange line in Figure
6. Although the high scattering altitudes related to the primary scintillation
arc and primary scintillation velocities lead to line-of-sight movements of up
to $\sim$35 m s-1, this movement is almost perpendicular to the direction of
the primary scintillation velocity, limiting the actual contribution to $\sim$
5 m s-1. The line of sight movement is, however, in a very similar direction
to the secondary velocities but the low corresponding scattering altitudes
also limit the contribution in this case to $\sim$5 m s-1.
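A sketch of this geometry, assuming a spherical Earth and the standard single-layer pierce-point expressions (not necessarily the exact implementation used here), is:

```python
import numpy as np

R_E = 6371e3  # mean Earth radius, m

def pierce_point(lat, lon, az, el, h):
    """Latitude/longitude (deg) of the pierce point at altitude h (m)
    for a ray at azimuth az and elevation el (deg) from (lat, lon)."""
    el_r, az_r, lat1 = np.radians([el, az, lat])
    # Earth-central angle between station and pierce point
    psi = np.pi / 2 - el_r - np.arcsin(R_E * np.cos(el_r) / (R_E + h))
    lat2 = np.arcsin(np.sin(lat1) * np.cos(psi)
                     + np.cos(lat1) * np.sin(psi) * np.cos(az_r))
    dlon = np.arctan2(np.sin(az_r) * np.sin(psi) * np.cos(lat1),
                      np.cos(psi) - np.sin(lat1) * np.sin(lat2))
    return np.degrees(lat2), lon + np.degrees(dlon)

def los_velocity(lat, lon, azel_t1, azel_t2, h, dt=180.0):
    """Horizontal pierce-point drift (m/s) between two epochs dt apart,
    given the source (az, el) in degrees at each epoch."""
    p1 = np.radians(pierce_point(lat, lon, *azel_t1, h))
    p2 = np.radians(pierce_point(lat, lon, *azel_t2, h))
    # small-separation approximation to the great-circle distance
    d = (R_E + h) * np.hypot(p2[0] - p1[0],
                             (p2[1] - p1[1]) * np.cos(p1[0]))
    return d / dt
```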
An iterative procedure is then followed to correct the scintillation
velocities for line-of-sight movement at the calculated scattering altitudes,
re-calculate these altitudes, and re-calculate the line-of-sight movement.
This procedure converges to a set of final scattering altitudes within 5
iterations. These are presented in Figure 10, with error bounds taken as the
lowest and highest possible altitudes resulting from applying this procedure
using the lower and upper limits of the arc curvature and scintillation
velocity error bounds.
Figure 10: Scattering altitudes estimated using Equation 1, the primary
velocities and primary scintillation arc curvatures (blue curve) and the
secondary velocities and the curvature of the secondary scintillation arc (red
dashed curves).
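Using the two hypothetical helpers above (the Equation 1 altitude and the pierce-point drift), the iteration can be sketched as:

```python
def iterate_altitude(curvature, v_obs, elevation, lat, lon,
                     azel_t1, azel_t2, n_iter=5):
    """Alternately correct the velocity for line-of-sight movement and
    re-estimate the altitude; the text notes convergence within 5
    iterations. The scalar correction shown here glosses over the
    projection onto the flow direction."""
    h = scattering_altitude(curvature, v_obs, elevation)
    for _ in range(n_iter):
        v_los = los_velocity(lat, lon, azel_t1, azel_t2, h)
        h = scattering_altitude(curvature, v_obs - v_los, elevation)
    return h
```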
The range of scattering altitudes encompassed by the error bounds is quite
large in some instances, particularly where the calculated altitudes are
higher. Although the square term for the velocity in Equation 1 could lead to
the natural conclusion that the error in the velocity dominates the error in
scattering altitude, the errors in the velocity calculations are, for the most
part, relatively small. Nevertheless, the error in the secondary velocity does
appear to be the dominant error in the lower range of scattering altitudes
(the red curves in Figure 10). However, the dominant error for the higher range
of scattering altitudes appears to be the scintillation arc curvatures,
illustrating the importance of developing accurate fitting methods for these
curvatures. Despite these concerns, it is clear that scattering is seen from
two layers in the ionosphere; the primary scintillation arc arises from
scattering in the F-region and the secondary scintillation arc arises from
scattering much lower down in the D-region. Plasma decays by recombination
with neutral species. In the F-region the neutral densities are lower and so plasma
lifetimes are longer than in the D-region. Typical plasma lifetimes in the
F-region are of the order of hours, while they are of the order of minutes in
the D-region. Hence the structures seen in each level may have a different
source and time history.
## 4 Conditions in the Ionosphere
We now investigate what the overall ionospheric conditions were at the time
and hence the possible cause(s) of the scintillation seen by LOFAR.
### 4.1 Geomagnetic Conditions
The overall geomagnetic conditions at the time are given in Figure 11, which
shows 24–hour traces of the H–component of magnetic field for a representative
set of magnetometers from the Norwegian magnetometer chain for 18 August 2013.
Activity can be described as unsettled, with a minor substorm at high
latitudes, peaking at the start of the LOFAR observation. However, geomagnetic
activity remained quiet further south, and Kp took a value of 1 at 21 UT on
18th August 2013, indicating that this is unlikely to be a direct cause of the
scintillation seen at LOFAR latitudes. We therefore investigate whether TIDs
were present at the time and whether these could be consistent with the
scintillation seen by LOFAR.
Figure 11: Traces of the H-component of the geomagnetic field recorded on 18
August 2013 by a selection of magnetometer stations from the Norwegian chain.
From top to bottom these are, along with their geomagnetic latitudes (2004,
altitude 100 km): Longyearbyen (75.31∘N), Bjørnøya (71.52∘N), Nordkapp
(67.87∘N), Tromsø (66.69∘N), Rørvik (62.28∘N), and Karmøy (56.43∘N).
### 4.2 Ionosonde Data
The presence of TIDs can be detected through the simultaneous appearance of
wave-like structures on multiple sounding frequencies recorded by an
ionosonde. This method is generally limited to a single point of observation
and detection. Mapping the spatial extent of TIDs can be attempted by comparing
multiple traces from different ionosondes, but this is limited by the low
density of ionosondes in a given region. Measurements from the ionosonde in
Chilton (UK) do indeed suggest the presence of wave-like patterns which, in
principle, could be due to a large-scale TID propagating southward and/or
an MSTID triggered by a local atmospheric gravity wave (Figure 12).
Figure 12: Multiple traces from the ionosonde in Chilton (UK) recorded between
20:00 18 August 2013 and 06:00 19 August 2013.
### 4.3 GNSS Data
However, measurements from ground-based GNSS receivers offer a more
comprehensive view of the characteristics of any MSTIDs present (Kelley,
2009). In the present study, we focus on perturbations in the slant Total
Electron Content (STEC) observed over the evening of 18 August 2013 from a
network of GNSS stations around the LOFAR core stations (see Figure 13). These
stations are sufficient to infer the presence of TIDs and to place an upper
limit on the spatial scale size of the smaller-scale irregularities causing the intensity
scintillation seen at LOFAR wavelengths.
The presence of TID-induced perturbations can be deduced from the presence of
wave-like residuals on the STEC calculated for each satellite-receiver pair.
Figure 13: Map showing the locations of the GNSS stations used.
STEC was calculated and detrended following the methods of Hernández-Pajares
et al. (2006), with the detrending carried out according to:
$\Delta
STEC\left(t\right)=STEC\left(t\right)-\frac{STEC\left(t+\tau\right)+STEC\left(t-\tau\right)}{2}\left[TECu\right]$
(2)
where $\tau=300s$.
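Equation 2 is a two-point running-average subtraction; a minimal NumPy sketch, assuming a regularly sampled STEC series (the 30 s sampling interval is our assumption), is:

```python
import numpy as np

def detrend_stec(stec, dt=30.0, tau=300.0):
    """Equation 2: dSTEC(t) = STEC(t) - [STEC(t+tau) + STEC(t-tau)]/2.

    stec: 1-D array sampled every dt seconds. Edge samples lacking
    both neighbours are returned as NaN.
    """
    k = int(round(tau / dt))
    out = np.full(stec.shape, np.nan)
    out[k:-k] = stec[k:-k] - 0.5 * (stec[2 * k:] + stec[:-2 * k])
    return out
```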
It is worth noting that the measured carrier phases $L_{1}$ and $L_{2}$ vary
with time as a consequence of the motion of GNSS satellites relative to a
given receiver on the Earth’s surface. As such, the spatial and temporal
variabilities of ionisation gradients (such as those connected with TIDs and
corresponding instabilities) become entangled. The various detrending methods
(similar to equation 2) lead to an estimate of ionisation gradients by
considering temporal gradients only, with spatial and temporal variabilities
intrinsically entangled in the GNSS observations.
Figure 14 shows examples of wave-like residuals on STEC for one pair of GNSS
stations (Dentergem and Bruxelles in Belgium) observing the same GNSS
satellite. The wave pattern is strongest over the first two hours shown (18:00
- 20:00 UT) but then weakens considerably by the start of the LOFAR
observation, although it remains evident. STEC from the observations of both
stations appears well–correlated, with the Bruxelles dataset lagging behind
that of Dentergem. Since Dentergem lies to the WNW of Bruxelles, this suggests
a strong westerly component in the direction of travel, which could correspond
with the secondary velocity seen by LOFAR.
Figure 14: Example of a satellite-station pair. (a) PRN01 as observed on 18
August 2013 from Dentergem (DENT, blue line) and Bruxelles (BRUX, red line),
both in Belgium, with baseline oriented from WNW to ESE; (b) azimuth/elevation
plot for PRN01 as observed from Dentergem.
Figure 15 shows hourly plots of the overall geographical distribution of the
STEC residuals calculated for all satellite passes seen within each hour by
the GNSS stations used. The patterns shown in Figure 15 suggest a spatially
and temporally varying propagation of MSTID wavefronts with components along
the NE-SW as well as the NW-SE directions. Furthermore, the examples shown in
Figure 15 also indicate the presence of smaller-scale ionisation structures in
proximity to the wavefronts of the MSTIDs. This suggests that the
scintillation seen by LOFAR is likely associated with the perpendicular
propagation of two MSTIDs. However, the STEC variations here are also seen to
fade by the start of the LOFAR observation.
Figure 15: Hourly geographical distribution of all STEC perturbations in the
evening of 18 August 2013: (a) 18:00-19:00 UT, (b) 19:00-20:00 UT, (c)
20:00-21:00 UT, and (d) 21:00-22:00 UT.
A further illustration looks at the overall power spectral densities for the
STEC residuals on all satellite–receiver pairs considered here over the hourly
periods 20:00 UT to 21:00 UT and 21:00 UT to 22:00 UT (Figure 16). The
earlier hour is chosen alongside the hour covering the LOFAR data period as
this better displays the components seen in the spectra. The temporal
frequencies $f$ can be converted into spatial scales $L$ by assuming a given
velocity $V_{REL}$ for the motion of the ionospheric structures across a GNSS
raypath. That is:
$L=\frac{V_{REL}}{f}$ (3)
where $V_{REL}=V_{IONO}-V_{SAT}$ is the relative velocity between the velocity of the
ionospheric structures and the scan velocity of a single raypath (at the same
shell height). $V_{SAT}$ can be of the order of a few tens of m s-1 at 300 km.
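Equation 3 is a one-line conversion; for the two spectral components identified below, with the $\sim$100 m s-1 relative velocity adopted in the text:

```python
def spatial_scale_km(v_rel, period_s):
    """L = V_REL / f with f = 1/period (Equation 3), returned in km."""
    return v_rel * period_s / 1e3

for period in (1666.0, 666.0):   # the two components seen in Figure 16
    print(f"{period:.0f} s -> {spatial_scale_km(100.0, period):.0f} km")
```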
Figure 16: Power Spectral Densities of all the TEC residuals considered during
the hours (a) 20:00-21:00 UT and (b) 21:00-22:00 UT. The arrows indicate the two
components considered in the text.
There appear to be two main components in the energy cascade from larger to
smaller ionisation scales: one with a period of $\sim$1666 s, and another
component with a period of $\sim$666 s. Taking $V_{REL}$ to be $\sim$100 m s-1 (the
secondary velocity seen by LOFAR as this is in a south-westerly direction and
the example GNSS data in Figure 14 indicate a westerly component), these
periodicities correspond to spatial scales of the order of 166 km and 66 km
respectively. Beyond these scales the STEC analysis is limited by the
sensitivity of the technique (Tsugawa et al., 2007), as the Power Spectral
Densities reach the noise floor (Figure 16). These orders of magnitude
suggest the presence of a larger–scale TID together with a smaller–scale TID
(Kelley, 2009), while the energy cascade that can be observed through the
Power Spectral Densities indicates that the large–scale structure breaks down
into small–scale structures, likely owing to some instability mechanism.
### 4.4 Estimation of Scale Sizes of Plasma Structures
The scale sizes of the plasma structures causing the scintillation seen by
LOFAR can also be calculated. The variations in the intensity of the received
signal are caused by irregularities with a spatial scale size ranging from the
Fresnel dimension to an order of magnitude below this value (Basu et al.,
1998). The Fresnel length DF is related to the wavelength of the radio wave
$\lambda$ and the line of sight distance from the receiver to the scattering
region L:
$D_{F}=\sqrt{2\lambda L}$ (4)
The Fresnel length was calculated for plasma structures at altitudes of 70 km,
200 km, 350 km and 700 km, elevations of 55∘ and 64∘, and at frequencies of
25.19 MHz, 35.15 MHz and 60.15 MHz, and the results are shown in Table 1. The
altitudes were chosen to cover the range of altitudes identified for the
primary and secondary features in the LOFAR analysis, with the addition of 350
km as this altitude is commonly used within studies using GNSS satellites. The
elevations of the radio source at the start and the end of the first hour of
observation were used to establish the range of Fresnel scales for each
altitude. The frequencies were chosen to match Figure 1.
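The Table 1 entries follow directly from Equation 4 once the altitude is converted to a line-of-sight distance via the source elevation; a minimal sketch:

```python
import numpy as np

C_LIGHT = 2.998e8  # speed of light, m/s

def fresnel_length_km(freq_mhz, altitude_km, elevation_deg):
    """Fresnel length D_F = sqrt(2 * lambda * L) (Equation 4), with L
    the line-of-sight distance to the given scattering altitude."""
    lam = C_LIGHT / (freq_mhz * 1e6)
    L = altitude_km * 1e3 / np.sin(np.radians(elevation_deg))
    return np.sqrt(2.0 * lam * L) / 1e3

# 25.19 MHz at 200 km, elevations at the end/start of the first hour
print([round(fresnel_length_km(25.19, 200.0, el), 1) for el in (64, 55)])
```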
Table 1 shows that the Fresnel length ranges between $\sim$1 km and $\sim$5 km
and therefore the plasma structures causing the variations in signal intensity
are likely to have a spatial scale size between $\sim$100 m and $\sim$5 km.
The velocities calculated from the LOFAR data indicate that such structures
would take tens of seconds to pass through the source-to-receiver line and the
intensity variations in the observed signal occur on a similar timescale.
Frequency | 70 km | 200 km | 350 km | 700 km
---|---|---|---|---
25.19 MHz | 1.4 | 2.3–2.4 | 3.0–3.2 | 4.3–4.5
35.15 MHz | 1.2 | 1.9–2.0 | 2.6–2.7 | 3.6–3.8
60.15 MHz | 0.9 | 1.5–1.6 | 2.0–2.1 | 2.8–2.9
Table 1: The Fresnel length at altitudes of 70 km, 200 km, 350 km and 700 km
for three different frequencies received by LOFAR station CS002. The ranges
represent calculation using the source elevation for the start and for the end
of the first hour of observation. Values are in km.
## 5 Further Discussion
Geomagnetic activity was low in the mid-latitudes at the time, so enhanced
activity was unlikely to be the direct cause of the scintillation observed.
However, a weak sub-storm was seen at high latitudes and this reached its peak
at the time of the start of the observation. An analysis of GNSS and ionosonde
data reveals the presence of an MSTID travelling in the north-west to south-
east direction. The larger-scale nature of this TID, and its direction of
travel, are strongly consistent with the primary velocity and F-region
scattering altitudes seen in the LOFAR observation. It is possible that this
TID was caused by the geomagnetic activity at high latitude, but this is not
confirmed. Simultaneously, an MSTID is also present travelling in a north-east
to south-west direction which would most likely be associated with an
atmospheric gravity wave propagating up from the neutral atmosphere. The
smaller–scale nature of it, its direction of travel, and likely low-altitude
source make it highly consistent with the secondary velocity and D–region
scattering altitudes observed by LOFAR.
The amplitude of TID activity observed through GNSS STEC residuals decreased
after 20:00 UT (as visible from Figure 14 as well as from the comparison of
hourly geographical maps in Figure 15). However, the LOFAR observation did not
start until 21:05 UT and the presence of scintillation on the radio
frequencies observed by LOFAR remained significant for much of the first hour
of observation. Whilst the presence of MSTIDs seems evident from the ionosonde
multiple traces and GNSS STEC residuals in the region considered, their
signatures do not appear simultaneously above the LOFAR core stations between
21:00 UT and 22:00 UT. This can be explained by the inability of GNSS to
detect smaller amplitudes in STEC residuals, as the noise floor is encountered
for observations with pierce points above the core LOFAR stations (Figures 15
and 16). The scale sizes of plasma structures calculated for the LOFAR data
indicate that these are an order of magnitude lower than those estimated from
GNSS STEC. Smaller ionisation scales developing, for example, through the
Perkins instability could induce scintillation on the VHF radio frequencies
received by LOFAR but not on the L-band frequencies of GNSS. Hence,
scintillation from these mid-latitude smaller-scale ionisation structures,
formed through the Perkins instability in conjunction with the presence of
TIDs, is likely to be what is detected through LOFAR.
## 6 Conclusions and Outlook
This paper presents the results from one of the first observations of
ionospheric scintillation taken using LOFAR, of the strong natural radio
source Cassiopeia A taken overnight on 18–19 August 2013. The observation
exhibited moderately strong scattering effects in dynamic spectra of intensity
received across an observing bandwidth of 10–80 MHz. Delay–Doppler spectra
from the first hour of observation showed two discrete parabolic arcs, one
with a steep and variable curvature and the other with a shallow and static
curvature, indicating that the scintillation was the result of scattering
through two distinct layers in the ionosphere.
A cross-correlation analysis of the data received by stations in the LOFAR
core reveals two different velocities in the scintillation pattern: A primary
velocity of $\sim$20-40 m s-1 is observed travelling in a north-west to south-
east direction, which is associated with the primary parabolic arc and
altitudes of the scattering layer varying in the range $\sim$200–700 km. A
secondary velocity of $\sim$110 m s-1 is observed travelling in a north-east
to south-west direction, which is associated with the secondary arc and a much
lower scattering altitude of $\sim$60–70 km. The latter velocity is associated
with a secondary “bump” seen at higher spectral frequencies in power spectra
calculated from time series of intensities, indicating that it is more
strongly associated with smaller–scale structure in the ionosphere.
GNSS and ionosonde data from the time suggest the presence of two MSTIDs
travelling in perpendicular directions. The F-region scattering altitudes
calculated from the LOFAR primary scintillation arc and primary velocity, and
the larger density scales associated with this, suggest that this is
associated with a larger–scale TID seen in GNSS data potentially resulting
from high–latitude geomagnetic activity. The D-region scattering altitudes of
the secondary arc and secondary velocity suggest an atmospheric gravity wave
source for a smaller-scale TID. These TIDs trigger an instability which leads
to the breakdown of the large-scale density structure into smaller scales,
giving rise to the scintillation observed. In the mid-latitude ionosphere the
Perkins mechanism is the most likely instability and the features of the
smaller-scale density variations observed seem consistent with this. To the
best of our knowledge this is the first time that two TIDs have been directly
observed simultaneously at different altitudes.
This observation demonstrates that LOFAR can be a highly valuable tool for
observing ionospheric scintillation in the mid–latitudes over Europe and
enables methods of analysis to be used which give greater insight into the
likely sources of scattering and could be used to improve modelling of them.
With a far greater range of frequencies (multi–octave if the LOFAR high–band
is also used) and fine sampling both across the frequency band and in time,
LOFAR observations offer a wider sensitivity than that available to GNSS
measurements. The analysis techniques shown in this paper also demonstrate
that LOFAR can observe ionospheric structures at different altitudes
simultaneously; a capability not commonly available for GNSS observations. It
also complements these measurements by probing potentially different
scintillation regimes to those observed by GNSS.
Since this observation was taken, many more have been carried out under a
number of projects, recording ionospheric scintillation data at times when the
telescope would otherwise be idle. These demonstrate a wide range of
scintillation conditions over LOFAR, some of which are seen only very
occasionally and perhaps by only one or two of the international stations,
illustrating the value to be had by monitoring the ionosphere at these
frequencies. A Design Study, LOFAR4SpaceWeather (LOFAR4SW – funded from the
European Community’s Horizon 2020 Programme H2020 INFRADEV-2017-1 under grant
agreement 777442), currently underway, will design a possible upgrade to LOFAR
to enable, amongst other space weather observations, ionospheric monitoring in
parallel with the regular radio astronomy observations. Such a design, if
implemented, would enable a full statistical study of ionospheric
scintillation at these frequencies, alongside the advances in scintillation
modelling and our understanding of the ionospheric conditions causing it which
can be gleaned in focussed studies such as that presented here.
###### Acknowledgements.
This paper is based on data obtained with the International LOFAR Telescope
(ILT) under project code “IPS”. LOFAR (van Haarlem et al., 2013) is the Low
Frequency Array designed and constructed by ASTRON. It has observing, data
processing, and data storage facilities in several countries, that are owned
by various parties (each with their own funding sources), and that are
collectively operated by the ILT foundation under a joint scientific policy.
The ILT resources have benefitted from the following recent major funding
sources: CNRS-INSU, Observatoire de Paris and Université d’Orléans, France;
BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of
Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The
Science and Technology Facilities Council, UK; Ministry of Science and Higher
Education, Poland. The work carried out at the University of Bath was
supported by the Natural Environment Research Council [Grant Number
NE/R009082/1] and by the European Space Agency/Thales Alenia Space Italy
[H2020-MOM-TASI-016-00002]. We thank Tromsø Geophysical Observatory, UiT the
Arctic University of Norway, for providing the lyr, bjn, nor, tro, rvk, and
kar magnetometer data. The Kp index and the Chilton ionosonde data were
obtained from the U.K. Solar System Data Centre at the Rutherford Appleton
Laboratory. Part of the research leading to these results has received funding
from the European Community’s Horizon 2020 Programme
H2020-INFRADEV-2017-1 under grant agreement 777442.
## References
* Aarons (1982) Aarons, J., 1982. Global morphology of ionospheric scintillations. _Proceedings of the IEEE_ , 70(4), 360–378. https://doi.org/10.1109/PROC.1982.12314.
* Astropy Collaboration et al. (2013) Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P. Greenfield, M. Droettboom, et al., 2013. Astropy: A community Python package for astronomy. _A &A_, 558, A33. 10.1051/0004-6361/201322068, 1307.6212.
* Basu et al. (1998) Basu, S., E. Weber, T. Bullett, M. Keskinen, E. MacKenzie, P. Doherty, R. Sheehan, H. Kuenzler, P. Ning, and J. Bongiolatti, 1998. Characteristics of plasma structuring in the cusp/cleft region at Svalbard. _Radio Science_ , 33(6), 1885–1899. https://doi.org/10.1029/98RS01597.
* Cordes et al. (2006) Cordes, J. M., B. J. Rickett, D. R. Stinebring, and W. A. Coles, 2006. Theory of parabolic arcs in interstellar scintillation spectra. _The Astrophysical Journal_ , 637(1), 346. https://doi.org/10.1086/498332.
* de Gasperin et al. (2018) de Gasperin, F., M. Mevius, D. Rafferty, H. Intema, and R. Fallows, 2018. The effect of the ionosphere on ultra-low-frequency radio-interferometric observations. _Astronomy & Astrophysics_, 615, A179. https://doi.org/10.1051/0004-6361/201833012.
* Emardson et al. (2013) Emardson, R., P. Jarlemark, J. Johansson, and S. Schäfer, 2013. Spatial variability in the ionosphere measured with GNSS networks. _Radio Sci._, 48, 646–652. https://dx.doi.org/10.1002/2013RS005152.
* Fallows et al. (2014) Fallows, R., W. Coles, D. McKay-Bukowski, J. Vierinen, I. Virtanen, et al., 2014. Broadband meter-wavelength observations of ionospheric scintillation. _Journal of Geophysical Research: Space Physics_. https://dx.doi.org/10.1002/2014JA020406.
* Hapgood (2017) Hapgood, M., 2017. Satellite navigation—Amazing technology but insidious risk: Why everyone needs to understand space weather. _Space Weather_ , 15(4), 545–548. https://doi.org/10.1002/2017SW001638.
* Hernández-Pajares et al. (2006) Hernández-Pajares, M., J. M. Juan, and J. Sanz, 2006. Medium-scale travelling ionospheric disturbances affecting GPS measurements: Spatial and temporal analysis. _J. Geophys. Res._ , 111, A07S11. https://dx.doi.org/10.1029/2005JA011474.
* Hernández-Pajares et al. (2012) Hernández-Pajares, M., J. M. Juan, J. Sanz, and A. Aragón-Àngel, 2012. Propagation of medium scale travelling ionospheric disturbances at different latitudes and solar cycle conditions. _Radio Sci._ , 47, RS0K05. https://dx.doi.org/10.1029/2011RS004951.
* Kelley (2009) Kelley, M. C., 2009. The Earth’s ionosphere: plasma physics and electrodynamics, vol. 96 of _International Geophysics Series_. Elsevier, 2 edn.
* Kelley (2011) Kelley, M. C., 2011. On the origin of mesoscale TIDs at midlatitudes. _Ann. Geophys._ , 29, 361–366. https://dx.doi.org/10.5194/angeo-29-361-2011.
* Knepp and Nickisch (2009) Knepp, D. L., and L. Nickisch, 2009. Multiple phase screen calculation of wide bandwidth propagation. _Radio Science_ , 44(1). https://doi.org/10.1029/2008RS004054.
* McKay-Bukowski et al. (2014) McKay-Bukowski, D., J.-P. Vierinen, I. Virtanen, R. Fallows, M. Postila, et al., 2014. KAIRA: the Kilpisjärvi Atmospheric Imaging Receiver Array – system overview and first results. _IEEE Transactions on Geoscience and Remote Sensing_, 53(3), 1440–1451. https://doi.org/10.1109/TGRS.2014.2342252.
* Price-Whelan et al. (2018) Price-Whelan, A. M., B. M. Sipőcz, H. M. Günther, P. L. Lim, S. M. Crawford, et al., 2018. The Astropy Project: Building an Open-science Project and Status of the v2.0 Core Package. _AJ_ , 156, 123. 10.3847/1538-3881/aabc4f.
* Saito and Fukao (1998) Saito, A., and S. Fukao, 1998. High resolution mapping of TEC perturbations with the GSI GPS network over Japan. _Geophys. Res. Lett._ , 25(16), 3079–3082.
* Stinebring et al. (2001) Stinebring, D., M. McLaughlin, J. Cordes, K. Becker, J. E. Goodman, M. Kramer, J. Sheckard, and C. Smith, 2001. Faint scattering around pulsars: probing the interstellar medium on solar system size scales. _The Astrophysical Journal Letters_ , 549(1), L97. https://dx.doi.org/10.1086/319133.
* Tsugawa et al. (2007) Tsugawa, T., Y. Otsuka, A. J. Coster, and A. Saito, 2007. Medium-scale travelling ionospheric disturbances detected with dense and wide TEC maps over North America. _Geophys. Res. Lett._ , 34, L22,101. https://dx.doi.org/10.1029/2007GL031663.
* Tsugawa et al. (2004) Tsugawa, T., A. Saito, and Y. Otsuka, 2004. A statistical study of large-scale traveling ionospheric disturbances using the GPS network in Japan. _Journal of Geophysical Research: Space Physics_ , 109(A6).
* van Haarlem et al. (2013) van Haarlem, M. P., M. W. Wise, A. W. Gunst, G. Heald, J. P. McKean, et al., 2013. LOFAR: The LOw-Frequency ARray. _A &A_, 556, A2. https://dx.doi.org/10.1051/0004-6361/201220873, 1305.3550.
September 16, 2019
# Spin-Triplet Superconductivity in UTe2 and Ferromagnetic Superconductors
Dai Aoki1,2 (E-mail: <EMAIL_ADDRESS>), Ai Nakamura1, Fuminori Honda1, DeXin Li1, Yoshiya Homma1, Yusei Shimizu1, Yoshiki J. Sato1, Georg Knebel2, Jean-Pascal Brison2, Alexandre Pourret2, Daniel Braithwaite2, Gerard Lapertot2, Qun Niu2, Michal Vališka2, Hisatomo Harima3, and Jacques Flouquet2
1IMR, Tohoku University, Oarai, Ibaraki 311-1313, Japan
2University Grenoble Alpes, CEA, IRIG-PHELIQS, F-38000 Grenoble, France
3Graduate School of Science, Kobe University, Kobe 657-8501, Japan
###### Abstract
The spin-triplet state is most likely realized in uranium ferromagnetic
superconductors, UGe2, URhGe, UCoGe. The microscopic coexistence of
ferromagnetism and superconductivity means that the Cooper pairs must form
under the strong internal field due to the ferromagnetism, leading to
the spin-triplet state with equal-spin pairing. The field-reinforced
superconductivity, which is observed in all three materials when the
ferromagnetic fluctuations are enhanced, is one of the strongest pieces of evidence for
the spin-triplet superconductivity. We present here the results of a newly
discovered spin-triplet superconductor, UTe2, and compare those with the
results of ferromagnetic superconductors. Although no magnetic order is found
in UTe2, there are similarities between UTe2 and ferromagnetic
superconductors. For example, the huge upper critical field exceeding the
Pauli limit and the field-reentrant superconductivity for $H\parallel b$-axis
are observed in UTe2, URhGe and UCoGe. We also show the specific heat results
on UTe2 in different quality samples, focusing on the residual density of
states in the superconducting phase.
ferromagnetism, superconductivity, metamagnetism, reentrant superconductivity,
spin triplet, specific heat
The coexistence of ferromagnetism and superconductivity attracts much
attention because unconventional superconductivity with a spin-triplet state is
realized. [1, 2] In general, ferromagnetism and superconductivity are
antagonistic because the large internal field due to the ferromagnetism easily
destroys the superconducting Cooper pairs in conventional superconductors.
Thus it is natural to consider that the spin-triplet superconductivity with
equal spin-pairing is realized in ferromagnetic superconductors. The
microscopic coexistence of ferromagnetism and superconductivity is established
only in uranium compounds so far, namely UGe2 [3], URhGe [4] and UCoGe [5].
All of these materials have fairly small ordered moments ($1$–$0.05\,\mu_{\rm
B}/{\rm U}$) in the ferromagnetic phase compared to that for the U free ion
($\sim 3\,\mu_{\rm B}$). Thus the $5f$ electrons for these compounds are
believed to be itinerant to a first approximation, although the magnetic
anisotropy is rather strong, indicating Ising-like properties. The
superconductivity occurs well below the Curie temperature, $T_{\rm Curie}$, in
the ferromagnetic state. One of the highlights in ferromagnetic
superconductors is the field-reentrant or field-reinforced superconductivity.
In URhGe, for example, the reentrant superconductivity appears when the field
is applied along the hard-magnetization $b$-axis in the orthorhombic
structure. [6] While the transition temperature $T_{\rm sc}$ is $0.25\,{\rm
K}$ at zero field, the reentrant superconducting phase has a maximum $T_{\rm
sc}$ of $0.4\,{\rm K}$ at $H_{\rm R}\sim 12\,{\rm T}$, indicating that the
superconductivity is indeed reinforced under magnetic field. The similar
field-reinforced superconductivity is also observed in UCoGe. [7]
Recently a new spin-triplet superconductor, namely UTe2, was discovered [8, 9].
UTe2 has the body-centered orthorhombic structure with the space group $Immm$
(#71, $D_{2h}^{25}$). The first nearest-neighbor U–U distance is
$3.78\,{\rm\AA}$, which is larger than the so-called Hill limit ($\sim
3.5\,{\rm\AA}$). Although no magnetic order was found down to $0.025\,{\rm
K}$, UTe2 is believed to be at the verge of ferromagnetic order. In fact, the
ferromagnetic fluctuations were observed in $\mu$SR [10] and NMR experiments
[11]. By substituting Te with Se, the ferromagnetic order appears at $69$ and
$33\,{\rm K}$ in UTe0.72Se1.28 and UTe0.24Se1.76, respectively, [12] although
the space group for these materials is $Pnma$, which is different from that in
UTe2. The superconducting transition occurs at $1.6\,{\rm K}$ with the sharp
and large specific heat jump. The large residual density of states nearly
$50\,{\%}$ may suggest the possibility for the spontaneous spin-polarization
and the “partially-gapped” superconductivity similar to the A1 state with non-
unitary state. However, it should be stressed that the direct transition from
the paramagnetic state to the non-unitary state at zero field is forbidden
from the symmetry restriction in this orthorhombic system. [13] Thus, the
hidden feature in the superconducting state is expected. No other transition
in the superconducting state is not observed yet at least at zero field. The
pressure study is definitely important to solve this problem. One of the
strongest support for the spin-triplet superconductivity in UTe2 is the huge
upper critical field, $H_{\rm c2}$. In all the field directions, $H_{\rm c2}$
extremely exceeds the Pauli limit ($\sim 3\,{\rm T}$) expected for the weak-
coupling BCS theory. The values of $H_{\rm c2}$ at $0\,{\rm K}$ are $7$ and
$11\,{\rm T}$ for $H\parallel a$ and $c$-axis, respectively. For $H\parallel
b$-axis, the spectacular field-reentrant superconductivity is observed. [14,
15] The transition temperature monotonously decreases with field down to
$0.4\,{\rm K}$ at $16\,{\rm T}$, then increases with field up to $0.9\,{\rm
K}$ at $35\,{\rm T}$. The first order metamagnetic transtion occurs at $H_{\rm
m}=35\,{\rm T}$ [16, 17], and the superconductivity is abruptly collapsed
above $H_{\rm m}$. The metamagnetic transition at $H_{\rm m}$ is connected to
the so-called $T_{\chi,\rm max}$ at low fields, where the magnetic
susceptibility shows a broad maximum for $H\parallel b$-axis. The magnetic
susceptibility shows the Curie-Weiss behavior at high temperatures. At low
temperatures, the anisotropic susceptibility is observed with the relation,
$\chi_{a}>\chi_{c}>\chi_{b}$, which is consistent with the anisotropy of
$H_{\rm c2}$.
To study the superconducting properties of UTe2 in more detail, we have
grown single crystals of UTe2 with different quality, and measured the
specific heat at low temperatures. We compare the ($H,T$) phase diagrams for
$H\parallel b$-axis in UTe2, URhGe and UCoGe.
Figure 1: (Color online) Photographs of UTe2 single crystals grown by (a)
chemical vapor transport method and (b) Te-flux method. (c) Laue photograph of
UTe2 single crystal along $c$-axis. (d) U-Te phase diagram cited from Ref. 18.
Single crystals of UTe2 were grown using the chemical vapor transport method.
The starting materials of U and Te were put into a quartz ampoule with the
atomic ratio U : Te = 2 : 3, together with iodine as the transport agent at a
density of $3\,{\rm mg/cm}^{3}$ with respect to the inner volume of the quartz ampoule. The
ampoule was slowly heated and was kept at the temperature gradient of
$1060\,^{\circ}{\rm C}$/$1000\,^{\circ}{\rm C}$ for 10 days. Many single
crystals were obtained at lower temperature side, as shown in Fig. 1(a). The
obtained single crystals were checked by the single crystal X-ray analysis.
The lattice parameters and the atomic coordinates are in good agreement with
the values in the previous report. [19] The single crystals were oriented
using the Laue photograph, as shown in Fig. 1(c). The clear superconducting
transition was observed in resistivity and specific heat. The highest residual
resistivity ratio (RRR) is about 40. Note that we also obtained single
crystals following the previous recipe, that is, a stoichiometric amount of
starting materials and a lower temperature gradient of $950\,^{\circ}{\rm
C}$/$850\,^{\circ}{\rm C}$. The single crystals were grown at the high-temperature
side in this case. However, the quality of these single crystals is lower,
with a low RRR ($\sim 2$–$3$), and no superconductivity was observed down to
$0.1\,{\rm K}$.
As shown in Fig. 1(d), UTe2 is an incongruently melting compound in the U-Te
phase diagram, and single crystals of UTe2 can be grown using the Te-flux
method as well. The off-stoichiometric amounts of U and Te ($22$ and $78\,{\rm
at\%}$, respectively) were put into an alumina crucible, which was sealed in a
Ta-tube under Ar atmosphere gas. The Ta-tube was then sealed again in a quartz
ampoule. The quartz ampoule was slowly heated up to $1050\,^{\circ}{\rm C}$
and was cooled down to $960\,^{\circ}{\rm C}$. The Te-flux was removed at
$960\,^{\circ}{\rm C}$ in a centrifuge. The obtained single crystals were
large, as shown in Fig. 1(b). However, the residual resistivity ratio is not
very large (${\rm RRR}\sim 3$). Although the superconductivity was confirmed
by the resistivity, it was not a bulk property as no anomaly was detected in
the specific heat. Hereafter, we show the results of single crystals obtained
by the chemical vapor transport method with off-stoichiometric amounts of
starting materials and the high temperature gradient.
Figure 2(a) shows the temperature dependence of the electronic specific heat
in UTe2. Part of the data is replotted from Refs. 8 and 20. The data of sample
#1 show the highest $T_{\rm sc}$ with a sharp and large jump at $T_{\rm sc}$,
indicating the highest quality sample. The value of $T_{\rm sc}$ defined by
the entropy balance in #1 is $1.65\,{\rm K}$, and the residual $\gamma$-value,
$\gamma_{0}$, which is extrapolated from the fitting $C_{\rm
e}/T=\gamma_{0}+\alpha T^{2}$ at low temperatures assuming a point node gap,
is $\gamma_{0}=52\,{\rm mJ\,K^{-2}mol^{-1}}$. The residual $\gamma$-value is
equal to $44\,{\%}$ of the $\gamma$-value in the normal state. The lower
quality samples show the lower $T_{\rm sc}$ and the higher residual
$\gamma$-value. For example, $T_{\rm sc}$ and $\gamma_{0}$ in sample #4 are
$1.23\,{\rm K}$ and $89\,{\rm mJ\,K^{-2}mol^{-1}}$, respectively. In sample
#5, no superconductivity was observed down to $0.4\,{\rm K}$ in specific heat,
and this was confirmed by a resistivity measurement down to $0.1\,{\rm K}$.
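For illustration, the residual $\gamma$-value follows from a linear fit of $C_{\rm e}/T$ against $T^{2}$; a minimal sketch (the fitting range is our assumption, not the authors' analysis script):

```python
import numpy as np

def residual_gamma(T, Ce_over_T, t_max=0.5):
    """Fit Ce/T = gamma0 + alpha*T**2 below t_max (K), the form
    expected for a point-node gap; returns (gamma0, alpha)."""
    mask = T < t_max
    alpha, gamma0 = np.polyfit(T[mask] ** 2, Ce_over_T[mask], 1)
    return gamma0, alpha
```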
Figure 2(b) shows $T_{\rm sc}$ as a function of residual $\gamma$-value
normalized by the $\gamma$-value in the normal state. It is clear that $T_{\rm
sc}$ decreases with increasing the residual $\gamma$-value. It is known that
the decrease of $T_{\rm sc}$ can be described by the Abrikosov-Gor’kov pair-
breaking theory. On the basis of this model, the relation between $T_{\rm sc}$
and the residual density of states had been studied theoretically [21] and
experimentally [22] in high $T_{\rm c}$ cuprates and heavy fermion systems,
where the rapid increase of the residual density of states is reported,
compared to the decrease of $T_{\rm sc}$. This can be explained by the
unitarity scattering in unconventional superconductivity. The present result
in Fig. 2(b) supports the unconventional superconductivity in UTe2.
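For reference, the Abrikosov-Gor’kov suppression of $T_{\rm sc}$ follows from the standard pair-breaking equation $\ln(T_{\rm sc0}/T_{\rm sc})=\psi(1/2+\alpha_{\rm pb}/2\pi T_{\rm sc})-\psi(1/2)$, with $\psi$ the digamma function. The sketch below (our own illustration, not the analysis of Refs. [21, 22]) solves it numerically:
```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def tsc_ag(alpha_pb, tsc0=1.65):
    """Solve the Abrikosov-Gor'kov equation
    ln(Tsc0/Tsc) = psi(1/2 + alpha_pb/(2*pi*Tsc)) - psi(1/2)
    for Tsc, with the pair-breaking parameter alpha_pb in kelvin."""
    f = lambda t: (np.log(tsc0 / t)
                   - digamma(0.5 + alpha_pb / (2.0 * np.pi * t))
                   + digamma(0.5))
    return brentq(f, 1e-6, tsc0)       # the root lies between 0 and Tsc0

for a in (0.1, 0.3, 0.6):
    print(f"alpha_pb = {a:.1f} K  ->  Tsc = {tsc_ag(a):.2f} K")
```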
An important question is whether the residual density of states exists in the
ideal single crystal without impurities. In that case, the partial density of
states would be gapped, and the so-called A1 state should be realized, where
the time reversal symmetry must be broken at zero field. There is, however, no
experimental evidence for the breaking of time reversal symmetry in UTe2.
Figure 2: (Color online) (a) Electronic specific heat in the form of $C_{\rm
e}/T$ vs $T$ of UTe2 in different samples. The phonon contribution is
subtracted using a fit at high temperatures above $T_{\rm sc}$. Part of the data is replotted from Refs. 8 and 20. (b) $T_{\rm sc}$ as a function of the
normalized residual $\gamma$-value for different quality samples.
Next we show in Fig. 3 the ($H,T$) phase diagrams of UTe2 and two
ferromagnetic superconductors, URhGe and UCoGe for $H\parallel b$-axis,
corresponding to the hard-magnetization axis. The field-reentrant or field-
reinforced superconductivity is observed both in URhGe and in UCoGe. The
enhancement of superconductivity is clearly related to the suppression of
$T_{\rm Curie}$, where the ferromagnetic instabilities are realized.
In URhGe, the suppression of $T_{\rm Curie}$ leads to a spin reorientation at $H_{\rm R}$ in field sweeps at low temperatures. The slope of the magnetization curve for $H\parallel b$-axis is larger than that for the $c$-axis. The moment gradually tilts from the $c$- to the $b$-axis, and finally it reorients to the $b$-axis at $H_{\rm R}\sim 12\,{\rm T}$. The $\gamma$-value is enhanced with increasing field, taking a maximum at $H_{\rm R}$. In NMR experiments, the spin-spin relaxation rate, $1/T_{2}$, shows diverging behavior around $H_{\rm R}$, indicating a strong enhancement of the longitudinal ferromagnetic fluctuations [23]. The 2nd order transition at $T_{\rm Curie}$ at low fields changes into a weak 1st order transition at $H_{\rm R}$ through a tricritical point (TCP). The reentrant superconductivity appears with the maximum $T_{\rm sc}=0.4\,{\rm K}$ exactly at $H_{\rm R}$.
In UCoGe, the suppression of $T_{\rm Curie}$ with field is similar to the case of URhGe. However, no spin reorientation is observed in the magnetization curve, indicating a stronger Ising character than in URhGe. The superconductivity shows an “S”-shaped curve, which is also connected to the suppression of $T_{\rm Curie}$. The enhancement of the $\gamma$-value and the development of longitudinal fluctuations are observed in field scans for $H\parallel b$-axis.
In UTe2, field-reentrant superconductivity is also observed, while the temperature and field ranges are much wider than those in the ferromagnetic superconductors URhGe and UCoGe. The reentrant superconductivity is again linked to the metamagnetic transition at $H_{\rm m}$. The clear difference from the ferromagnetic superconductors is that $H_{\rm m}$ at high fields originates from the broad maximum of the magnetic susceptibility at $T_{\chi,\rm max}$, instead of $T_{\rm Curie}$. In heavy fermion systems, it is well known that $H_{\rm m}$ scales with $T_{\chi,\rm max}$ [24]. The value of $H_{\rm m}=35\,{\rm T}$ in UTe2 is consistent with $T_{\chi,\rm max}=35\,{\rm K}$. The mass enhancement around $H_{\rm m}$ is detected in the resistivity $A$ coefficient [16] and in the $\gamma$-value obtained from the Maxwell relation in magnetization [17] and from direct specific heat measurements [25]. The crossover at $T_{\chi,\rm max}$ changes into a 1st order transition at $H_{\rm m}$ through a critical end point (CEP). It
should be noted that the reentrant superconductivity is abruptly suppressed
above $H_{\rm m}$ in UTe2. On the other hand, the reentrant superconductivity
in URhGe still survives in a small field range above $H_{\rm R}$. This is probably due to the abrupt change of $T_{\rm sc}$, as inferred from the sharp increase of the magnetoresistance at $H_{\rm m}$, implying a drastic change of the electronic state at $H_{\rm m}$.
Figure 3: (Color online) ($H,T$) phase diagrams for $H\parallel b$-axis in
URhGe, UCoGe and UTe2. The data are taken from Refs. 1, 14, 16, and 17.
In summary, we presented the single crystal growth of the novel spin-triplet
superconductor, UTe2, and the results of specific heat in different quality
samples. Higher quality samples show a higher $T_{\rm sc}$ and a lower residual density of states. The rapid increase of the residual density of
states compared to the decrease of $T_{\rm sc}$ supports the unconventional
superconductivity in UTe2. The unusual field-reentrant superconductivity is a
common feature of the ferromagnetic superconductors and UTe2. Precise high-field experiments from the microscopic point of view and pressure experiments using high quality single crystals are desired for future studies.
## Acknowledgements
We thank Y. Tokunaga, S. Ikeda, Y. Ōnuki, K. Ishida, K. Izawa, K. Miyake, V. Mineev, S. Ran, J. Ishizuka, Y. Yanase, K. Machida and C. Paulsen
for fruitful discussion. This work was supported by ERC starting grant
(NewHeavyFermion), KAKENHI (JP15H05884, JP15H05882, JP15K21732, JP16H04006,
JP15H05745, JP19H00646), and ICC-IMR.
## References
* [1] D. Aoki, K. Ishida, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 022001.
* [2] D. Aoki and J. Flouquet: J. Phys. Soc. Jpn. 81 (2012) 011003.
* [3] S. S. Saxena, P. Agarwal, K. Ahilan, F. M. Grosche, R. K. W. Haselwimmer, M. J. Steiner, E. Pugh, I. R. Walker, S. R. Julian, P. Monthoux, G. G. Lonzarich, A. Huxley, I. Sheikin, D. Braithwaite, and J. Flouquet: Nature 406 (2000) 587.
* [4] D. Aoki, A. Huxley, E. Ressouche, D. Braithwaite, J. Flouquet, J.-P. Brison, E. Lhotel, and C. Paulsen: Nature 413 (2001) 613.
* [5] N. T. Huy, A. Gasparini, D. E. de Nijs, Y. Huang, J. C. P. Klaasse, T. Gortenmulder, A. de Visser, A. Hamann, T. Görlach, and H. v. Löhneysen: Phys. Rev. Lett. 99 (2007) 067006.
* [6] F. Lévy, I. Sheikin, B. Grenier, and A. D. Huxley: Science 309 (2005) 1343.
* [7] D. Aoki, T. D. Matsuda, V. Taufour, E. Hassinger, G. Knebel, and J. Flouquet: J. Phys. Soc. Jpn. 78 (2009) 113709.
* [8] S. Ran, C. Eckberg, Q.-P. Ding, Y. Furukawa, T. Metz, S. R. Saha, I.-L. Liu, M. Zic, H. Kim, J. Paglione, and N. P. Butch: Science 365 (2019) 684.
* [9] D. Aoki, A. Nakamura, F. Honda, D. Li, Y. Homma, Y. Shimizu, Y. J. Sato, G. Knebel, J.-P. Brison, A. Pourret, D. Braithwaite, G. Lapertot, Q. Niu, M. Vališka, H. Harima, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 043702.
* [10] S. Sundar, S. Gheidi, K. Akintola, A. M. Côté, S. R. Dunsiger, S. Ran, N. P. Butch, S. R. Saha, J. Paglione, and J. E. Sonier: arXiv:1905.06901.
* [11] Y. Tokunaga, H. Sakai, S. Kambe, T. Hattori, N. Higa, G. Nakamine, S. Kitagawa, K. Ishida, A. Nakamura, Y. Shimizu, Y. Homma, D. Li, F. Honda, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 073701.
* [12] H. Noël, M. Potel, R. Troc, and L. Shlyk: J. Solid. State. Chem. 126 (1996) 22.
* [13] V. Mineev: private communication.
* [14] G. Knebel, W. Knafo, A. Pourret, Q. Niu, M. Vališka, D. Braithwaite, G. Lapertot, J.-P. Brison, S. Mishra, I. Sheikin, G. Seyfarth, D. Aoki, and J. Flouquet: J. Phys. Soc. Jpn. 88 (2019) 063707.
* [15] S. Ran, I.-L. Liu, Y. S. Eo, D. J. Campbell, P. Neves, W. T. Fuhrman, S. R. Saha, C. Eckberg, H. Kim, J. Paglione, D. Graf, J. Singleton, and N. P. Butch: arXiv:1905.04343 .
* [16] W. Knafo, M. Vališka, D. Braithwaite, G. Lapertot, G. Knebel, A. Pourret, J.-P. Brison, J. Flouquet, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 063705.
* [17] A. Miyake, Y. Shimizu, Y. J. Sato, D. Li, A. Nakmura, Y. Homma, F. Honda, J. Flouquet, M. Tokunaga, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 063706.
* [18] Y. Xu, M. Yamazaki, and P. Villars: Jpn. J. Appl. Phys. 50 (2011) 11RH02.
* [19] S. Ikeda, H. Sakai, D. Aoki, Y. Homma, E. Yamamoto, A. Nakamura, Y. Shiokawa, Y. Haga, and Y. Ōnuki: J. Phys. Soc. Jpn. Suppl. 75 (2006) 116.
* [20] T. Metz, S. Bae, S. Ran, I.-L. Liu, Y. S. Eo, W. T. Fuhrman, D. F. Agterberg, S. Anlage, N. P. Butch, and J. Paglione: arXiv:1908.01069.
* [21] A. Okada and K. Miyake: J. Phys. Soc. Jpn. 80 (2011) 084708.
* [22] Y. Kitaoka, K. Ishida, and K. Asayama: J. Phys. Soc. Jpn. 63 (1994) 2052.
* [23] Y. Tokunaga, D. Aoki, H. Mayaffre, S. Krämer, M.-H. Julien, C. Berthier, M. Horvatić, H. Sakai, S. Kambe, and S. Araki: Phys. Rev. Lett. 114 (2015) 216401.
* [24] D. Aoki, W. Knafo, and I. Sheikin: C. R. Physique 14 (2013) 53.
* [25] S. Imajo, Y. Kohama, A. Miyake, C. Dong, J. Flouquet, K. Kindo, and D. Aoki: J. Phys. Soc. Jpn. 88 (2019) 083705.
|
2024-09-04T02:54:58.365392 | 2020-01-06T12:26:18 | 2003.04100 | {
"authors": "Lixin Ge, Xi Shi, Zijun Xu, and Ke Gong",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26115",
"submitter": "Lixin Ge",
"url": "https://arxiv.org/abs/2003.04100"
} | arxiv-papers | # Tunable Casimir equilibria with phase change materials: from quantum
trapping to its release
Lixin Ge<EMAIL_ADDRESS>School of Physics and Electronic Engineering,
Xinyang Normal University, Xinyang 464000, China Xi Shi Department of
physics, Shanghai Normal University, Shanghai, 200234, China Zijun Xu School
of Physics and Electronic Engineering, Xinyang Normal University, Xinyang
464000, China Ke Gong School of Physics and Electronic Engineering, Xinyang
Normal University, Xinyang 464000, China
###### Abstract
A stable suspension of nanoscale particles due to the Casimir force is of
great interest for many applications such as sensing and non-contact nano-machines. However, the suspension properties are difficult to change once the
devices are fabricated. Vanadium dioxide (VO2) is a phase change material,
which undergoes a transition from a low-temperature insulating phase to a
high-temperature metallic phase around a temperature of 340 K. In this work,
we study Casimir forces between a nanoplate (gold or Teflon) and a layered
structure containing a VO2 film. It is found that stable Casimir suspensions
of nanoplates can be realized in a liquid environment, and the equilibrium
distances are determined, not only by the layer thicknesses but also by the
matter phases of VO2. Under proper designs, a switch from quantum trapping of the gold nanoplate (“on” state) to its release (“off” state), as a result of the metal-to-insulator transition of VO2, is revealed. On the other hand, the
quantum trapping and release of a Teflon nanoplate is found under the
insulator-to-metal transition of VO2. Our findings offer the possibility of
designing switchable devices for applications in micro- and nano-
electromechanical systems.
## I Introduction
Micro- and nano-electromechanical systems (MEMS and NEMS), which integrate
electrical and mechanical functionality on the micro- and nano-scales, have
attracted enormous attention Lyshevski (2018); Craighead (2000). Thanks to their small sizes, MEMS and NEMS exhibit low mass, high mechanical resonance
frequencies and quantum effects, leading to a broad range of applications such
as biological/chemical detections Eom et al. (2011), accelerometers Xu et al.
(2011) and micro/nanomachines Wang (2013). One major problem in MEMS and NEMS is $stiction$, that is, the collapse and permanent adhesion of the systems caused by attractive Casimir forces Buks and Roukes (2001); Chan et al.
(2001). The Casimir force is a macroscopic quantum effect which arises from
quantum fluctuations of the electromagnetic field Casimir (1948). In most
cases, two neutral, parallel plates consisting of the same material attract each other, and the magnitude of the attraction depends on several parameters such as the separation, geometric thickness, finite conductivity and temperature (see, e.g., the review Klimchitskaya et al. (2009) and Refs. Yampol’skii et al. (2008, 2010)). Therefore, repulsive Casimir
forces are highly desirable for non-contact and low-friction MEMS and NEMS. The
repulsive Casimir forces have been intensively studied in many systems Woods
et al. (2016) including liquid-separated environments Munday et al. (2009);
van Zwol and Palasantzas (2010); Phan and Viet (2011); Dou et al. (2014),
meta-materials Rosa et al. (2008); Zhao et al. (2009, 2011); Song et al.
(2018), topological insulators Grushin and Cortijo (2011); Chen and Wan
(2012); Nie et al. (2013) and specific geometries Tang et al. (2017); Levin et
al. (2010). In addition, the concept of Casimir equilibria was also
investigated, using enclosed geometries Rodriguez et al. (2008); Rahi and
Zaheer (2010) and dispersive materials Rodriguez et al. (2010a). Lately,
stable Casimir equilibria of nanoplates above a Teflon-coated gold substrate
were reported in Zhao et al. (2019). However, the Casimir
equilibria of previous studies were mainly in passive systems. Once the
devices are fabricated, the trapping properties are difficult to change. Thus, tunable trapping, or even switching from trapping to release by external stimuli (e.g., heating, electric fields or optical waves), is highly desired in MEMS and NEMS.
In order to actively modulate the Casimir effect, one straightforward way is to change
the dielectric properties of materials under external means Torricelli et al.
(2012); Sedighi et al. (2013); Torricelli et al. (2010). Vanadium dioxide
(VO2) Shao et al. (2018); Zylbersztejn and Mott (1975) is a phase change
material (PCM), which undergoes a transition from a low-temperature insulating phase to a high-temperature metallic phase at the critical temperature $T_{c}=340$ K. The
phase transition of VO2 is accompanied by a structural transformation from the
monoclinic phase to the tetragonal one. Meanwhile, the dielectric function of
VO2 changes dramatically during the phase transition, leading to many
interesting applications Wu et al. (2017); Liu et al. (2017); Kats et al.
(2012); van Zwol et al. (2012). In general, the phase transition of VO2 can be
induced by changing the temperature of systems. Alternatively, the phase
transition can be driven by optical lasers Cavalleri et al. (2001); Rini et
al. (2008) or electrical gratings Qazilbash et al. (2008); Nakano et al.
(2012) on a sub-picosecond timescale. Recently, VO2 has been employed to study
the tunable Casimir effect in vacuum Galkina et al. (2009); Pirozhenko and
Lambrecht (2008); Castillo-Garza et al. (2007). For a large separation (e.g.,
$>$1 $\mu$m), the contrast of Casimir forces due to the phase transition is
quite large (e.g., over 2 times for two semi-infinite plates of VO2, this
value could be even larger for the case of finite thickness Galkina et al.
(2009); Pirozhenko and Lambrecht (2008)). When the separation is small (e.g., $\sim$100 nm), however, the modulation of Casimir forces owing to the phase transition and finite thickness decreases greatly Pirozhenko and Lambrecht
(2008); Castillo-Garza et al. (2007). Nonetheless, the Casimir forces are
always attractive and only magnitude modulations have been reported in a
vacuum-separated configuration. The influence of the phase transition of VO2 on the sign modulation of Casimir forces (e.g., from attraction to repulsion) is as yet less explored. In a liquid environment, sign modulation and related phenomena such as tunable Casimir equilibria are expected based on the phase transition of VO2.
Here, the Casimir forces between a nanoplate and a layered structure separated
by a liquid are investigated. The layered structure consists of two kinds of
materials, i.e., Vanadium dioxide (VO2) and Teflon. It is found that stable
Casimir equilibria of gold nanoplates can be realized when a VO2 film is
buried under a semi-infinite Teflon. The properties of Casimir equilibria are
determined, not only by the layer thicknesses but also by the matter phases of
VO2. For thick-film VO2, the Casimir equilibria and quantum traps can be
achieved for both the metallic and insulating phases. On the other hand, a
switch from quantum trapping of the gold nanoplate (“on” state) to its release
(“off” state) can be triggered by the metal-to-insulator phase transition when
the thickness of VO2 is thin (e.g., 20 nm). Finally, stable suspensions of
Teflon nanoplates are also proposed with a complementary design, where the
Teflon substrate is coated by a VO2 film. Unlike the case of gold nanoplates,
the quantum trapping of Teflon nanoplates and its release correspond to the
insulating and metallic phases of VO2. Moreover, the switching phenomena can be realized with a VO2 film only a few nanometers thick.
Figure 1: (color online) (a) Schematic view of a gold nanoplate suspended in a
liquid environment. (b) The permittivity of different materials (gold, VO2,
bromobenzene and Teflon) as a function of imaginary frequency.
## II Theoretical models
The system in this work is schematically shown in Fig. 1(a), where a gold
nanoplate with thickness $L_{g}$ is suspended in a liquid of bromobenzene. The
separation between the nanoplate and the substrate is $d$. The substrate is
composed of a VO2 film buried under a semi-infinite plate of Teflon. The
thicknesses of the top-layer Teflon and VO2 are denoted as $L_{T}$ and
$L_{\mathrm{V}}$, respectively. The in-plane dimension of the gold nanoplate
is much larger than $L_{g}$ and $d$, so the nanoplate is treated as a slab in our calculations. The Casimir force is calculated as $F_{c}=-\partial
E_{c}(d)/\partial d$, where $E_{c}(d)$ is the Casimir energy between the gold
nanoplate and the substrate, having the form Nie et al. (2013); Zhao et al.
(2019)
$E_{c}(d)=A\hbar\int_{0}^{\infty}\frac{d\xi}{2\pi}\int\frac{d^{2}\mathbf{k_{\|}}}{(2\pi)^{2}}\log\det\left[1-\mathbf{R_{1}}\cdot\mathbf{R_{2}}e^{-2k_{3}d}\right],$
(1)
where $\hbar$ is the reduced Planck constant, $A$ is the in-plane area,
$\mathbf{k_{\parallel}}$ is the parallel wavevector,
$k_{3}=\sqrt{k_{\parallel}^{2}+\varepsilon_{liq}(i\xi)\xi^{2}/c^{2}}$ is the
vertical wavevector, $c$ is the speed of light in vacuum,
$\varepsilon_{liq}(i\xi)$ is the permittivity of the intervening liquid
evaluated with imaginary frequency $\omega=i\xi$, $\mathbf{R}_{1,2}$ is the
$2\times 2$ reflection matrix for layered structures, having the form
$\mathbf{R_{j}}=\left(\begin{array}[]{cc}r_{j}^{s}&0\\\
0&r_{j}^{p}\end{array}\right),$ (2)
where $r_{j}$ with $j$=1 and $j$=2 are the reflection coefficients for the
upper and lower layered structures, and the superscripts $s$ and $p$
correspond to the polarizations of transverse electric ($\mathbf{TE}$) and
transverse magnetic ($\mathbf{TM}$) modes, respectively. Note that Eq. (1) is evaluated at temperature $T=0$ K, which is an effective approximation at finite temperatures as long as the separation $d$ is smaller than 1 $\mu$m Milton (2004). For a nanoplate suspended in a liquid, the reflection coefficients can
be given analytically as follows Zhao et al. (2011)
$r^{\alpha}=\frac{r_{0,j}^{\alpha}+r_{j,0}^{\alpha}e^{-2K_{j}L_{j}}}{1+r_{0,j}^{\alpha}r_{j,0}^{\alpha}e^{-2K_{j}L_{j}}},$
(3)
where $\alpha=s$ and $p$, $L_{j}$ is the thickness of the nanoplate,
$K_{j}=\sqrt{k_{\parallel}^{2}+\varepsilon_{j}(i\xi)\xi^{2}/c^{2}}$ is the
vertical wavevector, $\varepsilon_{j}(i\xi)$ is the permittivity of the
nanoplate. The subscripts of $r_{m,n}^{\alpha}$ indicate that light is incident from medium $m$ onto medium $n$ (0 denotes the liquid).
Alternatively, the reflection coefficients for layered structures can be
calculated by a transfer matrix method. The general form is $r=M_{21}/M_{11}$, where $M_{21}$ and $M_{11}$ are elements of the $M$ matrix Zhan et al. (2013). The $M$ matrix is the product of transmission matrices across the different interfaces and propagation matrices in the different layers. Considering an arbitrary $N$-layer system, the $M$-matrix is given as:
$M=D_{0,1}P(L_{1})D_{1,2}P(L_{2})...D_{N-1,N}P(L_{N})D_{N,N+1},$ (4)
where the transmission matrix $D_{j,j+1}$ is given as:
$D_{j,j+1}=\frac{1}{2}\left[\begin{array}[]{cc}1+\eta&1-\eta\\\
1-\eta&1+\eta\end{array}\right],$ (5)
where $\eta=\varepsilon_{j}(i\xi)K_{j+1}/(\varepsilon_{j+1}(i\xi)K_{j})$ for
p-polarization and $\eta=K_{j+1}/K_{j}$ for s-polarization. The propagation
matrix in the $j$-th layer (for both $s$ and $p$ polarizations) is written as:
$P(L_{j})=\left[\begin{array}[]{cc}e^{K_{j}L_{j}}&0\\\
0&e^{-K_{j}L_{j}}\end{array}\right].$ (6)
For example, we have $N=2$ for the multilayered substrate in Fig. 1. The $M$
matrix is given by $M=D_{0,1}P(L_{1})D_{1,2}P(L_{2})D_{2,3}$, where the
subscripts 0, 1, 2 and 3 represent the media of liquid, Teflon, VO2 and Teflon
(from top to down); the thicknesses $L_{1}=L_{T}$, $L_{2}=L_{V}$.
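To make the procedure concrete, the following minimal Python sketch (an illustration only, not the code used for the figures) assembles the reflection coefficients from Eqs. (3)-(6) and evaluates the pressure $P_{c}=-\partial(E_{c}/A)/\partial d$ implied by Eq. (1). The permittivity values in `eps` are invented, frequency-independent placeholders; the actual calculation uses the dispersive models of Appendixes A-C and much finer quadrature grids.
```python
import numpy as np

c, hbar = 3.0e8, 1.055e-34   # speed of light (m/s), reduced Planck const (J s)

# Toy frequency-independent permittivities (placeholder values for this
# illustration; the real calculation evaluates the dispersive models of
# Appendixes A-C at omega = i*xi).
eps = {"liq": 2.4, "Au": 50.0, "Tef": 1.9, "VO2": 20.0}

def K(eps_j, k, xi):
    """Vertical wavevector K_j = sqrt(k^2 + eps_j*xi^2/c^2)."""
    return np.sqrt(k**2 + eps_j * xi**2 / c**2)

def r_plate(pol, Lg, k, xi):
    """Eq. (3): gold slab of thickness Lg suspended in the liquid."""
    K0, K1 = K(eps["liq"], k, xi), K(eps["Au"], k, xi)
    if pol == "s":
        r01 = (K0 - K1) / (K0 + K1)
    else:
        r01 = (eps["Au"] * K0 - eps["liq"] * K1) / (eps["Au"] * K0 + eps["liq"] * K1)
    e = np.exp(-2.0 * K1 * Lg)
    return r01 * (1.0 - e) / (1.0 - r01**2 * e)   # uses r_{j,0} = -r_{0,j}

def r_substrate(pol, LT, LV, k, xi):
    """Eqs. (4)-(6): M = D01 P(LT) D12 P(LV) D23, liquid/Teflon/VO2/Teflon."""
    stack = [eps["liq"], eps["Tef"], eps["VO2"], eps["Tef"]]
    Ls = [LT, LV]
    M = np.eye(2)
    for j in range(3):
        e0, e1 = stack[j], stack[j + 1]
        K0, K1 = K(e0, k, xi), K(e1, k, xi)
        eta = K1 / K0 if pol == "s" else e0 * K1 / (e1 * K0)
        M = M @ (0.5 * np.array([[1 + eta, 1 - eta], [1 - eta, 1 + eta]]))
        if j < 2:                                  # propagation in finite layers
            M = M @ np.diag([np.exp(K1 * Ls[j]), np.exp(-K1 * Ls[j])])
    return M[1, 0] / M[0, 0]

def pressure(d, Lg=40e-9, LT=45e-9, LV=20e-9):
    """P_c = -d(E_c/A)/dd from Eq. (1); a positive value means repulsion."""
    xis = np.logspace(13.0, 17.0, 60)              # imaginary frequencies (rad/s)
    ks = np.logspace(5.0, 9.5, 200)                # parallel wavevectors (1/m)
    wxi, wk = np.gradient(xis), np.gradient(ks)    # crude quadrature weights
    tot = 0.0
    for xi, wx in zip(xis, wxi):
        for k, w in zip(ks, wk):
            k3 = K(eps["liq"], k, xi)
            for pol in ("s", "p"):
                rr = (r_plate(pol, Lg, k, xi) * r_substrate(pol, LT, LV, k, xi)
                      * np.exp(-2.0 * k3 * d))
                tot += wx * w * k * 2.0 * k3 * rr / (1.0 - rr)
    return -hbar / (4.0 * np.pi**2) * tot

print(pressure(100e-9))   # toy number only; sign convention as in Fig. 2
```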
## III Results and discussions
Figure 1(b) shows the permittivity for different materials, where the used
models and parameters are given in the Appendixes. The dielectric function of
VO2 changes dramatically under different temperatures. For temperature
$T>T_{c}$, VO2 is in the metallic phase and it acts as a poor metal. For
$T<T_{c}$, it is in the insulating phase (also called the semiconducting phase), and
the corresponding dielectric function nearly matches that of intrinsic silicon
at low frequency Pirozhenko and Lambrecht (2008). To create repulsive Casimir
forces between two dissimilar plates separated by a liquid, the permittivity
should satisfy
$\varepsilon_{1}(i\xi)>\varepsilon_{liq}(i\xi)>\varepsilon_{2}(i\xi)$ over a wide range of frequencies Munday et al. (2009). Clearly, the dielectric
functions of gold and VO2 (either metallic or insulating phase) are larger
than that of bromobenzene over a wide range of frequency. Therefore, the
Casimir force is always attractive for the layered structure gold/bromobenzene/VO2, while the Casimir force for the structure gold/bromobenzene/Teflon is repulsive instead. Nonetheless, Casimir equilibria cannot be found for the above two layered structures.
Figure 2: (color online) Casimir pressure for different thicknesses of VO2,
where the thickness $L_{T}$=45 nm and $L_{g}$=40 nm are fixed. (a) Thick
films. The solid and dashed lines represent the pressure for the metallic and
insulating phases of VO2, respectively. (b) Thin films. The positive
(negative) sign of the pressure corresponds to the repulsive (attractive)
force.
### III.1 Tunable Casimir equilibria for gold nanoplates
Now we consider the Casimir forces when the substrate is composed of a VO2 film
and Teflon (see Fig. 1(a)). The Casimir pressure ($P_{c}=F_{c}/A$) for the
thick film of VO2 is given in Fig. 2(a). The results show that the curves are
almost identical for $L_{\mathrm{V}}$=200, 500 and 1000 nm, indicating the
weak impact of the thickness for thick-film configurations. The pressure is
repulsive at small separation (e.g., $d<60$ nm), making the nanoplate stay
away from the substrate. As the separation increases further, the Casimir
equilibria (zero pressure) occur and quantum traps can be realized for both
metallic (solid lines) and insulating phases (dashed lines). In addition, the
equilibrium distance $d_{c}$ is shifted under the phase transition of VO2. On
the other hand, the thin-film thickness and the phase transition of VO2 can
play an important role in the Casimir pressure, as shown in Fig. 2(b). For the thicknesses $L_{\mathrm{V}}$=10 and 20 nm, quantum traps can be realized for
the metallic phase, whereas no trap is found for the insulating phase. Under
such configurations, a switch from quantum trapping of the nanoplate (“on”
state) to its release (“off” state) can be triggered by the metal-insulator
transition of VO2. However, the quantum trapping occurs for both metallic and
insulating phases as the thickness $L_{\mathrm{V}}$ increases to 30 nm, and
the “off” state disappears. Compared with the vacuum-separated configuration
Castillo-Garza et al. (2007), not only the magnitude of Casimir forces can be
modified in a liquid environment, but also the sign could be switched (e.g.,
from attraction to repulsion for $d$=100 nm, $L_{V}$=30 nm), due to the phase transition of VO2.
Figure 3: (color online) Casimir pressure contributed from different
frequencies and different parallel wavevectors. (a) and (b) $d$=30 nm; (c) and
(d) $d$=85 nm (close to critical separation); (e) and (f) $d$=150 nm. (a),
(c) and (e) VO2 in the metallic phase ($T>T_{c}$); (b), (d) and (f) VO2 in the
insulating phase ($T<T_{c}$). The layer thicknesses are set as
$L_{\mathrm{V}}$=20 nm and $L_{T}$=45 nm.
To understand the switch transition from the “on” to the “off” state, the
contour plots of Casimir pressure are shown in Fig. 3 under different
separations. The sign of the pressure is determined by the competition between the VO2 film (attraction) and the low-refractive-index Teflon (repulsion). For small separation $d$=30 nm, the pressure is dominated by the repulsive component, as shown in Figs. 3(a) and 3(b). For the metallic phase, the attractive component increases and compensates the repulsive one as the separation reaches 85 nm ($d\approx d_{c}$), resulting in a Casimir equilibrium (see Fig. 3(c)), while the repulsion is still dominant for the insulating phase, as shown in Fig. 3(d). As $d$ increases further to 150 nm, the Casimir pressure turns out to be dominantly attractive in Fig. 3(e) for the metallic phase, resulting in a restoring force for stable trapping. By contrast, the pressure is still dominated by repulsion for the insulating phase, as shown in Fig. 3(f). The
pressure maps between the metallic and insulating phases are almost identical
for large energy (e.g., $>$2 eV), whereas the discrepancy manifests at low
energy. The results indicate that the attractive component appears only at low frequency and small $k$ vector for metallic VO2, where the field cannot penetrate the metal Zhao et al. (2019). Conversely, the field can easily penetrate the thin film of insulating VO2, leading to repulsive Casimir forces.
Figure 4: (color online) (a) The equilibrium distances versus the thickness of
VO2 under three different configurations (see the inset on the right). The
thickness $L_{T}$ is set as 45 nm. The solid (dashed) curves for type III
represent stable (unstable) equilibria. Contour plots of Casimir pressure versus the thickness of the coating Teflon for (b) metallic VO2 and (c) insulating VO2,
where the thickness $L_{\mathrm{V}}$=20 nm is fixed. In (b) and (c), the gray
zones represent a strong repulsive pressure larger than 1 Pa. The colors of
the curves denote the same meaning as those in (a).
Practically, the influences of gravitation and buoyancy on the force balances
should be taken into account. The condition for the force equilibrium is
written as $\vec{n}\cdot(\mathbf{F}_{c}+\mathbf{F}_{\mathrm{GB}})$=0, where
$\vec{n}$ is the unit vector normal to the surface,
$F_{\mathrm{GB}}=(\rho_{g}-\rho_{liq})gL_{g}A$ is the sum of gravity and
buoyancy, $g$ is the gravitational acceleration, and $\rho_{g}\approx 19.3$ g/cm${}^{3}$ and $\rho_{liq}\approx 1.50$ g/cm${}^{3}$ are the densities of gold and liquid bromobenzene, respectively. The magnitude of $F_{\mathrm{GB}}/A$ is about 7.0 mPa for the thickness $L_{g}$=40 nm. Three types of configurations are depicted
in the inset of Fig. 4(a) for the cross-section views. The type I
configuration corresponds to a zero-projection (or weightlessness in
aerospace), where the switching from quantum trapping (metallic state) to its release (insulating state) can be obtained for $L_{\mathrm{V}}$ in a proper range, from about 2 to 22 nm. For the type II configuration, the attractive
$F_{\mathrm{GB}}$ can compensate the long-range repulsive Casimir force at
large $d$, leading to stable suspensions for both $T>T_{c}$ and $T<T_{c}$.
However, the equilibrium distances are different, and it can be inferred that the trapping stiffness for the metallic phase is stronger than that for the insulating phase. For the type III configuration (a flipped-down system), the switching between trapping and its release can also be realized. Interestingly, there are two equilibrium distances for this configuration. It is easy to see that the smaller equilibrium distance (solid lines)
is stable, whereas the other one (dashed lines) with larger distance is
unstable to small perturbations in position. For both type II and III configurations, the deviations from type I become strong when $d_{c}$ is large.
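This magnitude is easily verified (SI units; a one-line check, not part of the original analysis):
```python
rho_g, rho_liq, g, L_g = 19300.0, 1500.0, 9.8, 40e-9   # SI units
print(f"F_GB/A = {(rho_g - rho_liq) * g * L_g * 1e3:.1f} mPa")   # ~7.0 mPa
```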
In addition to the thickness of the VO2 film, the top-layer Teflon can also play a
significant role in the Casimir effect. The plots of Casimir pressure via the
thicknesses of the coating Teflon $L_{T}$ are shown in Figs. 4(b) and 4(c),
where $L_{\mathrm{V}}$=20 nm is fixed. The results show that the switching between quantum trapping and its release occurs only when $L_{T}$ is larger than about 42 nm (no gravity). The larger $L_{T}$ is, the larger the position of the Casimir equilibrium. When $L_{T}$ is smaller than 42 nm, the
equilibrium distance is also small, and quantum trappings can be realized for
both metallic and insulating phases. For comparison, the gravitation and
buoyancy are taken into account. Again, strong discrepancies among the three configurations occur when the equilibrium positions are larger than about 150 nm, resulting from the comparable magnitudes of $F_{GB}$ and the Casimir force. The
impact of $F_{GB}$ can be further reduced by decreasing the thickness $L_{g}$
near the skin depth (about 22 nm) Lisanti et al. (2005).
Figure 5: (color online) Casimir pressure for a complementary design. A thin
film of VO2 with thickness $L_{V}$ is deposited on a Teflon substrate. (a) The metallic VO2. (b) The insulating VO2. The thickness of the suspended nanoplate is set as 100 nm.
Figure 6: (color online) Casimir pressure calculated for finite temperatures
and the 0 K approximation from Eq. (1). (a) The trapping and release of a gold nanoplate. The parameters for the substrate are $L_{T}$=45 nm and $L_{V}$=20 nm. (b) The trapping and release of a Teflon nanoplate. The thickness $L_{V}$
is set as 2 nm.
### III.2 Tunable Casimir equilibria for Teflon nanoplates
The active control of the low-refractive-index nanoplates can also be
significant in many applications. Inspired by the work of Zhao et al. (2019), a
complementary design is schematically shown in the inset of Fig. 5(a). A
Teflon nanoplate is suspended in a liquid of bromobenzene, and the substrate
is a semi-infinite plate of Teflon coated by a VO2 film (high refractive
index). Under such a design, the Casimir force is repulsive at very short separation, due to the dominant interaction of the Teflon/bromobenzene/VO2 structure.
As the separation increases, the attractive interaction from
Teflon/bromobenzene/Teflon can be dominant instead, resulting in a stable
Casimir trapping. To verify the design, the Casimir pressure is given
quantitatively in Figs. 5(a) and 5(b) as a function of separation.
Interestingly, the Casimir pressure shows a long-range repulsive behavior for
the metallic VO2, which corresponds to the “off” state. The repulsive pressure becomes stronger as the thickness $L_{\mathrm{V}}$ increases from 2 to 6 nm. For $L_{\mathrm{V}}$=2 nm, a Casimir equilibrium and strong restoring forces can be found when VO2 is in the insulating phase. Therefore, the quantum trapping and release of a Teflon nanoplate can be achieved under the insulator-to-metal transition of VO2. When the thickness is 4 nm, the restoring
force decreases and the trapping stiffness drops considerably. The calculation
results indicate that the Casimir pressure is quite sensitive to the thickness
of VO2. Due to the low density of Teflon (2.1 g/cm${}^{3}$), the pressure $F_{GB}/A$ for the Teflon nanoplate is about 0.6 mPa, which is significantly reduced compared with that of the gold nanoplates.
### III.3 Finite temperature effects
To achieve the phase transition of VO2, the temperatures of the devices need
to be changed. We assume that the dielectric functions of the gold and Teflon
are temperature-independent. For organic liquids, the change of the refractive index with temperature Li et al. (1994) is of the order of $10^{-4}/$K, and the permittivity of bromobenzene is also treated as temperature-independent. Nonetheless, it is interesting to check the finite temperature effect on the Casimir forces. The integral over the frequency $\xi$ in Eq. (1) is now replaced by a discrete summation Rahi et al. (2009):
$\frac{\hbar}{2\pi}\int_{0}^{\infty}d\xi\leftrightarrow k_{B}T\overset{\infty}{\underset{n=0}{\sum}}^{\prime},$ (7)
where $\xi$ is replaced by the discrete Matsubara frequencies $\xi_{n}=2\pi\frac{k_{B}T}{\hbar}n$ $(n=0,1,2,3\ldots)$, $k_{B}$ is the Boltzmann constant, and the prime denotes a prefactor 1/2 for the $n$=0 term.
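In code, the replacement (7) turns the frequency integral into a weighted sum over the Matsubara frequencies. A minimal sketch, assuming a user-supplied function `energy_integrand(xi, d)` that returns the $k_{\parallel}$-integrated logarithm in Eq. (1):
```python
import numpy as np

kB, hbar = 1.381e-23, 1.055e-34   # J/K, J s

def casimir_energy_T(d, T, energy_integrand, n_max=2000):
    """Finite-temperature Casimir energy per unit area via Eq. (7).
    `energy_integrand(xi, d)` is assumed to be supplied by the user and to
    return the k-integrated log-det bracket of Eq. (1); for dispersive
    models its xi -> 0 limit is used for the n = 0 term."""
    total = 0.5 * energy_integrand(0.0, d)        # primed sum: weight 1/2 at n = 0
    for n in range(1, n_max + 1):
        xi_n = 2.0 * np.pi * kB * T * n / hbar    # Matsubara frequency xi_n
        total += energy_integrand(xi_n, d)
    return kB * T * total
```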
The Casimir pressures under different temperatures are shown in Figs. 6(a) and
6(b), where two different designs are demonstrated. It is found that the
curves for temperature 320 K (insulating phase) overlap with those calculated
from Eq. (1). For the temperature of 360 K, there is only a small deviation
between 0 K and 360 K. Overall, the calculation results from 320 and 360 K
confirm the accuracy of the 0 K approximation. Recently, the switching between
repulsive and attractive Casimir forces based on PCM has also been reported
Boström et al. (2018), where the equilibrium distances for switching occur
only at several nanometers. The equilibrium distances in our work are more accessible to experiments, and they can be tuned by designing the geometric thicknesses of the VO2 and Teflon layers.
Figure 7: (color online) The total energy of a suspended gold nanoplate (a)
and a Teflon nanoplate (b) under different types of gravity projection. The
solid and dashed lines represent the cases for the metallic VO2 ($T$=360 K)
and insulating VO2 ($T$=320 K), respectively. The in-plane area $A$ is set as
10 $\mu m\times$10 $\mu m$. Other parameters are kept the same as those in
Fig. 6.
### III.4 The effect of Brownian motion
In a real configuration, the position of a nanoplate fluctuates around the equilibrium distance due to Brownian motion. To evaluate the effect of Brownian motion, the total energy of the suspended nanoplate should be known, which is written as $U(d)=E_{c}+\Lambda\times(E_{g}+E_{b})$, where $E_{c}$ is the Casimir energy given by Eq. (1), $E_{g}=\rho_{p}gL_{p}Ad$ and $E_{b}=-\rho_{liq}gL_{p}Ad$ are respectively the energies caused by gravity and buoyancy Phan et al. (2012), and $\rho_{p}$ and $L_{p}$ represent the density and thickness of the suspended nanoplate. The coefficient $\Lambda$ is a parameter depending on the gravity projection: for the type I configuration (see the inset of Fig. 4), $\Lambda$=0, while $\Lambda$=1 and $-1$ for the type II and type III configurations, respectively. The total energies of a gold and a Teflon
nanoplate are shown in Figs. 7(a) and 7(b), respectively. The minimum of
$U(d)/k_{B}T$ corresponds to the equilibrium distance $d_{c}$. Clearly, stable
quantum trapping can be realized for a gold (Teflon) nanoplate when VO2 is in
the metallic (insulating) phase. Due to the balance of repulsive Casimir force
and gravity, stable trapping can also be realized for type II configuration.
Theoretically, the transition rate from the equilibrium distance to another
position due to the Brownian motion is proportional to $\exp(-\triangle
U/k_{B}T)$ Phan et al. (2012); Rodriguez et al. (2010b), where $\triangle U$
represents the energy barrier between these two positions. The calculated
results indicate that the transition rates from Casimir equilibria to stiction
are negligible since the energy barriers $\triangle U/k_{B}T$ are quite large
(e.g., over $10^{4}$) for the gold and Teflon nanoplates. For a flipped-down
system (type III), quantum trapping can be realized for a gold (Teflon) nanoplate when VO2 is in the metallic (insulating) phase. However, there is a nonzero probability that the nanoplates escape from the equilibrium distances to the free-liquid regime ($d\rightarrow\infty$). Fortunately, the energy barrier $\triangle U/k_{B}T$ for such a transition is of the order of $10^{2}$, as shown in Figs. 7(a) and 7(b), and the transition rate of the
escape is also negligible.
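A minimal sketch of this estimate for the type II configuration, using a toy exponential stand-in for the Casimir energy (the real $E_{c}(d)$ follows from Eq. (1); the numbers printed here are therefore illustrative, not those of Fig. 7):
```python
import numpy as np

kB, Temp, g = 1.381e-23, 320.0, 9.8
A, Lp = (10e-6)**2, 40e-9               # in-plane area (10x10 um^2), thickness
rho_p, rho_liq = 19300.0, 1500.0        # gold plate in bromobenzene (kg/m^3)
w = (rho_p - rho_liq) * g * Lp          # gravity-buoyancy pressure, ~7 mPa

# Toy repulsive Casimir energy per area, E_c/A = P0*lam*exp(-d/lam); this is
# only a placeholder shape -- the real E_c(d) follows from Eq. (1).
P0, lam = 1.0, 50e-9

def U(d, Lam=1):                        # Lam = 0, +1, -1 for types I, II, III
    return A * (P0 * lam * np.exp(-d / lam) + Lam * w * d)

d = np.linspace(20e-9, 2e-6, 20001)
u = U(d, Lam=1)                         # type II: gravity balances repulsion
i = np.argmin(u)
print(f"d_c ~ {d[i]*1e9:.0f} nm, barrier ~ {(u[0]-u[i])/(kB*Temp):.0f} kB*T")
```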
## IV Conclusions
In summary, the Casimir forces between a nanoplate and a layered structure
containing VO2 films are investigated. In a liquid-separated environment, not
only the magnitude of Casimir forces can be modified, but also the sign could
be switched (e.g., from attraction to repulsion), due to the phase transition
of VO2. Moreover, a stable Casimir suspension of nanoplates and its tunability
are revealed. For a gold nanoplate, a switch from the quantum trapping to its
release is obtained under the metal-to-insulator transition of VO2. In
addition, the quantum trapping and release of a Teflon nanoplate are
demonstrated with a complementary design. The switching performances due to
the layer thicknesses, gravitation and temperatures are discussed as well.
Theoretically, the bromobenzene can be substituted by other high-refractive-
index liquids (e.g., glycerol and styrene van Zwol and Palasantzas (2010)) as
long as the boiling points are larger than $T_{c}$. The Teflon can also be
replaced by other low-refractive-index materials (e.g., mesoporous silica Dou
et al. (2014)). This work offers the possibility of designing switchable
devices in MEMS/NEMS, resulting from the quantum fluctuations of the
electromagnetic field.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China
(Grant No. 11804288, No. 11704254, No. 61571386 and No. 61974127), and the
Innovation Scientists and Technicians Troop Construction Projects of Henan
Province. The research of L.X. Ge is further supported by Nanhu Scholars
Program for Young Scholars of XYNU.
## Appendix A The permittivity of gold
Here, a generalized Drude-Lorentz model is applied for the permittivity of
gold Sehmi et al. (2017):
$\varepsilon(i\xi)=\varepsilon_{D}(i\xi)+\varepsilon_{L}(i\xi),$ (8)
where the Drude term is given by:
$\varepsilon_{D}(i\xi)=\varepsilon_{\infty}+\frac{\gamma\sigma}{\xi(\xi+\gamma)},$
(9)
where $\varepsilon_{\infty}=$0.83409, $\sigma=$3134.5 eV, and $\gamma=$0.02334
eV. The Lorentz term is described by four pairs of poles:
$\varepsilon_{L}(i\xi)=\overset{4}{\underset{j=1}{\sum}}\left(\frac{i\sigma_{j}}{i\xi-\Omega_{j}}+\frac{i\sigma_{j}^{\ast}}{i\xi+\Omega_{j}^{\ast}}\right)$
(10)
where $\sigma_{j}$ and $\Omega_{j}$ are the generalized conductivity and
resonant frequency of the $j$-th Lorentz pole. The star superscripts represent
the operation of complex conjugation. The generalized Drude-Lorentz model
respects causality, and it can represent the exact physical resonances in the
material. The parameters for the model are listed in Table 1.
Table 1: The fitted parameters for Lorentz poles of gold Sehmi et al. (2017). $j$-th | $\sigma_{j}(\mathrm{eV})$ | $\Omega_{j}(\mathrm{eV})$
---|---|---
1 | -0.01743+0.3059*I | 2.6905-0.16645*I
2 | 1.0349+1.2919*I | 2.8772-0.44473*I
3 | 1.2274+2.5605*I | 3.7911-0.81981*I
4 | 9.85+37.614*I | 4.8532-13.891*I
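A direct transcription of Eqs. (8)-(10) with the Table 1 parameters (energies in eV) might look as follows; since the poles come in conjugate pairs, each pair contributes $2\,\mathrm{Re}[\mathrm{i}\sigma_{j}/(\mathrm{i}\xi-\Omega_{j})]$ and the result is real:
```python
import numpy as np

eps_inf, sigma_D, gamma_D = 0.83409, 3134.5, 0.02334    # Drude part (eV)
poles = [(-0.01743 + 0.3059j, 2.6905 - 0.16645j),       # (sigma_j, Omega_j)
         ( 1.0349  + 1.2919j, 2.8772 - 0.44473j),
         ( 1.2274  + 2.5605j, 3.7911 - 0.81981j),
         ( 9.85    + 37.614j, 4.8532 - 13.891j)]

def eps_gold(xi):
    """Permittivity of gold at imaginary frequency xi (in eV), Eqs. (8)-(10)."""
    val = eps_inf + gamma_D * sigma_D / (xi * (xi + gamma_D))
    for s, O in poles:
        # each conjugate pair adds 2*Re[i*s/(i*xi - O)], so the sum is real
        val += 2.0 * (1j * s / (1j * xi - O)).real
    return val

print(eps_gold(1.0))   # large and positive, as in Fig. 1(b)
```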
## Appendix B The permittivity of VO2
For temperature $T>T_{c}$, VO2 is in the metallic phase, and the permittivity
is given by Pirozhenko and Lambrecht (2008); Castillo-Garza et al. (2007)
$\varepsilon(i\xi)=1+\frac{\omega_{p}^{2}}{\xi(\xi+\gamma)}+\frac{\varepsilon_{\infty}-1}{1+\xi^{2}/\omega_{\infty}^{2}}+\underset{j=1}{\overset{4}{\sum}}\frac{s_{j}}{1+(\xi/\omega_{j})^{2}+\Gamma_{j}\xi/\omega_{j}},$ (11)
where $\varepsilon_{\infty}=3.95$, $\omega_{p}=3.33$ eV, and $\gamma=0.66$ eV.
The parameters $s_{j}$ and $\Gamma_{j}$ represent respectively the strength
and linewidth of the $j$-th oscillator (resonant frequency $\omega_{j}$).
For temperature $T<T_{c}$, VO2 is in the insulating phase, and the
permittivity is described as
$\varepsilon(i\xi)=1+\frac{\varepsilon_{\infty}-1}{1+\xi^{2}/\omega_{\infty}^{2}}+\underset{j=1}{\overset{7}{\sum}}\frac{s_{j}}{1+(\xi/\omega_{j})^{2}+\Gamma_{j}\xi/\omega_{j}},$
(12)
where $\varepsilon_{\infty}=4.26$ and $\omega_{\infty}=15$ eV. The above equations for metallic and insulating VO2 are valid over a wide range of frequencies (up to about 10 eV) Castillo-Garza et al. (2007); they are modified versions of those in Ref. Verleur et al. (1968). The parameters are listed in Table 2.
Table 2: The parameters for the metallic and insulating VO2 Castillo-Garza et al. (2007). $j$-th ($T>T_{c}$) | $S_{j}$ | $\omega_{j}(\mathrm{eV})$ | $\Gamma_{j}$
---|---|---|---
1 | 1.816 | 0.86 | 0.95
2 | 0.972 | 2.8 | 0.23
3 | 1.04 | 3.48 | 0.28
4 | 1.05 | 4.6 | 0.34
$j$-th ($T<T_{c}$) | $S_{j}$ | $\omega_{j}(\mathrm{eV})$ | $\Gamma_{j}$
1 | 0.79 | 1.02 | 0.55
2 | 0.474 | 1.30 | 0.55
3 | 0.483 | 1.50 | 0.50
4 | 0.536 | 2.75 | 0.22
5 | 1.316 | 3.49 | 0.47
6 | 1.060 | 3.76 | 0.38
7 | 0.99 | 5.1 | 0.385
Table 3: The parameters for Teflon(left) and bromobenzene (right)van Zwol and Palasantzas (2010). $j$-th | $C_{j}$ | $\omega_{j}(\mathrm{eV})$ | $C_{j}$ | $\omega_{j}(\mathrm{eV})$
---|---|---|---|---
1 | 0.0093 | 0.0003 | 0.0544 | 0.00502
2 | 0.0183 | 0.0076 | 0.0184 | 0.0309
3 | 0.139 | 0.0557 | 0.0475 | 0.111
4 | 0.112 | 0.126 | 0.532 | 6.75
5 | 0.195 | 6.71 | 0.645 | 13.3
6 | 0.438 | 18.6 | 0.240 | 24.0
7 | 0.106 | 42.1 | 0.00927 | 99.9
8 | 0.0386 | 77.6 | |
## Appendix C The permittivity of Teflon and bromobenzene
The permittivities of Teflon and bromobenzene are given by the oscillator
model van Zwol and Palasantzas (2010):
$\varepsilon(i\xi)=1+\underset{j}{\overset{}{\sum}}\frac{C_{j}}{1+(\xi/\omega_{j})^{2}},$
(13)
where $C_{j}$ corresponds to the oscillator strength of the $j$-th resonance, and $\omega_{j}$ is the corresponding resonant frequency. The values of $C_{j}$ and $\omega_{j}$ listed in Table 3 are fitted from experimental data over a wide range of frequencies.
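The oscillator model is equally simple to transcribe; the sketch below evaluates Eq. (13) with the Table 3 parameters and checks the ordering $\varepsilon_{\mathrm{Teflon}}(i\xi)<\varepsilon_{\mathrm{bromobenzene}}(i\xi)$ that underlies the repulsive configurations in the main text (energies in eV):
```python
import numpy as np

# (C_j, omega_j) pairs from Table 3, with omega_j in eV
teflon = [(0.0093, 0.0003), (0.0183, 0.0076), (0.139, 0.0557), (0.112, 0.126),
          (0.195, 6.71), (0.438, 18.6), (0.106, 42.1), (0.0386, 77.6)]
bromobenzene = [(0.0544, 0.00502), (0.0184, 0.0309), (0.0475, 0.111),
                (0.532, 6.75), (0.645, 13.3), (0.240, 24.0), (0.00927, 99.9)]

def eps_osc(xi, params):
    """Oscillator model of Eq. (13) at imaginary frequency xi (eV)."""
    return 1.0 + sum(C / (1.0 + (xi / w)**2) for C, w in params)

for xi in (0.1, 1.0, 10.0):
    eT, eB = eps_osc(xi, teflon), eps_osc(xi, bromobenzene)
    print(f"xi = {xi:5.1f} eV: eps_Teflon = {eT:.3f} < eps_bromobenzene = {eB:.3f}")
```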
## References
* Lyshevski (2018) S. E. Lyshevski, _MEMS and NEMS: systems, devices, and structures_ (CRC press, 2018).
* Craighead (2000) H. G. Craighead, Science 290, 1532 (2000).
* Eom et al. (2011) K. Eom, H. S. Park, D. S. Yoon, and T. Kwon, Phys. Rep. 503, 115 (2011).
* Xu et al. (2011) R. Xu, S. Zhou, and W. J. Li, IEEE Sens. J. 12, 1166 (2011).
* Wang (2013) J. Wang, _Nanomachines: fundamentals and applications_ (John Wiley & Sons, 2013).
* Buks and Roukes (2001) E. Buks and M. L. Roukes, Phys. Rev. B 63, 033402 (2001).
* Chan et al. (2001) H. Chan, V. Aksyuk, R. Kleiman, D. Bishop, and F. Capasso, Science 291, 1941 (2001).
* Casimir (1948) H. B. Casimir, Proc. Kon. Ned. Akad. Wet. 51, 793 (1948).
* Klimchitskaya et al. (2009) G. L. Klimchitskaya, U. Mohideen, and V. M. Mostepanenko, Rev. Mod. Phys. 81, 1827 (2009).
* Yampol’skii et al. (2008) V. A. Yampol’skii, S. Savel’ev, Z. A. Mayselis, S. S. Apostolov, and F. Nori, Phys. Rev. Lett. 101, 096803 (2008).
* Yampol’skii et al. (2010) V. A. Yampol’skii, S. Savel’ev, Z. A. Maizelis, S. S. Apostolov, and F. Nori, Phys. Rev. A 82, 032511 (2010).
* Woods et al. (2016) L. M. Woods, D. A. R. Dalvit, A. Tkatchenko, P. Rodriguez-Lopez, A. W. Rodriguez, and R. Podgornik, Rev. Mod. Phys. 88, 045003 (2016).
* Munday et al. (2009) J. N. Munday, F. Capasso, and V. A. Parsegian, Nature 457, 170 (2009).
* van Zwol and Palasantzas (2010) P. J. van Zwol and G. Palasantzas, Phys. Rev. A 81, 062502 (2010).
* Phan and Viet (2011) A. D. Phan and N. A. Viet, Phys. Rev. A 84, 062503 (2011).
* Dou et al. (2014) M. Dou, F. Lou, M. Boström, I. Brevik, and C. Persson, Phys. Rev. B 89, 201407(R) (2014).
* Rosa et al. (2008) F. S. S. Rosa, D. A. R. Dalvit, and P. W. Milonni, Phys. Rev. Lett. 100, 183602 (2008).
* Zhao et al. (2009) R. Zhao, J. Zhou, T. Koschny, E. N. Economou, and C. M. Soukoulis, Phys. Rev. Lett. 103, 103602 (2009).
* Zhao et al. (2011) R. Zhao, T. Koschny, E. N. Economou, and C. M. Soukoulis, Phys. Rev. B 83, 075108 (2011).
* Song et al. (2018) G. Song, R. Zeng, M. Al-Amri, J. Xu, C. Zhu, P. He, and Y. Yang, Opt. Express 26, 34461 (2018).
* Grushin and Cortijo (2011) A. G. Grushin and A. Cortijo, Phys. Rev. Lett. 106, 020403 (2011).
* Chen and Wan (2012) L. Chen and S. Wan, Phys. Rev. B 85, 115102 (2012).
* Nie et al. (2013) W. Nie, R. Zeng, Y. Lan, and S. Zhu, Phys. Rev. B 88, 085421 (2013).
* Tang et al. (2017) L. Tang, M. Wang, C. Y. Ng, M. Nikolic, C. T. Chan, A. W. Rodriguez, and H. B. Chan, Nat. Photonics 11, 97 (2017).
* Levin et al. (2010) M. Levin, A. P. McCauley, A. W. Rodriguez, M. T. Homer Reid, and S. G. Johnson, Phys. Rev. Lett. 105, 090403 (2010).
* Rodriguez et al. (2008) A. W. Rodriguez, J. N. Munday, J. D. Joannopoulos, F. Capasso, D. A. R. Dalvit, and S. G. Johnson, Phys. Rev. Lett. 101, 190404 (2008).
* Rahi and Zaheer (2010) S. J. Rahi and S. Zaheer, Phys. Rev. Lett. 104, 070405 (2010).
* Rodriguez et al. (2010a) A. W. Rodriguez, A. P. McCauley, D. Woolf, F. Capasso, J. D. Joannopoulos, and S. G. Johnson, Phys. Rev. Lett. 104, 160402 (2010a).
* Zhao et al. (2019) R. Zhao, L. Li, S. Yang, W. Bao, Y. Xia, P. Ashby, Y. Wang, and X. Zhang, Science 364, 984 (2019).
* Torricelli et al. (2012) G. Torricelli, P. J. Van Zwol, O. Shpak, G. Palasantzas, V. B. Svetovoy, C. Binns, B. J. Kooi, P. Jost, and M. Wuttig, Adv. Funct. Mater. 22, 3729 (2012).
* Sedighi et al. (2013) M. Sedighi, W. H. Broer, G. Palasantzas, and B. J. Kooi, Phys. Rev. B 88, 165423 (2013).
* Torricelli et al. (2010) G. Torricelli, P. J. van Zwol, O. Shpak, C. Binns, G. Palasantzas, B. J. Kooi, V. B. Svetovoy, and M. Wuttig, Phys. Rev. A 82, 010101(R) (2010).
* Shao et al. (2018) Z. Shao, X. Cao, H. Luo, and P. Jin, NPG Asia Mater. 10, 581 (2018).
* Zylbersztejn and Mott (1975) A. M. N. F. Zylbersztejn and N. F. Mott, Phys. Rev. B 11, 4383 (1975).
* Wu et al. (2017) S.-H. Wu, M. Chen, M. T. Barako, V. Jankovic, P. W. C. Hon, L. A. Sweatlock, and M. L. Povinelli, Optica 4, 1390 (2017).
* Liu et al. (2017) H. Liu, J. Lu, and X. R. Wang, Nanotechnology 29, 024002 (2017).
* Kats et al. (2012) M. A. Kats, D. Sharma, J. Lin, P. Genevet, R. Blanchard, Z. Yang, M. M. Qazilbash, D. Basov, S. Ramanathan, and F. Capasso, Appl. Phys. Lett. 101, 221101 (2012).
* van Zwol et al. (2012) P. J. van Zwol, L. Ranno, and J. Chevrier, Phys. Rev. Lett. 108, 234301 (2012).
* Cavalleri et al. (2001) A. Cavalleri, C. Tóth, C. W. Siders, J. A. Squier, F. Ráksi, P. Forget, and J. C. Kieffer, Phys. Rev. Lett. 87, 237401 (2001).
* Rini et al. (2008) M. Rini, Z. Hao, R. W. Schoenlein, C. Giannetti, F. Parmigiani, S. Fourmaux, J. C. Kieffer, A. Fujimori, M. Onoda, S. Wall, et al., Appl. Phys. Lett. 92, 181904 (2008).
* Qazilbash et al. (2008) M. M. Qazilbash, Z. Q. Li, V. Podzorov, M. Brehm, F. Keilmann, B. G. Chae, H.-T. Kim, and D. N. Basov, Appl. Phys. Lett. 92, 241906 (2008).
* Nakano et al. (2012) M. Nakano, K. Shibuya, D. Okuyama, T. Hatano, S. Ono, M. Kawasaki, Y. Iwasa, and Y. Tokura, Nature 487, 459 (2012).
* Galkina et al. (2009) E. G. Galkina, B. A. Ivanov, S. Savel’ev, V. A. Yampol’skii, and F. Nori, Phys. Rev. B 80, 125119 (2009).
* Pirozhenko and Lambrecht (2008) I. Pirozhenko and A. Lambrecht, Phys. Rev. A 77, 013811 (2008).
* Castillo-Garza et al. (2007) R. Castillo-Garza, C.-C. Chang, D. Jimenez, G. L. Klimchitskaya, V. M. Mostepanenko, and U. Mohideen, Phys. Rev. A 75, 062114 (2007).
* Milton (2004) K. A. Milton, J. Phys. A 37, R209 (2004).
* Zhan et al. (2013) T. Zhan, X. Shi, Y. Dai, X. Liu, and J. Zi, J. Phys.: Condens. Matter 25, 215301 (2013).
* Lisanti et al. (2005) M. Lisanti, D. Iannuzzi, and F. Capasso, Proc. Natl. Acad. Sci. U.S.A. 102, 11989 (2005).
* Li et al. (1994) W. Li, P. N. Segre, R. Gammon, J. V. Sengers, and M. Lamvik, J. Chem. Phys. 101, 5058 (1994).
* Rahi et al. (2009) S. J. Rahi, T. Emig, N. Graham, R. L. Jaffe, and M. Kardar, Phys. Rev. D 80, 085021 (2009).
* Boström et al. (2018) M. Boström, M. Dou, O. I. Malyi, P. Parashar, D. F. Parsons, I. Brevik, and C. Persson, Phys. Rev. B 97, 125421 (2018).
* Phan et al. (2012) A. D. Phan, L. M. Woods, D. Drosdoff, I. V. Bondarev, and N. A. Viet, Appl. Phys. Lett. 101, 113118 (2012).
* Rodriguez et al. (2010b) A. W. Rodriguez, D. Woolf, A. P. McCauley, F. Capasso, J. D. Joannopoulos, and S. G. Johnson, Phys. Rev. Lett. 105, 060401 (2010b).
* Sehmi et al. (2017) H. S. Sehmi, W. Langbein, and E. A. Muljarov, Phys. Rev. B 95, 115444 (2017).
* Verleur et al. (1968) H. W. Verleur, A. S. Barker Jr, and C. N. Berglund, Phys. Rev. 172, 788 (1968).
|
2024-09-04T02:54:58.377245 | 2020-03-09T13:08:27 | 2003.04110 | {
"authors": "Sajid Ali, Georg Bergner, Henning Gerber, Istvan Montvay, Gernot\n M\\\"unster, Stefano Piemonte and Philipp Scior",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26116",
"submitter": "Sajid Ali",
"url": "https://arxiv.org/abs/2003.04110"
} | arxiv-papers | # MS-TP-20-17
Continuum extrapolation of Ward identities in $\mathbf{\mathcal{N}=1}$
supersymmetric SU(3) Yang-Mills theory
Sajid Ali<EMAIL_ADDRESS>University of Münster, Institute
for Theoretical Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany
Government College University Lahore, Department of Physics, Lahore 54000,
Pakistan Georg Bergner<EMAIL_ADDRESS>University of Jena,
Institute for Theoretical Physics, Max-Wien-Platz 1, D-07743 Jena, Germany
University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str.
9, D-48149 Münster, Germany Henning Gerber<EMAIL_ADDRESS>University of Münster, Institute for Theoretical Physics, Wilhelm-Klemm-Str.
9, D-48149 Münster, Germany Istvan Montvay<EMAIL_ADDRESS>Deutsches
Elektronen-Synchrotron DESY, Notkestr. 85, D-22607 Hamburg, Germany Gernot
Münster11footnotemark: 1 University of Münster, Institute for Theoretical
Physics, Wilhelm-Klemm-Str. 9, D-48149 Münster, Germany Stefano Piemonte
<EMAIL_ADDRESS>University of Regensburg, Institute for Theoretical
Physics, Universitätsstr. 31, D-93040 Regensburg, Germany Philipp Scior
<EMAIL_ADDRESS>Universität Bielefeld, Fakultät für Physik,
Universitätsstr. 25, D-33615 Bielefeld, Germany
(14th May 2020)
###### Abstract
Abstract: In $\mathcal{N}=1$ supersymmetric Yang-Mills theory, regularised on
a space-time lattice, in addition to the breaking by the gluino mass term,
supersymmetry is broken explicitly by the lattice regulator. In addition to
the parameter tuning in the theory, the supersymmetric Ward identities can be
used as a tool to investigate lattice artefacts as well as to check whether
supersymmetry can be recovered in the chiral and continuum limits. In this
paper we present the numerical results of an analysis of the supersymmetric
Ward identities for our available gauge ensembles at different values of the
inverse gauge coupling $\beta$ and of the hopping parameter $\kappa$. The
results clearly indicate that the lattice artefacts vanish in the continuum
limit, confirming the restoration of supersymmetry.
## 1 Introduction
Supersymmetry (SUSY) is an elegant idea which relates fermions and bosons,
whose spin differs by 1/2, through supercharges [1]. SUSY provides dark matter
candidates, arising from the lightest supersymmetric particles [2]. In
addition to that, supersymmetric extensions of the Standard Model would
resolve the hierarchy problem [3]. $\mathcal{N}=1$ supersymmetric Yang-Mills
(SYM) theory, which is being considered in this article, provides an extension
of the pure gluonic part of the Standard Model [4]. It describes the strong
interactions between gluons and gluinos, the superpartners of the gluons.
Gluinos are Majorana particles that transform under the adjoint representation
of the gauge group. The on-shell Lagrangian of $\mathcal{N}=1$ SYM theory,
which consists of the gluon fields $A^{a}_{\mu}(x)$ and the gluino fields
$\lambda^{a}(x)$, where $a=1,\ldots,N^{2}_{c}-1$, can be written in Minkowski
space as
$\mathcal{L}_{\text{SYM}}=-\frac{1}{4}F^{a}_{\mu\nu}F^{a,\mu\nu}+\frac{\mathrm{i}}{2}\bar{\lambda}^{a}\gamma^{\mu}\left(\mathcal{D}_{\mu}\lambda\right)^{a}-\frac{m_{\tilde{g}}}{2}\bar{\lambda}^{a}\lambda^{a},$
(1)
where the first term, containing the field strength tensor $F^{a}_{\mu\nu}$,
is the gauge part, and $\mathcal{D}_{\mu}$ in the second term is the covariant
derivative in the adjoint representation of the gauge group SU($N_{c}$),
$N_{c}$ being the number of colors. The last part of the above Lagrangian is a
gluino mass term which breaks SUSY softly for $m_{\tilde{g}}\neq 0$, which
means that it does not affect the renormalisation properties of the theory and
that the spectrum of the theory depends on the gluino mass in a continuous
way. The physical spectrum of this theory is expected to consist of bound
states of gluons and gluinos, arranged in mass degenerate supermultiplets if
SUSY is not broken [5, 6].
In order to perform Monte-Carlo simulations of the theory, we discretise the
Euclidean action and put it onto a four-dimensional hypercubic lattice. We use
the Curci-Veneziano version [7] of the lattice action $S=S_{g}+S_{f}$, where
the gauge part $S_{g}$ is defined by the usual plaquette action
$S_{g}=-\frac{\beta}{N_{c}}\sum_{p}\mathrm{Re}\left[\mathrm{tr}\left(U_{p}\right)\right],$
(2)
with the inverse gauge coupling given by $\beta=2N_{c}/g^{2}$, and the
fermionic part
$S_{f}=\frac{1}{2}\sum_{x}\left\\{\bar{\lambda}^{a}_{x}\lambda_{x}^{a}-\kappa\sum_{\mu=1}^{4}\left[\bar{\lambda}^{a}_{x+\hat{\mu}}V_{ab,x\mu}(1+\gamma_{\mu})\lambda^{b}_{x}+\bar{\lambda}^{a}_{x}V^{T}_{ab,x\mu}(1-\gamma_{\mu})\lambda^{b}_{x+\hat{\mu}}\right]\right\\}$
(3)
implements the gluinos as Wilson fermions. Here the adjoint link variables are
defined by
$V_{ab,x\mu}=2\,\mathrm{tr}\,(U_{x\mu}^{\dagger}T_{a}U_{x\mu}T_{b})$, where
$T_{a}$ are the generators of the gauge group, and the hopping parameter
$\kappa$ is related to the bare gluino mass $m_{\tilde{g}}$ by
$\kappa=1/(2m_{\tilde{g}}+8)$. In order to approach the limit of vanishing
gluino mass, the hopping parameter has to be tuned properly. In our numerical
investigations the fermionic part is additionally $O(a)$ improved by adding
the clover term
$-(c_{sw}/4)\,\bar{\lambda}(x)\sigma_{\mu\nu}F^{\mu\nu}\lambda(x)$ [8].
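For illustration (this is not our simulation code), the adjoint link can be constructed explicitly from the Gell-Mann generators $T_{a}=\lambda_{a}/2$; the resulting $8\times 8$ matrix is real and orthogonal, reflecting the reality of the adjoint representation:
```python
import numpy as np
from scipy.linalg import expm

# Gell-Mann matrices lambda_a; the generators are T_a = lambda_a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1.0
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1.0, -1.0
lam[3][0, 2] = lam[3][2, 0] = 1.0
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1.0
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
T = lam / 2.0

# random SU(3) link U = exp(i * sum_a w_a T_a)  (traceless => det U = 1)
rng = np.random.default_rng(1)
U = expm(1j * np.einsum("a,aij->ij", rng.normal(size=8), T))

# adjoint link V_ab = 2 tr(U^dagger T_a U T_b): a real 8x8 orthogonal matrix
V = np.array([[2.0 * np.trace(U.conj().T @ T[a] @ U @ T[b]) for b in range(8)]
              for a in range(8)])
print(np.allclose(V.imag, 0.0), np.allclose(V.real @ V.real.T, np.eye(8)))
```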
In our previous investigations we have determined the low-lying mass spectrum
of the theory with gauge groups SU(2) and SU(3) non-perturbatively from first
principles using Monte Carlo techniques [4, 9, 10, 11], and obtained mass
degenerate supermultiplets [12].
## 2 SUSY Ward identities
In classical physics, Noether’s theorem provides a relation between symmetries
and conservation laws. In the case of quantum field theories, symmetries are
translated to Ward identities, representing quantum versions of Noether’s
theorem. In $\mathcal{N}=1$ supersymmetric Yang-Mills theory a gluino mass
term breaks SUSY softly. The soft breaking effects vanish in the chiral limit, a limit where the theory is characterised by massless gluinos. In order to analyse this breaking of supersymmetry and to identify the chiral limit, we employ the Ward identities for supersymmetry. Moreover, on the lattice supersymmetry is broken explicitly by the discretisation of space-time, which serves as a regulator of the theory. SUSY Ward identities can be used to
check whether supersymmetry is restored in the continuum limit.
In the Euclidean continuum, on-shell supersymmetry transformations of the
gauge and gluino fields are given by
$\delta
A_{\mu}^{a}=-2\,\mathrm{i}\,\overline{\lambda}^{a}\gamma_{\mu}\,\varepsilon\,,\quad\delta\lambda^{a}=-\sigma_{\mu\nu}F_{\mu\nu}^{a}\,\varepsilon\,,$
(4)
where the transformation parameter $\varepsilon$ is an anticommuting Majorana
spinor. From the variation of the action under a supersymmetry transformation
with a space-time-dependent parameter $\varepsilon(x)$ one derives the SUSY
Ward identities. For any suitable gauge invariant local operator $Q(y)$, they
read
$\left\langle\partial^{\mu}S_{\mu}(x)Q(y)\right\rangle=m_{\tilde{g}}\left\langle\chi(x)Q(y)\right\rangle-\left\langle\frac{\delta Q(y)}{\delta\bar{\varepsilon}(x)}\right\rangle,$ (5)
where $S_{\mu}(x)=(S_{\mu}^{\alpha}(x))$ is the supercurrent of spin 3/2, and
the term $m_{\tilde{g}}\left\langle\chi(x)Q(y)\right\rangle$ is due to the
gluino mass in the action of the theory. In the continuum the supercurrent
$S_{\mu}(x)$ and the operator $\chi(x)$ are given by
$\displaystyle S_{\mu}(x)$
$\displaystyle=-\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F^{\nu\rho}(x)\sigma_{\nu\rho}\gamma_{\mu}\lambda(x)\right],$
(6) $\displaystyle\chi(x)$
$\displaystyle=+\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F^{\mu\nu}(x)\sigma_{\mu\nu}\lambda(x)\right].$
(7)
The last term of Eq. (5) is a contact term, which contributes only if $x=y$,
and it can be avoided if $Q(y)$ is not localised at $x$. Therefore the contact
term is ignored in the following discussions.
The four-dimensional space-time lattice breaks SUSY explicitly. As a
consequence, the lattice versions of the Ward identities differ from their
continuum counterparts by an additional term $\left\langle
X_{S}(x)Q(y)\right\rangle$. The explicit form of this term is known, but need
not be displayed here. At tree level this term is proportional to the lattice
spacing $a$ and vanishes in the limit of zero lattice spacing. At higher
orders in perturbation theory, however, the contribution of this term is
finite in the continuum limit, owing to divergences proportional to $1/a$ that
multiply the factor $a$. This plays a role in the renormalisation of the
supercurrent and of the gluino mass [7, 13]. In the renormalisation of SUSY
Ward identities, operators of dimensions $\leq 11/2$ have to be taken into
account. They lead to a modification of the gluino mass, and in addition a
current $T_{\mu}$, mixing with the supercurrent, appears, corresponding to an
operator of dimension $9/2$. Consequently, on the lattice the following Ward
identities are obtained
$Z_{S}\left\langle\nabla_{\mu}S_{\mu}(x)Q(y)\right\rangle+Z_{T}\left\langle\nabla_{\mu}T_{\mu}(x)Q(y)\right\rangle=m_{S}\left\langle\chi(x)Q(y)\right\rangle+O(a),$
(8)
where $Z_{S}$ and $Z_{T}$ are renormalisation coefficients. The subtracted
gluino mass is defined as $m_{S}=m_{\tilde{g}}-\bar{m}$, where $\bar{m}$ is
the mass subtraction coming from the operators of dimension $7/2$. The mixing
current is defined as
$T_{\mu}(x)=\frac{2\,\mathrm{i}}{g}\mathrm{tr}\left[F_{\mu\nu}(x)\gamma_{\nu}\lambda(x)\right].$
(9)
Regarding the local insertion operator $Q(y)$, our choice is the spinor
$Q(y)=\chi^{(\mathrm{sp})}(y)$, with
$\chi^{(\mathrm{sp})}(y)=\sum_{i<j}\mathrm{tr}\left[F_{ij}(y)\sigma_{ij}\lambda(y)\right],$
(10)
where the indices $i,j\in\\{1,2,3\\}$. The reason behind this choice is that
it gives the best signal [13].
## 3 Numerical analysis of SUSY Ward identities
We have analysed the SUSY Ward identities numerically, employing the
configurations produced in our project on $\mathcal{N}=1$ supersymmetric Yang-
Mills theory with gauge group SU(3). Numerically it is convenient to use
integrated Ward identities, where a sum is performed over all
three spatial coordinates. The resulting identities will then hold for every
time-slice distance $t$. In the analysis the data from all time-slice
distances in an interval $t_{min}\leq t\leq t_{max}$ are included. The lower
limit $t_{min}$ is always taken to be larger than or equal to 3 in order to avoid
contamination from contact terms. The choice of $t_{min}$ for the different
ensembles of configurations is discussed below. Since the correlation
functions are symmetric or antisymmetric in $t$, the upper limit $t_{max}$ is
chosen to be half of the time extent of the lattice. Each term in Eq. (8) is a
4$\times$4 matrix in spin-space and can be expanded in the basis of 16 Dirac
matrices, i. e.
$\left\\{\boldsymbol{1},\gamma_{5},\gamma_{\mu},\gamma_{\mu}\gamma_{5},\mathrm{i}\sigma_{\mu\nu}\right\\}$.
It can be shown, with the help of discrete symmetries, that only the following
two contributions are non-zero [13]:
$\hat{x}_{b,t,1}+A\hat{x}_{b,t,2}=B\hat{x}_{b,t,3},\qquad\text{with}\quad
b=1,2\,,$ (11)
where $A=Z_{T}Z^{-1}_{S}$, $B=am_{S}Z^{-1}_{S}$, and
$\displaystyle\hat{x}_{1,t,1}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}S_{4}(x)Q(0)\right\rangle,$
$\displaystyle\hat{x}_{2,t,1}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}S_{4}(x)\gamma_{4}Q(0)\right\rangle,$
$\displaystyle\hat{x}_{1,t,2}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}T_{4}(x)Q(0)\right\rangle,$
$\displaystyle\hat{x}_{2,t,2}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\nabla_{4}T_{4}(x)\gamma_{4}Q(0)\right\rangle,$
(12) $\displaystyle\hat{x}_{1,t,3}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\chi(x)Q(0)\right\rangle,$
$\displaystyle\hat{x}_{2,t,3}$
$\displaystyle\equiv\sum_{\vec{x}}\left\langle\chi(x)\gamma_{4}Q(0)\right\rangle.$
In these equations the Dirac indices of $S_{4}(x)$, $T_{4}(x)$, $\chi(x)$ and
of the insertion operator $Q(0)$ are not written, and sums over repeated
(hidden) Dirac indices are implied. Also, $O(a)$ terms that vanish in the
continuum limit are not written explicitly in these equations. Introducing a
double index $i=(b,t)$, running over $2T$ values, where $T$ is the time extent
of the lattice, and denoting $A_{1}=1,A_{2}=A,A_{3}=-B$, Eq. (11) is written
compactly
$\sum_{\alpha=1}^{3}A_{\alpha}\hat{x}_{i\alpha}=0\,.$ (13)
In these equations the $\hat{x}_{i\alpha}=\langle x_{i\alpha}\rangle$ are the
expectation values of random variables $x_{i\alpha}$, which themselves are
considered to be the results of a finite Markov chain. We compute the
estimators $x_{i\alpha}$ for the correlation functions $\hat{x}_{i\alpha}$
numerically using high performance facilities. The Eqs. (13), including all
time-slice distances $t$ from $t_{min}$ to $t_{max}$, are solved
simultaneously for $A_{\alpha}$ by means of minimal chi-squared methods. Two
methods, namely the so-called Local Method and Global Method, have been used
in the past by our collaboration [4, 13]. These methods, however, do not take
properly into account correlations between the different quantities appearing
in Eq. (13). For this purpose we have developed a new method based on a
generalised least squares fit, the so-called GLS Method [14], based on the
maximum likelihood. For fixed $A_{\alpha}$ ($\alpha=1,2,3$) and given
numerical data $x_{i\alpha}$, the probability distribution $P\sim\exp(-L)$ of
the quantities $\hat{x}_{i\alpha}$, subject to the constraints (13), has its
maximum at a point where $L=L_{min}$, with
$L_{min}=\frac{1}{2}\sum_{i,\alpha,j,\beta}(A_{\alpha}x_{i\alpha})(D^{-1})_{ij}(A_{\beta}x_{j\beta})\,,$
(14)
where
$D_{ij}=\sum_{\alpha,\beta}A_{\alpha}A_{\beta}(\langle
x_{i\alpha}x_{j\beta}\rangle-\langle x_{i\alpha}\rangle\langle
x_{j\beta}\rangle).$ (15)
Next, the desired coefficients $A_{\alpha}$ have to be found such that
$L_{min}$ as a function of $A_{2}$ and $A_{3}$ is minimised. This cannot be
solved analytically, and we find $A_{\alpha}$ numerically such that the global
minimum of $L_{min}(A_{2},A_{3})$ is reached; for details see Ref. [15]. In
particular, owing to $A_{3}=-am_{S}Z^{-1}_{S}$ this provides us with the
subtracted gluino mass $m_{S}$ up to the renormalisation factor. To estimate
the statistical uncertainties we employ the standard Jackknife procedure.
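As an illustrative aside, the following Python sketch shows the shape of such a GLS minimisation, assuming the per-configuration estimators are stored as an array `samples[n, i, alpha]`. The names and data layout are hypothetical; this is a minimal reconstruction for orientation, not the analysis code.

```python
# Minimal sketch of the GLS Method, Eqs. (14)-(15); illustrative only.
# samples[n, i, alpha]: n Monte Carlo measurements of x_{i,alpha}, with
# i the double index (b, t) and alpha = 1, 2, 3.
import numpy as np
from scipy.optimize import minimize

def L_min(A23, samples):
    A = np.array([1.0, A23[0], A23[1]])     # (A_1, A_2, A_3) = (1, A, -B)
    xhat = samples.mean(axis=0)             # estimators hat{x}_{i,alpha}
    r = xhat @ A                            # sum_alpha A_alpha hat{x}_{i,alpha}
    D = np.cov(samples @ A, rowvar=False)   # covariance matrix of Eq. (15)
    return 0.5 * r @ np.linalg.solve(D, r)  # Eq. (14)

def fit_gls(samples):
    res = minimize(L_min, x0=[0.0, 0.0], args=(samples,), method="Nelder-Mead")
    A2, A3 = res.x
    return A2, -A3    # A = Z_T Z_S^{-1} and B = a m_S Z_S^{-1}

# Statistical errors: repeat fit_gls on Jackknife resamples of 'samples'.
```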
### 3.1 Discretisation effects
All terms in the Ward identity (8), including the $O(a)$ term $\left\langle
X_{S}(x)Q(y)\right\rangle$, are correlation functions of gauge invariant
operators. In the corresponding Eqs. (11) they are correlation functions of
operators localised on time slices or pairs of adjacent time slices at
distance $t$. As for any gauge invariant correlation function of this type,
they decay exponentially in $t$, with a decay rate given by the mass gap of
the theory. For very small $t$ the contributions of higher masses will affect
the impact of the $O(a)$ term on the Ward identities. Therefore we expect that
the value of the obtained gluino mass will depend on the minimal time slice
distance $t_{min}$. This effect should become negligible at sufficiently large
$t_{min}$. On the other hand, if $t_{min}$ is chosen too large, noise in the
data will dominate. The behaviour that can be observed in Fig. 1 is compatible
with these expectations.
Figure 1: The subtracted gluino mass $am_{S}Z^{-1}_{S}$ as a function of
$t_{min}$ calculated with the GLS Method at $\beta=5.6$. At small values of
$t_{min}$ the subtracted gluino mass is affected by contact terms and by
$O(a)$ terms. Data from $t_{min}=2$ and $t_{min}=3$ are shown, but do not
enter our final analysis.
An adequate choice of $t_{min}$ is therefore important for the quality of the
results. We cope with this in two ways.
In order to avoid perturbing effects at too small $t_{min}$ and a poor signal-
to-noise ratio at too large $t_{min}$, for each hopping parameter and inverse
gauge coupling, the value of $t_{min}$ is selected by finding an optimal
starting point where a plateau in the subtracted gluino mass begins. The
results are presented in Tab. 1.
$\beta=5.4$ | $\beta=5.4$ | $\beta=5.45$ | $\beta=5.5$ | $\beta=5.6$
---|---|---|---|---
$V=12^{3}\times 24$ | $V=16^{3}\times 32$ | $V=16^{3}\times 32$ | $V=16^{3}\times 32$ | $V=24^{3}\times 48$
$\kappa$ | $t_{min}$ | $\kappa$ | $t_{min}$ | $\kappa$ | $t_{min}$ | $\kappa$ | $t_{min}$ | $\kappa$ | $t_{min}$
0.1695 | 4 | 0.1692 | 4 | 0.1685 | 5 | 0.1667 | 5 | 0.1645 | 7
0.1700 | 4 | 0.1695 | 4 | 0.1687 | 5 | 0.1673 | 5 | 0.1650 | 7
0.1703 | 4 | 0.1697 | 4 | 0.1690 | 5 | 0.1678 | 5 | 0.1655 | 6
0.1705 | 4 | 0.1700 | 4 | 0.1692 | 5 | 0.1680 | 5 | 0.1660 | 7
- | - | 0.1703 | 4 | 0.1693 | 4 | 0.1683 | 5 | - | -
- | - | 0.1705 | 4 | - | - | - | - | - | -
Table 1: The values of $t_{min}$ for all available gauge ensembles, chosen
such that a plateau is formed.
In the second approach, we consider that our simulations of the theory are
done at different values of the lattice spacing $a$, which leads to different
$O(a)$ terms in the Ward identities. A fixed value of $t_{min}$ in lattice
units would mean a lower limit on the time-slice distances in physical units
that is of the order of the cutoff scale, and hence shrinks to zero in the
continuum limit.
Instead it would be more appropriate to consider $t_{min}$ at constant
physical distance for all gauge ensembles. This is done in the following way.
At the coarsest lattice spacing, at inverse gauge coupling $\beta_{0}$, the
value of $t_{min}$ is selected according to the plateau criterion explained
above. For finer lattice spacings at inverse gauge couplings $\beta_{i}$ the
corresponding $t_{min}$ are then obtained by scaling with a physical scale. In
order to determine the physical scale we use the mass $m_{g\tilde{g}}$ of the
gluino-glue particle and the Wilson flow parameter $w_{0}$. Correspondingly,
$t_{min}$ is scaled according to
$\displaystyle t_{min,{\beta_{i}}}$
$\displaystyle=t_{min,\beta_{0}}\frac{m_{g\tilde{g},\beta_{0}}}{m_{g\tilde{g},\beta_{i}}}\,,$
(16) $\displaystyle\text{or}\quad t_{min,{\beta_{i}}}$
$\displaystyle=t_{min,\beta_{0}}\frac{w_{0,\beta_{i}}}{w_{0,\beta_{0}}}\,,$
(17)
where $\beta_{0}=5.4$, $\beta_{1}=5.45$, $\beta_{2}=5.5$, and $\beta_{3}=5.6$.
The resulting $t_{min}$ is rounded to the nearest integer value. The values
obtained by this method are collected in Tab. 2. In most points they are equal
or almost equal to those in Tab. 1.
$\beta$ | $t_{min}$ from $m_{g\tilde{g}}$ | $t_{min}$ from $w_{0}$
---|---|---
5.4 | 4 | 4
5.45 | 5 | 5
5.5 | 5 | 6
5.6 | 7 | 7
Table 2: The values of $t_{min}$ at fixed physical temporal distance from
scaling with the gluino-glue mass $m_{g\tilde{g}}$ and with the Wilson flow
parameter $w_{0}$.
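As an illustrative aside, the scaling of Eqs. (16)–(17) together with the rounding step can be written out directly. The sketch below is our own illustration; the `scale` mapping is hypothetical and stands for either $am_{g\tilde{g}}$ or $w_{0}/a$ per ensemble.

```python
# Scale t_min from the coarsest lattice (beta_0) to a finer one (beta_i)
# at fixed physical distance, then round to the nearest integer.
def scale_tmin(tmin_0, beta_0, beta_i, scale, use_mass=True):
    # 'scale' maps beta -> a*m_gluinoglue (use_mass) or beta -> w0/a.
    if use_mass:
        ratio = scale[beta_0] / scale[beta_i]   # Eq. (16)
    else:
        ratio = scale[beta_i] / scale[beta_0]   # Eq. (17)
    return round(tmin_0 * ratio)
```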
### 3.2 Adjoint pion and remnant gluino mass
The chiral limit is defined by the vanishing of the subtracted gluino mass.
Its measured values can therefore be employed for the tuning of the hopping
parameter $\kappa$ to the chiral limit. On the other hand, we can also use the
vanishing of the adjoint pion mass $m_{\text{a-}\pi}$ for the tuning [16]. The
adjoint pion $\text{a-}\pi$ is an unphysical particle in the SYM theory, which
can be defined in partially quenched chiral perturbation theory [17]. In the
numerical simulations its correlation function can be computed as the
connected piece of the correlation function of the $\text{a-}\eta^{\prime}$
particle. Similar to the Gell-Mann-Oakes-Renner relation of QCD [5], in the
continuum limit there is a linear relation between the adjoint pion mass
squared and the gluino mass: $m^{2}_{\text{a-}\pi}\propto m_{\tilde{g}}$.
The numerical results for the subtracted gluino mass from the Ward identities
and the adjoint pion mass squared in lattice units are shown for $\beta=5.6$
in Fig. 2 together with their extrapolations towards the chiral limit.
(a) The subtracted gluino mass $am_{S}Z^{-1}_{S}$ and the adjoint pion mass
squared $(am_{\text{a-}\pi})^{2}$ as a function of $1/(2\kappa)$, and the
corresponding extrapolations towards the chiral limit ($\kappa_{c}$).
(b) The subtracted gluino mass $am_{S}Z^{-1}_{S}$ as a function of the adjoint
pion mass squared $(am_{\text{a-}\pi})^{2}$ in order to obtain the remnant
gluino mass $\Delta(am_{S}Z^{-1}_{S})$.
Figure 2: Chiral limit and determination of the remnant gluino mass at
$\beta=5.6$. All quantities are in lattice units.
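As an illustrative aside, both extrapolations in Fig. 2 amount to straight-line fits. The following Python sketch is our own illustration, taking placeholder arrays of measured values in lattice units rather than the actual data.

```python
# Sketch of the two linear fits underlying Fig. 2.
import numpy as np

def kappa_critical(kappas, am_s):
    """Fit am_S = c0 + c1/(2 kappa); return kappa_c where am_S vanishes."""
    x = 1.0 / (2.0 * np.asarray(kappas))
    c1, c0 = np.polyfit(x, am_s, 1)
    return -c1 / (2.0 * c0)

def remnant_gluino_mass(ampi_sq, am_s):
    """Fit am_S against (a m_api)^2; the intercept is Delta(a m_S Z_S^-1)."""
    slope, intercept = np.polyfit(ampi_sq, am_s, 1)
    return intercept
```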
In the continuum the subtracted gluino mass and the adjoint pion mass should
vanish at the same point. On the lattice, however, this is not the case due to
lattice artefacts. As an estimate for this discrepancy we determine the value
of the subtracted gluino mass at vanishing adjoint pion mass. This quantity is
called the remnant gluino mass $\Delta(am_{S}Z^{-1}_{S})$, and it is expected
to vanish in the continuum limit. The values of the remnant gluino mass,
obtained by taking an average of the values calculated using the procedures
explained above, are presented in Tab. 3.
$\beta$ | 5.4 | 5.45 | 5.5 | 5.6
---|---|---|---|---
$\Delta(am_{S}Z^{-1}_{S})$ | 0.0334(48) | 0.019(12) | 0.0099(88) | 0.0103(33)
Table 3: The values of the remnant gluino mass $\Delta(am_{S}Z^{-1}_{S})$
obtained at four different values of the inverse gauge coupling.
### 3.3 Continuum limit
The remnant gluino mass is a lattice artefact and should vanish in the
continuum limit $a\rightarrow 0$. It is therefore a quantity with which to
check whether supersymmetry is recovered. Concerning the dependence of the
remnant gluino mass on the lattice spacing, arguments based on partially
quenched chiral perturbation theory suggest that the remnant gluino mass is of
order $a^{2}$ at $m^{2}_{\text{a-}\pi}=0$ [13]. In order to investigate this
relation, the remnant gluino mass has to be expressed in physical units. Our
choice for the scale is the Wilson flow parameter $w_{0}$, which is defined
through the gradient flow [10]. We use its values extrapolated to the chiral
limit, $w_{0,\chi}$. Similarly the lattice spacing is represented by
$a/w_{0,\chi}$. Our numerical results for the remnant gluino mass as a
function of the lattice spacing and its extrapolation towards the continuum
limit are shown in Fig. 3. The data points in Fig. 3(a) show the results from
separate chiral extrapolations for each lattice spacing and the corresponding
extrapolation to the continuum limit. The extrapolation to the continuum and
the error of this extrapolation are obtained by means of parametric bootstrap
with linear fits. On the other hand, Fig. 3(b) is obtained by means of a
simultaneous fit of the dependence on the hopping parameter and the lattice
spacing [18].
(a) The remnant gluino mass from separate extrapolations to the chiral limit
where $m^{2}_{\text{a-}\pi}$ is zero, and the extrapolation to the continuum
limit.
(b) The remnant gluino mass from a simultaneous chiral and continuum
extrapolation. By construction, in this method the data points coincide with
the error band.
Figure 3: The remnant gluino mass $\Delta{(w_{0}m_{S}Z^{-1}_{S})}$ in physical
units $w_{0}$ as a function of the lattice spacing squared, and its linear
extrapolation towards the continuum limit.
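As an illustrative aside, the parametric bootstrap behind Fig. 3(a) has a simple structure: draw Gaussian pseudo-data around the measured remnant masses, fit a straight line in $(a/w_{0,\chi})^{2}$, and collect the intercepts. The Python sketch below is our own minimal reconstruction with placeholder inputs.

```python
# Parametric-bootstrap continuum extrapolation; illustrative only.
import numpy as np

def continuum_limit(a_over_w0_sq, vals, errs, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    intercepts = np.empty(n_boot)
    for b in range(n_boot):
        pseudo = rng.normal(vals, errs)          # Gaussian pseudo-data
        _, intercepts[b] = np.polyfit(a_over_w0_sq, pseudo, 1)
    return intercepts.mean(), intercepts.std()   # value and error at a = 0
```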
The remnant gluino mass in the continuum limit is compatible with zero within
one standard deviation, confirming the preliminary results presented in Ref.
[15] with only two data points. Lattice artefacts vanish in the continuum
limit as expected, and supersymmetry is recovered in the chiral and continuum
limits, in agreement with our findings from the mass spectrum [12].
## 4 Conclusion
In this paper we have presented numerical results of an analysis of SUSY Ward
identities in $\mathcal{N}=1$ supersymmetric Yang-Mills theory on the lattice
with gauge group SU(3). Contact terms and $O(a)$ lattice artefacts in the Ward
identities have been controlled by suitable choices of time-slice distances.
Ensembles of gauge configurations at four different values of the lattice
spacing and various hopping parameters have been analysed, allowing us for the
first time to perform an extrapolation to the continuum limit, where the
lattice artefacts vanish. The remnant gluino mass has been extrapolated in two
alternative ways, on the one hand by extrapolating to the chiral limit at each
lattice spacing separately and then to the continuum limit, and on the other
hand by means of a simultaneous extrapolation to the chiral and continuum
limit. With both extrapolations the lattice artefacts in the subtracted gluino
mass appear to scale to zero as $a^{2}$, in agreement with the
theoretical expectations. Our findings support the validity of SUSY Ward
identities and the restoration of supersymmetry in the continuum limit.
## Acknowledgments
The authors gratefully acknowledge the Gauss Centre for Supercomputing e. V.
(www.gauss-centre.eu) for funding this project by providing computing time on
the GCS Supercomputer JUQUEEN and JURECA at Jülich Supercomputing Centre (JSC)
and SuperMUC at Leibniz Supercomputing Centre (LRZ). Further computing time
has been provided on the compute cluster PALMA of the University of Münster.
This work is supported by the Deutsche Forschungsgemeinschaft (DFG) through
the Research Training Group “GRK 2149: Strong and Weak Interactions - from
Hadrons to Dark Matter”. G. Bergner acknowledges support from the Deutsche
Forschungsgemeinschaft (DFG) Grant No. BE 5942/2-1. S. Ali acknowledges
financial support from the Deutsche Akademische Austauschdienst (DAAD).
## References
* [1] J. Wess and J. Bagger, Supersymmetry and Supergravity, Princeton University Press, 1992.
* [2] G. Jungman, M. Kamionkowski and K. Griest, Phys. Rept. 267 (1996) 195, [arXiv: hep-ph/9506380 ].
* [3] J. D. Lykken, [arXiv: 1005.1676[hep-ph]].
* [4] G. Bergner, P. Giudice, I. Montvay, G. Münster and S. Piemonte, JHEP 1603 (2016) 080, [arXiv: 1512.07014[hep-lat]].
* [5] G. Veneziano and S. Yankielowicz, Phys. Lett. B 113 (1982) 231.
* [6] G. R. Farrar, G. Gabadadze and M. Schwetz, Phys. Rev. D 58 (1998) 015009, [arXiv: hep-th/9711166 ].
* [7] G. Curci and G. Veneziano, Nucl. Phys. B 292 (1987) 555.
* [8] S. Musberg, G. Münster and S. Piemonte, JHEP 1305 (2013) 143, [arXiv: 1304.5741[hep-lat]].
* [9] S. Ali, G. Bergner, H. Gerber, P. Giudice, S. Kuberski, I. Montvay, G. Münster and S. Piemonte, EPJ Web Conf. 175 (2018) 08016, [arXiv: 1710.07464[hep-lat]].
* [10] S. Ali, G. Bergner, H. Gerber, P. Giudice, I. Montvay, G. Münster, S. Piemonte and P. Scior, JHEP 1803 (2018) 113, [arXiv: 1801.08062[hep-lat]].
* [11] S. Ali, G. Bergner, H. Gerber, S. Kuberski, I. Montvay, G. Münster, S. Piemonte and P. Scior, JHEP 1904 (2019) 150, [arXiv: 1901.02416[hep-lat]].
* [12] S. Ali, G. Bergner, H. Gerber, I. Montvay, G. Münster, S. Piemonte and P. Scior, Phys. Rev. Lett. 122 (2019) 221601, [arXiv: 1902.11127[hep-lat]].
* [13] F. Farchioni, A. Feo, T. Galla, C. Gebert, R. Kirchner, I. Montvay, G. Münster and A. Vladikas, Eur. Phys. J. C 23 (2002) 719, [arXiv: hep-lat/0111008 ].
* [14] S. Ali, PhD thesis, University of Münster, June 2019.
* [15] S. Ali, G. Bergner, H. Gerber, I. Montvay, G. Münster, S. Piemonte and P. Scior, Eur. Phys. J. C 78 (2018) 404, [arXiv: 1802.07067[hep-lat]].
* [16] K. Demmouche, F. Farchioni, A. Ferling, I. Montvay, G. Münster, E. E. Scholz and J. Wuilloud, Eur. Phys. J. C 69 (2010) 147, [arXiv: 1003.2073[hep-lat]].
* [17] G. Münster and H. Stüwe, JHEP 1405 (2014) 034, [arXiv: 1402.6616[hep-th]].
* [18] H. Gerber, PhD thesis, University of Münster, May 2019.
# The inverse theorem for the nonlinear Roth configuration: an exposition
Sean Prendiville, Department of Mathematics and Statistics, Lancaster University, UK
###### Abstract.
We give an exposition of the inverse theorem for the cut-norm associated to
the nonlinear Roth configuration, established by Peluse and the author in [6].
###### Contents
1. Introduction
2. An outline of our argument
3. PET induction
4. An inverse theorem for the arithmetic box norm
5. Quantitative concatenation
6. Degree lowering
7. Proof of the cut norm inverse theorem
A. Basic theory of the Gowers norms
## 1\. Introduction
Peluse and the author recently obtained an effective bound on the density of
sets of integers lacking the configuration
$x,\ x+y,\ x+y^{2}\qquad(y\neq 0).$ (1.1)
We call this pattern the _nonlinear Roth configuration_, after Bourgain and
Chang [1].
###### Theorem 1.1 (Peluse and Prendiville [6]).
There exists an absolute constant $c>0$ such that if
$A\subset\left\\{1,2,\dots,N\right\\}$ lacks the configuration (1.1), then
$|A|\ll N(\log\log N)^{-c}.$
We have since removed a logarithm from this bound.
###### Theorem 1.2 (Peluse and Prendiville [7]).
There exists an absolute constant $c>0$ such that if
$A\subset\left\\{1,2,\dots,N\right\\}$ lacks the configuration (1.1), then
$|A|\ll N(\log N)^{-c}.$
The main innovation behind both of these results is [6, Theorem 7.1], an
inverse theorem for the counting operator associated to this configuration. It
is the purpose of this note to give an exposition of this inverse theorem. The
approach is essentially the same as that in [6]. We hope that having two
distinct accounts is useful for those interested in utilising these ideas.
###### Definition 1.3 (Counting operator).
For positive integers $q\leq N$ write
$M:=\left\lfloor\sqrt{N/q}\right\rfloor.$ (1.2)
Given this, define the _counting operator_ on the functions
$f_{i}:\mathbb{Z}\to\mathbb{C}$ by
$\Lambda_{q,N}(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+y)f_{2}(x+qy^{2}).$
(1.3)
When the $f_{i}$ all equal $f$ we simply write $\Lambda_{q,N}(f)$.
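To fix ideas, the counting operator can be evaluated by brute force; the following Python sketch is purely illustrative and makes no attempt at efficiency.

```python
# Brute-force evaluation of the counting operator (1.3); the f_i are
# (complex-valued) functions on the integers, given as callables.
from math import isqrt

def counting_operator(f0, f1, f2, q, N):
    M = isqrt(N // q)            # M = floor(sqrt(N/q)), as in (1.2)
    total = 0.0
    for x in range(1, N + 1):
        for y in range(1, M + 1):
            total += f0(x) * f1(x + y) * f2(x + q * y * y)
    return total / (N * M)

# Example: with indicators of [N] in every slot, this is the proportion of
# pairs (x, y) for which the whole configuration lies inside [N].
ind = lambda x, N=100: 1.0 if 1 <= x <= N else 0.0
print(counting_operator(ind, ind, ind, q=1, N=100))
```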
###### Definition 1.4 (Local function).
We call a function $\phi:\mathbb{Z}\to\mathbb{C}$ a _local function of
resolution $M$ and modulus $q$_ if there exists a partition of $\mathbb{R}$
into intervals of length $M$ such that $\phi$ is constant on the intersection
of every such interval with every congruence class mod $q$.
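For concreteness, here is one way to realise such a local function in Python, taking the partition of $\mathbb{R}$ into the intervals $[kM,(k+1)M)$; this particular construction is our own illustration.

```python
# A local function of resolution M and modulus q (Definition 1.4):
# 'values' assigns a constant to each (interval index, residue mod q) pair.
def make_local(values, M, q, default=0):
    def phi(x):
        return values.get((x // M, x % q), default)
    return phi

# Example: phi equals 1 on [0, M) intersected with the even integers (q = 2).
phi = make_local({(0, 0): 1}, M=10, q=2)
assert phi(4) == 1 and phi(5) == 0 and phi(14) == 0
```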
###### Definition 1.5 (Cut norm).
Define the _cut norm_ of $f:\mathbb{Z}\to\mathbb{C}$ by
$\left\|f\right\|_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\
|\Lambda_{q,N}(g_{1},f,g_{2})|,\ |\Lambda_{q,N}(g_{1},g_{2},f)|\\},$ (1.4)
where the supremum is taken over all 1-bounded functions
$g_{i}:[N]\to\mathbb{C}$. We note that, in spite of our nomenclature, this is
not a norm but a seminorm. One could remedy this by summing over $y\geq 0$ in
the counting operator (1.3).
This seminorm is useful in [7]. However, it is too restrictive for the
approach developed in [6], where we (implicitly) only work with the following
quantities:
$\left\|f\right\|^{\sharp}_{q,N}:=\sup\\{|\Lambda_{q,N}(g_{0},g_{1},f)|:|g_{i}|\leq
1\ \text{ and }\ \mathrm{supp}(g_{i})\subset[N]\\}$ (1.5)
and
$\left\|f\right\|^{\flat}_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\
|\Lambda_{q,N}(g_{1},f,g_{2})|\ :\ |g_{i}|\leq 1\ \text{ and }\
\mathrm{supp}(g_{i})\subset[N]\\}.$ (1.6)
Here then is a re-formulation and slight generalisation of [6, Theorem 7.1].
###### Theorem 1.6 (Partial cut norm inverse theorem).
Let $q\leq N$ be positive integers, $\delta>0$, and
$f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$.
Suppose that
$\left\|f\right\|^{\flat}_{q,N}\geq\delta.$
Then either $N\ll(q/\delta)^{O(1)}$ or there exists a 1-bounded local function
$\phi$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, modulus $qq^{\prime}$ for
some $q^{\prime}\ll\delta^{-O(1)}$, and such that
$\sum_{x\in[N]}f(x)\phi(x)\gg\delta^{2^{66}}N.$
This exposition is organised as follows. In §2, we give a more detailed
outline of the proof of Theorem 1.6. In §§3–5 we develop an effective approach
to a (special case of a) so-called _concatenation_ theorem of Tao and Ziegler
[10]. This allows us to show that if our counting operator is large, then the
function weighting the nonlinear term must have large Gowers uniformity norm.
The drawback is that the degree of the resulting Gowers norm is large (in our
approach it is the $U^{5}$-norm). In §6 we give a _degree-lowering_ procedure,
which utilises properties specific to our configuration to show that one may
replace the $U^{5}$-norm with the $U^{1}$-norm. In §7 we combine the results
of the previous sections in order to prove Theorem 1.6.
### 1.1. Notation
#### 1.1.1. Standard conventions
We use $\mathbb{N}$ to denote the positive integers. For a real $X\geq 1$,
write $[X]=\\{1,2,\ldots,\left\lfloor X\right\rfloor\\}$. A complex-valued
function is _1-bounded_ if the modulus of the function does not exceed 1.
We use counting measure on $\mathbb{Z}$, so that for
$f,g:\mathbb{Z}\to\mathbb{C}$ we have
$\left\langle
f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)}\qquad\text{and}\qquad\left\|f\right\|_{L^{p}}:=\biggl{(}\sum_{x}|f(x)|^{p}\biggr{)}^{\frac{1}{p}}.$
Any sum of the form $\sum_{x}$ is to be interpreted as a sum over
$\mathbb{Z}$. We use Haar probability measure on
$\mathbb{T}:=\mathbb{R}/\mathbb{Z}$, so that for measurable
$F:\mathbb{T}\to\mathbb{C}$ we have
$\left\|F\right\|_{L^{p}}:=\biggl{(}\int_{\mathbb{T}}|F(\alpha)|^{p}\mathrm{d}\alpha\biggr{)}^{\frac{1}{p}}=\biggl{(}\int_{0}^{1}|F(\alpha)|^{p}\mathrm{d}\alpha\biggr{)}^{\frac{1}{p}}.$
For $\alpha\in\mathbb{T}$ we write $\left\|\alpha\right\|$ for the distance to
the nearest integer.
For a finite set $S$ and function $f:S\to\mathbb{C}$, denote the average of
$f$ over $S$ by
$\mathbb{E}_{s\in S}f(s):=\frac{1}{|S|}\sum_{s\in S}f(s).$
Given functions $f,g:G\to\mathbb{C}$ on an additive group with measure
$\mu_{G}$ we define their convolution by
$f*g(x):=\int_{G}f(x-y)g(y)\mathrm{d}\mu_{G},$ (1.7)
when this makes sense.
We define the Fourier transform of $f:\mathbb{Z}\to\mathbb{C}$ by
$\hat{f}(\alpha):=\sum_{x}f(x)e(\alpha x)\qquad(\alpha\in\mathbb{T}),$ (1.8)
again, when this makes sense. Here $e(\alpha)$ stands for $e^{2\pi i\alpha}$.
The difference function of $f:\mathbb{Z}\to\mathbb{C}$ is the function
$\Delta_{h}f:\mathbb{Z}\to\mathbb{C}$ given by
$\Delta_{h}f(x)=f(x)\overline{f(x+h)}.$
Iterating gives
$\Delta_{h_{1},\dots,h_{s}}f:=\Delta_{h_{1}}\dots\Delta_{h_{s}}f.$
This allows us to define the Gowers $U^{s}$-norm
$\left\|f\right\|_{U^{s}}:=\left(\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)\right)^{1/2^{s}}.$
(1.9)
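Since our functions are finitely supported, the sum in (1.9) is finite, and for small $N$ it can be evaluated directly. The following brute-force Python sketch (exponential in $s$, for illustration only) makes the definition concrete.

```python
# Brute-force evaluation of ||f||_{U^s} from (1.9), for f supported on [1, N].
from itertools import product

def gowers_norm(f, s, N):
    def F(x):
        return f(x) if 1 <= x <= N else 0.0
    total = 0j
    for hs in product(range(-(N - 1), N), repeat=s):
        for x in range(1, N + 1):
            val = 1 + 0j
            for eps in product((0, 1), repeat=s):
                term = complex(F(x + sum(e * h for e, h in zip(eps, hs))))
                val *= term if sum(eps) % 2 == 0 else term.conjugate()
            total += val
    return max(total.real, 0.0) ** (1.0 / 2 ** s)

# Sanity check: ||1_[N]||_{U^1} = N.
assert abs(gowers_norm(lambda x: 1.0, 1, 4) - 4.0) < 1e-9
```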
If $\|\cdot\|$ is a seminorm on an inner product space, recall that its dual
seminorm $\|\cdot\|^{*}$ is defined by
$\|f\|^{*}:=\sup_{\|g\|\leq 1}|\langle f,g\rangle|.$
Hence
$\left|\left\langle
f,g\right\rangle\right|\leq\left\|f\right\|^{*}\left\|g\right\|.$ (1.10)
For a function $f$ and positive-valued function $g$, write $f\ll g$ or
$f=O(g)$ if there exists a constant $C$ such that $|f(x)|\leq Cg(x)$ for all
$x$. We write $f=\Omega(g)$ if $f\gg g$. We sometimes opt for a more explicit
approach, using $C$ to denote a large absolute constant, and $c$ to denote a
small positive absolute constant. The values of $C$ and $c$ may change from
line to line.
#### 1.1.2. Local conventions
Up to normalisation, all of the above are well-used in the literature. Next we
list notation specific to our paper. We have tried to minimise this in order
to aid the casual reader.
For a real parameter $H\geq 1$, we use $\mu_{H}:\mathbb{Z}\to[0,1]$ to
represent the following normalised Fejér kernel
$\mu_{H}(h):=\frac{1}{\left\lfloor
H\right\rfloor}\left(1-\frac{|h|}{\left\lfloor
H\right\rfloor}\right)_{+}=\frac{(1_{[H]}*1_{[H]})(h)}{\left\lfloor
H\right\rfloor^{2}}.$ (1.11)
For a multidimensional vector $h\in\mathbb{Z}^{d}$ we write
$\mu_{H}(h):=\mu_{H}(h_{1})\dotsm\mu_{H}(h_{d}).$ (1.12)
We observe that this is a probability measure on $\mathbb{Z}^{d}$ with support
in the interval $(-H,H)^{d}$.
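For concreteness, the one-dimensional kernel can be coded directly; the sketch below is our own illustration.

```python
# The normalised Fejer kernel (1.11): supported on (-H, H), summing to 1;
# the multidimensional version (1.12) is the product over coordinates.
from math import floor

def mu(H, h):
    Hf = floor(H)
    return max(1.0 - abs(h) / Hf, 0.0) / Hf

# Check that it is a probability measure on the integers:
assert abs(sum(mu(5.7, h) for h in range(-6, 7)) - 1.0) < 1e-12
```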
## 2\. An outline of our argument
In this section we describe the ideas behind Theorem 1.6. In the hope of
making the ideas clearer, we make the simplification that $q=1$ in our
counting operator (1.3). Hence, for finitely supported functions
$f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$, write
$\Lambda(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/2}]}f_{0}(x)f_{1}(x+y)f_{2}(x+y^{2}).$
(2.1)
For this operator, Theorem 1.6 can be deduced from the following.
###### Lemma 2.1.
Let $f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions
supported in the interval $[N]$ and $\delta>0$. Suppose that
$|\Lambda(f_{0},f_{1},f_{2})|\geq\delta.$
Then either $N\ll\delta^{-O(1)}$ or there exist positive integers
$q\ll\delta^{-O(1)}$ and $N^{\prime}\gg\delta^{O(1)}N^{1/2}$ such that
$\sum_{x}\left|\sum_{y\in[N^{\prime}]}f_{1}(x+qy)\right|\gg\delta^{O(1)}NN^{\prime}.$
(2.2)
Using the notation (1.9), notice that the left-hand side of (2.2) is equal to
$\sum_{x}\left\|f_{1}\right\|_{U^{1}(x+q\cdot[N^{\prime}])}.$
### 2.1. Quantitative concatenation
To prove Lemma 2.1, we first prove that our counting operator (2.1) is
controlled by the $U^{5}$-norm of $f_{2}$. The purpose of this subsection is
to sketch how we do this with polynomial bounds.
By applying the Cauchy–Schwarz and van der Corput inequalities a number of
times, we show in §3 that, when $f_{0},f_{1},f_{2}:\mathbb{Z}\to\mathbb{C}$
are $1$-bounded functions supported in the interval $[N]$, largeness of the
counting operator (2.1) implies largeness of the sum
$\sum_{a,b\in[N^{1/2}]}\sum_{h_{1},h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{ah_{1},bh_{2},(a+b)h_{3}}f_{2}(x).$
(2.3)
This deduction is made following the PET induction scheme of Bergelson and
Leibman [2]. The gain in working with the counting operator (2.3) over (2.1)
is that univariate polynomials such as $y^{2}$, whose image constitutes a
sparse set, have been replaced by bilinear forms such as $ah_{1}$, whose image
is much denser.
In §§4–5, we show that largeness of (2.3) implies largeness of
$\|f_{2}\|_{U^{5}}$. If there were no dependence between the coefficients of
the $h_{i}$ in (2.3), then we could in fact bound (2.3) in terms of
$\|f_{2}\|_{U^{3}}$. Since the argument is informative, we illustrate why this
is the case for the sum
$\sum_{a,b,c\in[N^{1/2}]}\sum_{h_{1},h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{ah_{1},bh_{2},ch_{3}}f_{2}(x).$
(2.4)
The following fact is key, the formal version of which is Lemma 5.3.
###### Claim 2.2.
If $\displaystyle\sum_{a,h\in[N^{1/2}]}\sum_{x}\Delta_{ah}f(x)$ is large then
so is $\displaystyle\sum_{k\in(-N,N)}\sum_{x}\Delta_{k}f(x)$.
###### Sketch proof.
Apply the Cauchy–Schwarz inequality to double the $a$ and $h$ variables,
yielding a bound in terms of
$\sum_{a,a^{\prime}\in[N^{1/2}]}\sum_{h,h^{\prime}\in[N^{1/2}]}\sum_{x}\Delta_{ah-a^{\prime}h^{\prime}}f(x).$
(2.5)
For a random choice of $a,a^{\prime}\in[N^{1/2}]$, the progression
$a\cdot[N^{1/2}]-a^{\prime}\cdot[N^{1/2}]$ covers a large portion of the
interval $(-N,N)$ relatively smoothly. One can make this intuition rigorous
and thus deduce largeness of the sum
$\sum_{k\in(-N,N)}\sum_{x}\Delta_{k}f(x).$∎
Applying Claim 2.2 three times allows us to replace each of $ah_{1}$, $bh_{2}$
and $ch_{3}$ in (2.4) with $k_{1},k_{2},k_{3}\in(-N,N)$, yielding largeness of
$\left\|f_{2}\right\|_{U^{3}}$.
Since the PET induction scheme outputs (2.3), and not (2.4), the problem
remains of how to handle the dependency between the differencing parameters in
(2.3). If we were not concerned with quantitative bounds, we could apply a
‘concatenation’ theorem of Tao and Ziegler [10, Theorem 1.24] to obtain
largeness of the $U^{9}$-norm of $f_{2}$. However, the qualitative nature of
this argument means that it cannot be used to obtain bounds in the nonlinear
Roth theorem. In its place we prove Theorem 5.6, which is a special case of
[10, Theorem 1.24], using a very different argument that gives polynomial
bounds. We spend the remainder of this subsection sketching the argument.
We begin by viewing (2.3) as the average
$\sum_{a,h_{1}\in[N^{1/2}]}\left\|\Delta_{ah_{1}}f_{2}\right\|_{a}^{4},$ (2.6)
where
$\|f\|_{a}^{4}:=\sum_{b\in[N^{1/2}]}\sum_{h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{bh_{2},(a+b)h_{3}}f(x)$
(2.7)
One can view this as an average of 2-dimensional Gowers box norms where, for
fixed $b$, the inner sum corresponds to a box norm in the ‘directions’ $b$ and
$a+b$. Note that if we could bound the quantity $\|\Delta_{ah_{1}}f_{2}\|_{a}$
in terms of the $U^{4}$-norm of $\Delta_{ah_{1}}f_{2}$ for many pairs
$(a,h_{1})$, then by Claim 2.2 we deduce largeness of the $U^{5}$-norm of
$f_{2}$. We show that, on average, one can indeed control $\|\cdot\|_{a}$ in
terms of $\|\cdot\|_{U^{4}}$, with polynomial bounds. The following can be
extracted from the proof of (the more general) Theorem 5.6.
###### Lemma 2.3.
For each $a\in[N^{1/2}]$ let $f_{a}:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded
function supported in the interval $[N]$. Suppose that
$\mathbb{E}_{a\in[N^{1/2}]}\|f_{a}\|_{a}^{4}\geq\delta\left\|1_{[N]}\right\|_{a}^{4}.$
Then
$\mathbb{E}_{a\in[N^{1/2}]}\|f_{a}\|_{U^{4}}^{16}\gg\delta^{O(1)}\left\|1_{[N]}\right\|_{U^{4}}^{16}.$
To finish this subsection, we briefly discuss the proof of this key lemma. For
most choices of $a,b\in[N^{1/2}]$, the ‘directions’ $b$ and $a+b$ of the box
norm
$\sum_{h_{2},h_{3}\in[N^{1/2}]}\sum_{x}\Delta_{bh_{2},(a+b)h_{3}}f_{a}(x)$
(2.8)
are close to ‘independent’, in the sense that at least one of the directions
$b$ and $a+b$ is large and together they have small greatest common divisor.
The proof of Lemma 2.3 thus begins by viewing $\|\cdot\|_{a}$ as an average of
box norms
$\|f\|_{\square(X,Y)}^{4}:=\sum_{x_{1},x_{2}\in X,y_{1},y_{2}\in
Y}f(x_{1},y_{1})\overline{f(x_{1},y_{2})f(x_{2},y_{1})}f(x_{2},y_{2}).$ (2.9)
It is easy to show that largeness of $\|f\|_{\square(X,Y)}$ implies that $f$
correlates with a function of the form $(x,y)\mapsto l(x)r(y)$. We show,
analogously, that provided $b$ and $a+b$ are not too small and have greatest
common divisor not too large, then largeness of the arithmetic box norm (2.8)
implies that $f_{a}$ correlates with a product $g_{b}h_{a+b}$ of 1-bounded
functions, where $g_{b}$ is $b$-periodic and $h_{a+b}$ is almost periodic
under shifts by integer multiples of $a+b$. As a consequence, for most
$a\in[N^{1/2}]$, largeness of $\|f_{a}\|_{a}$ implies largeness of
$\sum_{b\in[N^{1/2}]}\sum_{x}f_{a}(x)g_{b}(x)h_{a+b}(x).$ (2.10)
In fact, an application of Cauchy–Schwarz allows us give an explicit
description of $h_{a+b}$ in terms of $f_{a}$, namely we may take it to be of
the form
$h_{a+b}(x)=\mathbb{E}_{k\in[N^{1/2}]}f_{a}(x+(a+b)k)g_{b}(x+(a+b)k).$ (2.11)
This presentation makes apparent the almost periodicity of $h_{a+b}$.
###### Claim 2.4.
Largeness of (2.10) implies that $\mathbb{E}_{b\in[N^{1/2}]}h_{a+b}$ has large
$U^{3}$-norm.
Let us first show why Claim 2.4 in turn implies that $f_{a}$ has large
$U^{4}$-norm, completing our sketch proof of Lemma 2.3. The expression (2.11)
and the triangle inequality for Gowers norms together imply that largeness of
$\mathbb{E}_{b\in[N^{1/2}]}\left\|h_{a+b}\right\|_{U^{3}}$ implies largeness
of $\mathbb{E}_{b\in[N^{1/2}]}\left\|f_{a}g_{b}\right\|_{U^{3}}$. Utilising
the $b$-periodicity of $g_{b}$ we have
$\left\|f_{a}g_{b}\right\|_{U^{3}}=\mathbb{E}_{k\in[N^{1/2}]}\left\|f_{a}(\cdot)g_{b}(\cdot+bk)\right\|_{U^{3}}.$
(2.12)
The product $f_{a}(\cdot)g_{b}(\cdot+bk)$ resembles a difference function in
the direction $b$. Indeed the Gowers–Cauchy–Schwarz inequality (see [9,
Exercise 1.3.19]) shows that if (2.12) is large (on average over
$b\in[N^{1/2}]$) then so is
$\mathbb{E}_{b,k\in[N^{1/2}]}\left\|\Delta_{bk}f_{a}\right\|_{U^{3}}.$
Largeness of $\left\|f_{a}\right\|_{U^{4}}$ then follows from Claim 2.2.
Finally we sketch the proof of Claim 2.4. The Cauchy–Schwarz inequality allows
us to remove the weight $f_{a}(x)$ from (2.10) and deduce largeness of
$\sum_{x}\sum_{b,b^{\prime}\in[N^{1/2}]}\overline{g_{b}(x)h_{a+b}(x)}g_{b^{\prime}}(x)h_{a+b^{\prime}}(x).$
Using the periodicity properties of $g_{b}$, $g_{b^{\prime}}$ and $h_{a+b}$,
this is approximately equal to
$\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\
k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\overline{g_{b}(x-bk_{1})h_{a+b}(x-(a+b)k_{2})}g_{b^{\prime}}(x-b^{\prime}k_{3})h_{a+b^{\prime}}(x).$
Changing variables in $x$, we obtain largeness of the sum
$\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\
k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\overline{g_{b}(x+(a+b)k_{2}+b^{\prime}k_{3})h_{a+b}(x+bk_{1}+b^{\prime}k_{3})}\\\
g_{b^{\prime}}(x+bk_{1}+(a+b)k_{2})h_{a+b^{\prime}}(x+bk_{1}+(a+b)k_{2}+b^{\prime}k_{3}).$
The point here is that all but the last function have arguments depending on
at most two of the bilinear forms $bk_{1}$, $(a+b)k_{2}$ and
$b^{\prime}k_{3}$. This enables us to employ the
Gowers–Cauchy–Schwarz inequality (in the form of Lemma A.4) to deduce
largeness of a sum similar to
$\sum_{x}\sum_{\begin{subarray}{c}b,b^{\prime}\in[N^{1/2}]\\\
k_{1},k_{2},k_{3}\in[N^{1/2}]\end{subarray}}\Delta_{bk_{1},\,(a+b)k_{2},\,b^{\prime}k_{3}}h_{a+b^{\prime}}(x).$
The utility of this expression is that the directions of the differencing
parameters are all ‘independent’ of the direction of periodicity of
$h_{a+b^{\prime}}$. Indeed the approximate $(a+b^{\prime})$-periodicity of
$h_{a+b^{\prime}}$ means that one can replace $\Delta_{y}h_{a+b^{\prime}}$
with $\mathbb{E}_{k}\Delta_{y+(a+b^{\prime})k}h_{a+b^{\prime}}$ at the cost of
a small error. We thereby obtain largeness of
$\sum_{x}\sum_{b,b^{\prime}\in[N^{1/2}]}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\in[N^{1/2}]\\\
k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime}\in[N^{1/2}]\end{subarray}}\Delta_{bk_{1}+(a+b^{\prime})k_{1}^{\prime},\,(a+b)k_{2}+(a+b^{\prime})k_{2}^{\prime},\,b^{\prime}k_{3}+(a+b^{\prime})k_{3}^{\prime}}h_{a+b^{\prime}}(x).$
(2.13)
For a random triple $(a,b,b^{\prime})\in[N^{1/2}]^{3}$ the greatest common divisors
of the pairs $(b,a+b^{\prime})$, $(a+b,a+b^{\prime})$ and
$(b^{\prime},a+b^{\prime})$ are all small, and these are the pairs appearing
in the differencing parameters of (2.13). The argument used to treat (2.5) may
therefore be employed to replace (2.13) with
$\sum_{x}\sum_{b^{\prime}\in[N^{1/2}]}\sum_{k_{1},k_{2},k_{3}\in[N]}\Delta_{k_{1},k_{2},k_{3}}h_{a+b^{\prime}}(x),$
and thereby yield Claim 2.4.
### 2.2. Degree lowering
After we have shown that $\Lambda(f_{0},f_{1},f_{2})$ is controlled by the
$U^{5}$-norm of $f_{2}$, we carry out a ‘degree lowering’ argument. This
technique originated in the work [5] in finite fields. The basic idea is that,
under certain conditions, one can combine $U^{s}$-control with understanding
of two-term progressions to deduce $U^{s-1}$-control. Repeating this gives a
sequence of implications
$U^{5}\text{-control}\implies U^{4}\text{-control}\implies
U^{3}\text{-control}\implies U^{2}\text{-control}\implies
U^{1}\text{-control}.$
Despite the appearance of the $U^{5}$-norm, $U^{4}$-norm, and $U^{3}$-norm,
the degree lowering argument, both in [5] and here, does not require the
$U^{s}$-inverse theorem for any $s\geq 3$. Instead it relies on Fourier
analysis in the place of these inverse theorems.
Adapting the degree lowering argument of [5] to the integer setting requires
several significant modifications. The first modification is that the
$U^{s}$-control described above is control in terms of the $U^{s}$-norm of the
dual function
$F(x):=\mathbb{E}_{y\in[N^{1/2}]}f_{0}(x-y^{2})f_{1}(x+y-y^{2}).$ (2.14)
Thus, to begin the degree lowering argument, we must show that largeness of
$\Lambda(f_{0},f_{1},f_{2})$ implies largeness of $\|F\|_{U^{5}}$. To do this,
we use a simple Hahn–Banach decomposition as described in [3, Proposition
3.6], for details see §7.
We conclude this section by sketching an instance of degree-lowering: how
$U^{3}$-control of the dual (2.14) implies $U^{2}$-control, starting from the
assumption that
$\|F\|_{U^{3}}^{8}\geq\delta\left\|1_{[N]}\right\|_{U^{3}}^{8}.$
Using the fact that $\|F\|_{U^{3}}^{8}=\sum_{h}\|\Delta_{h}F\|_{U^{2}}^{4}$
and applying the $U^{2}$-inverse theorem, we deduce the existence of a
function $\phi:\mathbb{Z}\to\mathbb{T}$ such that, for $\gg\delta N$
choices of differencing parameter $h$, we have
$\left|\sum_{x\in[N]}\Delta_{h}F(x)e(\phi(h)x)\right|\gg\delta N.$ (2.15)
Note that if, in the above inequality, we could replace the function $\phi(h)$
by a constant $\beta\in\mathbb{T}$ not depending on $h$, then we could easily
deduce largeness of $\|F\|_{U^{2}}$. Indeed, writing $g(h)$ for the phase of
the sum inside absolute values, this would give
$\sum_{x,h}\overline{g(h)}\overline{F(x+h)}F(x)e(\beta
x)\gg\delta^{O(1)}N^{2},$
and the usual argument showing $U^{2}$-control of the equation $x+y=z$ (one
can either use orthogonality and extraction of a large Fourier coefficient,
as in the proof of Lemma A.1, or use two applications of Cauchy–Schwarz)
implies that
$\|F\|_{U^{2}}^{4}\gg\delta^{O(1)}\left\|1_{[N]}\right\|_{U^{2}}^{4}$. It thus
remains to show that such a $\beta$ exists.
Expanding the definition of the difference and dual functions in (2.15), and
using the Cauchy–Schwarz inequality (as is done in greater generality in the
proof of Lemma 6.3), one can show that there exists $h^{\prime}$ such that for
many $h$ satisfying (2.15) we have
$\left|\sum_{x}\sum_{y\in[N^{1/2}]}\Delta_{h-h^{\prime}}f_{0}(x)\Delta_{h-h^{\prime}}f_{1}(x+y)e([\phi(h)-\phi(h^{\prime})][x+y^{2}])\right|\gg\delta^{O(1)}N^{3/2}.$
Further application of Cauchy–Schwarz allows us to remove the difference
functions from the above inequality and deduce largeness of the exponential
sum
$\sum_{z\in[N^{1/2}]}\left|\sum_{y\in[N^{1/2}]}e(2\left[\phi(h)-\phi(h^{\prime})\right]yz)\right|.$
Summing the inner geometric progression and using a Vinogradov-type lemma then
shows that $\phi(h)-\phi(h^{\prime})$ is major arc. There are very few major
arcs, so the pigeonhole principle gives the existence of
$\beta_{0}\in\mathbb{T}$ such that $\phi(h)-\phi(h^{\prime})$ is very close to
$\beta_{0}$ for many $h\in(-N,N)$ that also satisfy (2.15). We may therefore
take $\beta=\beta_{0}+\phi(h^{\prime})$ in the argument following (2.15).
## 3\. PET induction
We prove Theorem 1.6 over the course of §§3–7. We begin in §§3–5 by showing
how our counting operator $\Lambda_{q,N}(f_{0},f_{1},f_{2})$, as defined in
(1.3), is controlled by the $U^{5}$-norm of $f_{2}$. This argument starts with
the PET induction scheme of Bergelson–Leibman [2], which in some sense
‘linearises’ a polynomial progression, replacing univariate polynomials such
as $y^{2}$ with bilinear forms $ah$. The outcome of this procedure is Lemma
3.3.
For the following, we recall our definition (1.11) of $\mu_{H}$.
###### Lemma 3.1 (van der Corput inequality).
Let $f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded and $M,H\geq 1$. Then we have the
estimate
$\biggl{|}\mathbb{E}_{y\in[M]}f(y)\biggr{|}^{2}\leq\frac{M+H}{M}\sum_{h}\mu_{H}(h)\mathbb{E}_{y\in[M]}\Delta_{h}f(y).$
###### Proof.
This is standard, see for instance [8, Lemma 3.1].∎
###### Lemma 3.2 (Difference functions control linear configurations).
Let $f_{i}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions with support in
an interval $I_{i}$ of size $|I_{i}|=N$. Then for any $a,b\in\mathbb{Z}$ and
$1\leq H\leq M$ we have
$\biggl{|}\mathbb{E}_{x\in
I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{8}\\\
\ll\sum_{h}\mu_{H}(h)\mathbb{E}_{x\in
I_{3}}\Delta_{ah_{1},bh_{2},(a+b)h_{3}}f_{3}(x).$ (3.1)
###### Proof.
Applying Cauchy-Schwarz in the $x$ variable gives
$\biggl{|}\mathbb{E}_{x\in
I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{2}\\\
\leq\frac{1}{N}\sum_{x}\bigg{|}\mathbb{E}_{y\in[M]}f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\bigg{|}^{2}.$
Bounding the inner sum using van der Corput’s inequality (Lemma 3.1) and
making the change of variables $x\mapsto x-ay$ (valid since $x$ is ranging
over $\mathbb{Z}$), the latter is at most
$2\sum_{h_{1}}\mu_{H}(h_{1})\mathbb{E}_{x\in
I_{1}}\mathbb{E}_{y\in[M]}\Delta_{ah_{1}}f_{1}(x)\Delta_{bh_{1}}f_{2}(x+(b-a)y)\Delta_{(a+b)h_{1}}f_{3}(x+by).$
Here we may restrict $x$ to $I_{1}$ on observing that the support of
$\Delta_{ah_{1}}f_{1}$ is contained in the support of $f_{1}$. Making use of
the fact that $\mu_{H}$ is a probability measure, we repeat the procedure of
applying Cauchy–Schwarz, van der Corput then a change of variables, to deduce
that
$\biggl{|}\mathbb{E}_{x\in
I_{0}}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+ay)f_{2}(x+by)f_{3}(x+(a+b)y)\biggr{|}^{4}\\\
\leq 8\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\mathbb{E}_{x\in
I_{2}}\mathbb{E}_{y\in[M]}\Delta_{bh_{1},(b-a)h_{2}}f_{2}(x)\Delta_{(a+b)h_{1},bh_{2}}f_{3}(x+ay).$
A final iteration of the same procedure then yields (3.1). ∎
Before embarking on the following, we remind the reader of our convention
(1.2) regarding $M$.
###### Lemma 3.3 (Linearisation).
Let $f_{i}:\mathbb{Z}\to\mathbb{C}$ be $1$-bounded functions, each with
support in the interval $[N]$. Then for any $1\leq H\leq M$ we have
$\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{32}\ll\sum_{a,b,h}\mu_{M}(a)\mu_{M}(b)\mu_{H}(h)\mathbb{E}_{x\in[N]}\Delta_{2q(a+b)h_{1},\,2qbh_{2},\,2qah_{3}}f_{2}(x).$
(3.2)
###### Proof.
We repeat the procedure given in the proof of Lemma 3.2, applying Cauchy-
Schwarz, followed by van der Corput’s inequality and a change of variables. A
first application gives
$\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{2}\leq\\\
2\sum_{a}\mu_{M}(a)\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}\Delta_{a}f_{1}(x)f_{2}\bigl{(}x+qy^{2}-y\bigr{)}\overline{f_{2}\bigl{(}x+q(y+a)^{2}-y\bigr{)}}.$
A second application then gives
$\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|^{4}\ll\sum_{a,b}\mu_{M}(a)\mu_{M}(b)\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{2}(x)\overline{f_{2}\bigl{(}x+2qay+qa^{2}\bigr{)}}\\\
\overline{f_{2}\bigl{(}x+2qby+qb^{2}-b\bigr{)}}f_{2}\bigl{(}x+2q(a+b)y+q(a+b)^{2}-b\bigr{)}.$
Applying Lemma 3.2 to bound the inner sum over $x$ and $y$, we obtain (3.2)
after a final change of variables. ∎
## 4\. An inverse theorem for the arithmetic box norm
The objective in this section is to characterise those 1-bounded functions
$f:\mathbb{Z}\to\mathbb{C}$ with support in $[N]$ for which the following
quantity is large
$\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x).$ (4.1)
One can think of this as an arithmetic analogue of the two-dimensional ‘box
norm’ (2.9). In our eventual application we are able to ensure that $a$ and
$b$ are a generic pair of integers from the interval $[N^{1/2}]$. In
particular, at least one of them has size proportional to $N^{1/2}$ and their
highest common factor is small. One may think of this as a proxy for linear
independence.
We begin by characterising largeness of (4.1) when the directions are coprime.
###### Lemma 4.1 (Inverse theorem for the arithmetic box norm).
Let $a,b$ be positive integers with $\gcd(a,b)=1$. Suppose that
$f:\mathbb{Z}\to\mathbb{C}$ is $1$-bounded with support in the interval $[N]$
and satisfies
$\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x)\geq\delta N.$ (4.2)
Then there exist 1-bounded functions $g,h:\mathbb{Z}\to\mathbb{C}$ such that
* •
$g$ is $a$-periodic, in the sense that $g(x+a)=g(x)$ for all $x$;
* •
$h$ is approximately $b$-periodic, in the sense that for any $\varepsilon>0$
we have
$\\#\left\\{x\in[N]:h(x+by)\neq h(x)\text{ for some }|y|\leq\varepsilon
N/b\right\\}\leq\left(1+\tfrac{2\varepsilon
N}{b}\right)\left(1+\tfrac{N}{a}\right);$
and furthermore
$\biggl{|}\sum_{x}f(x)g(x)h(x)\biggr{|}\geq\delta\left\lfloor
H\right\rfloor^{2}-2\left(\tfrac{H}{a}+\tfrac{Hb}{N}\right)\left\lfloor
H\right\rfloor^{2}.$ (4.3)
###### Remark.
In parsing the above inequalities, it may be helpful to keep in mind that in
our application $a$, $b$ and $H$ are of order $\sqrt{N}$, with $H$
considerably smaller than $a$, in which case the lower bound in (4.3) becomes
$\Omega(\delta H^{2})$.
###### Proof.
The majority of our proof is concerned with manipulating (4.2) until we can
interpret it as a genuine box norm (2.9), and thereby apply the box norm
inverse theorem. The essential observation is that, since $\gcd(a,b)=1$, every
integer $x$ can be uniquely represented in the form
$x=ay+bz\qquad(y\in\mathbb{Z},\ z\in[a]).$
We note that if $x\in[N]$ then the constraint on $z$ forces $y$ to lie in the
range $-b<y<N/a$.
Defining $F:\mathbb{Z}\times\mathbb{Z}\to\mathbb{C}$ by $F(y,z):=f(ay+bz)$,
the left-hand side of (4.2) becomes
$\sum_{y,y^{\prime}\in\mathbb{Z}}\sum_{\begin{subarray}{c}z\in[a]\\\
z^{\prime}\in\mathbb{Z}\end{subarray}}F(y,z)\overline{F(y^{\prime},z)}\overline{F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z).$
If $z^{\prime}$ and $z$ contribute to the above sum then $z^{\prime}\in
z+(-H,H)\subset(-H+1,a+H).$ Hence we can restrict the range of summation of
$z^{\prime}$ to $[a]$, at the cost of perturbing the sum by at most
$2\left\lfloor H\right\rfloor(\frac{N}{a}+b).$ It follows that
$\biggl{|}\sum_{y,y^{\prime}}\sum_{z,z^{\prime}\in[a]}F(y,z)\overline{F(y^{\prime},z)}\overline{F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z)\biggr{|}\\\
\geq\delta N-2\left\lfloor H\right\rfloor\left(\tfrac{N}{a}+b\right).$
We remove the Fejér kernels by Fourier expansion:
$\sum_{\begin{subarray}{c}y,y^{\prime}\\\
z,z^{\prime}\in[a]\end{subarray}}F(y,z)\overline{F(y^{\prime},z)F(y,z^{\prime})}F(y^{\prime},z^{\prime})\mu_{H}(y^{\prime}-y)\mu_{H}(z^{\prime}-z)=\\\
\int_{\mathbb{T}^{2}}\sum_{\begin{subarray}{c}y,y^{\prime}\\\
z,z^{\prime}\in[a]\end{subarray}}F(y,z)\overline{F(y^{\prime},z)F(y,z^{\prime})}F(y^{\prime},z^{\prime})\hat{\mu}_{H}(\alpha)\hat{\mu}_{H}(\beta)e(\alpha(y^{\prime}-y)+\beta(z^{\prime}-z))\mathrm{d}\alpha\mathrm{d}\beta\\\
\leq\left(\int_{\mathbb{T}}|\hat{\mu}_{H}(\alpha)|\mathrm{d}\alpha\right)^{2}\sup_{\alpha,\beta\in\mathbb{T}}\biggl{|}\sum_{\begin{subarray}{c}y,y^{\prime}\\\
z,z^{\prime}\in[a]\end{subarray}}F(y,z)F_{2}(y^{\prime},z)F_{3}(y,z^{\prime})F_{4}(y^{\prime},z^{\prime})\biggr{|},$
where $F_{2}(y^{\prime},z):=\overline{F(y^{\prime},z)}e(-\beta z)$,
$F_{3}(y,z^{\prime}):=\overline{F(y,z^{\prime})}e(-\alpha y)$, and
$F_{4}(y^{\prime},z^{\prime})$ $:=F(y^{\prime},z^{\prime})e(\alpha
y^{\prime}+\beta z^{\prime})$.
We observe that
$\hat{\mu}_{H}(\alpha)=|\hat{1}_{[H]}(\alpha)|^{2}/\left\lfloor
H\right\rfloor^{2}$, which implies that
$\int_{\mathbb{T}}|\hat{\mu}_{H}(\alpha)|\,\mathrm{d}\alpha=\left\lfloor
H\right\rfloor^{-1}$. Therefore
$\biggl{|}\sum_{\begin{subarray}{c}y,y^{\prime}\\\
z,z^{\prime}\in[a]\end{subarray}}F(y,z)F_{2}(y^{\prime},z)F_{3}(y,z^{\prime})F_{4}(y^{\prime},z^{\prime})\biggr{|}\geq\delta\left\lfloor
H\right\rfloor^{2}N-2\left\lfloor
H\right\rfloor^{3}\left(\tfrac{N}{a}+b\right),$ (4.4)
for $1$-bounded functions $F_{i}:\mathbb{Z}\times[a]\to\mathbb{C}$ of the form
$F_{i}(y,z)=f(ay+bz)e(\alpha_{1}y+\alpha_{2}z)$. Since $f$ is supported on
$[N]$, there are exactly $N$ pairs
$(y^{\prime},z^{\prime})\in\mathbb{Z}\times[a]$ for which
$F(y^{\prime},z^{\prime})\neq 0$. Thus, by pigeonholing in $y^{\prime}$ and
$z^{\prime}$ in (4.4) and setting $L(y):=F_{3}(y,z^{\prime})$ and
$R(z):=F_{2}(y^{\prime},z)F_{4}(y^{\prime},z^{\prime})$, we get that
$\biggl{|}\sum_{y}\sum_{z\in[a]}F(y,z)L(y)R(z)\biggr{|}\geq\delta\left\lfloor
H\right\rfloor^{2}-2\left\lfloor
H\right\rfloor^{3}\left(\tfrac{1}{a}+\tfrac{b}{N}\right).$
For each $x\in\mathbb{Z}$, define $l(x)\in\mathbb{Z}$ and $r(x)\in[a]$ by
$x=al(x)+br(x)$, and set $g(x):=R\circ r(x)$ and $h(x):=L\circ l(x)$. Then it
remains to check the invariance properties of $g$ and $h$. To see that
$g(x)=g(x+ay)$ for all $x,y\in\mathbb{Z}$, just note that $r(x)=r(x+ay)$ for
every $x,y\in\mathbb{Z}$.
Finally we establish that, for most $x\in[N]$, we have $h(x)=h(x+bz)$ for all
$|z|\leq\varepsilon N/b$. First note that $l(x)=l(x+bz)$ whenever $\varepsilon
N/b<r(x)\leq a-\varepsilon N/b$. Hence for this to fail, $x$ must lie in one
of at most $1+2\varepsilon N/b$ congruence classes modulo $a$. The number of
such $x$ lying in the interval $[N]$ is at most
$\left(1+\frac{2\varepsilon N}{b}\right)\left(1+\frac{N}{a}\right).$
∎
The lemma also yields a result in the situation in which $\gcd(a,b)>1$. In
proving this we take the opportunity to smooth out the $b$-invariance of $h$
slightly, whilst also giving an explicit description of $h$ in terms of $f$.
More concretely, we replace $h$ with a projection of $fg$ onto cosets of
$b\cdot\mathbb{Z}$.
###### Lemma 4.2.
There exists an absolute constant $c>0$ such that on assuming $1\leq H\leq
c\delta^{3}N^{1/2}$ and $1\leq K\leq c\delta^{2}H^{2}N^{-1/2}$ the following
holds. Let $a,b\in[N^{1/2}]$ with $\gcd(a,b)\leq\delta^{-1}$ and
$a,b\geq\delta N^{1/2}$. Suppose that $f:\mathbb{Z}\to\mathbb{C}$ is
$1$-bounded, supported on the interval $[N]$, and satisfies
$\biggl{|}\sum_{h,x}\mu_{H}(h)\Delta_{ah_{1},bh_{2}}f(x)\biggr{|}\geq\delta
N.$
Then there exists a 1-bounded $a$-periodic function $g$ such that
$\sum_{x}f(x)g(x)\sum_{k}\mu_{K}(k)\overline{f(x+bk)g(x+bk)}\gg\delta^{2}H^{4}/N.$
(4.5)
###### Proof.
Set $q:=\gcd(a,b)\leq\delta^{-1}$. For each $u\in[q]$, define a $1$-bounded
function $f_{u}:\mathbb{Z}\to\mathbb{C}$ by $f_{u}(x):=f(u+qx)$, and let
$I_{u}:=\left\\{x:u+qx\in[N]\right\\}$ denote the interval on which $f_{u}$ is
supported. By the pigeon-hole principle, for some $u$ we have
$\sum_{x,h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\Delta_{\frac{a}{q}h_{1},\frac{b}{q}h_{2}}f_{u}(x)\geq\delta|I_{u}|.$
Note that $\gcd(a/q,b/q)=1$, so by the previous lemma, there exist 1-bounded
functions $g_{u},h_{u}:\mathbb{Z}\to\mathbb{C}$ such that
$\biggl{|}\sum_{x}f_{u}(x)g_{u}(x)h_{u}(x)\biggr{|}\geq\delta\left\lfloor
H\right\rfloor^{2}-2\left(\tfrac{Hq}{a}+\tfrac{Hb}{q|I_{u}|}\right)\left\lfloor
H\right\rfloor^{2}\gg\delta H^{2}.$
Furthermore, $g_{u}$ is $(a/q)$-periodic and
$\\#\left\\{x\in I_{u}:h_{u}(x)\neq h_{u}(x+yb/q)\text{ for some
}|y|\leq\varepsilon|I_{u}|q/b\right\\}\\\
\leq\left(1+\tfrac{2q\varepsilon|I_{u}|}{b}\right)\left(1+\tfrac{q|I_{u}|}{a}\right)\ll\tfrac{N}{a}+\tfrac{\varepsilon
N^{2}}{ab}.$
Defining $g_{u^{\prime}}$ and $h_{u^{\prime}}$ to be identically zero when
$u^{\prime}\neq u$, we set $g(u^{\prime}+qx):=g_{u^{\prime}}(x)$ and
$h(u^{\prime}+qx):=h_{u^{\prime}}(x)$. One can then check that $g$ is
$a$-invariant, that
$\biggl{|}\sum_{x}f(x)g(x)h(x)\biggr{|}\gg\delta H^{2},$
and that
$\\#\left\\{x\in[N]:h(x)\neq h(x+by)\text{ for some }|y|\leq\varepsilon
N/b\right\\}\ll\tfrac{N}{a}+\tfrac{\varepsilon N^{2}}{ab}.$
We may use the latter property to show that, provided $K\geq 1$, we have
$\biggl{|}\sum_{x}f(x)g(x)h(x)-\sum_{x}h(x)\mathbb{E}_{y\in[K]}g(x+by)f(x+by)\biggr{|}\ll\tfrac{NK}{a}.$
Provided that $K\leq c\delta^{2}H^{2}N^{-1/2}$ we deduce that
$\biggl{|}\sum_{x}h(x)\mathbb{E}_{y\in[K]}g(x+by)f(x+by)\biggr{|}\gg\delta
H^{2}.$
One can check that, as a function of $x$, the inner expectation is 1-bounded
with support in $[-2N,2N]$. Applying the Cauchy–Schwarz inequality and
changing variables then gives (4.5). ∎
Finally we observe that a function of the form
$h(x):=\sum_{k}\mu_{K}(k)f(x+bk)$ (4.6)
has nice $b$-periodicity properties.
###### Lemma 4.3.
If $h$ is defined as in (4.6) for some 1-bounded $f$, then $h$ is
$O(K^{-1})$-Lipschitz along $b\cdot\mathbb{Z}$, in that for any
$x,y\in\mathbb{Z}$ we have $h(x+by)=h(x)+O(|y|/K)$.
###### Proof.
Recalling the definition (1.11), note that $\mu_{K}$ is $(2/\left\lfloor
K\right\rfloor)$-Lipschitz, in that $|\mu_{K}(k+y)-\mu_{K}(k)|\leq
2|y|/\left\lfloor K\right\rfloor$ for all $k,y\in\mathbb{Z}$. Hence, for
$|y|\leq K$, a change of variables gives
$|h(x+by)-h(x)|\leq\sum_{k}|\mu_{K}(k-y)-\mu_{K}(k)|\ll\frac{|y|}{K}\sum_{|k|<2K}1.$
∎
## 5\. Quantitative concatenation
The endpoint of this section is to show how our counting operator (1.3) is
controlled by the $U^{5}$-norm. We begin with four technical lemmas. The first
says that convolving Fejér kernels along progressions of coprime common
difference covers a substantial portion of an interval in a somewhat regular
manner, a fact that can be interpreted Fourier analytically in the following.
###### Lemma 5.1.
Let $K,L\geq 1$ and let $a,b$ be integers satisfying $a\geq\delta L$,
$b\geq\delta K$ and $\gcd(a,b)\leq\delta^{-1}$. Then
$\int_{\mathbb{T}}\bigl{|}\widehat{\mu}_{K}(a\beta)\bigr{|}\bigl{|}\widehat{\mu}_{L}(b\beta)\bigr{|}\mathrm{d}\beta\ll\frac{\delta^{-4}}{\left\lfloor
K\right\rfloor\left\lfloor L\right\rfloor}.$
###### Proof.
Expanding Fourier transforms, one can check that
$\int_{\mathbb{T}}\bigl{|}\widehat{\mu}_{K}(a\beta)\bigr{|}\bigl{|}\widehat{\mu}_{L}(b\beta)\bigr{|}\mathrm{d}\beta\\\
=\left\lfloor K\right\rfloor^{-2}\left\lfloor
L\right\rfloor^{-2}\\#\biggl{\\{}(x,y)\in[K]^{2}\times[L]^{2}:a(x_{1}-x_{2})=b(y_{1}-y_{2})\biggr{\\}}.$
Writing $d:=\gcd(a,b)$, the number of solutions to the equation is at most
$\left\lfloor K\right\rfloor\left\lfloor
L\right\rfloor\left(\tfrac{\left\lfloor
K\right\rfloor}{b/d}+1\right)\left(\tfrac{\left\lfloor
L\right\rfloor}{a/d}+1\right).$
The claimed bound follows on inserting the hypotheses $d\leq\delta^{-1}$, $b\geq\delta K$ and $a\geq\delta L$. ∎
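For orientation, we indicate the first identity in the proof: by (1.11) we have $\mu_{K}=(1_{[K]}*1_{-[K]})/\left\lfloor K\right\rfloor^{2}$, so that $\widehat{\mu}_{K}(\theta)=\bigl{|}\widehat{1_{[K]}}(\theta)\bigr{|}^{2}/\left\lfloor K\right\rfloor^{2}\geq 0$, and similarly for $\mu_{L}$. Hence the integral in the statement equals
$\left\lfloor K\right\rfloor^{-2}\left\lfloor L\right\rfloor^{-2}\int_{\mathbb{T}}\bigl{|}\widehat{1_{[K]}}(a\beta)\bigr{|}^{2}\bigl{|}\widehat{1_{[L]}}(b\beta)\bigr{|}^{2}\mathrm{d}\beta,$
and expanding each Fourier transform and using orthogonality of additive characters shows that this integral counts the solutions to $a(x_{1}-x_{2})=b(y_{1}-y_{2})$ displayed in the proof.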
Our next lemma allows us to discard pairs of integers $a,b$ which are not
sufficiently coprime. We exploit this repeatedly.
###### Lemma 5.2.
Let $a_{1},a_{2}$ be fixed integers with $0\leq a_{1},a_{2}\leq M$. Then the number of pairs $(b,c)$ of
integers $0\leq b,c\leq M$ such that $\gcd(a_{1}+b,a_{2}+c)>\delta^{-1}$ is
$\ll\delta M^{2}$.
###### Proof.
Notice that if $d=\gcd(a_{1}+b,a_{2}+c)$ then $d\leq 2M$. Hence
$\displaystyle\sum_{\begin{subarray}{c}0\leq b,c\leq M\\\
\gcd(a_{1}+b,a_{2}+c)>\delta^{-1}\end{subarray}}1\leq\sum_{\delta^{-1}<d\leq
2M}\ \biggl{(}\ \sum_{0\leq m\leq 2M,\ d\mid m}1\biggr{)}^{2}$
$\displaystyle\leq\sum_{\delta^{-1}<d\leq 2M}\left(\frac{2M}{d}+1\right)^{2}$
$\displaystyle\ll M^{2}\sum_{d>\delta^{-1}}\frac{1}{d^{2}}\ll\delta M^{2}.$
∎
The following lemma says that, as $a$ and $h$ range over $[N^{1/2}]$, the
difference function $\Delta_{ah}f$ behaves like $\Delta_{k}f$ with $k\in[N]$,
at least on average.
###### Lemma 5.3.
Let $f:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded function with support in $[N]$.
Suppose that $\delta N^{1/2}\leq H\leq N^{1/2}$ and
$\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\left\|\Delta_{ah}f\right\|_{U^{s}}^{2^{s}}\geq\delta\left\|1_{[N]}\right\|_{U^{s}}^{2^{s}}.$
Then
$\left\|f\right\|_{U^{s+1}}^{2^{s+1}}\gg\delta^{12}\left\|1_{[N]}\right\|_{U^{s+1}}^{2^{s+1}}.$
###### Proof.
Expanding the definition of the $U^{s}$-norm, we have
$\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\left\|\Delta_{ah}f\right\|_{U^{s}}^{2^{s}}\\\
=\sum_{h_{1},\dots,h_{s},x}\overline{\Delta_{h_{1},\dots,h_{s}}f(x)}\mathbb{E}_{a\in[N^{1/2}]}\sum_{h}\mu_{H}(h)\Delta_{h_{1},\dots,h_{s}}f(x+ah).$
Employing the Cauchy–Schwarz inequality to double the $a$ and $h$ variables
gives
$\mathbb{E}_{a,a^{\prime}\in[N^{1/2}]}\sum_{h_{i}}\sum_{x}\sum_{h,h^{\prime}}\mu_{H}(h)\mu_{H}(h^{\prime})\Delta_{h_{1},\dots,h_{s},ah-a^{\prime}h^{\prime}}f(x)\gg\delta^{2}N^{s+1}.$
By Lemma 5.2 and the pigeon-hole principle, we deduce the existence of
$a,a^{\prime}\gg\delta^{2}N^{1/2}$ with $\gcd(a,a^{\prime})\ll\delta^{-2}$
such that
$\sum_{h_{i}}\sum_{x}\sum_{h,h^{\prime}}\mu_{H}(h)\mu_{H}(h^{\prime})\Delta_{h_{1},\dots,h_{s},ah-a^{\prime}h^{\prime}}f(x)\gg\delta^{2}N^{s+1}.$
By Fourier inversion and extraction of a large Fourier coefficient, there
exists $\alpha\in\mathbb{T}$ such that the left-hand side above is at most
$\int_{\mathbb{T}}\left|\widehat{\mu}_{H}(a\beta)\right|\left|\widehat{\mu}_{H}(a^{\prime}\beta)\right|\mathrm{d}\beta\biggl{|}\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s},h_{s+1}}f(x)e(\alpha
h_{s+1})\biggr{|}.$
The result follows on employing Lemma 5.1 and Lemma A.3. ∎
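For the reader tracking exponents: Lemma 5.1 is applied with $K=L=H$ and with $\delta^{2}$ in place of $\delta$, which is permissible since $a,a^{\prime}\gg\delta^{2}N^{1/2}\geq\delta^{2}H$ and $\gcd(a,a^{\prime})\ll\delta^{-2}$; it bounds the integral by $O(\delta^{-8}/\left\lfloor H\right\rfloor^{2})$. Together with Lemma A.3 and the lower bound $\delta^{2}N^{s+1}$ this yields
$\left\|f\right\|_{U^{s+1}}^{2^{s+1}}\gg\delta^{2}\cdot\delta^{8}\cdot H^{2}N^{s+1}\geq\delta^{12}N^{s+2},$
on recalling that $H\geq\delta N^{1/2}$ and that $\left\|1_{[N]}\right\|_{U^{s+1}}^{2^{s+1}}\asymp N^{s+2}$.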
We now prove a similar lemma, but with $\Delta_{ah}f$ replaced by $fg_{a}$
where $g_{a}$ is $a$-periodic. The moral is that these are similar quantities
(on average).
###### Lemma 5.4.
Let $f,g_{a}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions such that $g_{a}$
is $a$-periodic and $\mathrm{supp}(f)\subset[N]$. Suppose that
$\mathbb{E}_{a\in[N^{1/2}]}\left\|fg_{a}\right\|_{U^{s}}^{2^{s}}\geq\delta\left\|1_{[N]}\right\|_{U^{s}}^{2^{s}}.$
Then
$\left\|f\right\|_{U^{s+1}}^{2^{s+1}}\gg\delta^{24}\left\|1_{[N]}\right\|_{U^{s+1}}^{2^{s+1}}.$
###### Proof.
Fix $a\in[N^{1/2}]$. By the periodicity of $g_{a}$ and a change of variables,
we have
$\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s}}g_{a}(x)\Delta_{h_{1},\dots,h_{s}}f(x)=\sum_{h_{i}}\sum_{x}\Delta_{h_{1},\dots,h_{s}}g_{a}(x)\mathbb{E}_{y\in[N^{1/2}]}\Delta_{h_{1},\dots,h_{s}}f(x+ay).$
Notice that the sum over $x$ is non-zero only if $|x|,|h_{i}|<N$, hence by
Cauchy–Schwarz and a change of variables
$\displaystyle\biggl{(}\mathbb{E}_{a\in[N^{1/2}]}\left\|fg_{a}\right\|_{U^{s}}^{2^{s}}\biggr{)}^{2}$
$\displaystyle\ll
N^{s+1}\mathbb{E}_{a\in[N^{1/2}]}\sum_{h_{i}}\sum_{x}\sum_{y}\mu_{N^{1/2}}(y)\Delta_{h_{1},\dots,h_{s},ay}f(x)$
$\displaystyle=N^{s+1}\mathbb{E}_{a\in[N^{1/2}]}\sum_{y}\mu_{N^{1/2}}(y)\left\|\Delta_{ay}f\right\|_{U^{s}}^{2^{s}}.$
The result follows on employing Lemma 5.3. ∎
We are now ready to give the technical heart of this section. The (somewhat
lengthy) assumptions come from our eventual application of Lemma 4.2.
###### Lemma 5.5.
Fix $a\in\mathbb{N}$ and let $\delta N^{1/2}\leq K\leq N^{1/2}$. For each
$b\in[N^{1/2}]$ let $f,g_{b},h_{b}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded
functions such that $\mathrm{supp}(f),\mathrm{supp}(h_{b})\subset[N]$ and
where $g_{b}$ is $b$-periodic. Set
$\tilde{h}_{b}(x):=\sum_{k}\mu_{K}(k)h_{b}(x+(a+b)k)$
and suppose that
$\sum_{\begin{subarray}{c}\delta\sqrt{N}\leq b\leq\sqrt{N}\\\
\gcd(a,b)\leq\delta^{-1}\end{subarray}}\sum_{x}f(x)g_{b}(x)\tilde{h}_{b}(x)\geq\delta
N^{3/2}.$
Then
$\mathbb{E}_{b\in[N^{1/2}]}\big{\|}h_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{208}\left\|1_{[N]}\right\|_{U^{3}}^{8}.$
###### Proof.
We apply Cauchy–Schwarz to remove the weight $f(x)$ and double the $b$
variable, yielding
$\sum_{\begin{subarray}{c}\delta\sqrt{N}\leq b,b^{\prime}\leq\sqrt{N}\\\
\gcd(a,b),\gcd(a,b^{\prime})\leq\delta^{-1}\end{subarray}}\sum_{x}g_{b}(x)\tilde{h}_{b}(x)\overline{g_{b^{\prime}}(x)\tilde{h}_{b^{\prime}}(x)}\geq\delta^{2}N^{2}.$
Employing Lemma 5.2, we may discard those $b,{b^{\prime}}$ for which one of
$\gcd(b^{\prime},a+{b})$ or $\gcd(a+b^{\prime},a+{b})$ is greater than
$C\delta^{-2}$. On combining this with the popularity principle, we deduce the
existence of $\mathcal{B}\subset[\delta N^{1/2},N^{1/2}]$ of size
$|\mathcal{B}|\gg\delta^{2}N^{1/2}$ such that for each $b\in\mathcal{B}$ there
exists $b^{\prime}\in[N^{1/2}]$ with all of $\gcd(b,a+{b})$,
$\gcd({b^{\prime}},a+{b})$, $\gcd(a+b^{\prime},a+{b})$ at most
$O(\delta^{-2})$ and satisfying
$\sum_{x}g_{b}(x)\overline{\tilde{h}_{b^{\prime}}(x)g_{b^{\prime}}(x)}\tilde{h}_{b}(x)\gg\delta^{2}N.$
(5.1)
Expanding the definition of $\tilde{h}_{b^{\prime}}$, using the invariance of
$g_{b}$ and changing variables gives
$\sum_{x}\mathbb{E}_{k_{1},k_{3}\in[K]}\sum_{k_{2}}\mu_{K}(k_{2})g_{b}(x+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\overline{h_{b^{\prime}}(x+bk_{1}+{b^{\prime}}k_{3})}\\\
\overline{g_{b^{\prime}}(x+bk_{1}+(a+b^{\prime})k_{2})}\
\tilde{h}_{b}(x+bk_{1}+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\gg\delta^{2}N.$
Since $h_{b^{\prime}}$ is supported on $[N]$ and $b,{b^{\prime}},K\leq
N^{1/2}$, there are at most $O(N)$ values of $x$ which contribute to the above
sum. Applying Hölder’s inequality then gives
$\sum_{x}\biggl{(}\mathbb{E}_{k_{1},k_{3}\in[K]}\sum_{k_{2}}\mu_{K}(k_{2})g_{b}(x+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\overline{h_{b^{\prime}}(x+bk_{1}+{b^{\prime}}k_{3})}\\\
\overline{g_{b^{\prime}}(x+bk_{1}+(a+b^{\prime})k_{2})}\
\tilde{h}_{b}(x+bk_{1}+(a+b^{\prime})k_{2}+{b^{\prime}}k_{3})\biggr{)}^{8}\gg\delta^{16}N.$
The sum inside the 8th power corresponds to an integral with respect to three
probability measures on $\mathbb{Z}$, with integrand amenable to Lemma A.4.
Combining this with a change of variables gives
$\sum_{x}\sum_{k_{1},k_{2},k_{3}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\Delta_{bk_{1},(a+b^{\prime})k_{2},{b^{\prime}}k_{3}}\
\tilde{h}_{b}(x)\gg\delta^{16}N,$
where we set
$\nu_{K}(k):=\sum_{k_{1}-k_{2}=k}\mu_{K}(k_{1})\mu_{K}(k_{2}).$
By Lemma 4.3, each $\tilde{h}_{b}$ is $O(K^{-1})$-Lipschitz along
$(a+b)\cdot\mathbb{Z}$. Hence, if $l_{i}\in[L]$, a telescoping identity shows
that
$|\Delta_{h_{1}+(a+{b})l_{1},h_{2}+(a+{b})l_{2},h_{3}+(a+{b})l_{3}}\tilde{h}_{b}(x)-\Delta_{h_{1},h_{2},h_{3}}\tilde{h}_{b}(x)|\ll
L/K.$
Taking $L:=c\delta^{16}K$ we obtain
$\sum_{x}\sum_{k_{1},k_{2},k_{3}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\mathbb{E}_{l_{1},l_{2},l_{3}\in[L]}\\\
\Delta_{bk_{1}+(a+{b})l_{1},\,(a+b^{\prime})k_{2}+(a+{b})l_{2},\,{b^{\prime}}k_{3}+(a+{b})l_{3}}\
\tilde{h}_{b}(x)\gg\delta^{16}N.$
We may replace the uniform measure on the $l_{i}$ by Fejér kernels at the cost
of three applications of Cauchy–Schwarz; this gives
$\sum_{x}\sum_{\begin{subarray}{c}k_{1},k_{2},k_{3}\\\
l_{1},l_{2},l_{3}\end{subarray}}\mu_{K}(k_{1})\nu_{K}(k_{2})\mu_{K}(k_{3})\mu_{L}(l_{1})\mu_{L}(l_{2})\mu_{L}(l_{3})\\\
\Delta_{bk_{1}+(a+{b})l_{1},\,(a+b^{\prime})k_{2}+(a+{b})l_{2},\,{b^{\prime}}k_{3}+(a+{b})l_{3}}\
\tilde{h}_{b}(x)\gg\delta^{128}N.$
Write
$\lambda_{1}(h):=\sum_{bk+(a+{b})l=h}\mu_{K}(k)\mu_{L}(l),\qquad\lambda_{2}(h):=\sum_{(a+b^{\prime})k+(a+{b})l=h}\nu_{K}(k)\mu_{L}(l),\qquad\lambda_{3}(h):=\sum_{{b^{\prime}}k+(a+{b})l=h}\mu_{K}(k)\mu_{L}(l).$
Then
$\sum_{x}\sum_{h_{1},h_{2},h_{3}}\lambda_{1}(h_{1})\lambda_{2}(h_{2})\lambda_{3}(h_{3})\\\
\Delta_{h_{1},h_{2},h_{3}}\ \tilde{h}_{b}(x)\gg\delta^{128}N.$
By Fourier inversion and extraction of a large Fourier coefficient, there
exist $\alpha_{i}\in\mathbb{T}$ such that
$\biggl{|}\sum_{x}\sum_{h_{1},h_{2},h_{3}}\Delta_{h_{1},h_{2},h_{3}}\
\tilde{h}_{b}(x)e(\underline{\alpha}\cdot\underline{h})\biggr{|}\prod_{i=1}^{3}\int_{\mathbb{T}}\bigl{|}\widehat{\lambda}_{i}(\beta)\bigr{|}\mathrm{d}\beta\gg\delta^{128}N.$
By our choice of $b$, $b^{\prime}$ (see the paragraph preceding (5.1)),
together with Lemma 5.1, for each $i$ we have
$\int_{\mathbb{T}}\bigl{|}\widehat{\lambda}_{i}(\alpha)\bigr{|}\mathrm{d}\alpha\ll\frac{\delta^{-8}}{KL}\ll\frac{\delta^{-26}}{N},$
(5.2)
the latter following from the fact that $L=c\delta^{16}K$ and $K\geq\delta
N^{1/2}$. On combining this with Lemma A.3 we obtain
$\big{\|}\tilde{h}_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{206}N^{4}.$
Since $\tilde{h}_{b}$ is an average of translates of $h_{b}$, we may apply the
triangle inequality for the $U^{3}$-norm, together with the fact that Gowers
norms are translation invariant, and conclude that
$\left\|h_{b}\right\|_{U^{3}}^{8}\gg\delta^{206}N^{4}$. Summing over
$b\in\mathcal{B}$ gives our final bound. ∎
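For the exponent count: each $b\in\mathcal{B}$ satisfies $\left\|h_{b}\right\|_{U^{3}}^{8}\gg\delta^{206}N^{4}$ and $|\mathcal{B}|\gg\delta^{2}N^{1/2}$, so that
$\mathbb{E}_{b\in[N^{1/2}]}\big{\|}h_{b}\big{\|}_{U^{3}}^{8}\geq\frac{|\mathcal{B}|}{N^{1/2}}\min_{b\in\mathcal{B}}\big{\|}h_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{2}\cdot\delta^{206}N^{4}=\delta^{208}N^{4},$
which is the claimed bound since $\left\|1_{[N]}\right\|_{U^{3}}^{8}\asymp N^{4}$.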
Finally we synthesise Lemmas 3.3, 4.2 and 5.5.
###### Theorem 5.6 (Global $U^{5}$-control).
Let $g_{0},g_{1},f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions, each with
support in $[N]$. Suppose that
$\left|\Lambda_{q,N}(g_{0},g_{1},f)\right|\geq\delta\Lambda_{q,N}(1_{[N]}).$
Then
$\sum_{u\in[q]}\left\|f\right\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}\gg\delta^{2^{25}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}.$
###### Proof.
We recall our convention (1.2) regarding $M$. We begin by applying the
linearisation procedure (Lemma 3.3) to deduce that
$\sum_{a,b\in(-2M,2M)}\
\biggl{|}\sum_{h_{1},h_{2},h_{3}}\mu_{H}(h_{1})\mu_{H}(h_{2})\mu_{H}(h_{3})\sum_{x}\Delta_{q(a+b)h_{1},qbh_{2},qah_{3}}f(x)\biggr{|}\\\
\gg\delta^{32}NM^{2}.$
We note that the sum inside the absolute value is invariant under
$a\mapsto-a$. Hence we may restrict to $a,b\in[0,2M]$ at the cost of changing
the absolute constant. Applying Lemma 5.2 we may discard those $a,b$ for which
either $\gcd(a,b)>C\delta^{-32}$ or $b<c\delta^{32}M$. Partitioning the sum
over $x$ into congruence classes $u\bmod q$, the popularity principle gives:
* •
at least $\Omega(\delta^{32}q)$ residues $u\in[q]$;
* •
for each of which there is a subset $\mathcal{H}$ of $h_{3}\in(-H,H)$ of $\mu_{H}$-measure at least $\Omega(\delta^{32})$, i.e. with $\sum_{h_{3}\in\mathcal{H}}\mu_{H}(h_{3})\gg\delta^{32}$;
* •
for each of which there exist $\Omega(\delta^{32}M)$ values of $a\in[2M]$;
* •
for each of which there are $\Omega(\delta^{32}M)$ values of $b\in[2M]$
satisfying $\gcd(a,b)\ll\delta^{-32}$ and $b\gg\delta^{32}M$;
and together these satisfy
$\biggl{|}\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\sum_{x}\Delta_{(a+b)h_{1},bh_{2},ah_{3}}f(qx-u)\biggr{|}\\\
\gg\delta^{32}M^{2}.$
For fixed $u,h_{3},a$ write $\tilde{f}(x):=\Delta_{ah_{3}}f(qx-u),$ so that
$\tilde{f}$ has support in the interval $[(2M)^{2}]$ and
$\biggl{|}\sum_{h_{1},h_{2}}\mu_{H}(h_{1})\mu_{H}(h_{2})\sum_{x}\Delta_{(a+b)h_{1},bh_{2}}\tilde{f}(x)\biggr{|}\\\
\gg\delta^{32}M^{2}.$
Set
$H:=c\delta^{96}M\qquad\text{and}\qquad K:=c^{3}\delta^{160}M,$ (5.3)
with $c$ sufficiently small to ensure that we may apply Lemma 4.2. This gives
the existence of a 1-bounded $b$-periodic function $g_{b}$ such that on
setting
$\tilde{h}_{b}(x):=\sum_{k}\mu_{K}(k)\overline{\tilde{f}(x+(a+b)k)g_{b}(x+(a+b)k)}$
(5.4)
we have
$\sum_{x}\tilde{f}(x)g_{b}(x)\tilde{h}_{b}(x)\gg\delta^{448}M^{2}.$
Setting $\eta:=c\delta^{480}$ for some small absolute constant $c>0$, we may
sum over our set of permissible $b$ to deduce that
$\sum_{\begin{subarray}{c}\eta M\leq b\leq 2M\\\
\gcd(a,b)\leq\eta^{-1}\end{subarray}}\sum_{x}\tilde{f}(x)g_{b}(x)\tilde{h}_{b}(x)\geq\eta
M^{3}.$
The hypotheses of Lemma 5.5 having been met, we conclude that
$\mathbb{E}_{b\in[2M]}\big{\|}\tilde{f}g_{b}\big{\|}_{U^{3}}^{8}\gg\delta^{99,840}\left\|1_{[M^{2}]}\right\|_{U^{3}}^{8}.$
Applying Lemma 5.4 then gives
$\big{\|}\tilde{f}\big{\|}_{U^{4}}^{16}\gg\delta^{2,396,160}\left\|1_{[M^{2}]}\right\|_{U^{4}}^{16}.$
Recalling that $\tilde{f}(x)=\Delta_{ah_{3}}f_{u}(x)$ where
$f_{u}(x):=f(qx-u)$, we may integrate over the set of permissible $h_{3}$ and
$a$, utilising positivity to extend the range of summation, and deduce that
$\mathbb{E}_{a\in[2M]}\sum_{h_{3}}\mu_{H}(h_{3})\big{\|}\Delta_{ah_{3}}f_{u}\big{\|}_{U^{4}}^{16}\gg\delta^{2,396,224}\left\|1_{[M^{2}]}\right\|_{U^{4}}^{16}.$
Using Lemma 5.3 and summing over the permissible range of $u$ we get that
$\mathbb{E}_{u\in[q]}\left\|f_{u}\right\|_{U^{5}}^{32}\gg\delta^{28,754,720}\left\|1_{[M^{2}]}\right\|_{U^{5}}^{32},$
and the result follows. ∎
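We record the bookkeeping behind the exponent $2^{25}$: the application of Lemma 4.2 produces correlations of size $\delta^{448}M^{2}$, and summing over the $\Omega(\delta^{32}M)$ permissible values of $b$ leads to $\eta=c\delta^{480}$; Lemma 5.5 then gives $\eta^{208}\gg\delta^{480\times 208}=\delta^{99,840}$; Lemma 5.4 raises this to the power $24$, giving $\delta^{2,396,160}$; the two factors of $\delta^{32}$ coming from the permissible sets of $h_{3}$ and of $a$ give $\delta^{2,396,224}$; and Lemma 5.3 (a power of $12$) together with the $\delta^{32}$-dense set of residues $u$ gives $12\times 2,396,224+32=28,754,720$. Since $28,754,720\leq 2^{25}=33,554,432$ and $\delta\leq 1$, the exponent $\delta^{2^{25}}$ in the statement is indeed permissible.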
## 6\. Degree lowering
So far, we have shown that $\Lambda_{q,N}(f_{0},f_{1},f_{2})$ is controlled by
$\mathbb{E}_{u\in[q]}\|f_{2}\|_{U^{5}(u+q\mathbb{Z})}^{2^{5}}$ whenever
$f_{0},f_{1},$ and $f_{2}$ are $1$-bounded complex-valued functions supported
on the interval $[N]$. The next step in our argument is to bound
$\Lambda_{q,N}(f_{0},f_{1},f_{2})$ in terms of the $U^{5}(u+q\mathbb{Z})$-norm
of the dual function
$F(x):=\mathbb{E}_{y\in[M]}f_{0}(x-qy^{2})f_{1}(x+y-qy^{2}).$ (6.1)
We postpone this deduction until §7. In this section we show how
$U^{5}$-control of the dual implies $U^{2}$-control.
Our argument combines three simple lemmas: Weyl’s inequality; what we call
‘dual–difference interchange’, which allows us to replace the difference
function of the dual by the dual of the difference functions; and the fact
that a function whose difference functions correlate with ‘low rank’ Fourier
coefficients must have a large uniformity norm of lower degree.
The following log-free variant of Weyl’s inequality can be found in [4, Lemma
A.11].
###### Lemma 6.1 (Weyl’s inequality).
There exists an absolute constant $C$ such that the following holds. Let
$\alpha,\beta\in\mathbb{T}$, $\delta\in(0,1)$ and let $I\subset\mathbb{Z}$ be
an interval with $|I|\geq C\delta^{-6}$ and
$\big{|}\mathbb{E}_{y\in I}e(\alpha y^{2}+\beta y)\big{|}\geq\delta.$
Then there exists a positive integer $q\ll\delta^{-4}$ such that
$\|q\alpha\|\ll\delta^{-14}|I|^{-2}.$
This has the following consequence, which uses our convention (1.2) regarding
$M$.
###### Lemma 6.2.
There exists an absolute constant $C$ such that for $N\geq C(q/\delta)^{C}$ the
following holds. Suppose that for $\alpha\in\mathbb{T}$ there are $1$-bounded
functions $g_{0},g_{1}:\mathbb{Z}\to\mathbb{C}$ supported on the interval
$[N]$ such that
$\left|\sum_{x}\sum_{y\in[M]}g_{0}(qx)g_{1}(qx+y)e(\alpha(x+y^{2}))\right|\geq\delta
MN/q.$
Then there exists a positive integer $q^{\prime}\ll\delta^{-4}$ such that
$\|q^{\prime}q^{2}\alpha\|\ll\delta^{-14}q^{3}/N$.
###### Proof.
We split the sum over $y\in[M]$ into arithmetic progressions modulo $q$ and
split the sum over $x$ into intervals of length $M/q$. Hence, by the pigeon-
hole principle, there exists $u\in[q]$ and an integer $m$ such that on
rounding the sum over $y$ we have
$\left|\sum_{x,y\in[M/q]}g_{0}(q(m+x))g_{1}(u+q(m+x+y))e\left(\alpha\left(x+(u+qy)^{2}\right)\right)\right|\\\
\gg\delta(M/q)^{2}.$
Define the functions
$h_{0}(x):=g_{0}(q(m+x))e(\alpha x)1_{[M/q]}(x),\qquad h_{1}(x):=g_{1}(u+q(m+x))1_{[2M/q]}(x),\qquad h_{2}(x):=e\left(\alpha(u+qx)^{2}\right)1_{[M/q]}(x).$
Then by orthogonality, extraction of a large Fourier coefficient and Parseval
we have
$\displaystyle\delta
M^{2}/q^{2}\ll\left|\int_{\mathbb{T}}\hat{h}_{0}(\beta)\hat{h}_{1}(-\beta)\hat{h}_{2}(\beta)\mathrm{d}\beta\right|\ll\big{\|}\hat{h}_{2}\big{\|}_{\infty}\big{\|}\hat{h}_{0}\big{\|}_{L^{2}}\big{\|}\hat{h}_{1}\big{\|}_{L^{2}}\ll\big{\|}\hat{h}_{2}\big{\|}_{\infty}M/q.$
It follows that there exists $\beta\in\mathbb{T}$ such that
$\left|\sum_{x\in[M/q]}e\left(\alpha(u+qx)^{2}+\beta x\right)\right|\gg\delta
M/q.$
Applying Weyl’s inequality, we deduce the existence of
$q^{\prime}\ll\delta^{-4}$ such that
$\left\|q^{\prime}q^{2}\alpha\right\|\ll\delta^{-14}(q/M)^{2}$. ∎
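Regarding the final step: recalling the convention (1.2) that $M=\left\lfloor(N/q)^{1/2}\right\rfloor$, the hypothesis $N\geq C(q/\delta)^{C}$ ensures $M\gg(N/q)^{1/2}$, whence $\delta^{-14}(q/M)^{2}\ll\delta^{-14}q^{3}/N$, as claimed in the statement.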
###### Lemma 6.3 (Dual–difference interchange).
For each $y\in[M]$, let $F_{y}:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded
function with support in an interval of length $N$. Set
$F(x):=\mathbb{E}_{y\in[M]}F_{y}(x).$
Then for any function $\phi:\mathbb{Z}^{s}\to\mathbb{T}$ and finite set
$\mathcal{H}\subset\mathbb{Z}^{s}$ we have
$\left(N^{-s-1}\sum_{\underline{h}\in\mathcal{H}}\left|\sum_{x}\Delta_{\underline{h}}F(x)e\bigl{(}\phi(\underline{h})x\bigr{)}\right|\right)^{2^{s}}\ll_{s}\\\
N^{-2s-1}\sum_{\underline{h}^{0},\underline{h}^{1}\in\mathcal{H}}\left|\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1})x\bigr{)}\right|,$
where
$\phi(\underline{h}^{0};\underline{h}^{1}):=\sum_{\omega\in\left\\{0,1\right\\}^{s}}(-1)^{|\omega|}\phi(\underline{h}^{\omega})\qquad\text{and}\qquad\underline{h}^{\omega}:=(h_{1}^{\omega_{1}},\dots,h_{s}^{\omega_{s}}).$
###### Proof.
We proceed by induction on $s\geq 0$, the base case being an identity. Suppose
then that $s\geq 1$. For $\underline{h}\in\mathbb{Z}^{s-1}$ and
$h\in\mathbb{Z}$, we note that
$\Delta_{(\underline{h},h)}F(x)=\Delta_{\underline{h}}\left(\mathbb{E}_{y,y^{\prime}\in[M]}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}\right).$
Hence by the induction hypothesis
$\left(N^{-s-1}\sum_{h}\sum_{\begin{subarray}{c}\underline{h}\\\
(\underline{h},h)\in\mathcal{H}\end{subarray}}\left|\sum_{x}\Delta_{(\underline{h},h)}F(x)e\bigl{(}\phi(\underline{h})x\bigr{)}\right|\right)^{2^{s}}\ll_{s}\\\
\left(N^{-2s}\sum_{h}\sum_{\begin{subarray}{c}\underline{h}^{0},\underline{h}^{1}\\\
(\underline{h}^{i},h)\in\mathcal{H}\end{subarray}}\left|\sum_{x}\mathbb{E}_{y,y^{\prime}\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h)x\bigr{)}\right|\right)^{2},$
where
$\phi(\underline{h}^{0};\underline{h}^{1};h):=\sum_{\omega\in\left\\{0,1\right\\}^{s-1}}(-1)^{|\omega|}\phi(\underline{h}^{\omega},h).$
Letting $e(\psi(\underline{h}^{0};\underline{h}^{1};h))$ denote the phase of the expression inside the inner absolute value, we take the sum over $h$ inside and apply Cauchy–Schwarz to obtain
$\left(\sum_{\underline{h}^{0},\underline{h}^{1},x}\mathbb{E}_{y,y^{\prime}\in[M]}\sum_{\begin{subarray}{c}h\\\
(\underline{h}^{i},h)\in\mathcal{H}\end{subarray}}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y^{\prime}}(x+h)}e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h)x+\psi(\underline{h}^{0};\underline{h}^{1};h)\bigr{)}\right)^{2}\\\
\leq
N^{2s-1}\sum_{\underline{h}^{0},\underline{h}^{1}}\sum_{\begin{subarray}{c}h^{0},h^{1}\\\
(\underline{h}^{i},h^{j})\in\mathcal{H}\end{subarray}}\\\
\left|\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}F_{y}(x)\overline{F_{y}(x+h^{0}-h^{1})}e\Bigl{(}\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1};h^{0})-\phi(\underline{h}^{0};\underline{h}^{1};h^{1})\bigr{)}x\Bigr{)}\right|.$
The result follows. ∎
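To orient the reader, the case $s=1$ of the lemma reads
$\left(N^{-2}\sum_{h\in\mathcal{H}}\left|\sum_{x}\Delta_{h}F(x)e\bigl{(}\phi(h)x\bigr{)}\right|\right)^{2}\ll N^{-3}\sum_{h^{0},h^{1}\in\mathcal{H}}\left|\sum_{x}\mathbb{E}_{y\in[M]}F_{y}(x)\overline{F_{y}(x+h^{0}-h^{1})}e\bigl{(}(\phi(h^{0})-\phi(h^{1}))x\bigr{)}\right|:$
the difference function of the dual on the left has been exchanged for the dual of difference functions on the right, at the cost of doubling the $h$ variable.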
If $\phi(h_{1},\dots,h_{s-1})$ is a function of $s-1$ variables we write
$\phi(h_{1},\dots,\hat{h}_{i},\dots,h_{s}):=\phi(h_{1},\dots,h_{i-1},h_{i+1},\dots,h_{s}).$
We say that $\phi(h_{1},\dots,h_{s})$ is _low rank_ if there exist functions
$\phi_{i}(h_{1},\dots,h_{s-1})$ such that
$\phi(h_{1},\dots,h_{s})=\sum_{i=1}^{s}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s}).$
From the definition of the Gowers norm together with the $U^{2}$-inverse
theorem (Lemma A.1), one can show that largeness of the $U^{s+2}$-norm is
equivalent to the existence of $\phi:\mathbb{Z}^{s}\to\mathbb{T}$ such that
$\sum_{h_{1},\dots,h_{s}}\left|\sum_{x}\Delta_{h}f(x)e(\phi(h)x)\right|\gg
N^{s+1}.$
The following lemma says that if $\phi$ is low-rank, then the $U^{s+1}$-norm
must also be large.
###### Lemma 6.4 (Low rank correlation implies lower degree).
Let $f:\mathbb{Z}\to\mathbb{C}$ be a 1-bounded function with support in $[N]$.
Then for $\phi_{1},\dots,\phi_{m}:\mathbb{Z}^{s-1}\to\mathbb{T}$ with $m\leq
s$ we have
$\frac{1}{N^{s+1}}\sum_{h_{1},\dots,h_{s}}\left|\sum_{x}\Delta_{h}f(x)e\left(\sum_{i=1}^{m}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})x\right)\right|\\\
\ll_{m}\left(\frac{\left\|f\right\|_{U^{s+1}}^{2^{s+1}}}{N^{s+2}}\right)^{2^{-m-1}}.$
(6.2)
###### Proof.
We proceed by induction on $m\geq 0$, the base case corresponding to the
Cauchy–Schwarz inequality. Suppose then that $m\geq 1$ and the result is true
for smaller values of $m$. Letting $e(\psi(h))$ denote the phase of the inner-
most sum, the left-hand side of (6.2) is equal to
$\frac{1}{N^{s+1}}\sum_{h_{2},\dots,h_{s},x}\Delta_{h_{2},\dots,h_{s}}f(x)e\left(\phi_{1}(h_{2},\dots,h_{s})x\right)\sum_{h_{1}}\Delta_{h_{2},\dots,h_{s}}\overline{f}(x+h_{1})\\\
e\left(\sum_{i=2}^{m}\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})x+\psi(h_{1},\dots,h_{s})\right).$
By Cauchy–Schwarz, the square of this is at most
$\frac{1}{N^{s+2}}\sum_{h_{2},\dots,h_{s}}\
\sum_{h_{1},h_{1}^{\prime}\in(-N,N)}\\\
\left|\sum_{x}\Delta_{h_{1}-h_{1}^{\prime},h_{2},\dots,h_{s}}f(x)e\left(\sum_{i=2}^{m}\left(\phi_{i}(h_{1},\dots,\hat{h}_{i},\dots,h_{s})-\phi_{i}(h_{1}^{\prime},\dots,\hat{h}_{i},\dots,h_{s})\right)x\right)\right|.$
Taking a maximum over $h_{1}^{\prime}\in(-N,N)$ and changing variables in
$h_{1}$, the latter is at most an absolute constant times
$\frac{1}{N^{s+1}}\sum_{h_{1},h_{2},\dots,h_{s}}\Bigg{|}\sum_{x}\Delta_{h_{1},h_{2},\dots,h_{s}}f(x)\\\
e\left(\sum_{i=2}^{m}\left(\phi_{i}(h_{1}+h_{1}^{\prime},h_{2},\dots,\hat{h}_{i},\dots,h_{s})-\phi_{i}(h_{1}^{\prime},h_{2},\dots,\hat{h}_{i},\dots,h_{s})\right)x\right)\Bigg{|}.$
This phase has lower rank than the original, hence we may apply the induction
hypothesis to yield the lemma. ∎
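For instance, the base case $m=0$ of (6.2) is the estimate
$\frac{1}{N^{s+1}}\sum_{h_{1},\dots,h_{s}}\left|\sum_{x}\Delta_{h}f(x)\right|\ll\left(\frac{\left\|f\right\|_{U^{s+1}}^{2^{s+1}}}{N^{s+2}}\right)^{1/2},$
which follows on applying the Cauchy–Schwarz inequality in the $h$ variables (these range over $(-N,N)^{s}$, since $f$ is supported in $[N]$) and then noting that the change of variables $x^{\prime}=x+h_{s+1}$ gives $\sum_{h_{1},\dots,h_{s}}\bigl{|}\sum_{x}\Delta_{h}f(x)\bigr{|}^{2}=\left\|f\right\|_{U^{s+1}}^{2^{s+1}}$.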
###### Lemma 6.5 (Degree lowering).
There exists an absolute constant $C$ such that for $N\geq C(q/\delta)^{C}$ the
following holds. Let $f_{0},f_{1}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded
functions with support in $[N]$ and define the dual
$F(x):=\mathbb{E}_{y\in[M]}f_{0}(x-qy^{2})f_{1}(x+y-qy^{2}).$
If, for $s\geq 3$, we have
$\sum_{u\in[q]}\left\|F\right\|_{U^{s}(u+q\cdot\mathbb{Z})}^{2^{s}}\geq\delta\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{s}(u+q\cdot\mathbb{Z})}^{2^{s}},$
then
$\sum_{u\in[q]}\left\|F\right\|_{U^{s-1}(u+q\cdot\mathbb{Z})}^{2^{s-1}}\gg_{s}\delta^{4^{s+2}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{s-1}(u+q\cdot\mathbb{Z})}^{2^{s-1}}.$
###### Proof.
Write $M:=\left\lfloor(N/q)^{1/2}\right\rfloor$. Given $u\in[q]$ let
$F_{u}(x):=F(u+qx)$, a function with support in the interval $[2N/q]$.
Applying the popularity principle, there exists a set of $\Omega(\delta q)$
residues $u\in[q]$ for which
$\left\|F_{u}\right\|_{U^{s}}^{2^{s}}\gg\delta(N/q)^{s+1}$. Expanding the
definition of the $U^{s}$-norm (1.9) we have
$\sum_{h_{1},\dots,h_{s-2}}\left\|\Delta_{h_{1},\dots,h_{s-2}}F_{u}\right\|_{U^{2}}^{4}\gg\delta(N/q)^{s+1}.$
Applying the $U^{2}$-inverse theorem (Lemma A.1), there exists
$\mathcal{H}\subset(-2N/q,2N/q)^{s-2}$ of size
$|\mathcal{H}|\gg\delta(N/q)^{s-2}$ and a function
$\phi:\mathbb{Z}^{s-2}\to\mathbb{T}$ such that for every
$\underline{h}\in\mathcal{H}$ we have
$\left|\sum_{x}\Delta_{\underline{h}}F_{u}(x)e\bigl{(}\phi(\underline{h})x\bigr{)}\right|\gg\delta
N/q.$ (6.3)
Set $T:=\left\lceil C\delta^{-1}N/q\right\rceil$, with $C$ an absolute
constant taken sufficiently large to ensure that, on rounding
$\phi(\underline{h})$ to the nearest fraction of the form $t/T$, the validity
of (6.3) remains. Summing over $\underline{h}\in\mathcal{H}$ and applying
Lemma 6.3, we deduce that
$\sum_{\underline{h}^{0},\underline{h}^{1}\in\mathcal{H}}\biggl{|}\sum_{x}\mathbb{E}_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{0}(u+qx-qy^{2})\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{1}(u+qx+y-qy^{2})\\\
e\bigl{(}\phi(\underline{h}^{0};\underline{h}^{1})x\bigr{)}\biggr{|}\gg_{s}\delta^{2^{s-1}}(N/q)^{2s-1}.$
Applying the pigeon-hole and popularity principle, there exists
$\mathcal{H}^{\prime}\subset\mathcal{H}$ of size
$\Omega_{s}(\delta^{2^{s-1}}(N/q)^{s-2})$ and
$\underline{h}^{1}\in\mathcal{H}$ such that for every
$\underline{h}^{0}\in\mathcal{H}^{\prime}$ we have
$\left|\sum_{x}\sum_{y\in[M]}\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{0}(u+qx-
qy^{2})\Delta_{\underline{h}^{0}-\underline{h}^{1}}f_{1}(u+qx+y-qy^{2})e\bigl{(}\phi(\underline{h}^{0},\underline{h}^{1})x\bigr{)}\right|\\\
\gg\delta^{2^{s-1}}MN/q.$
By Lemma 6.2, for each $\underline{h}^{0}\in\mathcal{H}^{\prime}$ there exists
$q^{\prime}\ll\delta^{-2^{s+1}}$ such that
$\left\|q^{\prime}q^{2}\phi(\underline{h}^{0},\underline{h}^{1})\right\|\ll\delta^{-2^{s}\times
7}q^{3}/N.$
Notice that $\phi(\underline{h}^{0},\underline{h}^{1})$ is an element of the
additive group $\left\\{t/T:t\in[T]\right\\}\subset\mathbb{T}$. Moreover, for
any $Q_{i}$ we have the inclusion
$\left\\{\alpha\in\mathbb{T}:\exists q^{\prime}\leq Q_{1}\text{ with
}\left\|q^{\prime}q^{2}\alpha\right\|\leq
Q_{2}q^{3}/N\right\\}\subset\bigcup_{\begin{subarray}{c}1\leq a\leq q\leq
Q_{1}\\\
\mathrm{hcf}(a,q)=1\end{subarray}}\left[\frac{a}{q^{\prime}q^{2}}-\frac{Q_{2}}{N},\frac{a}{q^{\prime}q^{2}}+\frac{Q_{2}}{N}\right].$
By a volume packing argument, the number of $t/T$ lying in this union of
intervals is at most $O\left(Q_{1}^{2}(1+\tfrac{Q_{2}T}{N})\right)$. It
therefore follows from the pigeon-hole principle that there exists
$\mathcal{H}^{\prime\prime}\subset\mathcal{H}^{\prime}$ of size
$\Omega\left(\delta^{2^{s+3}+1-2^{s}}(N/q)^{s-2}\right)$ and $t_{0}\in[T]$
such that for any $\underline{h}^{0}\in\mathcal{H}^{\prime\prime}$ we have
$\phi(\underline{h}^{0},\underline{h}^{1})=t_{0}/T$. In particular, when
restricted to the set $\mathcal{H}^{\prime\prime}$, the function $\phi$
satisfies
$\phi(\underline{h}^{0})=t_{0}/T-\sum_{\omega\in\left\\{0,1\right\\}^{s-2}\setminus\left\\{0\right\\}}(-1)^{|\omega|}\phi(\underline{h}^{\omega}).$
The right-hand side of this identity is clearly _low rank_ according to the
terminology preceding Lemma 6.4.
Summing over $\underline{h}\in\mathcal{H}^{\prime\prime}$ in (6.3), we deduce
the existence of a low rank function $\psi:\mathbb{Z}^{s-2}\to\mathbb{T}$ such
that
$\sum_{\underline{h}}\left|\sum_{x}\Delta_{\underline{h}}F_{u}(x)e\bigl{(}\psi(\underline{h})x\bigr{)}\right|\gg\delta^{2^{s+3}+1-2^{s}}(N/q)^{s-1}.$
Employing Lemma 6.4 then gives
$\left\|F_{u}\right\|_{U^{s-1}}^{2^{s-1}}\gg\delta^{(2^{s+3}+1-2^{s})2^{s+1}}(N/q)^{s}.$
Summing over permissible $u$, then extending to the full sum over $u\in[q]$ by
positivity, we obtain the bound claimed in the lemma. ∎
## 7\. Proof of the cut norm inverse theorem
In this section we complete our proof of Theorem 1.6. We first show how the
dual function is controlled by the $U^{5}$-norm, and hence by the degree
lowering of §6, the dual is controlled by the $U^{2}$-norm.
The following can be found in the discussion following [3, Proposition 3.6].
Although the statement therein is for norms, and not seminorms, one can check
that the (simple) argument remains valid in this greater generality. (On occasion the relevant results in [3] appear to assume that unit balls are _bounded_, taking the definition of a _convex body_ to be a compact convex set with non-empty interior; this may not be true for the unit ball of a seminorm. However, the boundedness assumption is not necessary in the pertinent proofs. Moreover, one could quotient by the norm zero set to obtain a genuine norm.)
###### Lemma 7.1.
Let $\|\cdot\|$ be a seminorm on the space of complex-valued functions
supported on $[N]$. For any such function $f$ and $\varepsilon>0$ there exists
a decomposition $f=f_{str}+f_{unf}$ such that
$\left\|f_{str}\right\|^{*}\leq\varepsilon^{-1}\left\|f\right\|_{2}\quad\text{and}\quad\left\|f_{unf}\right\|\leq\varepsilon\left\|f\right\|_{2}.$
###### Lemma 7.2 ($U^{5}$-control of the dual).
There exists an absolute constant $C$ such that for $N\geq Cq\delta^{-C}$ the
following holds. Let $g_{0},g_{1},f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded
functions, each with support in $[N]$. Suppose that
$\left|\Lambda_{q,N}(g_{0},g_{1},f)\right|\geq\delta\Lambda_{q,N}(1_{[N]}).$
Then, on defining the dual
$G(x):=\mathbb{E}_{y\in[M]}g_{0}(x-qy^{2})g_{1}(x+y-qy^{2}),$ (7.1)
we have
$\sum_{u\in[q]}\left\|G\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}\gg\delta^{2^{26}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}.$
###### Proof.
Applying Lemma 7.1 to $f$ with
$\left\|\cdot\right\|:=\left\|\cdot\right\|^{\sharp}_{q}$ as defined in (1.5)
and $\varepsilon:=\tfrac{1}{2}\delta\Lambda_{q,N}(1_{[N]})N^{-1/2}$, we deduce
that
$|\Lambda_{q,N}(g_{0},g_{1},f_{str})|\geq\delta\Lambda_{q,N}(1_{[N]})-|\Lambda_{q,N}(g_{0},g_{1},f_{unf})|\\\
\geq\delta\Lambda_{q,N}(1_{[N]})-\left\|f_{unf}\right\|_{q,N}^{\sharp}\geq\tfrac{1}{2}\delta\Lambda_{q,N}(1_{[N]}).$
We note that our lower bound assumption on $N$ implies that
$\Lambda_{q,N}\left(1_{[N]}\right)\gg 1$. Hence the dual inequality (1.10)
gives
$\delta\ll N^{-1}|\left\langle
f_{str},G\right\rangle|\ll\delta^{-1}\left\|G\right\|^{\sharp}_{q}.$
Invoking Theorem 5.6 yields the result. ∎
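For the exponent: the chain of inequalities above gives $\left\|G\right\|^{\sharp}_{q}\gg\delta^{2}$. Assuming, as the definition (1.5) suggests, that this sharp norm is realised by a counting operator against 1-bounded functions, Theorem 5.6 applies with $\delta^{2}$ in place of $\delta$, and $(\delta^{2})^{2^{25}}=\delta^{2^{26}}$ gives the exponent in the statement.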
Taken together, the work in §§3–6 gives the following.
###### Proof of Theorem 1.6.
Applying Lemma 7.2, we deduce that
$\sum_{u\in[q]}\left\|G\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}}\gg\delta^{2^{26}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{5}(u+q\cdot\mathbb{Z})}^{2^{5}},$
where $G$ is defined as in (7.1).
We now apply Lemma 6.5 three times. The first application gives
$\sum_{u\in[q]}\left\|G\right\|_{U^{4}(u+q\cdot\mathbb{Z})}^{2^{4}}\gg\delta^{2^{40}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{4}(u+q\cdot\mathbb{Z})}^{2^{4}},$
a second replaces $U^{4}$ with $U^{3}$ at the cost of replacing
$\delta^{2^{40}}$ with $\delta^{2^{52}}$. With a final application, we obtain
$\sum_{u\in[q]}\left\|G\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}\gg\delta^{2^{62}}\sum_{u\in[q]}\left\|1_{[N]}\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}.$
Let $\eta:=\delta^{2^{62}}$. By the popularity principle, there are at least
$\Omega(\eta q)$ values of $u\in[q]$ for which
$\left\|G\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}\gg\eta\left\|1_{[N]}\right\|_{U^{2}(u+q\cdot\mathbb{Z})}^{4}$.
The inverse theorem for the $U^{2}$-norm then gives the existence of
$\phi(u)\in\mathbb{T}$ for which
$\left|\sum_{x}G(u+qx)e(\phi(u)x)\right|\gg\eta^{1/2}N/q.$ (7.2)
Set $T:=\left\lceil C\eta^{-1/2}N/q\right\rceil$, with $C$ an absolute
constant taken sufficiently large to ensure that, on rounding $\phi(u)$ to the
nearest fraction of the form $t/T$, the inequality (7.2) remains valid.
By Lemma 6.2, for each $u$ satisfying (7.2), there exists a positive integer
$q^{\prime}\ll\eta^{-2}$ such that
$\|q^{\prime}q^{2}\phi(u)\|\ll\eta^{-7}q^{3}/N$. By a volume packing argument
similar to that given in the proof of Lemma 6.5, the function $\phi$ is
constant on a proportion of at least $\Omega\bigl{(}\eta^{11}\bigr{)}$ of the
residues $u\in[q]$ satisfying (7.2). Summing over these $u$, then extending
the sum to all of $[q]$, we deduce the existence of $\alpha\in\mathbb{T}$ and
$q^{\prime}\ll\eta^{-2}$ such that
$\|q^{\prime}q^{2}\alpha\|\ll\eta^{-7}q^{3}/N$ and
$\sum_{u\in[q]}\left|\sum_{x}G(u+qx)e(\alpha x)\right|\gg\eta^{12}N.$ (7.3)
Expanding the dual function, there is a 1-bounded function $\psi(u\bmod q)$
such that the left-hand side of the above is equal to
$\sum_{u\in[q]}\psi(u\bmod q)\sum_{x\equiv
u(q)}\mathbb{E}_{y\in[M]}g_{0}(x-qy^{2})g_{1}(x+y-qy^{2})e(\alpha x/q)\\\
=\sum_{x}g_{0}(x)\psi(x\bmod q)e(\alpha
x/q)\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha y^{2}).$ (7.4)
Let us first suppose that $f=g_{0}$; we deal with the case $f=g_{1}$ shortly.
Setting
$\phi(x):=\psi(x\bmod q)e(\alpha x/q)\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha
y^{2}),$
we have $\left\langle f,\overline{\phi}\right\rangle\gg\eta^{12}N$. Our aim is
to show that $\phi$ can be approximated by a local function of the type
claimed in the lemma.
We begin by removing the phase from the expectation over $[M]$, at the cost of
passing to shorter progressions. Let $M^{\prime}\leq M/q^{\prime}q^{2}$ be a
quantity to be determined. If $y\in[M^{\prime}]$ then for any
$m\in[-M,M]\cap\mathbb{Z}$ we have
$\left|e(\alpha(m+q^{\prime}q^{2}y)^{2})-e(\alpha
m^{2})\right|\ll\left\|\alpha\left(2mq^{\prime}q^{2}y+(q^{\prime}q^{2}y)^{2}\right)\right\|\ll
q^{\prime}q^{4}\eta^{-7}M^{\prime}/M.$ (7.5)
Hence, partitioning $\mathbb{Z}$ into progressions $P$ of common difference
$q^{\prime}q^{2}$ and length $M^{\prime}$, there exist phases $\omega_{P}$
such that for any $x\in\mathbb{Z}$ we have
$\left|\mathbb{E}_{y\in[M]}g_{1}(x+y)e(\alpha
y^{2})-M^{-1}\sum_{P}\omega_{P}\sum_{y\in[M]\cap P}g_{1}(x+y)\right|\ll
q^{\prime}q^{4}\eta^{-7}M^{\prime}/M.$ (7.6)
Notice that there are at most $O(M/M^{\prime})$ progressions $P$ such that
$P\cap[M]\neq\emptyset$ (since we are assuming $M^{\prime}\leq
M/q^{\prime}q^{2}$).
Next we show how the phase $e(\alpha x/q)$ is approximately periodic. Suppose
that $z\in[M^{\prime\prime}]$, with $M^{\prime\prime}\leq M^{\prime}/q$ to be
determined. Then for any $x\in\mathbb{Z}$ we have
$\left|e\left(\alpha(x+q^{\prime}q^{3}z)/q\right)-e\left(\alpha
x\right)\right|\ll\left\|\alpha
q^{\prime}q^{2}\right\|M^{\prime\prime}\ll\eta^{-7}q^{3}M^{\prime\prime}/N$
and by a boundary estimate
$\left|\sum_{y\in[M]\cap P}g_{1}(x+q^{\prime}q^{3}z+y)-\sum_{y\in[M]\cap
P}g_{1}(x+y)\right|\ll qM^{\prime\prime}.$
It then follows from a telescoping identity that for all $x\in\mathbb{Z}$ and
$z\in[M^{\prime\prime}]$ we have
$\displaystyle\left|\phi(x+q^{\prime}q^{3}z)-\phi(x)\right|$
$\displaystyle\ll\frac{\eta^{-7}q^{3}M^{\prime\prime}}{N}+\frac{\eta^{-7}q^{\prime}q^{4}M^{\prime}}{M}+\frac{qM^{\prime\prime}}{M}\sum_{\begin{subarray}{c}P\\\
P\cap[M]\neq\emptyset\end{subarray}}1$
$\displaystyle\ll\frac{\eta^{-7}q^{\prime}q^{4}M^{\prime}}{M}+\frac{qM^{\prime\prime}}{M^{\prime}}.$
Taking $M^{\prime}:=c\eta^{19}M/q^{\prime}q^{4}$ and
$M^{\prime\prime}:=c\eta^{12}M^{\prime}/q$ for a sufficiently small absolute
constant $c>0$ we have
$\left|\phi(x+q^{\prime}q^{3}z)-\phi(x)\right|\leq\eta^{12}/C\quad\text{for
all }x\in\mathbb{Z}\text{ and }z\in[M^{\prime\prime}].$ (7.7)
Partitioning $\mathbb{Z}$ into translates $T$ of
$q^{\prime}q^{3}\cdot[M^{\prime\prime}]$ we deduce that
$\sum_{T}\biggl{|}\sum_{x\in T}f(x)\biggr{|}\gg\eta^{12}N.$
Write $\chi(x)$ for the phase of the inner sum when $x\in T$. Then $\chi$ is a
1-bounded local function of modulus $q^{\prime}q^{3}$ and resolution
$\Omega\left((\delta/q)^{O(1)}M\right)$ satisfying
$\sum_{x}f(x)\overline{\chi(x)}\gg\delta^{2^{66}}N,$
as required.
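The final exponent here arises as $\eta^{12}=\delta^{12\times 2^{62}}=\delta^{3\times 2^{64}}\geq\delta^{2^{66}}$, since $3\times 2^{64}\leq 2^{66}$ and $\delta\leq 1$.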
Next we give the argument for when $f=g_{1}$. Returning to (7.4) we have
$\sum_{x}\left|\mathbb{E}_{y\in[M]}f(x+y)e(\alpha y^{2})\right|\gg\eta^{12}N.$
Utilising (7.5) and (7.6), we may partition $\mathbb{Z}$ into progressions $P$
of common difference $q^{\prime}q^{2}$ and length
$M^{\prime}:=c\eta^{19}M/q^{\prime}q^{4}$ such that
$\sum_{x}\sum_{P}\left|\mathbb{E}_{y\in[M]\cap P}f(x+y)\right|\gg\eta^{12}N.$
Since $O(M/M^{\prime})$ of the $P$ intersect $[M]$, the pigeon-hole principle
gives $P^{\prime}:=P\cap[M]$ such that
$\sum_{x}\left|\sum_{y\in P^{\prime}}f(x+y)\right|\gg\eta^{12}NM^{\prime}.$
In particular $|P^{\prime}|\gg\eta^{12}M^{\prime}\gg(\delta/q)^{C}M$.
Partitioning $\mathbb{Z}$ into translates of $P^{\prime}$ of the form
$\mathbb{Z}=\bigsqcup_{i}(a_{i}+P^{\prime}),$
the pigeon-hole principle gives $z\in P^{\prime}$ such that
$\sum_{i}\left|\sum_{y\in P^{\prime}}f(a_{i}+y+z)\right|\gg\eta^{12}N.$
Writing $\chi(x)$ for the phase of the inner sum when $x\in a_{i}+P^{\prime}$, one sees
that $\chi$ is a local function of resolution $\gg(\delta/q)^{C}M$ and modulus
$q^{\prime}q^{2}$ which satisfies $\left\langle
f,\chi\right\rangle\gg\eta^{12}N$. The proof is complete on noting that a
local function of modulus $q^{\prime}q^{2}$ is also a local function of
modulus $q^{\prime}q^{3}$. ∎
## Appendix A Basic theory of the Gowers norms
###### Lemma A.1 (Inverse theorem for the $U^{2}$-norm).
Let $f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in
$[N]$. Then there exists $\alpha\in\mathbb{T}$ such that
$\|f\|_{U^{2}}^{4}\leq N\left|\sum_{x}f(x)e(\alpha x)\right|^{2}.$
###### Proof.
Using the definition of the Fourier transform (1.8), together with
orthogonality of additive characters, we have
$\left\|f\right\|_{U^{2}}^{4}=\int_{\mathbb{T}}\bigl{|}\hat{f}(\alpha)\bigr{|}^{4}\mathrm{d}\alpha\leq\big{\|}\hat{f}\big{\|}_{\infty}^{2}\int_{\mathbb{T}}\bigl{|}\hat{f}(\alpha)\bigr{|}^{2}\mathrm{d}\alpha\leq\big{\|}\hat{f}\big{\|}_{\infty}^{2}N.$
∎
For each $\omega\in\\{0,1\\}^{s}$, let $f_{\omega}:\mathbb{Z}\to\mathbb{C}$ be
a function with finite support. Then we define the _Gowers inner product_ by
$[f_{\omega}]_{U^{s}}:=\sum_{x,h_{1},\dots,h_{s}}\prod_{\omega\in\left\\{0,1\right\\}^{s}}\mathcal{C}^{|\omega|}f_{\omega}(x+\omega\cdot
h).$
Here $\mathcal{C}$ denotes the operation of complex conjugation. Notice that
$[f]_{U^{s}}=\left\|f\right\|_{U^{s}}^{2^{s}}$.
###### Lemma A.2 (Gowers–Cauchy–Schwarz).
For each $\omega\in\\{0,1\\}^{s}$, let $f_{\omega}:\mathbb{Z}\to\mathbb{C}$ be
a function with finite support. Then we have
$[f_{\omega}]_{U^{s}}\leq\prod_{\omega\in\\{0,1\\}^{s}}\|f_{\omega}\|_{U^{s}}.$
###### Proof.
See [9, Exercise 1.3.19]. ∎
###### Lemma A.3 (Phase invariance for $s\geq 2$).
Let $L\in\mathbb{R}[x,h_{1},\dots,h_{s}]$ be a linear form, with $s\geq 2$, and
let $f:\mathbb{Z}\to\mathbb{C}$ be a function with finite support. Then
$\biggl{|}\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)e(L(x,h_{1},\dots,h_{s}))\biggr{|}\leq\left\|f\right\|_{U^{s}}^{2^{s}}.$
###### Proof.
The linear form may be written as
$L(x,h_{1},\dots,h_{s})=\alpha x+\beta_{1}(x+h_{1})+\dots+\beta_{s}(x+h_{s}),$
for some real $\alpha$ and $\beta_{i}$. Write $f_{0}(x):=f(x)e(\alpha x)$,
$f_{e_{i}}(x):=f(x)e(-\beta_{i}x)$ for $i=1,\dots,s$, and for
$\omega\in\left\\{0,1\right\\}^{s}\setminus\left\\{0,e_{1},\dots,e_{s}\right\\}$
set $f_{\omega}:=f$. Then by Gowers–Cauchy–Schwarz we have
$\biggl{|}\sum_{x,h_{1},\dots,h_{s}}\Delta_{h_{1},\dots,h_{s}}f(x)e(L(x,h_{1},\dots,h_{s}))\biggr{|}\leq\prod_{\omega}\left\|f_{\omega}\right\|_{U^{s}}.$
It therefore suffices to prove that, for a phase function $e_{\alpha}:x\mapsto
e(\alpha x)$, we have $\left\|fe_{\alpha}\right\|_{U^{s}}=\left\|f\right\|_{U^{s}}.$
The latter follows on observing that
$\Delta_{h_{1},\dots,h_{s}}(fe_{\alpha})=\left(\Delta_{h_{1},\dots,h_{s}}f\right)\left(\Delta_{h_{1},\dots,h_{s}}e_{\alpha}\right),$
and for any $x,h_{1},\dots,h_{s}$ with $s\geq 2$ we have
$\Delta_{h_{1},\dots,h_{s}}e_{\alpha}(x)=1.$ ∎
###### Lemma A.4 (Box Cauchy–Schwarz).
Let $\mu_{1},\mu_{2},\mu_{3}$ be probability measures on $\mathbb{Z}$ with the
discrete sigma algebra, and write $\underline{\mu}(x):=\mu_{1}(x_{1})\mu_{2}(x_{2})\mu_{3}(x_{3})$. If $F_{1},F_{2},F_{3}$ are 1-bounded functions on
$\mathbb{Z}^{2}$ and $F$ is a 1-bounded function on $\mathbb{Z}^{3}$ then
$\left|\sum_{x\in\mathbb{Z}^{3}}F_{1}(x_{2},x_{3})F_{2}(x_{1},x_{3})F_{3}(x_{1},x_{2})F(x)\underline{\mu}(x)\right|^{8}\\\
\leq\sum_{x^{0},x^{1}\in\mathbb{Z}^{3}}\prod_{\omega\in\left\\{0,1\right\\}^{3}}\mathcal{C}^{|\omega|}F(x_{1}^{\omega_{1}},x_{2}^{\omega_{2}},x_{3}^{\omega_{3}})\mu_{1}(x_{1}^{0})\mu_{1}(x_{1}^{1})\mu_{2}(x_{2}^{0})\mu_{2}(x_{2}^{1})\mu_{3}(x_{3}^{0})\mu_{3}(x_{3}^{1}).$
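We omit the standard proof: one applies the Cauchy–Schwarz inequality three times, once in each coordinate. At the $i$-th application the variable $x_{i}$ is doubled, and the factor $F_{i}$ (the only one not depending on $x_{i}$) is discarded using its 1-boundedness and the fact that the $\mu_{j}$ are probability measures; after the three applications only the $2^{3}$ copies of $F$ remain. Compare the proof of the Gowers–Cauchy–Schwarz inequality (Lemma A.2).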
## References
* BC [17] J. Bourgain and M.-C. Chang. Nonlinear Roth type theorems in finite fields. Israel J. Math., 221(2):853–867, 2017.
* BL [96] V. Bergelson and A. Leibman. Polynomial extensions of van der Waerden’s and Szemerédi’s theorems. J. Amer. Math. Soc., 9(3):725–753, 1996.
* Gow [10] W. T. Gowers. Decompositions, approximate structure, transference, and the Hahn-Banach theorem. Bull. Lond. Math. Soc., 42(4):573–606, 2010.
* GT [08] B. Green and T. Tao. Quadratic uniformity of the Möbius function. Ann. Inst. Fourier (Grenoble), 58(6):1863–1935, 2008.
* Pel [19] S. Peluse. On the polynomial Szemerédi theorem in finite fields. Duke Math. J., 168(5):749–774, 2019.
* PP [19] S. Peluse and S. Prendiville. Quantitative bounds in the non-linear Roth theorem. ArXiv e-prints, 2019.
* PP [20] S. Peluse and S. Prendiville. A polylogarithmic bound in the nonlinear Roth theorem. ArXiv e-prints, 2020.
* Pre [17] S. Prendiville. Quantitative bounds in the polynomial Szemerédi theorem: the homogeneous case. Discrete Anal., pages 34, Paper No. 5, 2017.
* Tao [12] T. Tao. Higher order Fourier analysis, volume 142 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2012.
* TZ [16] T. Tao and T. Ziegler. Concatenation theorems for anti-Gowers-uniform functions and Host-Kra characteristic factors. Discrete Anal., pages 60, Paper No. 13, 2016.
# A polylogarithmic bound in the nonlinear Roth theorem
Sarah Peluse, Mathematical Institute, University of Oxford, UK<EMAIL_ADDRESS>
Sean Prendiville, Department of Mathematics and Statistics, Lancaster University, UK<EMAIL_ADDRESS>
###### Abstract.
We show that sets of integers lacking the configuration $x$, $x+y$, $x+y^{2}$
have at most polylogarithmic density.
###### Contents
1. 1 Introduction
2. 2 Iterating the density increment
3. 3 The cut norm inverse theorem
4. 4 A weak regularity lemma
5. 5 The density increment lemma
6. 6 Global control by major arc Fourier coefficients
7. 7 Longer progressions
## 1\. Introduction
### 1.1. Density bound
In [9] the authors obtained, for the first time, an effective bound for
subsets of $\left\\{1,\dots,N\right\\}$ lacking the nonlinear Roth
configuration $x$, $x+y$, $x+y^{2}$. There it was established that such sets
have cardinality at most $O(N/(\log\log N)^{c})$, where $c>0$ is an absolute
constant. The key breakthrough of [9] was a “local $U^{1}$-control” result,
from which a bound for sets lacking the nonlinear Roth configuration follows
via standard methods. Here, we combine this local $U^{1}$-control result with
a more sophisticated argument to remove a logarithm from the bound of [9].
###### Theorem 1.1 (Density bound).
There exists an absolute constant $c>0$ such that the following holds. Suppose
that $A\subset\left\\{1,\dots,N\right\\}$ lacks configurations of the form
$x,\ x+y,\ x+y^{2}\qquad(y\neq 0).$ (1.1)
Then
$|A|=O\left(N/(\log N)^{c}\right).$
A careful analysis shows that the exponent $c=2^{-150}$ is permissible, where
150 represents the combined number of times we utilise the Cauchy–Schwarz
inequality in [9] and this paper.
### 1.2. Major arc correlation
The techniques which yield Theorem 1.1 also allow us to show, in a
quantitatively effective manner, that the major arc Fourier coefficients of a
set determine how many nonlinear Roth configurations (1.1) the set contains.
###### Theorem 1.2 (Major-arc control).
Let $\delta>0$ and $f,g,h:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with
support in $\left\\{1,\dots,N\right\\}$. Suppose that
$\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta
N^{3/2}.$
Then either $N\ll\delta^{-O(1)}$, or there is a frequency
$\alpha\in\mathbb{R}$ and a positive integer $q\ll\delta^{-O(1)}$ such that $\left\|q\alpha\right\|\ll\delta^{-O(1)}/N$ and
$\left|\sum_{x\in\mathbb{Z}}h(x)e(\alpha x)\right|\gg\delta^{O(1)}N.$
(Here $\left\|\cdot\right\|$ denotes the distance to the nearest integer, and $e(\alpha):=e^{2\pi i\alpha}$. For our conventions regarding asymptotic notation see §1.5.)
In the nomenclature of [14], the major arc linear phases are the only
obstructions to uniformity for the nonlinear Roth configuration. We emphasise
that Theorem 1.2 is not used in the proof of Theorem 1.1.
The major arc Fourier coefficients of a subset of $\\{1,\dots,N\\}$
essentially measure its distribution in arithmetic progressions of common
difference $\ll 1$ and length $\gg N$. To illustrate this, the following
definition is useful.
###### Definition 1.3 (Local function).
We call a function $\phi:\mathbb{Z}\to\mathbb{C}$ a _local function of
resolution $M$ and modulus $q$_ if there exists a partition of $\mathbb{Z}$
into intervals of length $M$ such that $\phi$ is constant on the intersection
of every such interval with every congruence class mod $q$.
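For example, given 1-bounded functions $\psi:\mathbb{Z}\to\mathbb{C}$ and $\omega:\mathbb{Z}/q\mathbb{Z}\to\mathbb{C}$, the product
$\phi(x):=\psi\left(\left\lceil x/M\right\rceil\right)\omega(x\bmod q)$
is a local function of resolution $M$ and modulus $q$: it is constant on the intersection of each interval $((j-1)M,jM]$ with each congruence class mod $q$. This is the shape described in §1.4, namely a function constant on long intervals multiplied by a function constant on congruence classes.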
###### Corollary 1.4 (Local control of the nonlinear term).
Let $\delta>0$ and $f,g,h:\mathbb{Z}\to\mathbb{C}$ be 1-bounded functions with
support in $\left\\{1,\dots,N\right\\}$. Suppose that
$\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta
N^{3/2}.$
Then either $N\ll\delta^{-O(1)}$, or there is a 1-bounded local function
$\phi$ of resolution $M\gg\delta^{O(1)}N$ and modulus $q\ll\delta^{-O(1)}$
such that
$\left|\sum_{x\in\mathbb{Z}}h(x)\phi(x)\right|\gg\delta^{O(1)}N.$
One cannot hope to prove that the functions $f$ and $g$ above also correlate
globally with local functions, as the following example illustrates. For any
positive integers $x_{1},x_{2}\leqslant N^{1/2}$, set
$f\left(x_{1}+(x_{2}-1)\left\lfloor
N^{1/2}\right\rfloor\right)=\begin{cases}1&\text{ if }x_{2}\equiv
0\pmod{4},\\\ 0&\text{ if }x_{2}\equiv 1\pmod{4},\\\ -1&\text{ if }x_{2}\equiv
2\pmod{4},\\\ 0&\text{ if }x_{2}\equiv 3\pmod{4};\end{cases}$
and set $f(x)=0$ everywhere else. Taking $g:=f$ and $h:=1_{\\{1,\dots,N\\}}$,
one can check that either $N\ll 1$ or
$\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f(x)g(x+y)h(x+y^{2})\gg N^{3/2}.$
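One way to see this: write $x=x_{1}+(x_{2}-1)\left\lfloor N^{1/2}\right\rfloor$ and regard $x_{2}$ as indexing a column of length $\left\lfloor N^{1/2}\right\rfloor$. For $1\leqslant y\leqslant N^{1/2}$, the point $x+y$ lies either in the same column as $x$, where $f$ takes the same value, or in the adjacent column, where the pattern $(1,0,-1,0)$ ensures that at least one of the two values vanishes. Hence every summand with $y\leqslant N^{1/2}$ is non-negative, while for a positive proportion of the pairs $(x,y)$ with, say, $x\leqslant N/2$ and $y\leqslant\tfrac{1}{2}N^{1/2}$ (so that $x+y^{2}\in[N]$) the summand equals $1$, giving a total $\gg N\cdot N^{1/2}$.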
However, for any arithmetic progression $P\subset\\{1,\dots,N\\}$, we have
$\left|\sum_{x\in P}f(x)\right|\ll N^{1/2}.$
Hence, for any 1-bounded local function $\phi$ of resolution $\geqslant\delta
N$ and modulus $\leqslant\delta^{-1}$, the triangle inequality gives the
discorrelation
$\left|\sum_{x\in\mathbb{Z}}f(x)\phi(x)\right|\ll\delta^{-2}N^{1/2}.$
This example is a local obstruction coming from the real numbers: the nature
of our counting operator means that we cannot disentangle possible
correlations between the $f$ and $g$ functions on subintervals of length
$N^{1/2}$. We can, however, show that these are the only other possible
obstructions to uniformity.
###### Theorem 1.5 (Local control of all terms).
Let $\delta>0$ and $f_{1},f_{2},f_{3}:\mathbb{Z}\to\mathbb{C}$ be 1-bounded
functions with support in $\left\\{1,\dots,N\right\\}$. Suppose that
$\left|\sum_{x\in\mathbb{Z}}\sum_{y\in\mathbb{N}}f_{1}(x)f_{2}(x+y)f_{3}(x+y^{2})\right|\geqslant\delta
N^{3/2}.$
Then either $N\ll\delta^{-O(1)}$, or for each $i=1,2,3$ there is a 1-bounded
local function $\phi_{i}$ of resolution $\gg\delta^{O(1)}N^{1/2}$ and modulus
$q_{i}\ll\delta^{-O(1)}$ such that
$\left|\sum_{x\in\mathbb{Z}}f_{i}(x)\phi_{i}(x)\right|\gg\delta^{O(1)}N.$
###### Proof.
This is an immediate consequence of Corollary 1.4 and Lemma 3.2. ∎
### 1.3. Longer polynomial progressions
In analogy with the first author’s generalisation [8] of [9], it is natural to
ask whether the methods of this paper yield polylogarithmic bounds for sets of
integers lacking longer progressions
$x,\ x+P_{1}(y),\ \dots,\ x+P_{m}(y),$ (1.2)
where the $P_{i}\in\mathbb{Z}[y]$ have zero constant term and $\deg
P_{1}<\dots<\deg P_{m}$.
As was mentioned above, the key input to this paper is the local
$U^{1}$-control result [9, Theorem 7.1]. Replacing this with [8, Theorem 3.3],
our argument generalises in a straightforward manner to yield polylogarithmic
bounds for subsets of $\\{1,\dots,N\\}$ lacking (1.2) when $m=2$, that is, for
all three-term polynomial progressions with distinct degrees and zero constant
term.
Obtaining polylogarithmic bounds for longer polynomial progressions requires
an additional idea. We sketch a strategy in §7, which relies on obtaining an
appropriate generalisation of [8, Theorem 3.3], a generalisation that would
require re-running the majority of the arguments therein.
### Acknowledgements
S. Peluse is supported by the NSF Mathematical Sciences Postdoctoral Research
Fellowship Program under Grant No. DMS-1903038.
### 1.4. An outline of our argument
Effective Szemerédi-type theorems are commonly proved via a density increment
strategy, the prototypical example being the proof of Roth’s theorem [11] on
three-term arithmetic progressions. This strategy begins with a set
$A\subset\\{1,\dots,N\\}$ of density $\delta:=|A|/N$ that lacks the
configuration in question. It then proceeds to show that there is a
substructure $S\subset\\{1,\dots,N\\}$ on which $A$ has increased density
$\delta+\Omega_{\delta}(1)$. One then hopes to iterate the argument with
$A\cap S$ in place of $A$ and $S$ in place of $\\{1,\dots,N\\}$.
One avenue to obtaining polylogarithmic bounds in a Szemerédi-type theorem is
to obtain a constant proportion density increment $\delta+\Omega(\delta)$ on a
substructure $S$ of polynomial size $|S|\approx N^{\Omega(1)}$. This was
accomplished for three-term arithmetic progressions by Heath–Brown [7] and
Szemerédi [13] (in fact, they were able to handle a smaller lower bound on
$|S|$).
An alternative strategy for obtaining polylogarithmic bounds is to obtain the
weaker polynomial increment $\delta+\Omega(\delta^{O(1)})$, yet on a _dense_
or _global_ substructure $S$, that is, a substructure of size
$|S|\geqslant\exp(-O(\delta^{-O(1)}))N$. This was accomplished by Sárközy [12]
for the configuration $x,x+y^{2}$ and for three-term arithmetic progressions
by Bourgain [2].
Both of these strategies are achievable for the nonlinear Roth configuration.
The global structure strategy is perhaps the most natural, and may be
accomplished by utilising a generalisation of Theorem 1.2. In this note we do
not pursue this, and instead give details for a constant-proportion density
increment, as our argument is somewhat cleaner in this form.
More specifically, we show that if $A\subset\left\\{1,\dots,N\right\\}$ has
density $\delta$ and lacks nontrivial configurations of the form
$x,x+y,x+y^{2}$, then there exists an arithmetic progression $P$ of length
$|P|\gg\delta^{O(1)}N^{1/2}$ and common difference $q\ll\delta^{-O(1)}$ such
that we have the density increment
$\frac{|A\cap P|}{|P|}\geqslant(1+\Omega(1))\frac{|A|}{N}.$ (1.3)
As outlined in [9], the ‘almost bounded’ size of $q$ allows us to iterate this
procedure. (In [9], we obtain the weaker density increment
$(1+\Omega(\delta^{O(1)}))|A|/N$, which leads to the extra logarithm appearing
in the bound there.)
We obtain the constant-proportion increment (1.3) by combining the local
$U^{1}$-control result of [9] with a strategy of Heath–Brown [7] and Szemerédi
[13], which has a very robust formulation due to Green and Tao [6]. To
accomplish this, we first give a structural characterisation of sets lacking
the nonlinear Roth configuration (this is Lemma 3.3, whose essence is captured
in the weaker Theorem 1.5). These sets resemble the level sets of the product
of a function that is constant on intervals of length $N^{1/2}$ and a function
that is constant on congruence classes modulo a bounded $q$.
Having obtained such a structural characterisation, an energy increment
procedure closely following [6] allows us to approximate an arbitrary set of
integers by these level sets, up to an error that does not contribute
substantially to the count of nonlinear Roth configurations. A combinatorial
argument then allows us to deduce that our set must have a substantial density
increment on one of these level sets, of the form $\delta+\Omega(\delta)$. As
a result, our density increment procedure requires only
$\log(\delta^{-1})+O(1)$ iterations, compared with the $O(\delta^{-O(1)})$
required in [9], and this yields the polylogarithmic improvement over our
previous density increment iteration.
The remainder of this paper is organized as follows. We derive Theorem 1.1 in
§2 via a density increment iteration. Our deduction uses a density increment
lemma that is established in §§3–5. We prove Theorem 1.2 and Corollary 1.4 in
§6.
### 1.5. Notation
#### 1.5.1. Standard conventions
We use $\mathbb{N}$ to denote the positive integers. For a real number
$X\geqslant 1$, write $[X]=\\{1,2,\ldots,\left\lfloor X\right\rfloor\\}$. A
complex-valued function is said to be _1-bounded_ if the modulus of the
function does not exceed 1.
We use counting measure on $\mathbb{Z}$, so that for
$f,g:\mathbb{Z}\to\mathbb{C}$, we have
$\left\|f\right\|_{\ell^{p}}:=\biggl{(}\sum_{x}|f(x)|^{p}\biggr{)}^{\frac{1}{p}},\
\left\langle f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)},\ \text{and}\
(f*g)(x)=\sum_{y}f(y)g(x-y).$
Any sum of the form $\sum_{x}$ is to be interpreted as a sum over
$\mathbb{Z}$. The _support_ of $f$ is the set
$\mathrm{supp}(f):=\left\\{x\in\mathbb{Z}:f(x)\neq 0\right\\}$. We write
$\left\|f\right\|_{\infty}$ for $\sup_{x\in\mathbb{Z}}|f(x)|$.
We use Haar probability measure on $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$, so
that for measurable $F:\mathbb{T}\to\mathbb{C}$, we have
$\left\|F\right\|_{L^{p}}:=\biggl{(}\int_{\mathbb{T}}|F(\alpha)|^{p}d\alpha\biggr{)}^{\frac{1}{p}}=\biggl{(}\int_{0}^{1}|F(\alpha)|^{p}d\alpha\biggr{)}^{\frac{1}{p}}.$
We write $\left\|\alpha\right\|_{\mathbb{T}}$ for the distance from
$\alpha\in\mathbb{R}$ to the nearest integer,
$\min_{n\in\mathbb{Z}}|\alpha-n|.$ This remains well-defined on $\mathbb{T}$.
We define the Fourier transform of $f:\mathbb{Z}\to\mathbb{C}$ by
$\hat{f}(\alpha):=\sum_{x}f(x)e(\alpha x)\qquad(\alpha\in\mathbb{T}),$ (1.4)
when this makes sense. Here $e(\alpha)$ stands for $e^{2\pi i\alpha}$.
For a finite set $S$ and function $f:S\to\mathbb{C}$, denote the average of
$f$ over $S$ by
$\mathbb{E}_{s\in S}f(s):=\frac{1}{|S|}\sum_{s\in S}f(s).$
For a complex-valued function $f$ and positive-valued function $g$, write
$f\ll g$ or $f=O(g)$ if there exists a constant $C$ such that $|f(x)|\leq
Cg(x)$ for all $x$. We write $f=\Omega(g)$ if $f\gg g$. We subscript this
notation when the implicit constant may depend on the subscripted parameters.
#### 1.5.2. Local conventions
Up to normalisation, all of the above are widely used in the literature. Next,
we list notation specific to our paper. We have tried to minimise this in
order to aid the casual reader.
The quantity $(N/q)^{1/2}$ appears repeatedly, where $N$ and $q$ are integers
fixed throughout the majority of our paper. We therefore adopt the convention
that
$M:=\left\lfloor\sqrt{N/q}\right\rfloor.$ (1.5)
Assuming this, define the _counting operator_ on the functions
$f_{i}:\mathbb{Z}\to\mathbb{C}$ by
$\Lambda_{q,N}(f_{0},f_{1},f_{2}):=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}f_{0}(x)f_{1}(x+y)f_{2}(x+qy^{2}).$
(1.6)
When $f_{0}=f_{1}=f_{2}=f$, we simply write $\Lambda_{q,N}(f)$ for
$\Lambda_{q,N}(f_{0},f_{1},f_{2})$.
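Note that the range of $y$ in (1.6) ensures $qy^{2}\leqslant qM^{2}\leqslant N$, so that all three points $x$, $x+y$ and $x+qy^{2}$ lie in $[2N]$ whenever $x\in[N]$ and $y\in[M]$.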
For a real parameter $H\geqslant 1$, we use $\mu_{H}:\mathbb{Z}\to[0,1]$ to
represent the following normalised Fejér kernel
$\mu_{H}(h):=\frac{1}{\left\lfloor
H\right\rfloor}\left(1-\frac{|h|}{\left\lfloor
H\right\rfloor}\right)_{+}=\frac{(1_{[H]}*1_{-[H]})(h)}{\left\lfloor
H\right\rfloor^{2}}.$ (1.7)
This is a probability measure on $\mathbb{Z}$ with support in the interval
$(-H,H)$.
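To verify that $\mu_{H}$ is indeed a probability measure, sum the convolution form of (1.7) over $h$:
$\sum_{h}\mu_{H}(h)=\frac{1}{\left\lfloor H\right\rfloor^{2}}\Bigl{(}\sum_{y}1_{[H]}(y)\Bigr{)}\Bigl{(}\sum_{y^{\prime}}1_{-[H]}(y^{\prime})\Bigr{)}=1,$
while $(1_{[H]}*1_{-[H]})(h)=\\#\bigl{(}[H]\cap([H]+h)\bigr{)}$ vanishes unless $|h|<\left\lfloor H\right\rfloor$.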
## 2\. Iterating the density increment
In this section we prove Theorem 1.1 using the following lemma, which we will
devote §§3–5 to proving.
###### Lemma 2.1 (Density increment lemma).
Let $q\leqslant N$ be positive integers and $\delta>0$. Suppose that
$A\subset[N]$ satisfies $|A|\geqslant\delta N$ and lacks the configuration
$x,\ x+y,\ x+qy^{2}\qquad(y\neq 0).$ (2.1)
Then either $N\ll(q/\delta)^{O(1)}$ or there exists
$q^{\prime}\leqslant\exp\left(O\left(\delta^{-O(1)}\right)\right)$ and
$N^{\prime}\geqslant
q^{-O(1)}\exp\left(-O\left(\delta^{-O(1)}\right)\right)N^{1/2}$ such that, for
some $a\in\mathbb{Z}$, we have
$|A\cap(a+qq^{\prime}\cdot[N^{\prime}])|\geqslant(1+\Omega(1))\delta
N^{\prime}.$ (2.2)
###### Proof of Theorem 1.1 given Lemma 2.1.
This is the same as the proof of [9, Theorem 1.1], but using the improved
density increment lemma above in place of the density increment lemma of [9].
Note first that if $A$ lacks the configuration (2.1), then the set
$\\{x:a+qq^{\prime}x\in A\\}$
lacks configurations of the form
$x,\ x+y,\ x+q^{2}q^{\prime}y^{2}\qquad(y\neq 0).$
Let $A\subset[N]$ have size $\delta N$, and suppose that it has no non-linear
Roth configurations (1.1). Setting $A_{0}:=A$, $N_{0}:=N$ and $q_{0}=1$, let
us suppose we have a sequence of tuples $(A_{i},N_{i},q_{i})$ for
$i=0,1,\dots,n$ that each satisfy the following:
1. (i)
$A_{i}$ lacks configurations of the form
$x,\ x+y,\ x+q_{0}^{2^{i}}q_{1}^{2^{i-1}}\dotsm q_{i-1}^{2}q_{i}y^{2}\qquad(y\neq 0).$
2. (ii)
$q_{i}\leqslant\exp\left(O\left(\delta^{-O(1)}\right)\right)$;
3. (iii)
$A_{i}\subset[N_{i}]$ and for $i\geqslant 1$ we have
$\frac{|A_{i}|}{N_{i}}\geqslant(1+c)\frac{|A_{i-1}|}{N_{i-1}},$
where $c=\Omega(1)$ is a positive absolute constant;
4. (iv)
for $i\geqslant 1$ we have the lower bound
$N_{i}\geqslant\frac{N_{i-1}^{1/2}}{\left(q_{0}^{2^{i-1}}\dotsm q_{i-1}\exp\left(\delta^{-O(1)}\right)\right)^{O(1)}}.$
Applying Lemma 2.1 with $q=q_{0}^{2^{n}}q_{1}^{2^{n-1}}\dotsm q_{n-1}^{2}q_{n}$, either
$N_{n}\ll\left(q_{0}^{2^{n}}q_{1}^{2^{n-1}}\dotsm q_{n-1}^{2}q_{n}/\delta\right)^{O(1)},$ (2.3)
or we may obtain $(A_{n+1},N_{n+1},q_{n+1})$ satisfying conditions (i)–(iv).
If (2.3) holds, then our iterative process terminates at stage $n$.
If the number of iterations $n$ is at least $c^{-1}$, then the density of
$A_{n}$ on $[N_{n}]$ is at least $2\delta$. After an additional
$\tfrac{1}{2}c^{-1}$ iterations, the density is at least $4\delta$. Hence if
the number of iterations is at least
$\left\lceil c^{-1}\right\rceil+\left\lceil\tfrac{1}{2}c^{-1}\right\rceil+\left\lceil\tfrac{1}{4}c^{-1}\right\rceil+\dots+\left\lceil\tfrac{1}{2^{m-1}}c^{-1}\right\rceil,$
then the density is at least $2^{m}\delta$. The density therefore exceeds one
if the number of iterations exceeds $2c^{-1}+\log_{2}(\delta^{-1})$. Since
this cannot happen, it follows that there exists
$n\leqslant\log_{2}(\delta^{-1})+O(1)$ such that the procedure terminates at
stage $n$.
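To spell out the count: since $\left\lceil 2^{-j}c^{-1}\right\rceil\leqslant 2^{-j}c^{-1}+1$, the number of iterations needed to reach density $2^{m}\delta$ is at most
$\sum_{j=0}^{m-1}\left(2^{-j}c^{-1}+1\right)\leqslant 2c^{-1}+m,$
and with $m=\left\lceil\log_{2}(\delta^{-1})\right\rceil$ the density of $A_{n}$ on $[N_{n}]$ would exceed one, which is impossible.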
At the point of termination, the smallness assumption (2.3) must hold, so that
$N_{n}\leqslant\exp\left(O\Bigl{(}\delta^{-O(1)}\Bigr{)}\right).$
On the other hand, iteratively applying the lower bound (iv), we have
$\begin{split}N_{n}&\geqslant\frac{N_{n-1}^{1/2}}{\left(q_{0}^{2^{n-1}}\dotsm
q_{n-1}\exp\left(\delta^{-O(1)}\right)\right)^{O(1)}}\\\ &\geqslant
N^{1/2^{n}}\left[q_{0}^{2^{n-1}}\dotsm
q_{n-1}\exp\left(\delta^{-O(1)}\right)\right]^{-O(1+\frac{1}{2}+\frac{1}{4}+\dots+2^{1-n})}\\\
&\gg\exp\left(-O\left(\delta^{-O(1)}\right)\right)N^{\Omega(\delta)},\end{split}$
where we use the upper bound (ii) on the $q_{i}$’s, together with
$n\leqslant\log_{2}(\delta^{-1})+O(1)$. Taking a logarithm and comparing upper
and lower bounds for $N_{n}$ gives $\log N\ll\delta^{-O(1)},$ which yields the
bound claimed in Theorem 1.1. ∎
## 3\. The cut norm inverse theorem
The first step of the proof of Lemma 2.1 is to use the main technical result
of [9] to prove an inverse theorem for the cut norm associated to
$\Lambda_{q,N}$, which we now define.
###### Definition 3.1 (Cut norm).
For positive integers $q\leqslant N$, we define the _cut norm_ of
$f:\mathbb{Z}\to\mathbb{C}$ by
$\left\|f\right\|_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,\
|\Lambda_{q,N}(g_{1},f,g_{2})|,\ |\Lambda_{q,N}(g_{1},g_{2},f)|\\},$ (3.1)
where the supremum is taken over all 1-bounded functions
$g_{i}:[N]\to\mathbb{C}$. We note that, in spite of our nomenclature, this is
not a norm, but a seminorm. One could remedy this by summing over $y\geqslant
0$ in the counting operator (1.6).
Initially, the cut norm is too restrictive for us, so we begin by working with
the weaker quantity
$\left\|f\right\|^{\flat}_{q,N}:=\sup\\{|\Lambda_{q,N}(f,g_{1},g_{2})|,|\Lambda_{q,N}(g_{1},f,g_{2})|:|g_{i}|\leqslant
1\text{ and }\mathrm{supp}(g_{i})\subset[N]\\},$ (3.2)
which we refer to as the _partial cut norm_.
The following lemma is simply a rephrasing of [9, Theorem 7.1], which is the
technical heart of that paper. See Definition 1.3 for the meaning of ‘local
function’.
###### Lemma 3.2 (Partial cut norm inverse theorem).
Let $q\leqslant N$ be positive integers, $\delta>0$, and
$f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$.
Suppose that
$\left\|f\right\|^{\flat}_{q,N}\geqslant\delta.$
Then either $N\ll(q/\delta)^{O(1)}$ or there exists a 1-bounded local function
$\phi$ of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$, modulus $qq^{\prime}$ for
some $q^{\prime}\ll\delta^{-O(1)}$, and such that
$\sum_{x\in[N]}f(x)\phi(x)\gg\delta^{O(1)}N.$
###### Proof.
By compactness, there exist 1-bounded functions $g_{1},g_{2}:[N]\to\mathbb{C}$
such that either $|\Lambda_{q,N}(f,g_{1},g_{2})|\geqslant\delta$ or
$|\Lambda_{q,N}(g_{1},f,g_{2})|\geqslant\delta.$ In the latter case, we may
apply [9, Theorem 7.1] to deduce that there exist positive integers
$q^{\prime}\ll\delta^{-O(1)}$ and $N^{\prime}\gg(\delta/q)^{O(1)}N^{1/2}$ such
that
$\sum_{x}\left|\sum_{y\in[N^{\prime}]}f(x+qq^{\prime}y)\right|\gg\delta^{O(1)}NN^{\prime}.$
In the former case, the reader may check that the argument of [9, Theorem 7.1] delivers the same conclusion (for details, see the second author's exposition [10]).
To ease notation, write $Q:=qq^{\prime}$. Partitioning the integers into
arithmetic progressions of length $N^{\prime}$ and common difference $Q$ gives
$\delta^{O(1)}NN^{\prime}\ll\sum_{z\in[N^{\prime}]}\sum_{u\in[Q]}\sum_{x\in\mathbb{Z}}\left|\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\right|\\\
\leqslant
N^{\prime}\max_{z}\sum_{u\in[Q]}\sum_{x\in\mathbb{Z}}\left|\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\right|.$
Defining $\psi_{z}(u,x)$ to be the conjugate phase of the inner sum, we deduce
the existence of $z$ for which
$\displaystyle\delta^{O(1)}N\ll\sum_{u\in[Q]}\sum_{x}\sum_{y\in[N^{\prime}]}f(Qz+QN^{\prime}x+u+Qy)\psi_{z}(u,x).$
The result follows on noting that every integer has a unique representation of the form $QN^{\prime}x+u+Qy$ with $u\in[Q]$, $x\in\mathbb{Z}$ and $y\in[N^{\prime}]$: the residue modulo $Q$ determines $u$, after which the division algorithm determines $y$ and then $x$. Hence the map
$Qz+QN^{\prime}x+u+Qy\mapsto\psi_{z}(u,x)$
is a local function of resolution $QN^{\prime}$ and modulus $Q$. ∎
Now we can prove an inverse theorem for the cut norm itself.
###### Lemma 3.3 (Full cut norm inverse theorem).
Let $q\leqslant N$ be positive integers, $\delta>0$, and
$f:\mathbb{Z}\to\mathbb{C}$ be a $1$-bounded function with support in $[N]$.
Suppose that
$\left\|f\right\|_{q,N}\geqslant\delta.$
Then either $N\ll(q/\delta)^{O(1)}$ or there exist 1-bounded local functions
$\phi_{1}$ and $\phi_{2}$, of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and
moduli $qq_{1}$ and $qq_{2}$, respectively, for some
$q_{1},q_{2}\ll\delta^{-O(1)}$ such that
$\left|\sum_{x\in[N]}f(x)\phi_{1}(x)\phi_{2}(x)\right|\gg\delta^{O(1)}N.$
(3.3)
###### Proof.
By the definition of the cut norm (3.1) and Lemma 3.2, we may assume that
there are 1-bounded functions $g,h:[N]\to\mathbb{C}$ such that
$|\Lambda_{q,N}(g,h,f)|\geqslant\delta.$ (3.4)
Recalling that $M:=\lfloor\sqrt{N/q}\rfloor$, define the dual function
$F(x):=\mathbb{E}_{y\in[M]}h(x+y)f(x+qy^{2}).$
Re-parametrising (3.4) and applying the Cauchy–Schwarz inequality, we have
that
$\delta^{2}\leqslant\mathbb{E}_{x\in[N]}F(x)^{2}=\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[M]}F(x)h(x+y)f(x+qy^{2}).$
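In more detail: expanding the counting operator shows that $\Lambda_{q,N}(g,h,f)=\mathbb{E}_{x\in[N]}g(x)F(x)$, so that, suppressing complex conjugates (which play no role in this argument), the Cauchy–Schwarz inequality and the $1$-boundedness of $g$ give
$\delta^{2}\leqslant\Bigl{|}\mathbb{E}_{x\in[N]}g(x)F(x)\Bigr{|}^{2}\leqslant\mathbb{E}_{x\in[N]}F(x)^{2}.$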
Recalling the definition of the partial cut norm (3.2), we deduce that
$\left\|F\right\|_{q,N}^{\flat}\geqslant\delta^{2}.$
Applying the partial cut norm inverse theorem (Lemma 3.2), there exists a
1-bounded local function $\phi_{1}$ of resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{1}$ for some
$q_{1}\ll\delta^{-O(1)}$ such that
$\left|\sum_{x\in[N]}F(x)\phi_{1}(x)\right|\gg\delta^{O(1)}N.$
Thus
$|\Lambda_{q,N}(\phi_{1},h,f)|\gg\delta^{O(1)}.$
We now re-run our argument on $h$ instead of $f$, deducing the existence of a
1-bounded local function $\phi_{2}$ of resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{2}$ for some
$q_{2}\ll\delta^{-O(1)}$ such that
$|\Lambda_{q,N}(\phi_{1},\phi_{2},f)|\gg\delta^{O(1)}.$
Expanding the counting operator and taking a maximum over $y\in[M]$ gives
$\displaystyle\delta^{O(1)}NM$
$\displaystyle\ll\left|\sum_{y\in[M]}\sum_{x}f(x)\phi_{1}(x-qy^{2})\phi_{2}(x-qy^{2}+y)\right|$
$\displaystyle\leqslant
M\left|\sum_{x}f(x)\tilde{\phi}_{1}(x)\tilde{\phi}_{2}(x)\right|,$
where $\tilde{\phi}_{1}(x):=\phi_{1}(x-qy^{2})$ and $\tilde{\phi}_{2}(x):=\phi_{2}(x-qy^{2}+y)$ for the maximising value of $y\in[M]$. Since a shift of a local function is again a local function of the same resolution and modulus, both $\tilde{\phi}_{i}$ are 1-bounded local functions of resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and moduli $qq_{i}$ for some $q_{i}\ll\delta^{-O(1)}$. ∎
## 4\. A weak regularity lemma
Much of the material in this section is standard, and closely follows the expositions in Green [4] and Green–Tao [6]. To simplify later arguments, our factors will be the sets of atoms of certain $\sigma$-algebras, rather than the $\sigma$-algebras themselves as in [4] and [6] (the latter can obviously be recovered by taking the $\sigma$-algebra generated by the set of atoms).
###### Definition 4.1 (Factor).
We define a _factor_ $\mathcal{B}$ of $[N]$ to be a partition of $[N]$, so
that $[N]=\sqcup_{B\in\mathcal{B}}B$. We say that a factor
$\mathcal{B}^{\prime}$ _refines_ $\mathcal{B}$ if every element of
$\mathcal{B}$ is a union of elements of $\mathcal{B}^{\prime}$. The _join_
$\mathcal{B}_{1}\vee\dots\vee\mathcal{B}_{d}$ of factors
$\mathcal{B}_{1},\dots,\mathcal{B}_{d}$ is the factor formed by taking the
$d$-fold intersections of the elements of $\mathcal{B}_{1}$, …,
$\mathcal{B}_{d}$, that is,
$\mathcal{B}_{1}\vee\dots\vee\mathcal{B}_{d}:=\\{B_{1}\cap\dots\cap
B_{d}:B_{i}\in\mathcal{B}_{i}\text{ for }i=1,\dots,d\\}.$
###### Definition 4.2 (Measurability, projection).
Given a factor $\mathcal{B}$, we say that a function $f:[N]\to\mathbb{C}$ is
_$\mathcal{B}$ -measurable_ if it is constant on the elements of
$\mathcal{B}$.
Define the _projection_ of any function $f:[N]\to\mathbb{C}$ onto
$\mathcal{B}$ by
$\Pi_{\mathcal{B}}f(x)=\mathbb{E}_{y\in B_{x}}f(y),$ (4.1)
where $B_{x}$ is the element of $\mathcal{B}$ that contains $x$. Notice that
$\Pi_{\mathcal{B}}f$ is $\mathcal{B}$-measurable, and is just the conditional
expectation of $f$ with respect to the $\sigma$-algebra generated by the
elements of $\mathcal{B}$.
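For example, if $\mathcal{B}$ is the trivial factor $\left\\{[N]\right\\}$, then $\Pi_{\mathcal{B}}f$ is the constant function with value $\mathbb{E}_{y\in[N]}f(y)$; this is the factor used to initialise the energy increment argument in the proof of Lemma 4.6.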
We record some well-known properties of the projection operator
$\Pi_{\mathcal{B}}$ (that is, properties of conditional expectation) in the
next lemma.
###### Lemma 4.3 (Properties of the projection operator).
1. (i)
The operator $\Pi_{\mathcal{B}}$ linearly projects onto the space of
$\mathcal{B}$-measurable functions.
2. (ii)
$\Pi_{\mathcal{B}}$ is self-adjoint with respect to the inner product
$\left\langle
f,g\right\rangle:=\sum_{x}f(x)\overline{g(x)}\qquad(f,g:[N]\to\mathbb{C}),$
so that $\left\langle
f,\Pi_{\mathcal{B}}g\right\rangle=\left\langle\Pi_{\mathcal{B}}f,g\right\rangle$.
3. (iii)
If $\mathcal{B}^{\prime}$ is a refinement of $\mathcal{B}$ then
$\Pi_{\mathcal{B}^{\prime}}\Pi_{\mathcal{B}}f=\Pi_{\mathcal{B}}f.$
4. (iv)
If $\mathcal{B}^{\prime}$ refines $\mathcal{B}$ then $\Pi_{\mathcal{B}}f$ is
orthogonal to $\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f$.
###### Proof.
Inspecting the formula (4.1) reveals that $\Pi_{\mathcal{B}}$ is linear, that
$\Pi_{\mathcal{B}}f$ is constant on elements of $\mathcal{B}$, and that if $f$
itself is constant on elements of $\mathcal{B}$, then $\Pi_{\mathcal{B}}f=f$.
This establishes (i).
Interchanging the order of summation gives
$\begin{split}\left\langle
f,\Pi_{\mathcal{B}}g\right\rangle=\sum_{B\in\mathcal{B}}|B|^{-1}\sum_{x,y\in
B}f(x)\overline{g(y)}=\left\langle\Pi_{\mathcal{B}}f,g\right\rangle.\end{split}$
This proves that $\Pi_{\mathcal{B}}$ is self-adjoint.
The first refinement property follows from the fact that $\Pi_{\mathcal{B}}f$
is $\mathcal{B}^{\prime}$-measurable.
We utilise self-adjointness of $\Pi_{\mathcal{B}}$ and the first refinement
property to conclude that
$\begin{split}\left\langle\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}}f-\Pi_{\mathcal{B}^{\prime}}f\right\rangle&=\left\langle\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}}f-f\right\rangle=\left\langle
f,\Pi_{\mathcal{B}}f-\Pi_{\mathcal{B}}f\right\rangle=0.\end{split}$
∎
Now we describe the particular type of factors that will be relevant to us.
###### Definition 4.4 (Local factor).
A _simple real factor_ of resolution $M$ is a factor of $[N]$ obtained by
partitioning $\mathbb{R}$ into intervals all of length $M$.
A _simple congruence factor_ of modulus $q$ is the factor of $[N]$ obtained by
partitioning into congruence classes mod $q$.
We say that $\mathcal{B}$ is a _simple local factor_ of resolution $M$ and
modulus $q$ if it is the join of a simple real factor of resolution $M$ and a
simple congruence factor of modulus $q$. Notice that $\mathcal{B}$ is a simple
local factor if and only if it consists of the level sets of a local function
(Definition 1.3) of resolution $M$ and modulus $q$.
A _local factor_ of dimension $d$, resolution $M$ and modulus $q$ is the join
of $d$ simple local factors $\mathcal{B}_{i}$, each of resolution $M_{i}$ and
modulus $q_{i}$, where $M_{i}\geqslant M$ and
$q=\mathrm{lcm}[q_{1},\dots,q_{d}]$.
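Concretely, each atom of a simple local factor of resolution $M$ and modulus $q$ is the intersection of an interval of length $M$ with a congruence class mod $q$, that is, an arithmetic progression of common difference $q$ and length roughly $M/q$; it is this structure that is exploited at the end of the proof of Lemma 2.1.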
Local factors of large resolution and small modulus and dimension necessarily
contain few sets. This fact will be useful later in the proof of Lemma 2.1.
###### Lemma 4.5 (Size of a local factor).
If $\mathcal{B}$ is a local factor of dimension $d$, resolution $M$, and
modulus $q$, then
$|\mathcal{B}|\leqslant qd\left(\frac{N}{M}+2\right).$
###### Proof.
By the definition of a local factor, it suffices to bound the size of the join
of $d$ simple real factors, and then bound the size of the join of $d$ simple
congruence factors. The product of these two numbers gives us our final bound.
Joining $d$ simple congruence factors with moduli $q_{1},\dots,q_{d}$ results in another simple congruence factor of modulus $q=\mathrm{lcm}[q_{1},\dots,q_{d}]$. The number of parts in such a partition is $q$.
The join of $d$ simple real factors partitions $[N]$ into intervals. The upper
endpoint of each of these intervals is either equal to $N$ or is equal to an
endpoint of an interval in one of the original simple real factors. For a
simple real factor of resolution $M$, at most $1+N/M$ upper endpoints lie in
$[1,N)$. Hence the number of intervals in the join of $d$ simple real factors
of resolutions $M_{1}$, …, $M_{d}$ is at most $2d+N(M_{1}^{-1}+\dots+M_{d}^{-1})$. Since each $M_{i}\geqslant M$, this is at most $d(N/M+2)$, and multiplying the interval and congruence counts gives the claimed bound. ∎
We now prove a weak regularity lemma for the cut norm via an energy increment
argument.
###### Lemma 4.6 (Weak regularity).
Let $q\leqslant N$ be positive integers and $\delta>0$. Either
$N\ll(q/\delta)^{O(1)}$, or for any function $f:[N]\to[0,1]$ there exists a
local factor $\mathcal{B}$ of dimension $d\ll\delta^{-O(1)}$, resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$, and modulus $qq^{\prime}$ for some
$q^{\prime}\leqslant O\left(1/\delta\right)^{O(d)}$ such that
$\left\|f-\Pi_{\mathcal{B}}f\right\|_{q,N}\leqslant\delta.$ (4.2)
###### Proof.
We run an energy increment argument, initialising at stage $0$ with the
trivial factor $\mathcal{B}_{0}:=\left\\{[N]\right\\}$. Suppose that at stage
$d$ of this iteration we have a local factor $\mathcal{B}$ of resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$, dimension at most $2d$, and modulus
$qq^{\prime}$ for some $q^{\prime}\leqslant O(1/\delta)^{O(d)}$. In addition,
suppose that we have the energy lower bound
$\left\|\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}\gg d\delta^{O(1)}N.$ (4.3)
With these assumptions in place, we query if the following holds
$\left\|f-\Pi_{\mathcal{B}}f\right\|_{q,N}\leqslant\delta.$ (4.4)
If so, then the process terminates. If not, we show how our iteration may
proceed to stage $d+1$.
Applying the cut norm inverse theorem (Lemma 3.3), we conclude that there
exist 1-bounded local functions $\phi_{i}$ of resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq_{i}$ for some
$q_{i}\leqslant\delta^{-O(1)}$ such that
$\left|\left\langle
f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|=\left|\sum_{x\in[N]}(f-\Pi_{\mathcal{B}}f)(x)\phi_{1}(x)\phi_{2}(x)\right|\gg\delta^{O(1)}N.$
Let $\mathcal{B}^{\prime}$ denote the join of $\mathcal{B}$ and the simple
local factors generated by $\phi_{1}$ and $\phi_{2}$, so that
$\mathcal{B}^{\prime}$ is a local factor of dimension at most $2(d+1)$,
resolution $\gg(\delta/q)^{O(1)}N^{1/2}$ and modulus $qq^{\prime\prime}$ for
some $q^{\prime\prime}\leqslant q^{\prime}q_{1}q_{2}\leqslant
O(1/\delta)^{O(d+1)}$. Since $\phi_{1}\phi_{2}$ is
$\mathcal{B}^{\prime}$-measurable, we can use the properties listed in Lemma
4.3 together with the Cauchy–Schwarz inequality to deduce that
$\begin{split}\left|\left\langle
f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|&=\left|\left\langle
f-\Pi_{\mathcal{B}}f,\Pi_{\mathcal{B}^{\prime}}(\phi_{1}\phi_{2})\right\rangle\right|=\left|\left\langle\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f,\phi_{1}\phi_{2}\right\rangle\right|\\\
&\leqslant
N^{1/2}\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}.\end{split}$
It follows that
$\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}\gg\delta^{O(1)}N^{1/2}.$
Lemma 4.3 (iv) tells us that $\Pi_{\mathcal{B}}f$ is orthogonal to
$\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f$, hence by Pythagoras’s
theorem
$\left\|\Pi_{\mathcal{B}^{\prime}}f\right\|_{\ell^{2}}^{2}=\left\|\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}+\left\|\Pi_{\mathcal{B}^{\prime}}f-\Pi_{\mathcal{B}}f\right\|_{\ell^{2}}^{2}.$
The energy bound (4.3) follows for $\mathcal{B}^{\prime}$, allowing us to
proceed to the next stage of our iteration.
Since the function $f$ is $1$-bounded, the projection $\Pi_{\mathcal{B}}f$ is
also 1-bounded, hence the energy (4.3) is always bounded above by $N$. It
follows that this energy increment must terminate at stage $d$ for some
$d\ll\delta^{-O(1)}$, yielding the lemma.∎
## 5\. The density increment lemma
In this section we prove Lemma 2.1, modelling our argument on that given by
Green and Tao [6, Corollary 5.8]. We first record, for the sake of
convenience, the following immediate consequence of the triangle inequality.
###### Lemma 5.1 ($\ell^{1}$-control).
Suppose that $N\geqslant q$. Then for any $f_{0},f_{1},f_{2}:[N]\to\mathbb{C}$ and each $i\in\\{0,1,2\\}$ we have
$|\Lambda_{q,N}(f_{0},f_{1},f_{2})|\leqslant N^{-1}\left\|f_{i}\right\|_{\ell^{1}}\prod_{j\neq i}\left\|f_{j}\right\|_{\infty}.$
###### Proof.
We prove the result for $i=1$, the other cases being similar. A
reparametrisation gives
$\displaystyle\left|\Lambda_{q,N}(f_{0},f_{1},f_{2})\right|$
$\displaystyle=\left|\mathbb{E}_{x\in[N]}f_{1}(x)\mathbb{E}_{y\in[M]}f_{0}(x-y)f_{2}(x+qy^{2}-y)\right|$
$\displaystyle\leqslant\mathbb{E}_{x\in[N]}|f_{1}(x)|\mathbb{E}_{y\in[M]}|f_{0}(x-y)||f_{2}(x+qy^{2}-y)|.$
The inner average over $y$ is at most $\left\|f_{0}\right\|_{\infty}\left\|f_{2}\right\|_{\infty}$, while $\mathbb{E}_{x\in[N]}|f_{1}(x)|=N^{-1}\left\|f_{1}\right\|_{\ell^{1}}$, which gives the claim. ∎
We are now in a position to prove Lemma 2.1, and thereby complete our proof of
Theorem 1.1.
###### Proof of Lemma 2.1.
Let $A$ satisfy the assumptions of Lemma 2.1. Increasing $\delta$ only
strengthens our conclusion, so we may assume that $|A|=\delta N$. Since
$\Lambda_{q,N}(1_{A})=0$, we have that
$\left|\Lambda_{q,N}(1_{A})-\Lambda_{q,N}(\delta
1_{[N]})\right|=\delta^{3}\Lambda_{q,N}(1_{[N]})\gg\delta^{3}$.
Applying the weak regularity lemma (Lemma 4.6), there exists a local factor
$\mathcal{B}$ of dimension $d\ll\delta^{-O(1)}$, resolution
$\gg(\delta/q)^{O(1)}N^{1/2}$, and modulus $qq^{\prime}$ for some
$q^{\prime}\leqslant O(1/\delta)^{O(d)}$ such that
$\left\|1_{A}-\Pi_{\mathcal{B}}1_{A}\right\|_{q,N}\leqslant\tfrac{1}{6}\delta^{3}\Lambda_{q,N}({1_{[N]}}).$
Setting $f:=\Pi_{\mathcal{B}}1_{A}$, a telescoping identity thus yields
$\left|\Lambda_{q,N}(f)-\Lambda_{q,N}(\delta
1_{[N]})\right|\geqslant\tfrac{1}{2}\delta^{3}\Lambda_{q,N}({1_{[N]}})\gg\delta^{3}.$
Define the $\mathcal{B}$-measurable set
$S:=\left\\{x\in[N]:f(x)\geqslant(1+c)\delta\right\\},$
where $c>0$ is a sufficiently small absolute constant that will be chosen to
make the following argument valid. By Lemma 5.1 and a telescoping identity, we
have $\left|\Lambda_{q,N}(f)-\Lambda_{q,N}(f1_{S^{c}})\right|\leqslant
3|S|/N$, so that
$\tfrac{|S|}{N}+\left|\Lambda_{q,N}(f1_{S^{c}})-\Lambda_{q,N}(\delta
1_{[N]})\right|\gg\delta^{3}.$
Yet another telescoping identity, in conjunction with Lemma 5.1, gives
$\displaystyle\left|\Lambda_{q,N}(f1_{S^{c}})-\Lambda_{q,N}(\delta
1_{[N]})\right|$
$\displaystyle\ll\tfrac{\delta^{2}}{N}\left\|f1_{S^{c}}-\delta
1_{[N]}\right\|_{\ell^{1}}\leqslant\tfrac{\delta^{2}}{N}\left\|f-\delta
1_{[N]}\right\|_{\ell^{1}}+\tfrac{|S|}{N},$
so that
$|S|+\delta^{2}\left\|f-\delta 1_{[N]}\right\|_{\ell^{1}}\gg\delta^{3}N.$
Since $f-\delta 1_{[N]}$ has mean zero, its $\ell^{1}$-norm is equal to twice the $\ell^{1}$-norm of its positive part. The function $\left(f-\delta 1_{[N]}\right)_{+}$ is at most $c\delta$ outside $S$ and at most $1$ on $S$, whence $\left\|f-\delta 1_{[N]}\right\|_{\ell^{1}}\leqslant 2c\delta N+2|S|$; taking $c$ small enough in the preceding inequality gives $|S|\gg\delta^{3}N$. Letting $B$ denote the largest element of
$\mathcal{B}$ for which $B\subset S$, the bound in Lemma 4.5 yields
$|B|\gg q^{-O(1)}\delta^{O(d)}2^{-O(d)}N^{1/2}.$
By construction (see Definition 4.4), the set $B$ is an arithmetic progression
of common difference $qq^{\prime}$ with $q^{\prime}\leqslant
O(1/\delta)^{O(d)}$. Moreover, the density of $A$ on $B$ is equal to the value
of $f(x)$ for any $x\in B$, and this is at least $(1+c)\delta$ by the
definition of $S$. ∎
## 6\. Global control by major arc Fourier coefficients
The purpose of this section is to prove Theorem 1.2 and Corollary 1.4. We
begin with an alternative version of Lemma 3.2, replacing the rigid local
function found therein with something more continuous.
###### Definition 6.1 ($C$-Lipschitz).
We say that $\phi:\mathbb{Z}\to\mathbb{C}$ is _$C$ -Lipschitz along
$q\cdot\mathbb{Z}$_ if for any $x,y\in\mathbb{Z}$ we have
$|\phi(x+qy)-\phi(x)|\leqslant C|y|.$
Recalling our definition for the Fejér kernel (1.7), we observe that a
function of the form
$x\mapsto\sum_{h}\mu_{H}(h)f(x+qh)$ (6.1)
is Lipschitz along $q\cdot\mathbb{Z}$.
###### Lemma 6.2.
Let $q,H$ be positive integers and $f:\mathbb{Z}\to\mathbb{C}$ be 1-bounded.
If $\phi$ is defined as in (6.1), then $\phi$ is $O(H^{-1})$-Lipschitz along
$q\cdot\mathbb{Z}$.
###### Proof.
Recalling (1.7), the triangle inequalities for $|\cdot|$ and $\max\\{\cdot,0\\}$ show that $|\mu_{H}(h+y)-\mu_{H}(h)|\leqslant|y|/\left\lfloor H\right\rfloor^{2}$ for all $h,y\in\mathbb{Z}$. Hence a change of variables gives
$|\phi(x+qy)-\phi(x)|\leqslant\sum_{h}|\mu_{H}(h-y)-\mu_{H}(h)|\ll\frac{|y|}{H^{2}}\sum_{h\in(-H,H)\cup(y-H,y+H)}1.$
The final sum has at most $4H$ terms, so the right-hand side is $\ll|y|/H$, as required. ∎
Now we prove another partial cut norm inverse theorem, this time getting
correlation with functions that are Lipschitz along progressions with small
common difference.
###### Lemma 6.3 (Partial cut norm inverse theorem II).
Let $N$ be a positive integer, $\delta>0$, and $f,g,h:\mathbb{Z}\to\mathbb{C}$
be $1$-bounded functions with support in $[N]$. Suppose that
$\left|\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/2}]}f(x)g(x+y)h(x+y^{2})\right|\geqslant\delta.$
Then either $N\ll\delta^{-O(1)}$, or there exists $q\ll\delta^{-O(1)}$ and a
1-bounded function $\phi$ that is $O(\delta^{-O(1)}N^{-1/2})$-Lipschitz along
$q\cdot\mathbb{Z}$ such that
$\sum_{x\in[N]}g(x)\phi(x)\gg\delta^{O(1)}N.$
###### Proof.
Applying [9, Theorem 7.1], we obtain positive integers $q\ll\delta^{-O(1)}$
and $N^{1/2}\geqslant M\gg\delta^{O(1)}N^{1/2}$ such that
$\sum_{x}\left|\sum_{y\in[M]}g(x+qy)\right|\gg\delta^{O(1)}NM.$
By the Cauchy–Schwarz inequality and a change of variables, we have
$\sum_{x}g(x)\sum_{y_{1},y_{2}\in[M]}\overline{g(x+q(y_{1}-y_{2}))}\gg\delta^{O(1)}NM^{2}.$
Setting
$\phi(x):=\mathbb{E}_{y_{1},y_{2}\in[M]}\overline{g(x+q(y_{1}-y_{2}))},$
Lemma 6.2 shows this function has the required properties. ∎
Before proving Theorem 1.2, we record two standard facts.
###### Lemma 6.4.
There are at most $O(N^{4})$ solutions $x\in[N]^{6}$ to the equation
$x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=x_{4}^{2}+x_{5}^{2}+x_{6}^{2}.$
###### Proof.
There are a number of ways to prove this. Perhaps the most robust is via the
circle method, see [3]. The result can be read out of [1, Proposition 1.10]. ∎
###### Lemma 6.5 (Weyl’s inequality).
Let $P\subset\mathbb{Z}$ be an arithmetic progression with common difference
$q$ and let $0<\delta\leqslant 1$. Suppose that
$\left|\sum_{x\in P}e(\alpha x^{2})\right|\geqslant\delta|P|.$
Then either $|P|\ll\delta^{-O(1)}$ or there exists a positive integer
$q^{\prime}\ll\delta^{-O(1)}$ such that
$\|q^{\prime}q^{2}\alpha\|\ll\delta^{-O(1)}|P|^{-2}.$
###### Proof.
Let $P=x_{0}+q\cdot[N]$, so that our exponential sum becomes
$\sum_{x\in P}e(\alpha x^{2})=\sum_{y\in[N]}e(\alpha q^{2}y^{2}+2\alpha
qx_{0}y+\alpha x_{0}^{2}).$
Applying [5, Lemma A.11], either $N\ll\delta^{-O(1)}$ or the conclusion of our
lemma follows. ∎
###### Proof of Theorem 1.2.
Write $\Lambda_{N}$ for the counting operator $\Lambda_{1,N}$ (that is, the
average (1.6) with $q=1$). Let $f,g,h:[N]\to\mathbb{C}$ be 1-bounded functions
satisfying
$|\Lambda_{N}(f,g,h)|\geqslant\delta.$
Define the seminorm
$\left\|g\right\|:=\sup\left\\{|\Lambda_{N}(g_{1},g,g_{2})|:|g_{i}|\leqslant 1\text{ and }\mathrm{supp}(g_{i})\subset[N]\right\\}$
and the dual function
$F(x):=\mathbb{E}_{y\in[N^{1/2}]}f(x-y)h(x+y^{2}-y).$
We follow the argument in the proof of Lemma 3.3 to deduce that
$\left\|F\right\|\geqslant\delta^{2}.$
Hence, by Lemma 6.3, there exists $q\ll\delta^{-O(1)}$ and a 1-bounded
function $\phi$ that is $O(\delta^{-O(1)}N^{-1/2})$-Lipschitz along
$q\cdot\mathbb{Z}$ and satisfies
$\sum_{x\in[N]}F(x)\phi(x)\gg\delta^{O(1)}N.$
Expanding the definition of the dual function, we have
$\sum_{x\in[N]}\sum_{y\in[N^{1/2}]}f(x)\phi(x+y)h(x+y^{2})\gg\delta^{O(1)}N^{3/2}.$
Let us partition $\mathbb{Z}$ into arithmetic progressions $P$ each of common
difference $q$ and length $M$, where $M$ will be chosen shortly. For each such
arithmetic progression $P$, fix an element $y_{P}\in P$. Using the Lipschitz
property of $\phi$, for any $x\in\mathbb{Z}$ and $y\in P$ we have
$|\phi(x+y_{P})-\phi(x+y)|\ll\delta^{-O(1)}MN^{-1/2}.$
Hence,
$\left|\sum_{P}\sum_{x\in[N]}\sum_{y\in
P\cap[N^{1/2}]}f(x)[\phi(x+y)-\phi(x+y_{P})]h(x+y^{2})\right|\ll\delta^{-O(1)}MN.$
We can therefore take $M$ sufficiently small to satisfy both
$M\gg\delta^{O(1)}N^{1/2}$ and
$\left|\sum_{P}\sum_{x}\sum_{y\in
P\cap[N^{1/2}]}f(x)\phi(x+y_{P})h(x+y^{2})\right|\gg\delta^{O(1)}N^{3/2}.$
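Concretely, one admissible choice (in terms of the implied constants above) is $M:=c\delta^{C}N^{1/2}$ with $C$ a sufficiently large and $c>0$ a sufficiently small absolute constant: this satisfies $M\gg\delta^{O(1)}N^{1/2}$ while making the error term $O(\delta^{-O(1)}MN)$ at most half of the main term $\delta^{O(1)}N^{3/2}$.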
Set $f_{P}(x):=f(x)\phi(x+y_{P})$. The number of progressions $P$ that
intersect $[N^{1/2}]$ is at most $O(N^{1/2}M^{-1}+q)=O(\delta^{-O(1)})$.
Therefore, the pigeon-hole principle gives a progression $P$ for which
$\left|\sum_{x}\sum_{y\in
P\cap[N^{1/2}]}f_{P}(x)h(x+y^{2})\right|\gg\delta^{O(1)}N^{3/2}.$ (6.2)
In particular, $|P\cap[N^{1/2}]|\gg\delta^{O(1)}N^{1/2}$.
Writing $S_{P}(\alpha)$ for $\sum_{y\in P\cap[N^{1/2}]}e\left(\alpha
y^{2}\right)$, the orthogonality relations allow us to reformulate (6.2) as
$\displaystyle\left|\int_{\mathbb{T}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\gg\delta^{O(1)}N^{3/2}.$
Let $\eta>0$ be a parameter to be determined shortly, and define the major
arcs
$\mathfrak{M}:=\left\\{\alpha\in\mathbb{T}:|S_{P}(\alpha)|\geqslant\eta
N^{1/2}\right\\}.$
Parseval’s identity then gives
$\left|\int_{\mathbb{T}\setminus\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\leqslant\eta
N^{1/2}\big{\|}\hat{f}_{P}\big{\|}_{2}\big{\|}\hat{h}\big{\|}_{2}\leqslant\eta
N^{3/2}.$
Hence we may take $\eta\gg\delta^{O(1)}$ and ensure that
$\displaystyle\left|\int_{\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\gg\delta^{O(1)}N^{3/2}.$
By Lemma 6.4 and orthogonality, we have $\left\|S_{P}\right\|_{6}\ll N^{1/3}$.
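Indeed, by orthogonality $\left\|S_{P}\right\|_{6}^{6}$ is the number of solutions to the equation of Lemma 6.4 with all six variables in $P\cap[N^{1/2}]$, so applying that lemma with $N^{1/2}$ in place of $N$ gives
$\left\|S_{P}\right\|_{6}^{6}\ll(N^{1/2})^{4}=N^{2}.$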
Thus, by Hölder’s inequality, we get that
$\left|\int_{\mathfrak{M}}\hat{f}_{P}(\alpha)\hat{h}(-\alpha)S_{P}(\alpha)d\alpha\right|\leqslant\big{\|}\hat{f}_{P}\big{\|}_{2}\big{\|}\hat{h}\big{\|}_{2}^{2/3}\big{\|}S_{P}\big{\|}_{6}\sup_{\alpha\in\mathfrak{M}}\bigl{|}\hat{h}(-\alpha)\bigr{|}^{1/3}.$
We therefore deduce that there exists $\alpha\in\mathfrak{M}$ such that
$\bigl{|}\hat{h}(-\alpha)\bigr{|}\gg\delta^{O(1)}N.$
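The power count behind this deduction: by Parseval we have $\big{\|}\hat{f}_{P}\big{\|}_{2},\big{\|}\hat{h}\big{\|}_{2}\leqslant N^{1/2}$, so the previous two displays combine to give
$\delta^{O(1)}N^{3/2}\ll N^{1/2}\cdot N^{1/3}\cdot N^{1/3}\cdot\sup_{\alpha\in\mathfrak{M}}\bigl{|}\hat{h}(-\alpha)\bigr{|}^{1/3}=N^{7/6}\sup_{\alpha\in\mathfrak{M}}\bigl{|}\hat{h}(-\alpha)\bigr{|}^{1/3},$
and cubing yields the stated lower bound.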
Finally, an application of Weyl’s inequality (Lemma 6.5) shows that if
$-\alpha\in\mathfrak{M}$ then $\alpha$ has the required Diophantine
approximation property. ∎
###### Proof of Corollary 1.4.
Let $\alpha\in\mathbb{R}$ be the frequency and $q$ the positive integer
provided by Theorem 1.2. For any integer $a$ and positive integer $M$, if
$x,y\in a+q\cdot[M]$, then
$\left|e(\alpha x)-e(\alpha y)\right|\leqslant
2\pi\left\|\alpha(x-y)\right\|\ll\delta^{-O(1)}MN^{-1}.$
Partitioning $\mathbb{Z}$ into arithmetic progressions of common difference
$q$ and length $M$ then gives
$\delta^{O(1)}N\ll\sum_{P}\Bigl{|}\sum_{x\in P}h(x)\Bigr{|}+\delta^{-O(1)}M.$
We thus take $M\gg\delta^{O(1)}N$ sufficiently small to ensure that
$\delta^{O(1)}N\ll\sum_{P}\Bigl{|}\sum_{x\in P}h(x)\Bigr{|}.$
Write $\theta_{P}$ for the conjugate phase of the inner sum. Then the map
$x\mapsto\sum_{P}\theta_{P}1_{P}(x)$ is a local function of resolution
$\gg\delta^{O(1)}N$ and modulus $\ll\delta^{-O(1)}$, yielding the corollary. ∎
## 7\. Longer progressions
As mentioned in §1.3, the main obstacle to generalising our polylogarithmic
bound to longer configurations such as (1.2) is in obtaining an appropriate
generalisation of Lemma 3.3; in particular, showing that if the relevant
counting operator is large, then _all_ functions must correlate with a product
of a bounded number of local functions.
Let us demonstrate where the argument breaks down for $m>2$. Given polynomials
as in (1.2) and 1-bounded functions
$f_{0},f_{1},\dots,f_{m}:[N]\to\mathbb{C}$, define the counting operator
$\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m}):=\\\
\mathbb{E}_{x\in[N]}\mathbb{E}_{y\in[N^{1/\deg
P_{m}}]}f_{0}(x)f_{1}(x+P_{1}(y))\dotsm f_{m}(x+P_{m}(y)).$
Using the main technical result of [8], namely [8, Theorem 3.3], one can show that if
$\left|\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m})\right|\geqslant\delta,$
then both $f_{0}$ and $f_{1}$ correlate with local functions $\phi_{0}$ and
$\phi_{1}$. Combining this with a dual function argument, as in our proofs of
Theorem 1.2 and Lemma 3.3, one may conclude that
$\left|\Lambda_{P_{1},\dots,P_{m}}^{N}(\phi_{0},\phi_{1},f_{2},\dots,f_{m})\right|\gg\delta^{O(1)}.$
If $m=2$, one can then pigeon-hole in the smaller $y$ variable appearing in
the counting operator (as we do in the proof of Lemma 3.3) to conclude that
$f_{2}$ correlates with a product of two local functions. It is this simple
pigeon-holing argument that fails when $m>2$.
### 7.1. An alternative strategy for longer progressions
A more productive strategy is to follow our proof of Theorem 1.2 instead of
Theorem 1.1. In proving Theorem 1.2 we replace the counting operator
$\Lambda_{y,y^{2}}^{N}(f_{0},f_{1},f_{2})$ with
$\Lambda_{y,y^{2}}^{N}(f_{0},\phi,f_{2})$, where $\phi$ is a local function
that is constant on progressions of length $\approx N^{1/2}$ with common
difference of size $\approx O(1)$. Provided that we pass to appropriate
subprogressions in all of the variables appearing in our counting operator, we
can exploit the properties of this local function and ‘remove’ it from our
count. In effect (after passing to subprogressions of bounded common
difference), we replace the count $\Lambda_{y,y^{2}}^{N}(f_{0},f_{1},f_{2})$
with one of the form $\Lambda_{Q}^{N^{\prime}}(f_{0},f_{2})$, where $Q$ is a
quadratic polynomial and $N^{\prime}$ is slightly smaller than $N$.
Generalising this approach, one can use [8, Theorem 3.3] to replace the
counting operator $\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},f_{1},\dots,f_{m})$
with $\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},\phi,f_{2},\dots,f_{m})$, where
$\phi$ is a local function. Provided that this local function has resolution
$\gg N^{\deg P_{1}/\deg P_{m}}$ and modulus $q\ll 1$, we have
$\phi(x+P_{1}(y))\approx\phi(x)$
for any $x\in\mathbb{Z}$ and any $y$ constrained to a subprogression of common
difference $q$ and length $\approx N^{\deg P_{1}/\deg P_{m}}$. Passing to
subprogressions in $x$ and $y$, one should then be able to replace the
operator
$\Lambda_{P_{1},\dots,P_{m}}^{N}(f_{0},\phi,f_{2},\dots,f_{m})$
by one of the form
$\Lambda_{Q_{2},\dots,Q_{m}}^{N^{\prime}}(f_{0},f_{2},\dots,f_{m}).$
Applying induction on $m$ may then allow one to show that every function in
the original counting operator correlates with a local function.
The main impediment to carrying out this strategy is that the polynomials
$Q_{2}$, …, $Q_{m}$, which arise on passing to a subprogression, may not
satisfy the hypotheses required to reapply [8, Theorem 3.3]. It is likely that
the polynomials are sufficiently well-behaved for the arguments of [8] to
remain valid, but we leave this verification to the energetic reader.
## References
* Bou [89] J. Bourgain. On $\Lambda(p)$-subsets of squares. Israel J. Math., 67(3):291–311, 1989.
* Bou [99] J. Bourgain. On triples in arithmetic progression. Geom. Funct. Anal., 9(5):968–984, 1999.
* Dav [05] H. Davenport. Analytic methods for Diophantine equations and Diophantine inequalities. Cambridge Mathematical Library. Cambridge University Press, Cambridge, second edition, 2005. With a foreword by R. C. Vaughan, D. R. Heath-Brown and D. E. Freeman, Edited and prepared for publication by T. D. Browning.
* Gre [07] B. Green. Montréal notes on quadratic Fourier analysis. In Additive combinatorics, volume 43 of CRM Proc. Lecture Notes, pages 69–102. Amer. Math. Soc., Providence, RI, 2007.
* GT [08] B. Green and T. Tao. Quadratic uniformity of the Möbius function. Ann. Inst. Fourier (Grenoble), 58(6):1863–1935, 2008.
* GT [09] B. Green and T. Tao. New bounds for Szemerédi’s theorem. II. A new bound for $r_{4}(N)$. In Analytic number theory, pages 180–204. Cambridge Univ. Press, Cambridge, 2009.
* HB [87] D. R. Heath-Brown. Integer sets containing no arithmetic progressions. J. London Math. Soc. (2), 35(3):385–394, 1987.
* Pel [19] S. Peluse. Bounds for sets with no polynomial progressions. ArXiv e-prints, 2019.
* PP [19] S. Peluse and S. Prendiville. Quantitative bounds in the non-linear Roth theorem. ArXiv e-prints, 2019.
* Pre [20] S. Prendiville. The inverse theorem for the nonlinear Roth configuration: an exposition. ArXiv e-prints, 2020.
* Rot [53] K. F. Roth. On certain sets of integers. J. London Math. Soc., 28:104–109, 1953.
* Sár [78] A. Sárközy. On difference sets of sequences of integers. I. Acta Math. Acad. Sci. Hungar., 31(1–2):125–149, 1978.
* Sze [90] E. Szemerédi. Integer sets containing no arithmetic progressions. Acta Math. Hungar., 56(1-2):155–158, 1990.
* Tao [06] T. Tao. Obstructions to uniformity and arithmetic patterns in the primes. Pure Appl. Math. Q., 2(2, Special Issue: In honor of John H. Coates. Part 2):395–433, 2006.
Mode-locked ultrashort pulses from an 8 µm wavelength semiconductor laser
Johannes Hillbrand1,2,∗, Nikola Opačak1, Marco Piccardo2,3, Harald Schneider4,
Gottfried Strasser1, Federico Capasso2, Benedikt Schwarz1,2,†
1Institute of Solid State Electronics, TU Wien, Gußhausstraße 25, 1040 Vienna,
Austria
2Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard
University, Cambridge, USA
3CNST – Fondazione Istituto Italiano di Tecnologia, Via Pascoli 70/3, 20133
Milano, Italy
4Institute of Ion Beam Physics and Materials Research, Helmholtz-Zentrum
Dresden-Rossendorf, Germany
Quantum cascade lasers (QCL) have revolutionized the generation of mid-
infrared light. Yet, the ultrafast carrier transport in mid-infrared QCLs has
so far constituted a seemingly insurmountable obstacle for the formation of
ultrashort light pulses. Here, we demonstrate that careful quantum design of
the gain medium and control over the intermode beat synchronization enable
transform-limited picosecond pulses from QCL frequency combs. Both an
interferometric radio-frequency technique and second-order autocorrelation
shed light on the pulse dynamics and confirm that mode-locked operation is
achieved from threshold to rollover current. Being electrically pumped and
compact, mode-locked QCLs pave the way towards monolithically integrated non-
linear photonics in the molecular fingerprint region beyond 6 µm wavelength.
<EMAIL_ADDRESS><EMAIL_ADDRESS>
The discovery of ultrashort light pulses has led to numerous breakthroughs in science and technology, including frequency combs [1], high-speed optical telecommunication [2] and refractive surgery in ophthalmology [3]. Nowadays, optical pulses are routinely generated in mode-locked lasers operating in the visible or near-infrared range [4, 5]. Currently, large efforts are aimed at bringing ultrafast laser science in the mid-infrared (MIR) region to a similarly high degree of maturity [6]. Due to the lack of suitable gain media, methods for the generation of pulses in the molecular fingerprint region beyond 5 µm wavelength have so far relied on non-linear downconversion of near-infrared pulses [7]. Established techniques such as optical parametric oscillators [8] or difference frequency generation [9, 10] either require sophisticated optical setups with tabletop dimensions or are restricted to mW-level output power.
Quantum cascade lasers [11] (QCL) have matured to become the dominant MIR laser source. While being microchip-sized and electrically pumped, they are capable of producing Watt-level average power [12, 13]. Quantum engineering of the active region makes it possible to tailor the emission wavelength throughout the entire mid-infrared region. Hence, harnessing high-performance QCL technology for the generation of MIR pulses represents a long-sought milestone in ultrafast laser science. Mode-locked QCLs could serve as monolithic pump lasers for microresonators and resonant supercontinuum generation [14], paving the way towards broadband and high-brightness frequency combs. So far, the sub-picosecond carrier transport in QCL active regions has constituted a seemingly insurmountable obstacle for the formation of short light pulses [15, 16, 17]. To date, the only successful attempt at mode-locking in monolithic MIR QCLs was observed using a specially designed active region with a strongly enhanced upper-state lifetime of the lasing transition [17]. However, the necessary design modifications limited mode-locked operation to cryogenic temperatures and peak power below 10 mW, thus impeding their practical use.
Figure 1: Bi-functional quantum cascade lasers for mode-locking. a: Scanning
electron microscope image of three adjacent laser ridges. Each laser consists
of a roughly 3 mm long gain section and a shorter (320-480 µm) modulation
section. b: Simulated gain and loss spectrum in a standard active region design [18] depending on the applied bias. Upon decreasing the bias, the structure becomes almost transparent at the lasing wavelength $\lambda_{L}$, limiting the maximally achievable modulation depth. c: Simulated gain and loss spectrum in a bi-functional active region design [12], in which the gain at 10 V can be tuned continuously to absorption (shown as negative gain) at 0 V. d:
Measured light-current-voltage (L-I-V) characteristics of an epi-up mounted
bi-functional QCL at 15$\,{}^{\circ}$C. e: Illustration of a system of coupled
oscillators. This system exhibits an in-phase and an anti-phase synchronization state, which oscillate at different frequencies depending on the coupling.
Without external stimulation, the anti-phase state is more favorable due to
the damped coupling. However, both synchronization frequencies can be probed
by exerting mechanical force on the platform coupling the oscillators. In the
QCL, the oscillators are represented by the intermode beatings, which tend to
synchronize in anti-phase due to gain damping [19, 20]. Both synchronization
frequencies are probed by applying modulation to the laser. f: Average optical
power depending on the modulation frequency and power. Two synchronization
states at $f_{\mathrm{rep}}^{0}$ and 60 MHz above are observed. g: Signal of a
2-QWIP sensitive to peak power as a function of modulation frequency and power.
The strongly increased signal of the lobe at $f_{\mathrm{rep}}^{0}$+60 MHz
indicates in-phase synchronization.
In this work, we demonstrate the generation of transform-limited picosecond
pulses in high-performance 8 µm wavelength QCLs at room temperature both
experimentally and theoretically. Mode-locking is achieved by electrically
modulating the intracavity loss using a short modulation section designed for
efficient radio-frequency (RF) injection (Fig. 1a). In order to achieve the
large modulation depth required for stable mode-locking, close attention has
to be paid to the band structure of the QCL active region. This effect is
illustrated in Figs. 1b,c. As the bias applied to a standard QCL structure is
decreased, it does not switch to absorption at the lasing wavelength, but
becomes nearly transparent for the intracavity light due to a bias-dependent
shift of the electronic levels, known as the Stark effect (Fig. 1b). Hence, the
modulation depth is severely limited in standard QCL designs. For this reason,
we employ a bi-functional active region whose lasing wavelength and absorption
wavelength at zero bias were matched to each other [21, 12] (Fig. 1c). This strategy allows us to overcome the aforementioned limitations of the modulation
depth caused by the Stark shift. Most importantly, the bi-functional design
shows excellent overall performance, which is competitive with other state-of-
the-art designs. A 3.5 mm long device mounted epitaxial-side up emits more
than 130 mW average power in continuous wave at room temperature (Fig. 1d).
As a first step towards pulse generation, it is essential to determine the
optimal modulation frequency. For this purpose, mode-locking can be seen as
synchronization of coupled oscillators [20] (Fig. 1e). Each pair of neighboring
cavity modes creates a beating at their difference frequency, which is equal
to the cavity roundtrip frequency. These beatings can be seen as oscillators
coupled by the non-linearity of the gain medium. Thanks to this coupling, the
cavity modes of a free-running QCL can be locked together without modulation,
thus giving rise to a self-starting frequency comb [19]. Yet, this kind of
frequency comb does not emit isolated pulses, but rather a quasicontinuous
wave accompanied by a strong linear frequency chirp [22]. This corresponds to
anti-phase synchronization and will be called frequency modulated (FM) comb in
the following. In contrast, in-phase synchronization of the intermode beatings
leads to the formation of short pulses. It is well known from coupled
oscillators that the in-phase and anti-phase states synchronize at different
frequencies depending on the coupling. As a consequence, while the cavity
roundtrip frequency of the FM QCL comb $f_{\mathrm{rep}}^{0}$ may seem like a
reasonable choice, we expect the optimal modulation frequency for generating
pulses to differ from $f_{\mathrm{rep}}^{0}$.
In order to investigate these two synchronization states experimentally, we
start by operating the laser well above its threshold current. Subsequently,
the DC bias of the modulation section is decreased to 2.8 V, where the large
absorption caused by the bi-functional design (Fig. 1c) brings the QCL just
slightly below lasing threshold. In these conditions, modulation at the right
frequencies can provide enough additional gain to reach threshold. Fig. 1f
shows the laser power depending on modulation frequency and power. At 33 dBm
modulation power, the QCL reaches threshold when modulating close to
$f_{\mathrm{rep}}^{0}$. Strikingly, a second modulation frequency where lasing
occurs is observed almost 60 MHz higher than $f_{\mathrm{rep}}^{0}$, as
predicted by the picture of synchronized oscillators. Both the range around
the two synchronization frequencies, where lasing is observed, as well as the
optical power grow upon increasing the modulation power.
Figure 2: Mode-locked pulses from an 8 µm wavelength QCL. a: SWIFTS
characterization of the QCL operated close to lasing threshold. The laser is
modulated at the in-phase synchronization frequency and at 37 dBm power level.
b: Reconstructed time-domain signal of the QCL, showing a train of transform-limited pulses with 6.5 ps FWHM. c: Simulation of the QCL using the coherent master equation described in supp. section 1. d: Interferometric
autocorrelation (IAC) of the QCL pulses close to threshold. Red dots: envelope
of the IAC reconstructed using SWIFTS. e: IAC at higher current. f: IAC at the
rollover current, still displaying the 8:1 ratio. The second burst at a delay
equal to the cavity roundtrip time is due to the interference of subsequently
emitted pulses. Its peak value of 8 provides another proof for the coherence
of the pulses because phase-decoherence would smear out the fringes of the IAC
and thus decrease the peak value to smaller than 8. Inset: zoom on the
interferometric fringes.
Even more insight is provided by using a two-photon quantum well infrared
photodetector (2-QWIP) to detect the emitted light. The signal of the 2-QWIP
is proportional to the square of the intensity. This makes it possible to identify which
modulation frequency leads to in-phase and which to anti-phase synchronization
(Fig. 1g). Again, two lobes appear around $f_{\mathrm{rep}}^{0}$ and 60 MHz
above. Yet, the 2-QWIP signal is more than an order of magnitude larger in the
lobe at higher $f_{\mathrm{mod}}$. At this frequency, the laser operates in
the in-phase synchronization regime and emits intense pulses, which leads to a
strongly increased 2-QWIP signal.
Figure 3: Synchronization under strong modulation. a: Schematic of a 3-section
QCL comprised of modulation, gain and high-speed detector sections. b: First
three harmonics of the beatnote of the free-running 7 mm long QCL FM comb
(red) compared to the actively mode-locked QCL (AM comb, blue). c: Laser
beatnote while free-running (bottom), at
$f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}$ (middle) and at
$f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}{+}33\,$MHz (top). While a broad
pedestal is visible for $f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}$, the
beatnote is perfectly locked for
$f_{\mathrm{mod}}{=}f_{\mathrm{rep}}^{0}{+}33\,$MHz. d: RF spectrum around
$f^{0}_{\mathrm{rep}}{=}$6.196 GHz as the modulation frequency is varied
around $f^{0}_{\mathrm{rep}}$. The phase-noise of the RF spectrum disappears
abruptly at $f_{\mathrm{mod}}{\approx}f_{\mathrm{rep}}^{0}{+}20\,$MHz,
corresponding to in-phase synchronization. Here, the beatnote consists of a
single narrow peak, indicating that the laser is phase-locked to the
modulation. Furthermore, the sharp sidepeaks visible at
$f_{\mathrm{mod}}{=}6.18\,$GHz are attributed to a periodic modulation of the
QCL output, as previously observed in simulations [16].
In order to unequivocally prove mode-locking, we employ two independent
methods to characterize the pulse dynamics at three points of operation from
threshold up to the rollover current. Firstly, an interferometric RF technique
called 'Shifted wave interference Fourier transform spectroscopy' (SWIFTS) [23, 24] is used to measure the phases of the QCL spectrum (details in the Methods section). This information not only enables the reconstruction of the temporal waveform, but also allows us to assess the phase-coherence of the pulses and
whether they form a frequency comb. Secondly, we measure the interferometric
autocorrelation (IAC) of the pulses using the 2-QWIP, which constitutes an
additional well-established proof for mode-locking and the pulse width. Fig.
2a shows the SWIFTS characterization of the QCL operated close to threshold.
In contrast to the free-running laser, the intensity spectrum consists of a
single Gaussian-shaped lobe. The SWIFTS spectrum represents the part of the
intensity spectrum which is beating exactly at the modulation frequency. Since
the SWIFTS spectrum has the same shape as the intensity spectrum over its
entire span, the QCL generates a frequency comb whose repetition frequency is
given by the modulation frequency. The intermodal difference phases
$\Delta\phi$, which correspond to the spectral group delay, are synchronized
almost perfectly in-phase. Hence, all parts of the spectrum have the same
group delay and form a pulse. Indeed, the reconstructed waveform in Fig. 2b
shows the emission of pulses as short as 6.5 ps. The full-width-half-maximum (FWHM) of the reconstructed pulses is given by the transform limit of the spectrum in Fig. 2a, indicating that there is negligible chirp in the pulses. In these conditions, the peak power reaches almost 250 mW, an enhancement by a factor of more than 12 over the average power of 20 mW. In order
to model the cavity dynamics, we use a fully coherent master equation [25] (supp.
section 1). This single equation for the complex field replaces the entire
Maxwell-Bloch system and reliably predicts the spectral shape, phase
relationship and pulse width observed experimentally (Fig. 2c). Furthermore, it enables analyses that are not accessible experimentally, e.g. of the influence of dispersion and nonlinearities (supp. Figs. 2,3,7).
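As a sketch of how the time-domain trace follows from the SWIFTS data (the notation here is ours, not taken from the paper: $A_{n}$ denotes the measured amplitude of the $n$-th comb mode and $\phi_{n}$ its phase, obtained by accumulating the intermodal difference phases $\Delta\phi$), the reconstructed field envelope is, up to normalisation, the Fourier sum
$E(t)\propto\sum_{n}A_{n}\,e^{i(2\pi nf_{\mathrm{mod}}t+\phi_{n})},$
whose squared modulus gives the pulse train shown in Fig. 2b.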
The IAC close to threshold (Fig. 2d) shows a prominent peak at zero path
difference caused by constructive interference of the pulses after the
Michelson interferometer. The ratio of this peak to the background at a delay
larger than the pulse width is 8:1, which is generally regarded as the smoking
gun of mode-locked pulses. Encouragingly, the measured IAC is in excellent
agreement with the expected IAC, which was calculated using the pulses
obtained by SWIFTS (red dots in Fig. 2d). This confirms successful mode-
locking and the retrieved pulse width.
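For reference, the standard textbook form of the IAC recorded by a two-photon detector behind a Michelson interferometer is, up to normalisation,
$G(\tau)\propto\int\bigl{|}\left[E(t)+E(t-\tau)\right]^{2}\bigr{|}^{2}\,\mathrm{d}t,$
which evaluates to $16\int|E(t)|^{4}\,\mathrm{d}t$ at $\tau=0$ and to $2\int|E(t)|^{4}\,\mathrm{d}t$ for delays far exceeding the pulse width, giving precisely the 8:1 peak-to-background ratio quoted above.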
The generation of pulses becomes increasingly challenging at higher gain
current. Due to gain saturation, the wings of a pulse experience more gain
than the peak. This effect leads to pulse broadening and can destabilize mode-
locking. Fortunately, the large modulation depth provided by the bi-functional
quantum design enables mode-locking over the entire lasing range from
threshold to rollover. The IAC traces at 3.7 kA/cm2 and at the rollover
current still show the required peak-to-background ratio of 8:1. The pulses at
rollover are slightly broadened to roughly 12 ps, which is attributed
partially to a slight chirp and partially to the gain saturation effect
mentioned above (supp. Fig. 6). Yet, the average power is greatly increased to
62 mW, which results in over 430 mW peak power and 5 pJ pulse energy - more
than an order of magnitude higher than recent reports of comparable mid-
infrared semiconductor lasers emitting at shorter wavelengths [26, 27].
Another fascinating aspect of bi-functional quantum design is the possibility
to monolithically integrate ultrafast photodetectors. While this is
particularly important in applications such as photonic integrated circuits,
it also provides a tool to measure the beatnote with very large signal-to-
noise ratio directly on the chip (Fig. 3a). This provides crucial information
about the type of synchronization state and about its stability. Fig. 3b shows
the first three harmonics of the beatnote in the free-running and the actively
mode-locked regime. In the latter conditions, the beatnote amplitudes increase
by 19 dB due to the much larger amplitude modulation. The zoom on the first
harmonic beatnote (Fig. 3c) allows us to assess the phase-coherence and stability
of the frequency comb. The free-running QCL operates in the anti-phase state, showing a weak beatnote at $f_{\mathrm{rep}}^{0}$. Previous work [28, 29, 30] has shown that a weak electrical modulation can be used to lock and
stabilize the beatnote of the anti-phase state. However, the situation is very
different when applying strong modulation at $f_{\mathrm{rep}}^{0}$. In this
case, the modulation enforces an AM waveform, which is contrary to the natural
anti-phase behaviour of the laser. As a result, the beatnote of this waveform
is not phase-locked, as indicated by the pedestal around $f_{\mathrm{mod}}$.
The situation changes completely when the modulation frequency is tuned to the synchronization frequency of the in-phase state ($f_{\mathrm{rep}}^{0}{+}33\,$MHz). There, the strong modulation is consistent with the natural behavior of the laser, leading to a phase-locked
frequency comb with narrow beatnote. This can also be seen in Fig. 3d, which
shows the laser beatnote while tuning the modulation frequency across the in-
phase and anti-phase synchronization frequencies. While the frequency of the
beatnote is controlled by the modulation over the entire span, a phase-locked
comb is only generated around the in-phase synchronization frequency.
In conclusion, our experiments provide unambiguous proof for the generation of
mode-locked pulses in high-performance QCLs at room temperature - a goal which
had remained elusive since the invention of the QCL - and confirm striking
similarities to synchronization in coupled oscillators. These mode-locked QCLs
constitute the first compact and electrically pumped source for ultrashort
pulses beyond 5 µm wavelength, demonstrating that they are a highly promising
technology for ultrafast laser science in the long-wave infrared region. The
availability of such a source paves the way towards a semiconductor-based
platform for non-linear photonics, potentially enabling broadband mid-infrared
frequency combs and supercontinuum generation.
## Methods
QCLs optimized for RF modulation: The QCLs are processed as buried
heterostructures. The width of the QCL ridges is 12 µm and the facets are left
as cleaved. The area of the top contact of the modulation section is minimized
to decrease its parasitic capacitance, which increases the RF injection
efficiency. Ground contacts for the modulation are provided by etching through
the Fe-doped InP layer next to the laser ridges. The modulation signal is
provided by an HP8341B synthesized sweeper, amplified up to roughly 5 W and
injected via coplanar tips. The insertion loss at 12 GHz is 14 dB including
cables and bias-tee.
SWIFTS and IAC: The light emitted by the QCL is passed through a Bruker Vertex
70v FTIR spectrometer. In order to perform SWIFTS, the light is then detected
by a home-built fast QWIP at the exit of the FTIR. The optical beating
obtained from the QWIP is subsequently amplified and mixed down to roughly 10
MHz using a local oscillator. A Zurich Instruments HF2LI lock-in amplifier and
the trigger of the FTIR are used to record the SWIFTS and intensity
interferograms in rapid scan mode. The IAC is obtained by detecting the pulses
at the exit of the FTIR using the 2-QWIP and recording its photocurrent
depending on the path delay of the FTIR.
## Acknowledgements
This work was supported by the Austrian Science Fund (FWF) in the framework of
“Building Solids for Function” (Project W1243), the projects “NanoPlas”
(P28914-N27) and “NextLite” (F4909-N23).
## Author contributions
J.H. processed the QCLs and carried out the experiments. B.S. and J.H. built
up the SWIFTS and IAC setups. N.O. and B.S. developed the simulation tool.
M.P. carried out the temporal reconstruction using the IAC data. H.S. provided
the 2-QWIP. J.H. wrote the manuscript with editorial input from N.O., B.S.,
G.S. and F.C. All authors analysed the results and commented on the paper.
# Frequency and duration of low-wind-power events in Germany
Nils Ohlendorf, Wolf-Peter Schill
Mercator Research Institute on Global Commons and Climate Change (MCC), EUREF Campus 19, Torgauer Straße 12-15, 10829 Berlin, Germany
German Institute for Economic Research (DIW Berlin), Mohrenstrasse 58, 10117 Berlin, Germany
Energy Transition Hub, Climate & Energy College, The University of Melbourne
###### Abstract
In the transition to a renewable energy system, the occurrence of low-wind-
power events receives increasing attention. We analyze the frequency and
duration of such events for onshore wind power in Germany, based on 40 years
of reanalysis data and open software. We find that low-wind-power events are
less frequent in winter than in summer, but the maximum duration is
distributed more evenly between months. While short events are frequent, very
long events are much rarer. Every year, a period of around five consecutive
days with an average wind capacity factor below 10% occurs, and every ten
years a respective period of nearly eight days. These durations decrease if
only winter months are considered. The longest event in the data lasts nearly
ten days. We conclude that public concerns about low-wind-power events in
winter may be overrated, but recommend that modeling studies consider multiple
weather years to properly account for such events.
###### keywords:
Wind power; Low-wind-power events; Reanalysis data;
## 1 Introduction
The Paris Agreement calls for an extensive decarbonization of the global
economy. A major strategy for achieving this goal is a massive expansion of
variable renewable energy sources, in particular solar photovoltaics (PV) and
wind power [de Coninck et al., 2018]. While power generation from solar PV
largely follows diurnal and seasonal cycles with annually repeating patterns,
wind power is subject to more irregular inter-annual as well as intra-annual
variations which are relevant from a security of supply perspective. In
countries with growing shares of wind power, the occurrence of low-wind-power
(LWP) events thus receives increasing attention.
This is particularly true in Germany. In the context of its energy transition,
Germany is one of the global front-runners in wind power deployment. In 2018,
a total capacity of 52.5 GW of onshore wind power was installed in Germany,
generating 90.5 TWh of electricity. This corresponds to 15% of German gross
electricity consumption [BMWi, 2019]. Given the government’s targets to expand
the share of renewables in electricity consumption to 65% by 2030 and at least
80% by 2050 [Bundesregierung, 2019], the dependence of the German energy
system on wind power is set to increase strongly in the future. Concerns about
LWP events have been discussed in German media [Wetzel, 2017, 2019] and in the
German parliament [Deutscher Bundestag, 2019a], and LWP events are also
mentioned in the government’s energy transition reporting [Deutscher
Bundestag, 2019b]. In this context, the term Dunkelflaute is increasingly
used. It refers to a persistent situation with very low power generation from
wind and solar PV, which would be especially challenging in the German winter
season where PV availability is low and electric load has its peak. Yet no
clear definition of this concept has been provided so far [Wissenschaftliche
Dienste, 2019], and quantitative evidence on the frequency and duration of
such events is missing. In Table $15$ of Deutscher Bundestag [2019b], an
independent expert commission generally assumes a no-wind-no-solar period of
two weeks.
Yet research on LWP events is sparse so far. In this paper, we contribute to
filling this gap, focusing on onshore wind power in Germany. We provide an in-
depth analysis of the frequency, duration, and magnitude of LWP events, making
use of reanalysis data for 40 full years (1980 to 2019) and power curves of
recently installed wind turbines. In doing so, we propose two definitions of
LWP events and investigate three different thresholds of capacity factors (2%,
5% and 10%). We also compare the spatial distributions of the most persistent
LWP event and the mean electricity generation. Parts of our analysis
explicitly focus on winter months: these are particularly relevant, as power
generation from solar PV is relatively low during this season, while the
German peak load also occurs in winter. In order to allow for the highest
degree of transparency and reproducibility, we provide the source code of our
analysis under a permissive open-source license [Ohlendorf, 2020].
There are only few dedicated analyses on the frequency and duration of LWP
events. Early contributions address reliability aspects of spatially dispersed
wind power in California [Kahn, 1979] or in the midwestern United States
[Archer and Jacobson, 2007]. Analyses explicitly focusing on LWP events only
recently emerged. Yet these differ from our work, amongst other factors, with
respect to geographical and temporal coverage, data sources used, and
methodologies applied. In particular, previous low-wind analyses mostly draw
on local measurement data and either evaluate wind speeds [Leahy and McKeogh,
2013, Patlakas et al., 2017] or wind power [Handschy et al., 2017, Kruyt et
al., 2017]. Leahy and McKeogh [2013] and Patlakas et al. [2017] investigate
low-wind events for Ireland and the North Sea area, respectively. Both studies
firstly evaluate low-wind events that are constantly below a given wind speed
threshold, and secondly determine annual minimum moving average wind speeds
for given durations, using extreme value distributions. Kruyt et al. [2017]
and Handschy et al. [2017] go one step further and calculate respective power
generation from wind speeds for Switzerland and the United States, using a
power curve. While the findings of these studies are necessarily idiosyncratic
to the specific geographical applications, some common findings emerge. First,
low-wind events are less frequent and less persistent if more, and spatially
more dispersed, measurement stations are used. Second, there are generally
fewer events in winter than in summer.
The measurement-based analyses face challenges related to their data sources.
In general, studies that draw on measured wind speeds are spatially biased,
have low measurement densities, and extrapolation from measurement height to
hub height is challenging because of distorting effects of terrain, elevations
or buildings [Sharp et al., 2015]. Measurement data may further be subject to
inconsistencies caused by changing equipment and measurement errors. Extreme
event analyses further require consistent measurements over large time periods
to sufficiently capture climatic variations.
These issues can be addressed by using long-term meteorological reanalysis
data. Such data is increasingly applied for onshore wind energy modelling.
Several studies focus on data accuracy and on validating models of wind power
generation [Decker et al., 2012, Sharp et al., 2015, Olauson and Bergkvist,
2015, Rose and Apt, 2015, Staffell and Pfenninger, 2016, González-Aparicio et
al., 2017, Germer and Kleidon, 2019]. Other analyses deal with variability
aspects of wind power, but do not focus on extreme low-wind events. For
example, Grams et al. [2017] explain longer-term fluctuations in European wind
power generation with different types of weather regimes, based on MERRA-2
data. With similar approaches, Collins et al. [2018] investigate inter-annual
variations of European wind and solar power, and Santos-Alamillos et al.
[2017] explore optimal allocations of renewable generation capacity in a
European super grid. For the contiguous U.S. states, Shaner et al. [2018]
investigate the reliability of future power systems dominated by wind and/or
solar PV, and Kumler et al. [2019] explore inter-annual renewable variability
for Texas. Yet none of these studies explicitly focuses on the frequency and
duration of extreme low-wind-power events.
A notable reanalysis study that does focus on extreme wind events is conducted
by Cannon et al. [2015] for Great Britain. Using 33 years of MERRA as well as
ERA-Interim data, the authors conclude that the frequency and duration of low-
wind-power events can be approximated by a Poisson-like process. Weber et al.
[2019] also use ERA-Interim data for a superstatistical analysis of extreme
wind power events at nine specific European sites, including one German
onshore location. They find that the distribution of low-wind events has a
heavy tail, as low-wind events may result from a combination of different
weather and circulation patterns.111Weber et al. [2019] base their analysis on
wind speeds, not wind power generation, with a cut-off threshold of $4$ m/s.
In another analysis based on ERA-Interim reanalysis data and other sources,
Raynaud et al. [2018] define and investigate the occurrence of renewable
“energy droughts”, which are measured relative to average daily generation.
They find that wind power droughts are both relatively frequent and relatively
short in most European countries, compared to hydro power droughts.
We contribute to this emerging literature with a dedicated open-source,
reanalysis-based study that investigates LWP events in Germany in detail. To
the best of our knowledge, we are the first to use MERRA-2 data in this
context, i.e., spatially and temporally consistent reanalysis data covering 40
years at 50 m above surface. Compared to Cannon et al. [2015], we also make
use of not only one, but three recent power curves to represent different
types of wind turbines that are characteristic for different locations defined
by mean wind speeds. Complementary to Raynaud et al. [2018], we further
present an alternative approach to defining and evaluating LWPs by looking
either at hours that are constantly below a threshold, or at hours with a mean
below a threshold.
## 2 Methods and data
### 2.1 General approach
Based on wind speeds and power curves, we derive an hourly aggregated time
series of capacity factors for wind power in Germany. First, we take wind
speeds at 50 m above surface from the MERRA-2 reanalysis dataset, which covers
40 years from 1980 to 2019, and extrapolate to hub heights.222See Section A
for further information on the use of reanalysis data for energy modelling.
Second, capacity factors of each MERRA-2 grid cell are calculated based on
power curves of recently installed wind turbines. Third, we spatially
aggregate these capacity factors using a weighting scheme that considers the
current spatial distribution of onshore wind power capacity in Germany.
Finally, we investigate the resulting time series of hourly aggregated
capacity factors by applying a narrower and a wider definition of LWP events.
### 2.2 Wind speeds derived from reanalysis data
We use the MERRA-2 dataset provided by NASA [Gelaro et al., 2017]. Data is
available starting from the year 1980. In contrast to several other global
reanalysis datasets which have time resolutions of 3 to 6 hours and provide
wind speeds at 10 m above surface, MERRA-2 includes hourly wind speed data at
50 m, which allows better modelling of wind power generation.
Figure 1: MERRA-2 grid points (blue) and grid cells that intersect with
Germany.
The MERRA-2 grid consists of 576 longitudinal and 361 latitudinal horizontal
grid points, i.e., a resolution of $0.625^{\circ}$ x $0.5^{\circ}$ which for
Germany roughly corresponds to 50 x 50 km [Bosilovich et al., 2016]. Figure 1
shows the grid points in blue and all grid cells extrapolated from these
points that intersect with Germany. For each grid cell, MERRA-2 provides
hourly northward and eastward wind speed data at 50 m above surface. Our
dataset further includes surface roughness data for the year 2019.
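For illustration, the hourly wind components can be loaded with `xarray` along the following lines. This is a sketch rather than the published implementation [Ohlendorf, 2020]; the file pattern, the bounding box, and the variable names `U50M`/`V50M` are assumptions about the MERRA-2 single-level collection.

```python
import xarray as xr

# Hypothetical file pattern for hourly MERRA-2 single-level files; the 50 m
# wind components are assumed to be stored as "U50M" (eastward) and "V50M"
# (northward).
ds = xr.open_mfdataset("MERRA2_*.tavg1_2d_slv_Nx.*.nc4", combine="by_coords")

# Crop to an approximate bounding box around Germany (assumed coordinates).
germany = ds.sel(lat=slice(47.0, 55.5), lon=slice(5.5, 15.5))
u = germany["V50M"]  # northward component, denoted u in Equation (1)
v = germany["U50M"]  # eastward component, denoted v in Equation (1)
```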
### 2.3 Aggregated wind power derived from wind speeds using power curves
We calculate the magnitude of the horizontal wind speed ($U$) for each MERRA-2
grid point based on northward ($u$) and eastward components ($v$) at 50 m
(Equation 1).
$U=\sqrt{(u^{2}+v^{2})}$ (1)
In line with Kruyt et al. [2017], we use the logarithmic law to extrapolate
wind speeds to hub height ($h$), with $U_{hub}$ as the wind speed at hub
height and $z_{0}$ as the surface roughness for every grid point and each
hour of the year 2019 (Equation 2).
$U_{hub}=U\frac{\ln\frac{h}{z_{0}}}{\ln\frac{50}{z_{0}}}$ (2)
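A minimal NumPy sketch of Equations (1) and (2), assuming `u`, `v` and `z0` are arrays on the same grid:

```python
import numpy as np

def hub_height_wind_speed(u, v, z0, h):
    """Horizontal wind speed at 50 m (Eq. 1), extrapolated to hub height h
    via the logarithmic profile (Eq. 2)."""
    U = np.sqrt(u**2 + v**2)                       # Eq. (1)
    return U * np.log(h / z0) / np.log(50.0 / z0)  # Eq. (2)
```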
Figure 2: Wind speed zones in Germany. Dark blue implies high mean wind
speeds, blue medium wind speeds, and light blue low mean wind speeds.
We define three types of wind zones, based on mean local wind speeds over 40
years for each MERRA-2 grid cell (Figure 2), and assign typical hub heights
for wind turbines. For high-wind-speed sites, we assign a hub height of 100 m,
for medium-wind-speed sites of 125 m, and for low-wind-speed sites of 139 m
[Wallasch et al., 2015]. These values reflect the mean hub heights of recently
installed wind power plants in respective German wind speed zones.
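In code, this zone-dependent assignment could look as follows; the numeric zone boundaries `medium_cut` and `high_cut` are placeholders, since the text defines the zones only via the mean wind speeds shown in Figure 2.

```python
def assign_hub_height(mean_wind_speed, medium_cut, high_cut):
    """Typical hub height (m) of recently installed turbines by wind zone."""
    if mean_wind_speed >= high_cut:
        return 100.0  # high-wind-speed sites
    if mean_wind_speed >= medium_cut:
        return 125.0  # medium-wind-speed sites
    return 139.0      # low-wind-speed sites
```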
We calculate hourly capacity factors for each grid cell by applying power
curves characteristic for the three wind zones. The power curves are based on
manufacturer data of currently available wind turbines for low-, medium- and
high-wind sites, respectively. Both the low- and high-wind site power curves
represent an average of four wind turbines of similar diameters and
capacities. We consider turbines from six manufacturers (see B), among them
four large companies which cover 87% of the capacity installed in Germany in
2015 [Lüers, 2016].
Manufacturers generally provide discrete capacity factors ($CF_{disc}$) for
wind speed intervals of 1 m/s. For both the low- and high-wind curves, we
first calculate discrete mean capacity factors for each wind speed and then
calculate continuous capacity factors using a generalized logistic function
(Equation 3).
$CF_{cont}=A+\frac{C}{(1+Te^{-B(U_{hub}-M)})^{1/T}}$ (3)
Here, $CF_{cont}$ is the continuous capacity factor and $A$, $B$, $C$, $M$ and
$T$ are fitted coefficients based on minimising the squared deviations between
$CF_{disc}$ and $CF_{cont}$. For both the low- and the high-wind power curve,
cut-in wind speeds of around 3 m/s emerge, and the resulting capacity factors
are capped at 0% and 100%. The medium-wind power curve represents the average
of the low- and high-wind curves (Figure 3).
Figure 3: Power curves of three types of wind turbines
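One way to carry out this least-squares fit is with `scipy.optimize.curve_fit`; in the sketch below, the discrete bin data are an illustrative placeholder rather than manufacturer data.

```python
import numpy as np
from scipy.optimize import curve_fit

def cf_logistic(U, A, B, C, M, T):
    """Generalized logistic capacity factor curve, Equation (3)."""
    return A + C / (1.0 + T * np.exp(-B * (U - M))) ** (1.0 / T)

# Illustrative placeholder for a discrete manufacturer curve in 1 m/s bins.
wind_speeds = np.arange(0.0, 26.0)
cf_disc = np.clip((wind_speeds - 3.0) / 9.0, 0.0, 1.0)

params, _ = curve_fit(cf_logistic, wind_speeds, cf_disc,
                      p0=[0.0, 1.0, 1.0, 8.0, 1.0])
cf_cont = np.clip(cf_logistic(wind_speeds, *params), 0.0, 1.0)  # cap at 0%/100%
```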
Aggregated hourly capacity factor time series for overall Germany are derived
by weighting all grid cells with the current distribution of installed wind
power generation capacity. The latter is extracted from Open Power System Data
[Open Power System Data, 2017, Wiese et al., 2019] and open-source GIS data.
The red points in Figure 4 indicate the installed wind capacity of locally
aggregated wind power plant sites in Germany and the blue squares show the
corresponding relative capacity distribution of the MERRA-2 grid cells. Grid
cells only partly intersecting with the German land area receive lower weights
according to the overlapping area. We implicitly assume that the transmission
infrastructure allows geographical balancing of wind power in Germany, which
is currently largely the case.333This assumption is particularly valid for
low-wind periods. During high-wind, high-load periods, the German transmission
grid can be constrained in the North-South direction.
Figure 4: Distribution of currently installed wind power capacity in Germany.
Darker colors indicate a larger share of total or relative installed capacity.
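The aggregation itself is a capacity-weighted mean. A minimal sketch, assuming `cf_cells` is an (hours × cells) array and `capacity` holds each cell's installed-capacity weight, already reduced for cells only partly intersecting Germany:

```python
import numpy as np

def aggregate_capacity_factor(cf_cells, capacity):
    """Germany-wide hourly capacity factor as a capacity-weighted average."""
    w = np.asarray(capacity, dtype=float)
    w = w / w.sum()                   # installed-capacity shares per grid cell
    return np.asarray(cf_cells) @ w   # one aggregated value per hour
```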
### 2.4 Definition of low-wind-power events
We propose two different measures of low-wind-power periods, a narrower and a
wider one (Figure 5). We further consider three alternative capacity factor
thresholds of 2%, 5%, and 10%.
As for the narrower definition, we consider LWP events to be consecutive hours
in which the aggregated capacity factors are Constantly Below the Threshold
(CBT). This concept bears some resemblance to the “runs analysis” by Leahy and
McKeogh [2013] or the “duration given intensity” method by Patlakas et al.
[2017]. Starting in the first hour, we list annual LWP events for durations
starting from five consecutive hours and report the number of hours constantly
below the given capacity factor threshold. We then increase the duration in
hourly steps and repeat until there are no further events listed.
To provide a wider definition, we consider LWP events to consist of
consecutive hours in which the moving average of capacity factors is under the
same threshold, i.e., Mean Below the Threshold (MBT). Again, we list all LWP
periods until we reach the threshold value, ensuring that LWP periods do not
overlap. By definition, the MBT method results in more low-wind-power events
for a given duration and also results in longer events for each threshold,
compared to CBT.
Figure 5: Illustration of the two LWP event definitions
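Both definitions are straightforward to prototype. The sketch below is ours, not the published implementation [Ohlendorf, 2020]; in particular, `mbt_events` encodes one plausible greedy reading of the non-overlap rule and runs in quadratic time.

```python
import numpy as np

def cbt_events(cf, threshold):
    """Durations (hours) of maximal runs constantly below the threshold (CBT)."""
    below = np.asarray(cf) < threshold
    edges = np.diff(np.concatenate(([0], below.astype(int), [0])))
    return np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)

def mbt_events(cf, threshold):
    """Longest non-overlapping windows whose mean stays below the threshold (MBT)."""
    cf = np.asarray(cf, dtype=float)
    durations, i = [], 0
    while i < len(cf):
        best, running_sum = -1, 0.0
        for j in range(i, len(cf)):
            running_sum += cf[j]
            if running_sum / (j - i + 1) < threshold:
                best = j            # remember the longest qualifying window
        if best >= 0:
            durations.append(best - i + 1)
            i = best + 1            # events must not overlap
        else:
            i += 1
    return durations
```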
The average annual amount of LWP events per duration over all 40 years equals
the expected value of events per year. Further, the reciprocal value of the
annual average provides the return period, that is the expected temporal
distance between two similar recurring events. Periods overlapping annually
or monthly are assigned to the year or month in which more than 50% of the
hours are located.444Accounting for annually overlapping periods requires
December data from the previous year, and January data from the subsequent
year. For the two boundary years 1980 and 2019, we substitute the missing data
for December 1979 (January 2020) with data from December 1980 (January 2019).
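Given per-year event lists (for instance from the sketches above), the return period is then simply the reciprocal of the mean annual frequency:

```python
import numpy as np

def return_period(durations_per_year, duration_h):
    """Expected number of years between events of at least `duration_h` hours;
    `durations_per_year` holds one array of event durations per analyzed year."""
    freq = np.mean([np.sum(np.asarray(d) >= duration_h)
                    for d in durations_per_year])
    return np.inf if freq == 0 else 1.0 / freq
```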
## 3 Results
### 3.1 Seasonal distribution and frequency of low-wind-power events
Figure 6 shows that LWP events are generally most frequent in summer (here
defined as June-August) and least frequent in winter (December-February). The
results for spring (March-May) and autumn (September-November) are mostly
close to the annual average. Accordingly, respective findings made for other
European countries [Leahy and McKeogh, 2013, Cannon et al., 2015, Kruyt et
al., 2017] are also valid for Germany.
Figure 6: Average seasonal duration (horizontal axis) and frequency (vertical
axis) of LWP events in Germany
The frequency of events for a given duration is about 1.5-3 times higher for
the wider MBT definition compared to the narrower CBT concept. For both
metrics, the frequency of LWP events increases substantially with the capacity
factor threshold value. For example, a 10-hour event below a capacity factor
of 2% occurs on average around 0.2 times per winter for CBT and slightly less
than once per winter for MBT. For a 10% capacity factor threshold, there are
on average around eight such winter events for CBT and 13 for MBT. In general,
we find that short LWP events with a duration of up to around half a day are
relatively frequent and may occur several times per year, especially under the
wider MBT definition. Longer LWP events, in contrast, are much less frequent.
To provide a complementary perspective, we calculate the return periods for
different durations of LWP events (Table 1). The return periods are the
reciprocal of the average (annual or seasonal) frequency of LWP events for
different durations, considering both definitions and all three thresholds
(cf. Figure 6). For example, an LWP event with an average frequency of 0.2 for
a given duration leads to a return period of 5 years for this specific
duration. The longer a given duration, the lower its average frequency and the
longer its return period.
For a return period of ten years, we find a duration of 17 hours (2% capacity
factor threshold), 41 hours (5%) and 77 hours (10%) under the narrower CBT
definition, and a duration of 34 hours (2%), 79 hours (5%) and 188 hours (10%)
under the wider MBT concept. In other words, every ten years the German energy
system has to deal with a period of nearly eight days of average wind power
generation (MBT) below 10% of the installed capacity.
Table 1: Duration in hours of LWP events in winter or in any season, for different return periods and capacity factor thresholds (CBT: constantly below threshold; MBT: mean below threshold)

| Return period | CBT winter 2% | CBT winter 5% | CBT winter 10% | CBT any season 2% | CBT any season 5% | CBT any season 10% | MBT winter 2% | MBT winter 5% | MBT winter 10% | MBT any season 2% | MBT any season 5% | MBT any season 10% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 year | 5 | 15 | 29 | 11 | 23 | 45 | 8 | 30 | 63 | 18 | 58 | 122 |
| 2 years | 7 | 21 | 40 | 13 | 32 | 57 | 12 | 45 | 92 | 21 | 69 | 144 |
| 3 years | 8 | 23 | 44 | 14 | 33 | 60 | 14 | 52 | 101 | 23 | 71 | 161 |
| 4 years | 9 | 30 | 48 | 14 | 33 | 63 | 16 | 62 | 112 | 27 | 72 | 173 |
| 5 years | 10 | 32 | 57 | 15 | 35 | 65 | 22 | 68 | 113 | 28 | 75 | 178 |
| 6 years | 10 | 32 | 57 | 15 | 35 | 67 | 25 | 69 | 114 | 29 | 76 | 182 |
| 7 years | 12 | 33 | 60 | 15 | 36 | 67 | 27 | 70 | 114 | 31 | 76 | 186 |
| 8 years | 14 | 33 | 63 | 17 | 37 | 69 | 28 | 72 | 117 | 33 | 79 | 186 |
| 9 years | 14 | 33 | 63 | 17 | 37 | 69 | 28 | 72 | 117 | 33 | 79 | 186 |
| 10 years | 14 | 33 | 64 | 17 | 41 | 77 | 28 | 72 | 126 | 34 | 79 | 188 |
| 15 years | 17 | 36 | 67 | 18 | 41 | 77 | 31 | 76 | 129 | 38 | 82 | 189 |
| 20 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |
| 25 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |
| 30 years | 19 | 41 | 77 | 19 | 49 | 81 | 34 | 79 | 131 | 45 | 89 | 221 |
To better interpret these return periods, we provide an example for the German
onshore wind power capacity of 52.5 GW installed in 2018. For this wind
turbine fleet, average power generation is expected to not exceed around five
GW, i.e., 10% of capacity, during a period of around five consecutive days
every year (122 hours, MBT for ‘Any Season’ in Table 1). Every ten years, this
period increases to nearly eight days, and every twenty years to more than
nine full days. Looking only at LWP events in winter, these durations decrease
to less than three days every winter, less than five days every tenth winter,
and around five and a half days every twentieth winter. The remaining load has
to be covered by other generators, energy storage or demand-side measures.
However, wind power still contributes some generation capacity above the 10%
threshold during some of these hours, as indicated by much lower CBT return
periods.
### 3.2 Magnitude of the most extreme low-wind-power events
The most extreme LWP events over the entire 40 years analyzed can be
interpreted as worst cases from an energy system planning perspective. In an
annual perspective, the most extreme events occurred in 1985 for all capacity
factor thresholds (Figure 7). Under the narrower CBT definition, there are
nearly four consecutive days with wind power generation constantly below 10%
in 1985, and still around two consecutive days with generation constantly
below 5%. Under the wider MBT definition, the duration of this most extreme
event increases to nearly ten days (10%) or around four days (5%).
Figure 7: Most extreme LWP events per year. The vertical axis shows the
duration of the longest event per year for the three capacity factor
thresholds.
While this 1985 event is the most extreme one under both CBT and MBT, the
ranking of the second most extreme yearly events differs between the LWP
definitions. For example, the second-longest event occurred in 1984 under the
CBT definition. Yet under MBT, the duration of the most extreme event in 1984
is only average. In general, the definition of LWP events and the chosen
thresholds have a substantial impact on quantitative results. Under MBT, the
most extreme annual events are generally around twice as high compared to CBT.
We further find very large inter-annual variations. Considering the 10%
threshold, the longest event for the MBT definition lasted for almost 10 days
in 1985, but in 2005 the longest duration was only three days for the same
threshold. The relative difference between the longest events for each year
increases with the threshold. These large variations of the most extreme
annual LWP events complement the findings made by Collins et al. [2018], who
determine large inter-annual variations of average renewable availability.
We next look at the most extreme LWP event in a monthly perspective,
irrespective of the year in which these occur (Figure 8). The most extreme
events for the 10% threshold occur in March for both definitions. This is the
1985 event discussed above, with durations of nearly four (CBT) or nearly ten
consecutive days (MBT).
Figure 8: Most extreme LWP events per month. The vertical axis shows the
duration of the longest event of all respective months for the three capacity
factor thresholds.
Considering all thresholds and both LWP definitions, there is no clear trend
of the most extreme monthly LWP events. That is, substantial extreme events
may occur throughout the year, and also in winter months. This contrasts the
previous finding that the frequency of LWP events is generally much higher in
summer than in winter, as shown in Section 3.1. Under CBT, the most extreme
events in each of the winter months are even longer than those in summer
months for the 10% capacity threshold. This finding is, however, not confirmed
under the MBT definition.
### 3.3 Spatial distribution of wind power during most extreme LWP event
To also explore the spatial dimension of LWP events, we compare the
distribution of capacity factors during the most extreme LWP of 1985 to the
distribution of annual mean capacity factors in the same year (Figure 9).
Figure 9: Spatial distribution of wind power. Left: Average wind power during
most extreme LWP event (10% capacity factor, MBT) in dataset in March 1985
(Scale: From 0% to 20% of mean capacity factors). Right: Mean wind power in
the entire year 1985 (Scale: From 5% to 50% of mean capacity factors).
The spatial pattern of annual mean capacity factors (Figure 9, right panel)
largely resembles that of average wind speeds in Germany (Figure 2). Mean
capacity factors are generally higher in Northern than in Southern Germany.
They are highest close to the Northern and the Baltic Sea, and lowest in the
southern Alpine region.
The spatial pattern of mean capacity factors during the most extreme LWP event
(Figure 9, left panel) substantially deviates from the distribution of the
means. In particular, capacity factors of the north-eastern region and parts
of the northern region are relatively low. The respective spatial
distributions of capacity factors for other thresholds under both the CBT and
MBT definitions of the same event also show substantial deviations from annual
means.
Accordingly, the spatial distribution of capacity factors during extreme LWP
events does not necessarily correspond to the annual mean pattern. This
indicates that low-wind events can be very pronounced even in regions with
very good average wind resources.
## 4 Conclusions
We analyze the seasonal distribution, frequency and magnitude of onshore low-
wind-power events in Germany, as well as spatial aspects of the most extreme
events, based on MERRA-2 reanalysis data and open software. We propose and
evaluate two definitions of low-wind-power events for three capacity factor
thresholds.
We synthesize three key results from the analysis. First, LWP events are
generally most frequent in summer and least frequent in winter. Nonetheless,
substantial events occur in all months of the year, and also in winter. The
most persistent LWP event in the dataset occurred in March.
Second, while short events with a duration of up to around half a day are
relatively frequent, very long events are much rarer.555Weber et al. [2019]
argue that low-wind event statistics do not follow a simple exponential
distribution, but have “heavy tails”, i.e. the probability decreases rather
slowly with increasing duration. Every year, the German energy system has to
deal with a period of around five consecutive days during which average wind
power generation is below 10% of the installed capacity. Every ten years, a
respective period of nearly eight days is to be expected. Looking only at
winter months, the durations of these expected events decrease to less than
three days every winter and less than five days every tenth winter. The most
persistent low-wind event in the entire dataset has a duration of nearly ten
consecutive days of average wind power generation below a 10% capacity factor.
Third, the spatial pattern of LWP events may be very different from the one of
average wind power resources. During the most persistent LWP event, we find
average generation to be particularly low in several regions which have some
of the best wind resources.
We conclude that energy modeling studies that only consider one historic
weather year are likely to substantially underestimate the occurrence of low-
wind-power events and related system implications. In particular, analyses
with an energy system planning perspective should take less frequent LWP
events into account, e.g., the discussed events with a return period of ten
years, or even the most extreme event identified here. This is particularly
important when the complementary role of other variable and dispatchable
generators, energy storage, or demand-side measures in highly-renewable energy
systems is to be explored.666This is demonstrated, for example, by Schill and
Zerrahn [2018] in an analysis of storage requirements for renewable energy
integration in a sensitivity analysis with one artificial no-wind week.
Further, analyses dealing with the pros and cons of either more decentralized
or more centralized renewable energy systems should consider the spatial
dimension of LWP events. Although not in the focus of our analysis, our
results indicate that LWP events are more pronounced for smaller geographic
areas.
From an energy policy perspective, our findings on LWP events occurring in
winter may be most relevant. Our analysis indicates that concerns about
frequent and persistent LWP events in German winters appear to be overrated,
considering that the longest event with an average capacity factor below 10%
and a ten-year return period in winter has a duration of less than five days.
We further recommend that policy makers or regulators develop a proper
definition of the Dunkelflaute term, which currently appears to be used in a
rather qualitative way. Our two definitions of LWP events proposed here may be
useful in this context.
While our analysis deliberately focuses on LWP events of onshore wind power in
Germany, we see an avenue for future research that would ideally combine the
analysis of low production periods of onshore and offshore wind power as well
as solar PV with time series of load, while expanding the geographic focus
beyond Germany. The open-source provision of the tool used for the present
analysis may be a useful starting point for such research.
## Acknowledgment
This analysis is a result of the research project P2X, funded by the German
Federal Ministry of Education and Research (FKZ 03SFK2B1). Wolf-Peter Schill
carried out parts of the work during a research stay at the University of
Melbourne. Nils Ohlendorf mainly worked on this project while employed at DIW
Berlin, and partly also after being employed at MCC. We thank the participants
of the DIW Sustainability Cluster Seminar in April 2017, Strommarkttreffen
Berlin November 2017, IAEE International Conference Groningen 2018 and Enerday
Dresden 2018 for valuable comments on earlier drafts.
## Data availability statement
The data that support the findings of this study have been created with
software that is openly available under an MIT license at
https://doi.org/10.5281/zenodo.3694373.
## References
* Archer and Jacobson [2007] Archer, C.L., Jacobson, M.Z., 2007. Supplying baseload power and reducing transmission requirements by interconnecting wind farms. Journal of Applied Meteorology and Climatology 46, 1701–1717. doi:10.1175/2007JAMC1538.1.
* BMWi [2019] BMWi, 2019. Zeitreihen zur Entwicklung der erneuerbaren Energien in Deutschland. Bundesministerium für Wirtschaft und Energie (Federal Ministry for Economic Affairs and Energy). URL: https://www.erneuerbare-energien.de/EE/Redaktion/DE/Downloads/zeitreihen-zur-entwicklung-der-erneuerbaren-energien-in-deutschland-1990-2018.pdf.
* Bosilovich et al. [2016] Bosilovich, M.G., Lucchesi, R., Suarez, M., 2016. MERRA-2: File Specification. GMAO Office Note No. 9 (Version 1.1). NASA Global Modeling and Assimilation Office. URL: https://gmao.gsfc.nasa.gov/pubs/docs/Bosilovich785.pdf.
* Bundesregierung [2019] Bundesregierung, 2019. Klimaschutzprogramm 2030 der Bundesregierung zur Umsetzung des Klimaschutzplans 2050. German Federal Government. URL: https://www.bundesregierung.de/resource/blob/975226/1679914/e01d6bd855f09bf05cf7498e06d0a3ff/2019-10-09-klima-massnahmen-data.pdf.
* Cannon et al. [2015] Cannon, D., Brayshaw, D., Methven, J., Coker, P., Lenaghan, D., 2015. Using reanalysis data to quantify extreme wind power generation statistics: A 33 year case study in Great Britain. Renewable Energy 75, 767 – 778. doi:10.1016/j.renene.2014.10.024.
* Carvalho et al. [2014] Carvalho, D., Rocha, A., Gómez-Gesteira, M., Santos, C.S., 2014. WRF wind simulation and wind energy production estimates forced by different reanalyses: Comparison with observed data for Portugal. Applied Energy 117, 116 – 126. doi:10.1016/j.apenergy.2013.12.001.
* Collins et al. [2018] Collins, S., Deane, P., Ó Gallachóir, B., Pfenninger, S., Staffell, I., 2018. Impacts of inter-annual wind and solar variations on the European power system. Joule 2, 2076–2090. doi:10.1016/j.joule.2018.06.020.
* de Coninck et al. [2018] de Coninck, H., Revi, A., Babiker, M., Bertoldi, P., Buckeridge, M., Cartwright, A., Dong, W., Ford, J., Fuss, S., Hourcade, J.C., Ley, D., Mechler, R., Newman, P., Revokatova, A., Schultz, S., Steg, L., Sugiyama, T., 2018. Strengthening and implementing the global response, in: Global Warming of 1.5∘C. An IPCC Special Report on the impacts of global warming of 1.5∘C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. URL: https://www.ipcc.ch/site/assets/uploads/sites/2/2019/05/SR15_Chapter4_Low_Res.pdf.
* Decker et al. [2012] Decker, M., Brunke, M.A., Wang, Z., Sakaguchi, K., Zeng, X., Bosilovich, M.G., 2012. Evaluation of the Reanalysis Products from GSFC, NCEP, and ECMWF Using Flux Tower Observations. Journal of Climate 25, 1916–1944. doi:10.1175/JCLI-D-11-00004.1.
* Deutscher Bundestag [2019a] Deutscher Bundestag (Ed.), 2019a. Plenarprotokoll 19/98 Stenografischer Bericht 98. Sitzung. Plenarprotokoll 19/98. URL: http://dip21.bundestag.de/dip21/btp/19/19098.pdf. 09.05.2019.
* Deutscher Bundestag [2019b] Deutscher Bundestag (Ed.), 2019b. Unterrichtung durch die Bundesregierung Zweiter Fortschrittsbericht zur Energiewende 2019. Drucksache 19/10760. Drucksache 19/10760 19. Wahlperiode. URL: http://dip21.bundestag.de/dip21/btd/19/107/1910760.pdf. 07.06.2019.
* Gelaro et al. [2017] Gelaro, R., McCarty, W., Suárez, M.J., Todling, R., Molod, A., Takacs, L., Randles, C.A., Darmenov, A., Bosilovich, M.G., Reichle, R., Wargan, K., Coy, L., Cullather, R., Draper, C., Akella, S., Buchard, V., Conaty, A., da Silva, A.M., Gu, W., Kim, G.K., Koster, R., Lucchesi, R., Merkova, D., Nielsen, J.E., Partyka, G., Pawson, S., Putman, W., Rienecker, M., Schubert, S.D., Sienkiewicz, M., Zhao, B., 2017. The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2). Journal of Climate 30, 5419–5454. doi:10.1175/JCLI-D-16-0758.1.
* Germer and Kleidon [2019] Germer, S., Kleidon, A., 2019. Have wind turbines in Germany generated electricity as would be expected from the prevailing wind conditions in 2000-2014? PLOS ONE 14, 1–16. doi:10.1371/journal.pone.0211028.
* González-Aparicio et al. [2017] González-Aparicio, I., Monforti, F., Volker, P., Zucker, A., Careri, F., Huld, T., Badger, J., 2017. Simulating European wind power generation applying statistical downscaling to reanalysis data. Applied Energy 199, 155 – 168. doi:10.1016/j.apenergy.2017.04.066.
* Grams et al. [2017] Grams, C.M., Beerli, R., Pfenninger, S., Staffell, I., Wernli, H., 2017. Balancing Europe’s wind-power output through spatial deployment informed by weather regimes. Nature Climate Change 7, 557–562. doi:10.1038/nclimate3338.
* Handschy et al. [2017] Handschy, M.A., Rose, S., Apt, J., 2017. Is it always windy somewhere? Occurrence of low-wind-power events over large areas. Renewable Energy 101, 1124 – 1130. doi:10.1016/j.renene.2016.10.004.
* Hans Ertel Zentrum [2019] Hans Ertel Zentrum, 2019. Cosmo regional reanalysis. Universität Bonn and Deutscher Wetterdienst. URL: https://reanalysis.meteo.uni-bonn.de/?Overview.
* Kahn [1979] Kahn, E., 1979. The reliability of distributed wind generators. Electric Power Systems Research 2, 1 – 14. doi:10.1016/0378-7796(79)90021-X.
* Kruyt et al. [2017] Kruyt, B., Lehning, M., Kahl, A., 2017. Potential contributions of wind power to a stable and highly renewable Swiss power supply. Applied Energy 192, 1 – 11. doi:10.1016/j.apenergy.2017.01.085.
* Kumler et al. [2019] Kumler, A., Carreño, I.L., Craig, M.T., Hodge, B.M., Cole, W., Brancucci, C., 2019. Inter-annual variability of wind and solar electricity generation and capacity values in Texas. Environmental Research Letters 14, 044032. doi:10.1088/1748-9326/aaf935.
* Leahy and McKeogh [2013] Leahy, P.G., McKeogh, E.J., 2013. Persistence of low wind speed conditions and implications for wind power variability. Wind Energy 16, 575–586. doi:10.1002/we.1509.
* Liléo et al. [2013] Liléo, S., Berge, E., Undheim, O., Klinkert, R., Bredesen, R.E., 2013. Long-term correction of wind measurements: state-of-the-art, guidelines and future work. Complexity 1, 2–3.
* Liléo and Petrik [2011] Liléo, S., Petrik, O., 2011. Investigation on the use of NCEP/NCAR, MERRA and NCEP/CFSR reanalysis data in wind resource analysis, in: European Wind Energy Conference and Exhibition 2011, EWEC 2011.
* Lüers [2016] Lüers, S., 2016. Status des Windenergieausbaus an Land in Deutschland \- Zusätzliche Auswertungen und Daten für das Jahr 2015. Technical Report. Deutsche WindGuard. Varel. URL: https://www.windguard.de/veroeffentlichungen.html?file=files/cto_layout/img/unternehmen/veroeffentlichungen/2016/Status%20des%20Windenergieausbaus%20an%20Land%20in%20Deutschland%20-%20Zus%C3%A4tzliche%20Auswertungen%20und%20Daten%20f%C3%BCr%20das%20Jahr%202015.pdf.
* Moemken et al. [2018] Moemken, J., Reyers, M., Feldmann, H., Pinto, J.G., 2018. Future changes of wind speed and wind energy potentials in EURO-CORDEX ensemble simulations. Journal of Geophysical Research: Atmospheres 123, 6373–6389. doi:10.1029/2018JD028473.
* Molod et al. [2015] Molod, A., Takacs, L., Suarez, M., Bacmeister, J., 2015. Development of the GEOS-5 atmospheric general circulation model: evolution from MERRA to MERRA2. Geoscientific Model Development 8, 1339–1356. doi:10.5194/gmd-8-1339-2015.
* Ohlendorf [2020] Ohlendorf, N., 2020. Source code for “Frequency and persistence of low-wind-power events in Germany”. Zenodo. doi:10.5281/zenodo.3694374.
* Olauson and Bergkvist [2015] Olauson, J., Bergkvist, M., 2015. Modelling the Swedish wind power production using MERRA reanalysis data. Renewable Energy 76, 717 – 725. doi:10.1016/j.renene.2014.11.085.
* Open Power System Data [2017] Open Power System Data, 2017. Data package renewable power plants. URL: https://data.open-power-system-data.org/renewable_power_plants/2017-02-16/. version 2017-02-16.
* Patlakas et al. [2017] Patlakas, P., Galanis, G., Diamantis, D., Kallos, G., 2017. Low wind speed events: persistence and frequency. Wind Energy 20, 1033–1047. doi:10.1002/we.2078.
* Raynaud et al. [2018] Raynaud, D., Hingray, B., François, B., Creutin, J., 2018. Energy droughts from variable renewable energy sources in European climates. Renewable Energy 125, 578 – 589. doi:10.1016/j.renene.2018.02.130.
* Rose and Apt [2015] Rose, S., Apt, J., 2015. What can reanalysis data tell us about wind power? Renewable Energy 83, 963 – 969. doi:10.1016/j.renene.2015.05.027.
* Santos-Alamillos et al. [2017] Santos-Alamillos, F.J., Brayshaw, D.J., Methven, J., Thomaidis, N.S., Ruiz-Arias, J.A., Pozo-Vázquez, D., 2017. Exploring the meteorological potential for planning a high performance European electricity super-grid: optimal power capacity distribution among countries. Environmental Research Letters 12, 114030. doi:10.1088/1748-9326/aa8f18.
* Schill and Zerrahn [2018] Schill, W.P., Zerrahn, A., 2018. Long-run power storage requirements for high shares of renewables: Results and sensitivities. Renewable and Sustainable Energy Reviews 83, 156 – 171. doi:10.1016/j.rser.2017.05.205.
* Schlott et al. [2018] Schlott, M., Kies, A., Brown, T., Schramm, S., Greiner, M., 2018. The impact of climate change on a cost-optimal highly renewable European electricity network. Applied Energy 230, 1645 – 1659. doi:10.1016/j.apenergy.2018.09.084.
* Shaner et al. [2018] Shaner, M.R., Davis, S.J., Lewis, N.S., Caldeira, K., 2018. Geophysical constraints on the reliability of solar and wind power in the United States. Energy and Environmental Science 11, 914–925. doi:10.1039/C7EE03029K.
* Sharp et al. [2015] Sharp, E., Dodds, P., Barrett, M., Spataru, C., 2015. Evaluating the accuracy of CFSR reanalysis hourly wind speed forecasts for the UK, using in situ measurements and geographical information. Renewable Energy 77, 527 – 538. doi:10.1016/j.renene.2014.12.025.
* Staffell and Green [2014] Staffell, I., Green, R., 2014. How does wind farm performance decline with age? Renewable Energy 66, 775 – 786. doi:10.1016/j.renene.2013.10.041.
* Staffell and Pfenninger [2016] Staffell, I., Pfenninger, S., 2016. Using bias-corrected reanalysis to simulate current and future wind power output. Energy 114, 1224 – 1239. doi:10.1016/j.energy.2016.08.068.
* Tobin et al. [2016] Tobin, I., Jerez, S., Vautard, R., Thais, F., van Meijgaard, E., Prein, A., Déqué, M., Kotlarski, S., Maule, C.F., Nikulin, G., Noël, T., Teichmann, C., 2016. Climate change impacts on the power generation potential of a European mid-century wind farms scenario. Environmental Research Letters 11, 034013. doi:10.1088/1748-9326/11/3/034013.
* Wallasch et al. [2015] Wallasch, A.K., Lüers, S., Rehfeldt, K., 2015. Kostensituation der Windenergie an Land in Deutschland - Update. Technical Report. Deutsche WindGuard. Varel. URL: https://www.windguard.de/veroeffentlichungen.html?file=files/cto_layout/img/unternehmen/veroeffentlichungen/2015/Kostensituation%20der%20Windenergie%20an%20Land%20in%20Deutschland%20-%20Update.pdf.
* Weber et al. [2019] Weber, J., Reyers, M., Beck, C., Timme, M., Pinto, J.G., Witthaut, D., Schäfer, B., 2019. Wind power persistence characterized by superstatistics. Scientific Reports 9, 19971–. doi:10.1038/s41598-019-56286-1.
* Wetzel [2017] Wetzel, D., 2017. Die ,,Dunkelflaute” bringt Deutschlands Stromversorgung ans Limit. Die Welt. URL: https://www.welt.de/wirtschaft/article161831272/Die-Dunkelflaute-bringt-Deutschlands-Stromversorgung-ans-Limit.html.
* Wetzel [2019] Wetzel, D., 2019. In der ,,kalten Dunkelflaute” rächt sich die Energiewende. Die Welt. URL: https://www.welt.de/wirtschaft/article191195983/Energiewende-Das-droht-uns-in-der-kalten-Dunkelflaute.html.
* Wiese et al. [2019] Wiese, F., Schlecht, I., Bunke, W.D., Gerbaulet, C., Hirth, L., Jahn, M., Kunz, F., Lorenz, C., Mühlenpfordt, J., Reimann, J., Schill, W.P., 2019. Open Power System Data – Frictionless data for electricity system modelling. Applied Energy 236, 401 – 409. doi:10.1016/j.apenergy.2018.11.097.
* Wissenschaftliche Dienste [2019] Wissenschaftliche Dienste, 2019. Sicherstellung der Stromversorgung bei Dunkelflauten. Documentation. Deutscher Bundestag. URL: https://www.bundestag.de/resource/blob/627898/b65deea51fdb399e4b64f1182465658d/WD-5-167-18-pdf-data.pdf. WD 5 - 3000 - 167/18.
## Appendix A Reanalysis data and its use for energy modelling
Reanalysis data is increasingly used for energy modelling as it provides
consistent global time series of long-term atmosphere data such as wind speed,
temperature and air pressure in regular spatial and temporal resolutions. The
underlying global circulation models extrapolate measurement station data on
wind speeds, temperature, moisture and surface pressure as well as data from
satellites and precipitation measurements [Decker et al., 2012]. Several
publicly available second-generation global reanalysis datasets have been
released since the early 2000s. We use MERRA-2, which builds on and improves
the previous MERRA dataset, using advanced models and data sources [Molod et
al., 2015].
Decker et al. [2012] evaluate the accuracy of several reanalysis datasets
(MERRA, NCEP, ERA-40, ERA-Interim, CFSR and GLDAS) using flux tower
measurements in the Northern Hemisphere. Almost all products overestimate the
monthly and 6-hourly wind speeds and their variability. MERRA and ERA-Interim
show the lowest root-mean-square error and bias values for diurnal cycles.
Sharp et al. [2015] review other data validation studies of different
reanalysis datasets. Three studies derive Pearson’s correlation coefficients
for MERRA between 0.75 and 0.89 based on measurement stations in Sweden,
Portugal, Norway and Denmark [Liléo and Petrik, 2011, Liléo et al., 2013,
Carvalho et al., 2014]. Staffell and Pfenninger [2016] propose country-
specific wind speed bias correction factors for MERRA and MERRA-2 to increase
the correlation with national capacity factors. Without such correction,
average capacity factors for Germany based on raw MERRA or MERRA-2 wind speeds
would be overestimated. Staffell and Green [2014] make a similar point for the
UK. In contrast, Cannon et al. [2015] do not use correction factors in a UK
application. Even if MERRA wind speeds turn out to be not particularly valid
for single measurement points, spatial aggregation of mean wind speed over all
stations results in a correlation coefficient of 0.94. This indicates a high
validity of MERRA data for large-scale wind patterns. Following Cannon et al.
[2015], we also refrain from introducing correction factors and instead make
use of the error-smoothing effect of spatial aggregation. In doing so, we also
avoid model artefacts, particularly as the usefulness of correction factors
has only been demonstrated for average wind speeds, but not for extreme
values.
## Appendix B Wind power turbines
The low- and high-wind power curves used in our analysis are based on data of
eight wind power turbines by six manufacturers, namely Nordex, Senvion,
Enercon, Vestas, Gamesa and Vensys. Specifically, we use the following high-
wind power turbines:
* 1. Nordex N90-2.5MW
* 2. Vestas V90-2.0MW
* 3. Gamesa G97-2MW
* 4. Vensys 100-2.5MW
Analogously, we use the following low-wind power turbines:
* 1. Nordex N131-3.3MW
* 2. Senvion 3.2M122
* 3. Enercon E126 EP4/4.2MW
* 4. Vestas V126-3.3MW
## Appendix C Discussion of limitations
We briefly discuss some limitations of our analysis and how these may
qualitatively impact results.
First, there are general limitations of using reanalysis data which have been
discussed in the literature, for example spatial biases or issues with
upscaling to hub heights [Sharp et al., 2015, Olauson and Bergkvist, 2015,
Rose and Apt, 2015, Staffell and Pfenninger, 2016]. It is, however, not clear
if there are specific distortions with respect to extreme low-wind events
derived from reanalysis data. A limitation specific to the MERRA-2 dataset is
the relatively coarse 50 x 50 km grid cell size, which insufficiently represents
local impacts on wind speeds. Regional reanalysis data with more refined
geographical resolutions may resolve this issue, e.g. COSMO-REA2 with 2x2 km,
or COSMO-REA6 with 6x6 km [Hans Ertel Zentrum, 2019], yet these are only
available for shorter periods of time. The global coverage of MERRA-2 further
allows repeating our open-source analysis for other countries and world
regions.
Second, we use power curves of currently available wind turbines and assume
hub-heights of recently constructed plants. We may thus overestimate wind
power generation compared to the currently existing fleet of wind turbines in
Germany, which includes many older and smaller turbines, and in turn
underestimate the magnitude of current LWP events. Conversely, we may
underestimate power generation of future turbines, and accordingly
overestimate the magnitude of future low-wind-power events, assuming that
turbine efficiency and hub height increases further, with corresponding upward
shifts in the power curves. Once LWP events become more relevant for the
overall energy system, this may also trigger specific technology improvements
toward lower cut-in speeds and a steeper slope of the power curve on the very
left-hand side. Quantifying the potentially mitigating effects of such
developments on LWP periods is left for future research.
Third, we use the current spatial capacity distribution of German wind power
plants for deriving an aggregated capacity factor time series. We implicitly
assume that this distribution also persists in the future. In reality, a
relative increase of wind power deployment at sites with lower wind resources
may occur, for example in southern Germany. From the results presented in
Section 3.1, we infer that a more even spatial dispersion of wind turbines
could slightly mitigate LWP events.
Next, climate change has an impact on wind speeds. Future time series of wind
power capacity factors will thus differ from the historic ones investigated
here. Tobin et al. [2016] find that wind power variability in Europe may
generally increase, but Schlott et al. [2018] conclude that this has no
substantial effect on optimal deployment of onshore wind power in highly
renewable future scenarios. Moemken et al. [2018] find that climate change
will increase the occurrence of low wind speeds.
Finally, the focus of this analysis is a detailed but selective investigation
of onshore LWP events in Germany. This geographic focus helps to keep the
analysis tractable and avoids making implicit assumptions on continental
electricity transmission infrastructure. It is also relevant from an energy
policy perspective, which often includes national energy security
considerations. Yet expanding the geographic scope of the analysis would allow
raising complementary insights on larger-scale spatial patterns of extreme LWP
events. Focusing on onshore wind power, and not including other renewable
energy sources such as offshore wind power and solar PV, allows for
parsimonious model assumptions, and findings remain valid for any level of
installed capacity. Analyses that would combine periods of low production from
various renewable energy sources, and also explore their correlation with
electric load, appear to be a promising field for future research. The work of
Raynaud et al. [2018], albeit with lower temporal and spatial detail compared
to our analysis, can be considered as a first step in this direction.
# A shadow of algebraic topology and variational method - Prandtl Batchelor problem
Debajyoti Choudhuri111Corresponding author; ORCID ID: 0000-0001-8744-9350
Department of Mathematics, National Institute of Technology Rourkela, India
###### Abstract
In this paper we study the existence of a nontrivial weak solution to a
Prandtl-Batchelor type free boundary value elliptic problem involving a
$p$-Laplacian operator and a power nonlinearity. Tools from algebraic topology
will be used to establish the existence of a solution to the approximating
problem, whereas variational techniques will be used to establish the
existence of a solution to the main problem. In the process, a couple of
classical results are also improved to suit the purpose of establishing the
existence of a nontrivial solution.
Keywords: Dirichlet free boundary value problem, Sobolev space, Morse
relation, cohomology group.
AMS Classification: 35J35, 35J60.
## 1 Introduction
We will investigate the existence of a solution to the following free boundary
value problem.
$\displaystyle\begin{split}-\Delta_{p}u&=\lambda\chi_{\\{u>1\\}}(u-1)_{+}^{q-1},~{}\text{in}~{}\Omega\setminus
H(u),\\\ |\nabla u^{+}|^{p}-|\nabla
u^{-}|^{p}&=\frac{p}{p-1},~{}\text{in}~{}H(u)\\\
u&=0,\text{on}~{}\partial\Omega.\end{split}$ (1.1)
Here, $\lambda>0$ is a parameter, $(u-1)_{+}=\max\\{u-1,0\\}$ and
$H(u)=\partial\\{u>1\\}.$
Also $\nabla u^{\pm}$ are the limits of $\nabla u$ from the sets $\\{u>1\\}$
and $\\{u\leq 1\\}^{\circ}$ respectively. The domain
$\Omega\subset\mathbb{R}^{N}(N\geq 2)$ is bounded with a sufficiently smooth
boundary $\partial\Omega$. The relation between the exponents are assumed in
the order $1<p\leq q-1$, with $q<p^{*}=\dfrac{Np}{N-p}$. The solution(s)
satisfy the free boundary condition in the following sense: for all
$\vec{\phi}\in C_{0}^{1}(\mathbb{R}^{N})$ such that $u\neq 1$ a.e. on the
support of $\vec{\phi}$,
$\displaystyle\underset{\epsilon^{+}\rightarrow
0}{\lim}\int_{u=1+\epsilon^{+}}\left(\frac{p}{p-1}-|\nabla
u|^{p}\right)\vec{\phi}\cdot\hat{n}dS-\underset{\epsilon^{-}\rightarrow
0}{\lim}\int_{u=1-\epsilon^{-}}|\nabla u|^{p}\vec{\phi}\cdot\hat{n}dS$
$\displaystyle=0,$ (1.2)
where $\hat{n}$ is the outward drawn normal to
$\\{1-\epsilon^{-}<u<1+\epsilon^{+}\\}$. Note that the sets
$\\{u=1\pm\epsilon^{\pm}\\}$ are smooth hypersurfaces for almost all
$\epsilon^{\pm}>0$ by Sard’s theorem. The limit above in (1.2) is taken by
letting such $\epsilon^{\pm}>0$ tend to zero.
A rich survey of the literature can be found in the book of Perera et al. [10],
where problems of several varieties involving
$p$-Laplacian type operators that can be studied using Morse theory are discussed. The
motivation for the current work has been drawn from the work of Perera
[13]. The treatment used to address the existence of at least one (or two)
solution(s) to the approximating problem may be classical (section $3$,
Theorems 3.3 and 3.5), but the result concerning the regularity of the free
boundary is very new, and the question of existence of a solution to the problem
(1.1) has not been answered until now (section $4$, Lemma 4.1), to the best of
my knowledge. Two more results, due to Alt-Caffarelli [1] (section $4$, Lemma
4.2) and Caffarelli et al. [6] (Appendix, Lemma 4.3), were improved to the best
possible extent to suit the purpose of the problem in this paper.
### 1.1 A physical motivation
Consider the problem
$\displaystyle\begin{split}-\Delta
u&=\lambda\chi_{\\{u>1\\}}(x),~{}\text{in}~{}\Omega\setminus H(u),\\\ |\nabla
u^{+}|^{2}-|\nabla u^{-}|^{2}&=2,~{}\text{in}~{}H(u)\\\
u&=0,\text{on}~{}\partial\Omega.\end{split}$ (1.3)
This is the well known Prandtl-Batchelor free boundary value problem, where
the phase $\\{u>1\\}$ is a representation of the vortex patch bounded by the
vortex line $u=1$ in a steady fluid flow for $N=2$ (refer Batchelor [2, 3]).
Thus the current problem is a more generalized version of (1.3). For further
physical applications of this problem we direct the reader’s attention to the
work of Caflisch [4] and of Elcrat and Miller [7].
Another instance of such a phenomenon occurs in the non-equilibrium
melting of ice. In a given block of ice, the heat equation can be
solved with an appropriate set of initial/boundary conditions in order to
determine the temperature. However, if there exists a region of ice in which
the temperature is greater than the melting point of ice, this subdomain will
be filled with water. The boundary thus formed by the ice-water interface
is controlled by the solution of the heat equation.
Free boundaries thus arise naturally. The problem
in this paper is a broad generalization of these physical phenomena
which, besides being a new addition to the literature, can also serve as a note
bridging problems in elliptic PDEs with algebraic topology.
## 2 Preliminaries
We begin by giving the relevant definitions and results besides defining the
function space which will be used very frequently in the article. Let $X$ be a
topological space and $A\subset X$ be a topological subspace. Roughly, a
homology group is an algebraic group constructed from a topological object or
a space. Following is the fundamental tool that will be used to work with,
namely the homology theory [12].
###### Definition 2.1.
A homology theory on a family of pairs of spaces $(X,A)$ consists of:
1. 1.
A sequence $\\{H_{k}(X,A)\\}_{k\in\mathbb{N}_{0}}$ of abelian groups, known
as the homology groups of the pair $(X,A)$ (note that for the pair $(X,\emptyset)$, we
write $H_{k}(X),k\in\mathbb{N}_{0}$). Here
$\mathbb{N}_{0}=\mathbb{N}\cup\\{0\\}$.
2. 2.
To every map of pairs $\varphi:(X,A)\rightarrow(Y,B)$ is associated a
homomorphism $\varphi_{*}:H_{k}(X,A)\rightarrow H_{k}(Y,B)$ for all
$k\in\mathbb{N}_{0}$.
3. 3.
To every $k\in\mathbb{N}_{0}$ and every pair $(X,A)$ is associated a
homomorphism $\partial:H_{k}(X,A)\rightarrow H_{k-1}(A)$ for all
$k\in\mathbb{N}_{0}$.
These items satisfy the following axioms.
($A_{1}$)
If $\varphi=id_{X}$, then $\varphi_{*}=id|_{H_{k}(X,A)}$.
($A_{2}$)
If $\varphi:(X,A)\rightarrow(Y,B)$ and $\psi:(Y,B)\rightarrow(Z,C)$ are maps
of pairs, then $(\psi\circ\varphi)_{*}=\psi_{*}\circ\varphi_{*}$.
($A_{3}$)
If $\varphi:(X,A)\rightarrow(Y,B)$ is a map of pairs, then
$\partial\circ\varphi_{*}=(\varphi|_{A})_{*}\circ\partial$.
($A_{4}$)
If $i:A\rightarrow X$ and $j:(X,\phi)\rightarrow(X,A)$ are inclusion maps,
then the following sequence is exact
$...\xrightarrow[]{\partial}H_{k}(A)\xrightarrow[]{i_{*}}H_{k}(X)\xrightarrow[]{j_{*}}H_{k}(X,A)\xrightarrow[]{\partial}H_{k-1}(A)\rightarrow...$
Recall that a chain
$...\xrightarrow[]{\partial_{k+1}}C_{k}(X)\xrightarrow[]{\partial_{k}}C_{k-1}(X)\xrightarrow[]{\partial_{k-1}}C_{k-2}(X)\xrightarrow[]{\partial_{k-2}}...$
is said to be exact if $im(\partial_{k+1})=ker(\partial_{k})$ for each
$k\in\mathbb{N}_{0}$.
($A_{5}$)
If $\varphi,\psi:(X,A)\rightarrow(Y,B)$ are homotopic maps of pairs, then
$\varphi_{*}=\psi_{*}$.
($A_{6}$)
(Excision): If $U\subseteq X$ is an open set with
$\bar{U}\subseteq\text{int}(A)$ and $i:(X\setminus U,A\setminus
U)\rightarrow(X,A)$ is the inclusion map, then $i_{*}:H_{k}(X\setminus
U,A\setminus U)\rightarrow H_{k}(X,A)$ is an isomorphism.
($A_{7}$)
If $X=\\{*\\}$, then $H_{k}({*})=0$ for all $k\in\mathbb{N}$.
###### Definition 2.2.
A continuous map $F:X\times[0,1]\to X$ is a deformation retraction of a space
$X$ onto a subspace $A$ if, for every $x\in X$ and $a\in A$, $F(x,0)=x$,
$F(x,1)\in A$, and $F(a,1)=a$.
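As a standard illustration (a classical example, recorded here only for orientation), the map $F(x,t)=(1-t)x+t\frac{x}{\|x\|}$ is a deformation retraction of $X=\mathbb{R}^{N}\setminus\\{0\\}$ onto the unit sphere $A=\\{x:\|x\|=1\\}$; indeed $F(x,0)=x$, $F(x,1)=\frac{x}{\|x\|}\in A$ and $F(a,1)=a$ for $a\in A$. The retraction $\eta$ constructed in the proof of Theorem 3.5 below is of a similar radial nature.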
A crucial notion in analysis is the idea of compactness and the Palais-Smale
condition is a special type of compactness which is given as follows.
###### Definition 2.3.
(S. Kesavan [11]) Let $V$ be a Banach space and $J:V\rightarrow\mathbb{R}$ a
$C^{1}$ functional. It is said to satisfy the Palais-Smale condition (PS) if
the following holds: whenever $(u_{n})$ is a sequence in $V$ such that
$(J(u_{n}))$ is bounded and $J^{\prime}(u_{n})\rightarrow 0$ in $V^{\prime}$
(the dual space of $V$), then $(u_{n})$ has a strongly convergent subsequence.
The following is a deformation lemma which will be quintessential in computing
the homology groups.
###### Lemma 2.4.
(S. Kesavan [11]) Let $J:V\rightarrow\mathbb{R}$ be a $C^{1}$ functional
satisfying the Palais-Smale condition. Let $c,a$ be real numbers. Define
$K_{J,c}=\\{v\in V:J(v)=c,J^{\prime}(v)=0\\}$, $K^{a}=\\{v\in V:J(v)\leq a\\}$
(likewise we define $K_{a}=\\{v\in V:J(v)\geq a\\}$). Let $K_{J,c}=\emptyset$. Then
there exists $\epsilon^{\prime}>0$ and a continuous homotopy $\eta:[0,1]\times
V\rightarrow V$ such that $\forall~{}0<\epsilon\leq\epsilon^{\prime}$
1. 1.
$\eta(0,v)=v$ for all $v\in V$.
2. 2.
$\eta(t,v)=v$ for all $t\in[0,1]$, $v\notin J^{-1}([c-\epsilon,c+\epsilon])$.
3. 3.
$\eta(1,K^{c+\epsilon})\subset K^{c-\epsilon}$.
###### Definition 2.5.
The Morse index of a critical point of a functional $J:V\rightarrow\mathbb{R}$ is defined
to be the dimension of the maximal subspace of $V$ on which $J^{\prime\prime}$, the
second Fréchet derivative, is negative definite.
### 2.1 Space description
We begin by defining the standard Lebesgue space $L^{p}(\Omega)$ for $1\leq
p<\infty$ as
$L^{p}(\Omega)=\left\\{u:\Omega\rightarrow\mathbb{R}:u\;{\text{is measurable
and}}\int_{\Omega}|u|^{p}dx<\infty\right\\}$
endowed with the norm
$\|u\|_{p}=\left(\int_{\Omega}|u|^{p}dx\right)^{\frac{1}{p}}$. We will define the Sobolev space as
$W^{1,p}(\Omega)=\\{u\in L^{p}(\Omega):\nabla u\in(L^{p}(\Omega))^{N}\\}$
with the norm $\|u\|_{1,p}^{p}=\|u\|_{p}^{p}+\|\nabla u\|_{p}^{p}$. We further define
$W_{0}^{1,p}(\Omega)=\\{u\in
W^{1,p}(\Omega):u=0~{}\text{on}~{}\partial\Omega\\}.$
The associated norm is $\|u\|=\|\nabla u\|_{p}$.
With these norms, $L^{p}(\Omega)$, $W^{1,p}(\Omega)$ and $W_{0}^{1,p}(\Omega)$
are separable, reflexive Banach spaces([11]). We now state the Hölder’s
inequality and embedding results in the following propositions.
###### Proposition 2.6.
For any $u\in L^{p}(\Omega)$ and $v\in L^{p^{\prime}}(\Omega)$, where
$L^{p^{\prime}}(\Omega)$ is the conjugate space of $L^{p}(\Omega)$ such that
$\frac{1}{p}+\frac{1}{p^{\prime}}=1$,
$\big{|}\int_{\Omega}uv\;dx\big{|}\leq\|u\|_{p}\|v\|_{p^{\prime}}$
###### Proposition 2.7.
If $p<N$, then $W^{1,p}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous
for $r\in[p,p^{*}]$ and compact for $r\in[p,p^{*})$. If $p=N$, then
$W^{1,p}(\Omega)\hookrightarrow L^{r}(\Omega)$ is continuous and compact for
$r\in[p,\infty)$. Further, if $p>N$, then $W^{1,p}(\Omega)\hookrightarrow
C^{0,1-\frac{N}{p}}(\bar{\Omega})$.
## 3 The way to tackle the problem using Morse theory
We at first define an energy functional associated to the problem in (1.1)
which is as follows.
$\displaystyle\begin{split}I(u)&=\int_{\Omega}\frac{|\nabla
u|^{p}}{p}dx+\int_{\Omega}\chi_{\\{u>1\\}}(x)dx-\lambda\int_{\Omega}\frac{(u-1)_{+}^{q}}{q}dx.\end{split}$
This functional is not even differentiable and hence poses serious issues as
far as the application of variational theorems is concerned. Thus we
approximate $I$ using the following functionals, which vary with respect to a
parameter $\alpha>0$. This method is adapted from the work of Jerison-Perera
[9]. We define a smooth function $g:\mathbb{R}\rightarrow[0,2]$ as follows:
$g(t)=\begin{cases}0,&\text{if}~{}t\leq 0\\\ \text{a positive
function},&\text{if}~{}0<t<1\\\ 0,&\text{if}~{}t\geq 1\end{cases}$
and $\int_{0}^{1}g(t)dt=1$. We further let $G(t)=\int_{0}^{t}g(s)ds$. Clearly,
$G$ is a smooth and nondecreasing function such that
$G(t)=\begin{cases}0,&\text{if}~{}t\leq 0\\\ \text{a positive
function}<1,&\text{if}~{}0<t<1\\\ 1,&\text{if}~{}t\geq 1.\end{cases}$
We thus define
$\displaystyle\begin{split}I_{\alpha}(u)&=\int_{\Omega}\frac{|\nabla
u|^{p}}{p}dx+\int_{\Omega}G\left(\frac{u-1}{\alpha}\right)dx-\lambda\int_{\Omega}\frac{(u-1)_{+}^{q}}{q}dx.\end{split}$
This functional $I_{\alpha}$ is (at least) of class $C^{2}$, and hence
$\displaystyle\langle I_{\alpha}^{\prime\prime}(u)v,w\rangle=$
$\displaystyle\int_{\Omega}[|\nabla u|^{p-2}\nabla v\cdot\nabla w+(p-2)|\nabla
u|^{p-4}(\nabla u\cdot\nabla v)(\nabla u\cdot\nabla w)]dx$
$\displaystyle+\int_{\Omega}\frac{1}{\alpha^{2}}g^{\prime}\left(\frac{u-1}{\alpha}\right)vwdx-\lambda\int_{\Omega}(u-1)_{+}^{q-2}vwdx.$
Following is an important result in Morse theory which explains the effect of
the associated homology groups on the set $K_{J,(-\infty,a]}=\\{x\in
V:J(x)\leq a,~{}J^{\prime}(x)=0\\}$ of critical points with energy at most $a$.
###### Theorem 3.1.
Let $J\in C^{2}(V)$ satisfy the Palais-Smale condition and let ‘$a$’ be a
regular value of $J$. Then $H_{*}(V,J^{a})\neq 0$ implies that
$K_{J,(-\infty,a]}\neq\emptyset$.
###### Remark 3.2.
Before we apply the Morse lemma we recall that for a Morse function the
following holds
1. 1.
$H_{*}(J^{c},J^{c}\setminus\text{Crit}(J,c))=\oplus_{j}H_{*}(J^{c}\cap
N_{j},J^{c}\cap N_{j}\setminus\\{x_{j}\\}),$
where $\text{Crit}(J,c)=\\{x\in V:J(x)=c,J^{\prime}(x)=0\\}$, $N_{j}$ is a
neighbourhood of $x_{j}$.
2. 2.
$H_{k}(J^{c}\cap N,J^{c}\cap
N\setminus\\{x\\})=\begin{cases}\mathbb{R},&k=m(x)\\\
0,&\text{otherwise}\end{cases}$
where $m(x)$ is a Morse index of $x$, a critical point of $J$.
3. 3.
Further
$H_{k}(J^{a},J^{b})=\oplus_{\\{i:m(x_{i})=k\\}}\mathbb{R}=\mathbb{R}^{m_{k}(a,b)}$
where $m_{k}(a,b)=n(\\{i:m(x_{i})=k,x_{i}\in K_{J,(a,b)}\\})$. Here $n(S)$
denotes the number of elements present in the set $S$.
4. 4.
Morse relation
$\sum_{u\in K_{J,[a,b]}}\sum_{k\geq 0}\text{dim}(C_{k}(J,u))t^{k}=\sum_{k\geq
0}\text{dim}(H_{k}(J^{a},J^{b}))t^{k}+(1+t)\mathcal{Q}_{t}$
for all $t\in\mathbb{R}$. Here $\mathcal{Q}_{t}$ is a polynomial in
$\mathbb{N}_{0}[t]$, i.e., with nonnegative integer coefficients.
###### Theorem 3.3.
The functional $I_{\alpha}$ has at least one nontrivial critical point when
$0<\lambda\leq\lambda_{1}$, $\lambda_{1}$ being the first eigenvalue of
$-\Delta_{p}$.
###### Proof.
We observe that $I_{\alpha}(tu)\rightarrow-\infty$ as $t\rightarrow\infty$. A
key observation here is that there exists $R$ sufficiently small such that
$I_{\alpha}(u)\geq\alpha>0$
whenever $\|u\|=R$. We choose $\epsilon>0$ such that $c=\epsilon$ is a regular
value of $I_{\alpha}$. Thus, $I_{\alpha}^{\epsilon}$ is not path connected
since it has at least two path connected components, namely a
neighbourhood of $0$ and a set $\\{u:\|u\|\geq R^{\prime}\\}$ for $R^{\prime}$ sufficiently
large. From the theory of homology groups we get that
$\text{dim}(H_{0}(I_{\alpha}^{\epsilon}))\geq 2$, ‘dim’ denoting the dimension
of the homology group. From the Definition 2.1, let us consider the following
exact sequence
$...\rightarrow
H_{1}(W_{0}^{1,p}(\Omega),I_{\alpha}^{\epsilon})\xrightarrow[]{\partial_{1}}H_{0}(I_{\alpha}^{\epsilon},\emptyset)\xrightarrow[]{i_{0}}H_{0}(W_{0}^{1,p}(\Omega),\emptyset)\rightarrow...$
Obviously $\text{dim}(H_{0}(W_{0}^{1,p}(\Omega),\emptyset))=1$ and
$\text{dim}(H_{0}(I_{\alpha}^{\epsilon}))\geq 2$. Due to the exactness of the
sequence we conclude that
$\text{dim}H_{1}(W_{0}^{1,p}(\Omega),I_{\alpha}^{\epsilon})\geq 1$. Thus by
Theorem 3.1 we have $K_{I_{\alpha},(-\infty,\epsilon]}\neq\emptyset$.
Suppose that the only critical point of $I_{\alpha}$ is $u=0$, at which the energy of
the functional $I_{\alpha}$ is also $0$. Then, from the discussion above and
the Remark (3.2)-(4), the Morse relation gives the following
identity over $\mathbb{R}$
$1=t+\mathcal{P}(t)+(1+t)\mathcal{Q}_{t},$
$\mathcal{P}$ being a power series in $t$ and $\mathcal{Q}_{t}\geq 0$. This is a
contradiction. Thus there exists at least one $u\neq 0$ which is a critical
point of $I_{\alpha}$ whenever $\lambda\leq\lambda_{1}$. ∎
###### Definition 3.4 (Krasnoselskii genus).
Let $V$ be a Banach space and $S\subset V$. A set $S$ is said to be symmetric
if $u\in S$ implies $-u\in S$. Let $S$ be a closed, symmetric subset of $V$
such that $0\notin S$. We define a genus $\gamma(S)$ of $S$ by the smallest
integer $k$ such that there exists an odd continuous mapping from $S$ to
$\mathbb{R}^{k}\setminus\\{0\\}$. We define $\gamma(S)=\infty$, if no such $k$
exists.
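As a standard illustration (a classical fact, recorded here only for orientation): for the unit sphere $S^{N-1}=\\{u\in\mathbb{R}^{N}:\|u\|=1\\}$ one has $\gamma(S^{N-1})=N$. Indeed, the identity map gives $\gamma(S^{N-1})\leq N$, while the Borsuk-Ulam theorem forbids an odd continuous map from $S^{N-1}$ into $\mathbb{R}^{k}\setminus\\{0\\}$ for $k<N$, whence $\gamma(S^{N-1})\geq N$.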
Over the closed and symmetric subsets $M$ of $W_{0}^{1,p}(\Omega)$ with
Krasnoselskii genus $\gamma(M)\geq k$, define
$\lambda_{k}=\inf_{M\in\mathfrak{F}_{k}}\sup_{u\in M}I_{\alpha}(u).$
Here $\mathfrak{F}_{k}=\\{M\subset W_{0}^{1,p}(\Omega),~{}\text{closed and
symmetric}:\gamma(M)\geq k\\}$. A natural question at this point will be to
ask if the same conclusion can be drawn when
$\lambda_{k}<\lambda\leq\lambda_{k+1}$. We will define $\lambda_{0}=0$. The
next theorem answers this question.
###### Theorem 3.5.
The problem in (1.1) has at least one nontrivial solution when
$\lambda_{k}<\lambda\leq\lambda_{k+1}$, $\lambda_{k}$ being as defined above.
###### Proof.
We at first show that $H_{k}(W_{0}^{1,p}(\Omega),I_{\alpha}^{-a})=0$ for all
$k\geq 0$. Pick a $u\in\\{v:\|v\|=1\\}=\partial B^{\infty}=S^{\infty}$, where
$B^{\infty}=\\{v:\|v\|\leq 1\\}$. Then
$I_{\alpha}(tu)=\int_{\Omega}\frac{|\nabla(tu)|^{p}}{p}dx+\int_{\Omega}G\left(\frac{tu-1}{\alpha}\right)dx-\lambda\int_{\Omega}\frac{(tu-1)_{+}^{q}}{q}dx<-a<0$
for all $t\geq t_{0}$. It can be easily seen that, for a fixed $u$, the map
$t\mapsto I_{\alpha}(tu)$ is continuous. Further, for any $t\geq t_{0}$ we have
$I_{\alpha}(tu)<-a<0$. Thus, there exists $t(u)$ such that
$I_{\alpha}(t(u)u)=-a$ by the continuity of $I_{\alpha}$. We can
thus say that there exists a $C^{1}$-function
$T:W_{0}^{1,p}(\Omega)\setminus\\{0\\}\rightarrow\mathbb{R}^{+}$, $u\mapsto t(u)$. We now
define a standard deformation retract $\eta$ of $W_{0}^{1,p}(\Omega)\setminus
B_{R^{\prime}}(0)$ into $I_{\alpha}^{-a}$ as follows (refer Definition 2.2).
$\eta(s,u)=\begin{cases}(1-s)u+sT\left(\frac{u}{\|u\|}\right)\frac{u}{\|u\|},&\|u\|\geq
R^{\prime},I_{\alpha}(u)\geq-a\\\ u,&I_{\alpha}(u)\leq-a.\end{cases}$
It is not difficult to see that $\eta$ is a $C^{1}$ function over $[0,1]\times
W_{0}^{1,p}(\Omega)\setminus B_{R^{\prime}}(0)$. On using the map
$\delta(s,u)=\dfrac{u}{\|u\|}$, for $u\in W_{0}^{1,p}(\Omega)\setminus
B_{R^{\prime}}(0)$ we claim that
$H_{k}(W_{0}^{1,p}(\Omega),W_{0}^{1,p}(\Omega)\setminus
B_{r}(0))=H_{k}(B^{\infty},S^{\infty})$ for all $k\geq 0$. This is because,
$H_{k}(B^{\infty},S^{\infty})\cong H_{k}(*,0)$. From elementary computation of
homology groups with two $0$-dimensional simplices it is easy to see that
$H_{k}(*,0)=\\{0\\}$ for each $k\geq 0$. A result in [10] says that
$C_{m}(I,u)=\begin{cases}\mathbb{R},&\text{if}~{}m(u)=m\\\
0,&\text{otherwise}\end{cases}$
Therefore, from the Morse relation in the Remark (3.2)-4 and the result above,
we have
$\displaystyle\sum_{u\in K_{I,[-a,\infty)}}\sum_{k\geq
0}\text{dim}(C_{k}(I,u))t^{k}$ $\displaystyle=t^{m(u)}+\mathcal{P}(t)$ (3.1)
where $m(u)$ is the Morse index of $u$ and $\mathcal{P}(t)$ contains the rest
of the powers of $t$ corresponding to the other critical points, if any. The
Morse index is finite for the following reason. From the argument which
helped in establishing a ‘maximum’, say $u_{0}$, using the mountain pass
geometry around $0$, we had to assume $\lambda<C^{-q}\frac{q}{p}\|u\|^{p-q}$.
Owing to $u_{0}$ being a maximum, we have $I_{\alpha}^{\prime\prime}(u_{0})<0$,
which necessarily requires $\lambda>C^{-q}\frac{p-1}{q-1}\|u\|^{p-q}$. Thus we
have
$C^{-q}\frac{p-1}{q-1}\|u\|^{p-q}<\lambda<C^{-q}\frac{q}{p}\|u\|^{p-q}.$
This implies that $\lambda_{i}<\lambda<\lambda_{j}$ for some
$i,j\in\mathbb{N}_{0}$. On further using the Morse relation we obtain
$\displaystyle t^{m(u)}+\mathcal{P}(t)$ $\displaystyle=(1+t)\mathcal{Q}_{t}.$
(3.2)
This is because the $H_{k}$s are all trivial groups. Hence, $\mathcal{Q}_{t}$ either
contains $t^{m(u)}$ or $t^{m(u)-1}$ or both. Thus there exists at least one
nontrivial $u\in K_{I_{\alpha},[-a,\infty)}$ with $m(u)\leq n+1$. ∎
###### Remark 3.6.
If $0<\lambda\leq\lambda_{k+1}$, then there exist at least $k$ solutions to
the equation (1.1).
## 4 Existence of solution to the main problem (1.1) and smoothness of the
boundary $\partial\\{u>1\\}$
###### Lemma 4.1.
Let $\alpha_{j}\rightarrow 0$ ($\alpha_{j}>0$) as $j\rightarrow\infty$ and
$u_{j}$ be a critical point of $I_{\alpha_{j}}$. If $(u_{j})$ is bounded in
$W_{0}^{1,p}(\Omega)\cap L^{\infty}(\Omega)$, then there exists $u$, a
Lipschitz continuous function, on $\bar{\Omega}$ such that $u\in
W_{0}^{1,p}(\Omega)\cap C^{2}(\bar{\Omega}\setminus H(u))$ and a subsequence
(still denoted by $(u_{j})$) such that
1. (i)
$u_{j}\rightarrow u$ uniformly over $\bar{\Omega}$,
2. (ii)
$u_{j}\rightarrow u$ locally in $C^{1}(\bar{\Omega}\setminus\\{u=1\\})$,
3. (iii)
$u_{j}\rightarrow u$ strongly in $W_{0}^{1,p}(\Omega)$,
4. (iv)
$I(u)\leq\lim\inf I_{\alpha_{j}}(u_{j})\leq\lim\sup I_{\alpha_{j}}(u_{j})\leq
I(u)+|\\{u=1\\}|$, i.e. $u$ is a nontrivial function if $\lim\inf
I_{\alpha_{j}}(u_{j})<0$ or $\lim\sup I_{\alpha_{j}}(u_{j})>0$.
Furthermore, $u$ satisfies
$-\Delta_{p}u=\lambda\chi_{\\{u>1\\}}(x)(u-1)_{+}^{q-1}$
classically in $\Omega\setminus H(u)$, the free boundary condition is
satisfied in the generalized sense, and $u$ vanishes continuously on
$\partial\Omega$. If $u$ is nontrivial, then $u>0$ in $\Omega$,
the set $\\{u<1\\}$ is connected and the set $\\{u>1\\}$ is nonempty.
An important result that will be used to pass to the limit in the proof of
Lemma 4.1 is the following theorem, which is in line with the theorem due to
Caffarelli et al. in [6, Theorem $5.1$].
###### Lemma 4.2.
Let $u$ be a Lipschitz continuous function on the unit ball
$B_{1}(0)\subset\mathbb{R}^{N}$ satisfying the distributional inequalities
$\pm\Delta_{p}u\leq
A\left(\dfrac{1}{\alpha}\chi_{\\{|u-1|<\alpha\\}}(x)+1\right)$
for constants $A>0$ and $0<\alpha\leq 1$. Then there exists a constant $C>0$
depending on $N,A$ and $\int_{{B_{1}}(0)}u^{p}dx$, but not on $\alpha$, such
that
$\underset{x\in B_{\frac{1}{2}}(0)}{\text{esssup}}\\{|\nabla u(x)|\\}\leq C.$
###### Proof.
Since $u$ is a Lipschitz continuous function on the unit ball
$B_{1}(0)\subset\mathbb{R}^{N}$, $u$ is also bounded in the unit ball, say
by a constant $M_{0}$; moreover, $u$ is differentiable a.e. in
$B_{1}(0)$. We will prove the result stated in the lemma for $u_{-}$, as the
proof for $u_{+}$ will follow suit. Denote $v(x)=\frac{15}{\alpha}u_{-}(\alpha
x/15)$ and
$v_{1}=v+\underset{B_{1/4}}{\max}\\{v^{-}\\}.$
Therefore, $0\leq v_{1}\leq M_{1}$. Let us choose a test function $\eta\in
C_{0}^{\infty}(B_{3/4})$ which is such that $0\leq\eta\leq 1$ in $B_{3/4}$ and
$\eta=1$ in $B_{1/2}$. Thus
$\displaystyle\begin{split}\int_{\Omega}\eta^{p}|\nabla
v_{1}|^{p}dx=&-\int_{\Omega}\left(pv_{1}\eta^{p-1}|\nabla v_{1}|^{p-2}(\nabla
v_{1}\cdot\nabla\eta)+\eta^{p}v_{1}\Delta_{p}v_{1}\right)dx\\\
\leq&\frac{1}{p}\int_{\Omega}\eta^{p}|\nabla
v_{1}|^{p}dx+p\int_{\Omega}v_{1}^{p}|\nabla\eta|^{p}dx\\\
&+AM_{1}\int_{\Omega}\eta^{p}\left(\frac{1}{\alpha}\chi_{\\{|u-1|<\alpha\\}}(x)+1\right)dx\\\
\leq&\frac{1}{p}\int_{\Omega}\eta^{p}|\nabla
v_{1}|^{p}dx+pM_{1}^{p}\int_{\Omega}|\nabla\eta|^{p}dx\\\
&+AM_{1}\int_{\Omega}\eta^{p}\left(\frac{1}{\alpha}\chi_{\\{|u-1|<\alpha\\}}(x)+1\right)dx.\end{split}$
(4.1)
It is now established that
$\displaystyle\frac{p-1}{p}\int_{B_{1/2}}|\nabla v_{1}|^{p}dx$
$\displaystyle\leq M_{2}.$ (4.2)
However, $u$ being Lipschitz continuous, the gradient $\nabla u$ is bounded
a.e. in $B_{1}(0)$ and hence in $B_{1/2}(0)$. Thus
$\underset{B_{1/2}(0)}{\text{esssup}}\\{|\nabla u|\\}\leq C$, for some $C>0$.
∎
###### Proof of Lemma 4.1.
Let $0<\alpha_{j}<1$. Consider the problem sequence $(P_{j})$
$\displaystyle\begin{split}-\Delta_{p}u_{j}&=-\frac{1}{\alpha_{j}}g\left(\frac{(u_{j}-1)_{+}}{\alpha_{j}}\right)+\lambda(u_{j}-1)_{+}^{q-1}~{}\text{in}~{}\Omega\\\
u_{j}&>0~{}\text{in}~{}\Omega\\\
u_{j}&=0~{}\text{on}~{}\partial\Omega.\end{split}$ (4.3)
The nature of the problem being a sublinear one allows us to conclude by an
iterative technique that the sequence $(u_{j})$ is bounded in
$L^{\infty}(\Omega)$. Therefore, there exists $C_{0}$ such that $0\leq
(u_{j}-1)_{+}^{q-1}\leq C_{0}$. Let
$\varphi_{0}$ be a solution of
$\displaystyle\begin{split}-\Delta_{p}\varphi_{0}&=\lambda
C_{0}~{}\text{in}~{}\Omega\\\
\varphi_{0}&=0~{}\text{on}~{}\partial\Omega.\end{split}$ (4.4)
Now since $g\geq 0$, we have that $-\Delta_{p}u_{j}\leq\lambda
C_{0}=-\Delta_{p}\varphi_{0}$ in $\Omega$. Therefore by the maximum principle,
$\displaystyle 0\leq u_{j}(x)\leq\varphi_{0}(x)~{}\forall x\in\Omega.$ (4.5)
Since $\\{u_{j}\geq 1\\}\subset\\{\varphi_{0}\geq 1\\}$, $\varphi_{0}$
gives a uniform lower bound, say $d_{0}$, on the distance from the set
$\\{u_{j}\geq 1\\}$ to $\partial\Omega$. Thus $(u_{j})$ is bounded with
respect to the $C^{2,a}$ norm near $\partial\Omega$. Therefore, it has a convergent subsequence in
the $C^{2}$-norm in a $\dfrac{d_{0}}{2}$ neighbourhood of the boundary
$\partial\Omega$. Obviously $0\leq g\leq 2\chi_{(0,1)}$ and hence
$\displaystyle\begin{split}\pm\Delta_{p}
u_{j}&=\pm\frac{1}{\alpha_{j}}g\left(\frac{(u_{j}-1)_{+}}{\alpha_{j}}\right)\mp\lambda(u_{j}-1)_{+}^{q-1}\\\
&\leq\frac{2}{\alpha_{j}}\chi_{\\{|u_{j}-1|<\alpha_{j}\\}}(x)+\lambda
C_{0}.\end{split}$ (4.6)
Since $(u_{j})$ is bounded in $L^{p}(\Omega)$, by Lemma 4.2 it follows
that there exists $A>0$ such that
$\displaystyle\underset{x\in B_{\frac{r}{2}}(x_{0})}{\text{esssup}}\\{|\nabla
u_{j}(x)|\\}$ $\displaystyle\leq\frac{A}{r}$ (4.7)
for a suitable $r>0$ such that $B_{r}(x_{0})\subset\Omega$. However, since
$(u_{j})$ is a sequence of Lipschitz continuous functions that are also
$C^{1}$, therefore
$\displaystyle\underset{x\in B_{\frac{r}{2}}(x_{0})}{\sup}\\{|\nabla
u_{j}(x)|\\}$ $\displaystyle\leq\frac{A}{r}.$ (4.8)
Thus $(u_{j})$ is uniformly Lipschitz continuous on the compact subsets of
$\Omega$ whose distance from the boundary $\partial\Omega$ is at least
$\frac{d_{0}}{2}$.
Thus by the Arzelà-Ascoli theorem applied to $(u_{j})$ we have a subsequence,
still denoted the same, which converges uniformly to a Lipschitz
continuous function $u$ in $\Omega$ with zero boundary values and with strong
convergence in $C^{2}$ on a $\frac{d_{0}}{2}$-neighbourhood of
$\partial\Omega$. By the Eberlein-Šmulian theorem we conclude that
$u_{j}\rightharpoonup u$ in $W_{0}^{1,p}(\Omega)$.
We now prove that $u$ satisfies
$\displaystyle-\Delta_{p}u$
$\displaystyle=\lambda\chi_{\\{u>1\\}}(x)(u-1)_{+}^{q-1}$ (4.9)
in the set $\\{u\neq 1\\}$. Let $\varphi\in C_{0}^{\infty}(\\{u>1\\})$ and
therefore $u\geq 1+2\delta$ on the support of $\varphi$ for some $\delta>0$.
On using the convergence of $u_{j}$ to $u$ uniformly on $\Omega$ we have
$|u_{j}-u|<\delta$ for any sufficiently large $j$ with $\alpha_{j}<\delta$. So
$u_{j}\geq 1+\alpha_{j}$ on the support of $\varphi$. Testing (4.3) with
$\varphi$ yields
$\displaystyle\int_{\Omega}|\nabla u_{j}|^{p-2}\nabla u_{j}\cdot\nabla\varphi
dx$ $\displaystyle=\lambda\int_{\Omega}(u_{j}-1)_{+}^{q-1}\varphi dx.$ (4.10)
On passing the limit $j\rightarrow\infty$ in (4.10), we get
$\displaystyle\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla\varphi dx$
$\displaystyle=\lambda\int_{\Omega}(u-1)_{+}^{q-1}\varphi dx.$ (4.11)
To arrive at (4.11) we have used the weak convergence of $u_{j}$ to $u$ in
$W_{0}^{1,p}(\Omega)$ and the uniform convergence of the same in $\Omega$.
Hence $u$ is a weak solution of $-\Delta_{p}u=\lambda(u-1)_{+}^{q-1}$ in
$\\{u>1\\}$. Since $u$ is a Lipschitz continuous function, hence by the
Schauder estimates we conclude that it is also a classical solution of
$-\Delta_{p}u=\lambda(u-1)_{+}^{q-1}$ in $\\{u>1\\}$. Similarly on choosing
$\varphi\in C_{0}^{\infty}(\\{u<1\\})$ one can find a $\delta>0$ such that
$u\leq 1-2\delta$. Therefore, $u_{j}<1-\delta$.
Let us now analyze the nature of $u$ in the set $\\{u\leq 1\\}^{\circ}$. On
testing (4.9) with any nonnegative function and passing the limit
$j\rightarrow\infty$ and using the fact that $g\geq 0$, $G\leq 1$ we can show
that $u$ satisfies
$\displaystyle-\Delta_{p}u$
$\displaystyle\leq\lambda(u-1)_{+}^{q-1}~{}\text{in}~{}\Omega$ (4.12)
in the distributional sense. Furthermore, $\mu=\Delta_{p}u$ is a positive
Radon measure supported on $\Omega\cap\partial\\{u<1\\}$ (refer Lemma 4.3 in
Appendix). From (4.12), the positivity of the Radon measure $\mu$ and the
usage of Section $9.4$ in Gilbarg-Trudinger [8] we conclude that $u\in
W_{\text{loc}}^{2,p}(\\{u\leq 1\\}^{\circ})$, $1<p<\infty$. Thus $\mu$ is
supported on $\Omega\cap\partial\\{u<1\\}\cap\partial\\{u>1\\}$ and $u$
satisfies $-\Delta_{p}u=0$ in the set $\\{u\leq 1\\}^{\circ}$.
In order to prove $(ii)$, we will show that $u_{j}\rightarrow u$ locally in
$C^{1}(\Omega\setminus\\{u=1\\})$. Note that we have already proved that
$u_{j}\rightarrow u$ in the $C^{2}$ norm in a neighbourhood of
$\partial\Omega$ in $\bar{\Omega}$. Suppose $M\subset\subset\\{u>1\\}$. In
this set $M$ we have $u\geq 1+2\delta$ for some $\delta>0$. Thus for
sufficiently large $j$, with $\alpha_{j}<\delta$, we have $|u_{j}-u|<\delta$
in $\Omega$ and hence $u_{j}\geq 1+\alpha_{j}$ in $M$. From (4.3) we have
$-\Delta_{p}u_{j}=\lambda(u_{j}-1)_{+}^{q-1}~{}\text{in}~{}M.$
Clearly, $(u_{j}-1)_{+}^{q-1}\rightarrow(u-1)_{+}^{q-1}$ in $L^{p}(\Omega)$
for $1<p<\infty$ and $u_{j}\rightarrow u$ uniformly in $\Omega$. This analysis
says something stronger: since
$(-\Delta_{p})u_{j}=\lambda(u_{j}-1)_{+}^{q-1}$ in $M$, we have that
$u_{j}\rightarrow u$ in $W^{2,p}(M)$. By the embedding
$W^{2,p}(M)\hookrightarrow C^{1}(M)$ for $p>N$, we have $u_{j}\rightarrow u$
in $C^{1}(M)$. This shows that $u_{j}\rightarrow u$ in $C^{1}(\\{u>1\\})$.
Working on similar lines we can also show that $u_{j}\rightarrow u$ in
$C^{1}(\\{u<1\\})$.
We will now prove $(iii)$. Since $u_{j}\rightharpoonup u$ in
$W_{0}^{1,p}(\Omega)$, we have, by the weak lower semicontinuity of the
norm $\|\cdot\|$, that
$\|u\|\leq\lim\inf\|u_{j}\|.$
It is sufficient to prove that $\lim\sup\|u_{j}\|\leq\|u\|$. To achieve this,
we multiply (4.3) with $(u_{j}-1)$ and then integrate by parts. We will also
use the fact that $tg\left(\frac{t}{\alpha_{j}}\right)\geq 0$ for any
$t\in\mathbb{R}$. This gives,
$\displaystyle\begin{split}\int_{\Omega}|\nabla
u_{j}|^{p}dx&\leq\lambda\int_{\Omega}(u_{j}-1)_{+}^{q}dx-\int_{\partial\Omega}|\nabla u_{j}|^{p-2}\frac{\partial
u_{j}}{\partial\hat{n}}dS\\\
&\rightarrow\lambda\int_{\Omega}(u-1)_{+}^{q}dx-\int_{\partial\Omega}|\nabla u|^{p-2}\frac{\partial
u}{\partial\hat{n}}dS\end{split}$ (4.13)
as $j\rightarrow\infty$. Here $\hat{n}$ is the outward drawn normal to
$\partial\Omega$. ∎
We choose $\vec{\varphi}\in C_{0}^{1}(\Omega,\mathbb{R}^{N})$ such that $u\neq
1$ a.e. on the support of $\vec{\varphi}$. Multiplying the weak formulation of (4.3) by $\nabla
u_{n}\cdot\vec{\varphi}$ and integrating over
the set $\\{1-\epsilon^{-}<u_{n}<1+\epsilon^{+}\\}$ gives
$\displaystyle\begin{split}\int_{\\{1-\epsilon^{-}<u_{n}<1+\epsilon^{+}\\}}\left[-\Delta_{p}u_{n}+\frac{1}{\alpha_{n}}g\left(\frac{u_{n}-1}{\alpha_{n}}\right)\right]\nabla
u_{n}\cdot\vec{\varphi}dx\\\
=\int_{\\{1-\epsilon^{-}<u_{n}<1+\epsilon^{+}\\}}\lambda(u_{n}-1)_{+}^{q-1}\nabla
u_{n}\cdot\vec{\varphi}dx.\end{split}$ (4.14)
The term on the left hand side of (4.14) can be expressed as follows.
$\displaystyle\nabla\cdot\left(\frac{1}{p}|\nabla
u_{n}|^{p}\vec{\varphi}-(\nabla u_{n}\cdot\vec{\varphi})|\nabla
u_{n}|^{p-2}\nabla u_{n}\right)+(\nabla\vec{\varphi}\cdot\nabla
u_{n})\cdot\nabla u_{n}|\nabla u_{n}|^{p-2}$ $\displaystyle-\frac{1}{p}|\nabla
u_{n}|^{p}\nabla\cdot\vec{\varphi}$ $\displaystyle+\nabla
G\left(\frac{u_{n}-1}{\alpha_{n}}\right)\cdot\vec{\varphi}.$ (4.15)
Using (4.15) and integrating by parts we obtain
$\displaystyle\begin{split}\int_{\\{u_{n}=1+\epsilon^{+}\\}\cup\\{u_{n}=1-\epsilon^{-}\\}}\left[\frac{1}{p}|\nabla
u_{n}|^{p}\vec{\varphi}-(\nabla u_{n}\cdot\vec{\varphi})|\nabla
u_{n}|^{p-2}\nabla
u_{n}+G\left(\frac{u_{n}-1}{\alpha_{n}}\right)\vec{\varphi}\right]\cdot\hat{n}dS\\\
=\int_{\\{1-\epsilon^{-}<u_{n}<1+\epsilon^{+}\\}}\left(\frac{1}{p}|\nabla
u_{n}|^{p}\nabla\cdot\vec{\varphi}-(\nabla\vec{\varphi}\cdot\nabla
u_{n})|\nabla u_{n}|^{p-2}\nabla u_{n}\right)dx\\\
+\int_{\\{1-\epsilon^{-}<u_{n}<1+\epsilon^{+}\\}}\left[G\left(\frac{u_{n}-1}{\alpha_{n}}\right)\nabla\cdot\vec{\varphi}+\lambda(u_{n}-1)_{+}^{q-1}(\nabla
u_{n}\cdot\vec{\varphi})\right]dx.\end{split}$ (4.16)
The integral on the left of equation (4.16) converges to
$\displaystyle\int_{\\{u=1+\epsilon^{+}\\}\cup\\{u=1-\epsilon^{-}\\}}\left(\frac{1}{p}|\nabla
u|^{p}\vec{\varphi}-(\nabla u\cdot\vec{\varphi})|\nabla u|^{p-2}\nabla
u\right)\cdot\hat{n}dS+\int_{\\{u=1+\epsilon^{+}\\}}\vec{\varphi}\cdot\hat{n}dS$
(4.17)
$\displaystyle=\int_{\\{u=1+\epsilon^{+}\\}}\left[1-\left(\frac{p-1}{p}\right)|\nabla
u|^{p}\right]\vec{\varphi}\cdot\hat{n}dS-\int_{\\{u=1-\epsilon^{-}\\}}\left(\frac{p-1}{p}\right)|\nabla
u|^{p}\vec{\varphi}\cdot\hat{n}dS.$ (4.18)
Thus equation (4.18), under the limit $\epsilon^{\pm}\rightarrow 0$, becomes
$\displaystyle 0=\underset{\epsilon\rightarrow
0}{\lim}\int_{\\{u=1+\epsilon^{+}\\}}\left[\left(\frac{p}{p-1}\right)-|\nabla
u|^{p}\right]\vec{\varphi}\cdot\hat{n}dS-\underset{\epsilon\rightarrow
0}{\lim}\int_{\\{u=1-\epsilon^{-}\\}}|\nabla
u|^{p}\vec{\varphi}\cdot\hat{n}dS$ (4.19)
This is because $\hat{n}=\pm\dfrac{\nabla u}{|\nabla u|}$ on the set
$\\{u=1+\epsilon^{+}\\}\cup\\{u=1-\epsilon^{-}\\}$. This proves that $u$
satisfies the free boundary condition; in particular, the solution cannot be
trivial. Thus a solution to (1.1) exists that
obeys the free boundary condition besides the Dirichlet boundary condition.
## Appendix
###### Lemma 4.3.
$u$ is in $W_{\text{loc}}^{1,p}(\Omega)$ and the Radon measure
$\mu=\Delta_{p}u$ is nonnegative and supported on $\Omega\cap\partial\\{u<1\\}$.
###### Proof.
We follow the proof due to Alt-Caffarelli [1]. Choose $\delta>0$ and the test
function $\varphi^{p}\min\\{u-1+\delta,0\\}$, where $\varphi\in
C_{0}^{\infty}(\Omega)$. Therefore,
$\displaystyle\begin{split}0&=\int_{\Omega}|\nabla u|^{p-2}\nabla
u\cdot\nabla(\varphi^{p}\min\\{u-1+\delta,0\\})dx\\\
&=\int_{\Omega\cap\\{u<1-\delta\\}}|\nabla u|^{p-2}\nabla
u\cdot\nabla(\varphi^{p}\min\\{u-1+\delta,0\\})dx\\\
&=\int_{\Omega\cap\\{u<1-\delta\\}}|\nabla
u|^{p}\varphi^{p}dx+p\int_{\Omega\cap\\{u<1-\delta\\}}\varphi^{p-1}(u-1+\delta)|\nabla
u|^{p-2}\nabla u\cdot\nabla\varphi dx,\end{split}$ (4.20)
and so, by a Caccioppoli-type estimate, we have
$\displaystyle\begin{split}\int_{\Omega\cap\\{u<1-\delta\\}}|\nabla
u|^{p}\varphi^{p}dx&=-p\int_{\Omega\cap\\{u<1-\delta\\}}\varphi^{p-1}(u-1+\delta)|\nabla
u|^{p-2}\nabla u\cdot\nabla\varphi dx\\\ &\leq
c\int_{\Omega}u^{p}|\nabla\varphi|^{p}dx.\end{split}$ (4.21)
Since $\int_{\Omega}|u|^{p}dx<\infty$, on passing the limit
$\delta\rightarrow 0$ we conclude that $u\in W_{\text{loc}}^{1,p}(\Omega)$.
Furthermore, for a nonnegative $\zeta\in C_{0}^{\infty}(\Omega)$ we have
$\displaystyle\begin{split}-\int_{\Omega}|\nabla u|^{p-2}\nabla
u\cdot\nabla\zeta
dx=&\left(\int_{\Omega\cap\\{0<u<1-2\delta\\}}+\int_{\Omega\cap\\{1-2\delta<u<1-\delta\\}}+\int_{\Omega\cap\\{1-\delta<u<1\\}}\right.\\\
&\left.+\int_{\Omega\cap\\{u>1\\}}\right)\\\ &\left[|\nabla u|^{p-2}\nabla
u\cdot\nabla\left(\zeta\max\left\\{\min\left\\{2-\frac{1-u}{\delta},1\right\\},0\right\\}\right)\right]dx\\\
\geq&\int_{\Omega\cap\\{1-2\delta<u<1-\delta\\}}\left[|\nabla u|^{p-2}\nabla
u\cdot\left(2-\frac{1-u}{\delta}\right)\nabla\zeta+\frac{\zeta}{\delta}|\nabla
u|^{p}\right]dx\geq 0.\end{split}$ (4.22)
On passing the limit $\delta\rightarrow 0$ we obtain $\Delta_{p}(u-1)_{-}\geq
0$ in the distributional sense and hence there exists a Radon measure $\mu$
(say) such that $\mu=\Delta_{p}(u-1)_{-}\geq 0$. ∎
## Acknowledgement
The author thanks the free boundary value problems community for
injecting a new lease of life into the study of elliptic PDEs, and the CSIR,
India (25(0292)/18/EMR-II) for the financial support.
## References
* [1] Alt, H.W., Caffarelli, L.A., Existence and regularity for a minimum problem with free boundary, J. Reine Angew. Math., 325, 105-144, 1981.
* [2] Batchelor, G.K., On steady state laminar flow with closed streamlines at large Reynolds number, J. Fluid mech., 1, 177-190, 1956.
* [3] Batchelor, G.K., A proposal concerning laminar wakes behind bluff bodies at large Reynolds number, J. Fluid mech., 1, 388-398, 1956.
* [4] Caflisch, R.E., Mathematical analysis of vortex dynamics. In: Mathematical Aspects of Vortex Dynamics (Leesburg VA, 1988), pp 1-24, SIAM, Philadelphia, PA, 1989.
* [5] Caffarelli, L.A., Peral, I., On $W^{1,p}$ estimates for elliptic equations in divergence form, Comm. Pure Appl. Math., 51, 1-21, 1998.
* [6] Caffarelli, L.A., Jerison, D., Kenig, C.E., Some new monotonicity theorems with applications to free boundary problems, Ann. Math. (2), 155(2), 369-404, 2002.
* [7] Elcrat, A.R., Miller, K.G., Variational formulas on Lipschitz domains, Trans. Am. Math. Soc., 347(7), 2669-2678, 1995.
* [8] Gilbarg, D., Trudinger, N.S., Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin Heidelberg, 2001.
* [9] Jerison, D., Perera, K., A multiplicity result for the Prandtl-Batchelor free boundary problem (Preprint arXiv:2003.05921).
* [10] Kanishka Perera, Ravi P. Agarwal and Donald O’Regan, Morse Theoretic Aspects of $p$-Laplacian Type Operators, Mathematical surveys and Monographs, Amer. Math. Soc., 161, 2010.
* [11] Kesavan, S., Topics in functional analysis and applications, New Age International (P) Ltd., 2003.
* [12] Nikolaos S. Papageorgiou, Vicenţiu D. Rădulescu, Dušan D. Repovš, Nonlinear Analysis - Theory and Methods, Springer, 2019.
* [13] Perera, K., On a class of elliptic free boundary problems with multiple solutions, Nonlinear Differ. Equ. Appl., 28, Art: 36, 2021.
|
2024-09-04T02:54:58.480708 | 2020-03-05T21:18:21 | 2003.04294 | {
"authors": "Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, Tao Tang, Zheng\n Wang",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26122",
"submitter": "Zheng Wang",
"url": "https://arxiv.org/abs/2003.04294"
} | arxiv-papers | # Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A
Machine Learning Based Approach
Peng Zhang, Jianbin Fang, Canqun Yang, Chun Huang, Tao Tang, Zheng Wang Peng
Zhang, Jianbin Fang, Canqun Yang, Chun Huang, and Tao Tang are with National
University of Defense Technology, China.
E-mail: {zhangpeng13a, j.fang, canqun<EMAIL_ADDRESS>Zheng Wang is
with University of Leeds, United Kingdom.
E-mail<EMAIL_ADDRESS>
###### Abstract
As many-core accelerators keep integrating more processing units, it becomes
increasingly more difficult for a parallel application to make effective use
of all available resources. An effective way for improving hardware
utilization is to exploit spatial and temporal sharing of the heterogeneous
processing units by multiplexing computation and communication tasks – a
strategy known as heterogeneous streaming. Achieving effective heterogeneous
streaming requires carefully partitioning hardware among tasks, and matching
the granularity of task parallelism to the resource partition. However,
finding the right resource partitioning and task granularity is extremely
challenging, because there is a large number of possible solutions and the
optimal solution varies across programs and datasets. This article presents an
automatic approach to quickly derive a good solution for hardware resource
partition and task granularity for task-based parallel applications on
heterogeneous many-core architectures. Our approach employs a performance
model to estimate the resulting performance of the target application under a
given resource partition and task granularity configuration. The model is used
as a utility to quickly search for a good configuration at runtime. Instead of
hand-crafting an analytical model that requires expert insights into low-level
hardware details, we employ machine learning techniques to automatically learn
it. We achieve this by first learning a predictive model offline using
training programs. The learnt model can then be used to predict the
performance of any unseen program at runtime. We apply our approach to 39
representative parallel applications and evaluate it on two representative
heterogeneous many-core platforms: a CPU-XeonPhi platform and a CPU-GPU
platform. Compared to the single-stream version, our approach achieves, on
average, a 1.6x and 1.1x speedup on the XeonPhi and the GPU platform,
respectively. These results translate to over 93% of the performance delivered
by a theoretically perfect predictor.
###### Index Terms:
Heterogeneous computing; Parallelism; Performance Tuning; Machine learning
## 1 Introduction
Heterogeneous many-cores, as represented by GPGPUs and Intel’s XeonPhi, are
widely used for accelerating parallel applications [1, 2, 3]. As users demand
higher performance, many-core accelerators have become more powerful by
providing more and more processing units. While the abundant computing
resources offer the potential for higher performance, it becomes harder for a
parallel application to utilize all the available computing resources [4, 5].
As a result, many parallel applications fail to fully unlock the performance
potential of a many-core accelerator.
One way for improving heterogeneous many-core utilization is to exploit
spatial and temporal sharing of processing resources. This strategy is also
known as heterogeneous streaming [6]. The idea is to exploit the computation
and communication independence of task parallelism to improve hardware
utilization. It works by partitioning the processor cores to allow independent
communication and computation tasks (i.e. streams) to run concurrently on
different hardware resources, which effectively overlaps the concurrent kernel
execution with data movements. Representative heterogeneous streaming
implementations include CUDA Streams [7], OpenCL Command Queues [8], and Intel
heterogeneous streams library (hStreams) [9, 6]. These implementations allow a
parallel program to spawn more than one stream (or pipeline) so that the data
movement stage of one pipeline overlaps the kernel execution stage of another.
Prior work on heterogeneous streaming mainly targets GPUs [10, 11, 12].
Compared to GPU implementations, OS-enabled coprocessors, like the Intel
XeonPhi, provides some unique features that are currently unavailable on the
GPU. For example, besides specifying the number of streams, developers can
explicitly map streams to different groups of cores on XeonPhi to control the
number of cores of each hardware partition. This parameter is not exposed to
programmers on GPUs, which means that previous work on GPU-based parallel streaming
optimizations cannot fully exploit XeonPhi-like many-core accelerators
(see also Section 6.3). On the other hand, ample evidence shows that
choosing the right stream configuration, i.e., the number of processor core
partitions and the number of concurrent tasks of a multi-stream application,
has a significant impact on the application’s performance on many-core
architectures [13, 14, 15]. However, attempting to find the optimal values
through exhaustive profiling would be ineffective, because the range of the
possible values for the two parameters is huge. What we need is a technique
that automatically determines the optimal stream configuration for any
streamed application in a fast manner.
This article presents a novel approach to determine the right number of
processor core partitions and tasks for heterogeneous streams, targeting
heterogeneous many-core architectures. Our key insight is to use a performance
model to quickly search for the optimal stream configuration. The performance
model estimates the resulting performance of the target streamed application
when it runs under a given stream configuration. If the prediction can be
performed quickly with low overhead, we can then quickly explore a large
configuration space. Instead of hand-crafting the performance model that
requires human modification whenever the architecture evolves (i.e., when the
number and types of cores change), we employ machine learning techniques to
automatically construct a predictive model. Our predictor is first trained
_off-line_. Then, using code and dynamic runtime features of the program, the
model predicts performance for a _new_ , _unseen_ program under a given stream
configuration.
Our prior work [16] develops a machine learning based classifier to predict
the optimal stream configuration. However, this approach can only choose from
a limited set of configurations seen during the training phase. Unlike a
classification-based approach, the approach presented in the article allows us
to explore a larger number of stream configurations (including those that are
not seen during the training phase) with negligible runtime overhead. This
advantage significantly improves the generalization ability of the proposed
approach (Section 3).
Due to the newness of heterogeneous streaming execution model, there are very
few multi-stream benchmarks available. To evaluate our approach on a wide
range of applications, we have developed a compiler-based tool to
automatically translate standard OpenMP benchmarks into their streamed
variants for the backends of XeonPhi and GPU architectures (Section 4). With
the help of this code generator, we can apply our approach to 39 parallel
benchmarks. We argue that this tool can help generate more streamed code and
thus is an added value to the community.
We evaluate our approach on two representative heterogeneous many-core
platforms: a 57-core Intel XeonPhi platform and an NVIDIA 1080Ti GPU platform. We
achieve, on average, a 1.6x and 1.1x speedup over the single-stream execution
on the XeonPhi and the GPU platforms, respectively. This translates to over
93% of the best available performance.
The core contribution of this paper is a novel machine-learning-guided
approach for automatically determining the optimal stream configuration on
heterogeneous many-cores. We show that our approach delivers good performance
across benchmarks and heterogeneous many-core platforms. While we do not seek
to advance the machine learning algorithm itself, our work shows how machine
learning can be used to address the challenging problem of tuning fine-grained
streaming parallelism on heterogeneous many-core architectures. In this work,
we demonstrate the usefulness of our approach on XeonPhi and an NVIDIA GPU,
but our approach is equally applicable on other heterogeneous platforms like
AMD GPUs.
## 2 Background and Overview
In this section, we first give a brief introduction of heterogeneous
streaming; we then define the scope of this work, before motivating the need
of our scheme and providing an overview of our approach.
### 2.1 Heterogeneous Streaming
The idea of heterogeneous streaming is to exploit spatial and temporal sharing
of computing resources to better utilize the hardware and improve
application performance.
Spatial Sharing. Modern many-core accelerators offer a large number of
processing units. Since many applications cannot fully utilize all the cores
at a time, we can partition the computing units into multiple groups to
concurrently execute multiple tasks. In this way, the computing resource is
spatially shared across concurrently-running application tasks. The key to
spatial sharing is to determine the right number of partitions, because over-
provisioning of processing units would waste computing resources but under-
provisioning would lead to slowed down performance.
Temporal Sharing. Code written for heterogeneous computing devices typically
consists of several stages, such as host-device communication and computation.
Using temporal sharing, one can overlap some of these stages to exploit
pipeline parallelism, improving performance by hiding the host-device
communication behind kernel execution.
1  //setting the partition-size and task granularity
2  hStreams_app_init(partition_size,streams_p_part);
3  //stream queue id
4
5  stream_id = 0;
6  for(…){
7    //enqueue host-device transfer to current stream
8    hStreams_app_xfer_memory(,,, stream_id, HSTR_SRC_TO_SINK,…);
9    …
10   //enqueue computation to the current stream
11   hStreams_EnqueueCompute(stream_id, "kernel1", …);
12   …
13   //move to the next stream
14   stream_id = (stream_id++) % MAX_STR;
15 }
16 //transfer data back to host
17 hStreams_app_xfer_memory(,,, HSTR_SINK_TO_SRC,…);
Figure 1: Heterogeneous streaming using hStreams as an example.
(a) binomial
(b) prefixsum
Figure 2: Heatmaps show the resultant speedup (over single-stream) of binomial
and prefixsum under different stream configurations. The #partitions and
#tasks have a significant impact on the resultant performance, and the sweet
spots are sparse and vary across programs.
### 2.2 Problem Scope
Our work aims to improve the performance of a data parallel application by
exploiting spatial and temporal sharing of heterogeneous streams. We do so by
determining at runtime how many partitions should be used to group the cores
(_#partitions_) and how many data parallel tasks (_#tasks_) should be used to
run the application. Our current implementation is applicable to XeonPhi and
GPUs by using different runtime back-ends (hStream for XeonPhi, and CUDA or
OpenCL for GPUs).
Code Example. Figure 1 gives a simplified code example written with Intel’s
hStreams APIs that can run on the XeonPhi many-core. At line 2 we initialize
the stream execution by setting the number of partitions and tasks/streams per
partition. This initialization process essentially creates multiple processor
domains and determines how many logical streams can run on a partition. In the
_for_ loop (lines 7-14) we enqueue the communication and computation tasks to
a number of streams identified by the stream_id variable. In this way,
communication and computation of different streams can be overlapped during
execution (temporal sharing); and streams on different processor domains (or
partitions) can run concurrently (spatial sharing). Our predictive model
determines the #partitions and the #tasks before invoking the hStreams
initialization routine, hStreams_app_init().
Figure 3: Color table showing the speedups of best-performing configurations
across inputs for dct. Each cell shows the performance for one of the 16 best-
performing configurations, $Cn$, on a given input, $Dn$. The best
configuration varies across inputs and a good configuration on one input can
give poor performance on another dataset.
### 2.3 Motivating Examples
Consider Figure 2 which shows the resultant performance improvement given by
multi-stream parallelism over the single-stream version of the code for two
applications on a 57-core Intel XeonPhi system. We use two streamed programs
from prior work [13]: binomial computes the price evolution over a given
period and prefixSum calculates the prefix sum for a sequence of numbers.
It is observed from this example that not all multi-stream configurations give
improved performance. As can be seen from the diagrams, the search space of
multi-stream configurations is huge but good configurations are sparse. The
performance varies significantly over stream configurations (#partitions,
#tasks). The optimal #tasks for binomial ranges from 1 to 30, and the best
#partitions is between 1 and 40. In contrast to binomial, prefixsum benefits
from fine-grained parallelism when using a larger #tasks (220 to 224) and
#partitions (60 to 80). However, the stream configurations that are effective
for prefixsum give no speedup over the single-stream version for binomial.
Now consider Figure 3 that shows the speedups of dct under 16 multi-stream
configurations over the single-stream version, where each configuration is
found to give the best-performance for one of the 16 inputs. In the color
table, each cell shows the performance of a stream configuration
($C1,...,C16$) on a specific input dataset ($D1,...,D16$); and the values
along the diagonal line represent the best-available performance (found
through profiling) for an input. As can be seen from the figure, the best
stream configuration can vary across inputs for the same benchmark. For
example, while $C4$ gives a speedup of 1.33x over the baseline for dataset
$D4$, it delivers a poor performance for dataset $D14$ by doubling the
execution time over the single-stream version. This diagram also suggests that
no single configuration can give improved performance for all inputs.
Lesson Learned. These two examples show that choosing the stream configuration
has a great impact on performance and the best configuration must be
determined on a per-program and per-dataset basis. Later, we will show that
this observation is not unique to XeonPhi but also holds for GPUs. Attempting
to find the optimal configuration through means of an exhaustive search would
be ineffective, and the overhead involved would be far bigger than the
potential benefits. Online search algorithms, while can speed up the search
process, the overhead can still outweigh the benefit. For example, when
applying simulated annealing to binomial, the best-found configuration only
reaches 84% of the best-available performance after 310,728 iterations111In
Section 6.1, we show that our approach achieves 93% of the best-available
performance for binomial on XeonPhi.. Classical hand-written heuristics are
not ideal either, as they are not only complex to develop, but are likely to
fail due to the variety of programs and the ever-changing hardware
architecture. An alternate approach, and the one we chose to use, is to use
machine learning to automatically construct a performance model to estimate
the benefit of any candidate configuration, providing minimal runtime overhead
for searching for a good configuration, and having little development cost
when targeting new architectures.
### 2.4 Overview of Our Approach
Our library-based approach, depicted in Figure 4, is completely automated. To
determine the best streaming configuration, our approach follows a number of
steps described as follows. We use a set of information or _features_ to
capture the characteristics of the program. We develop a LLVM [17] compiler
pass to extract static code features at compile time, and a low-overhead
profiling pass to collect runtime information at execution time (i.e., during
the first few loop iterations). Because profiling also contributes to the
final program output, no computation cycle is wasted. At runtime, we search
for a good configuration through an offline trained performance model to
estimate the resulting performances for all candidate configurations. The
performance model takes in the feature values, a given configuration of
resource partition and task granularity and estimates the potential speedup
for the given configuration over the single-stream version. The overhead of
runtime feature collection and search is a few milliseconds, which is included
in all our experimental results. Since our training process can be performed
automatically, we can easily target our performance model for different
architectures.
Figure 4: Our machine learning based performance model (trained _offline_)
predicts the speedup based on the extracted feature values of the code and a
given stream configuration. We use the predictions to quickly rank candidate
configurations at runtime to choose the one with the best predicted
performance.
## 3 Performance Modeling
At the core of our approach is a machine learned performance model built upon
the Multi-layer Perceptron (MLP) artificial neural network (ANN). Our
prototype is implemented using the Python scikit-learn machine learning
package [18]. Note that our prior work [16] uses a Support Vector
Machine (SVM) based classifier. However, such an approach can only make
predictions on a limited set of configurations seen at the training time.
Unlike a classification-based approach, the new approach presented in this
article is a _regression-based_ model which can make predictions on any stream
configuration. This new approach thus has a better generalization ability for
various heterogeneous architectures. We have also evaluated a number of
alternative modeling techniques, including MLP, SVM, and decision trees. We
chose MLP because it gives the best performance and has modest training
overhead (see Section 6.6.1).
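As a concrete illustration, the short sketch below shows how such a regressor can be built with scikit-learn. The hidden-layer sizes, iteration budget and file names are illustrative assumptions rather than the exact settings used in our experiments (hyper-parameters are tuned per platform, as described in Section 3.1.3):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per (program, dataset, configuration) training sample; each row
# is a program feature vector with the candidate (#partitions, #tasks)
# appended. y: the measured speedup over the single-stream version.
X = np.load("train_inputs.npy")    # shape: (n_samples, n_features + 2)
y = np.load("train_speedups.npy")  # shape: (n_samples,)

model = make_pipeline(
    StandardScaler(),  # normalize feature scales before the MLP
    MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                 max_iter=2000, random_state=0))
model.fit(X, y)

# At deployment time, score every candidate configuration for an unseen
# program and keep the one with the highest predicted speedup.
def best_config(features, candidates):
    rows = np.array([np.concatenate([features, c]) for c in candidates])
    return candidates[int(np.argmax(model.predict(rows)))]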
Our performance model takes as input the feature values and a given
configuration (e.g., #partitions and #tasks for XeonPhi and #tasks for GPUs).
It predicts the speedup for the given configuration. Building and using such a
model follows a 3-step process for supervised learning: (i) generate training
data (ii) train a performance model (iii) use the performance model, described
as follows.
Figure 5: The training process of our performance model.
### 3.1 Training the Performance Model
Our method for model training is shown in Figure 5. To learn a regression
model, we first need to profile the execution time (in order to calculate the
speedup over the single-stream version) of all candidate configurations for
each training program, and extract the feature values from the program. We
then use the feature values, configuration settings and speedups to train a
model.
#### 3.1.1 Generating Training Data
To generate training data, we apply _cross-validation_ to 39 benchmarks, i.e.,
by excluding the testing benchmarks from the training dataset (see also
Section 5.3.1). We execute each training program and benchmark a number of
times until the gap of the upper and lower confidence bounds is smaller than
5% under a 95% confidence interval setting. We then calculate the average
speedup for a given stream configuration over the single-stream version. We
exhaustively execute each training program across a wide range of stream
configurations, and record the performance of each. Next, we calculate the
speedup for each configuration, program and dataset. Finally, we extract the
values of our selected set of features from each program and dataset. We
stress that the trained model can be applied to stream configurations that are
not seen in the training phase.
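As an illustration of the stopping rule above, the sketch below (where run_once is an assumed callback standing in for one timed benchmark execution) re-runs a configuration until the 95% confidence interval of the mean runtime is narrower than 5% of the mean:

import numpy as np
from scipy import stats

def measure(run_once, min_runs=3, max_runs=50):
    times = []
    for _ in range(max_runs):
        times.append(run_once())  # one timed execution of the benchmark
        if len(times) >= min_runs:
            mean = np.mean(times)
            # 95% confidence interval for the mean via Student's t
            lo, hi = stats.t.interval(0.95, len(times) - 1,
                                      loc=mean, scale=stats.sem(times))
            if hi - lo < 0.05 * mean:  # CI gap below 5% of the mean
                break
    return np.mean(times)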
#### 3.1.2 Profiling Configurations
During the training phase, we exhaustively execute each training program
across a set of streamed configurations. On XeonPhi, we profile each training
program using the _#partitions_ ranging from 1 to 224 (the maximum number of
physical threads on XeonPhi) and the _#tasks_ ranging from 1 to 256 (we
chose these values because configuration settings beyond these values give
poor performance during our initial evaluation). On GPUs, we cannot configure
the number of partitions; currently, we set the _#partitions_ to be the same as the
_#tasks_ to be consistent with XeonPhi. On this platform, we also set the
_#tasks_ to range between $2^{0}$ and $2^{10}$, which is big enough to
include the optimal values according to our experiments. Note that these
parameter ranges can be configured by the user.
#### 3.1.3 Building The Model
Each evaluated configuration is appended to the feature value vector of a
training program to form a model input. The model inputs and the corresponding
speedups (i.e., ground truths) for all training programs are passed to a
learning algorithm. The algorithm finds a correlation between the input vector
and the desired prediction. The output of our learning algorithm is an MLP
model where the weights of the model are determined from the training data.
Model parameter tuning is performed on the training dataset for each target hardware architecture, using cross-validation (see also Section 6.6.3). In our
case, the overall training process for all the 39 training programs (which is
dominated by training data generation) takes less than a week on a single
machine. Since training is performed only once “at the factory”, this is a
_one-off_ cost.
### 3.2 Features
Our performance models are based exclusively on code and dynamic features of
the target programs. Code features are extracted from the program source code,
and dynamic features are collected using hardware performance counters during
the initial profiling run of the target application. We restrict ourselves to hardware performance counters that are commonly available on modern processors, such as data cache misses, to ensure that our approach can be applied to a wide range of many-core architectures.
We considered 38 candidate raw features in this work. Some features were chosen based on our intuition about factors that affect performance, such as dts (host-device data transfer size) and #xfer_mem, while other features were chosen based on previous work [19, 20].
#### 3.2.1 Feature Selection
To build an accurate model through supervised learning, the training sample
size typically needs to be at least one order of magnitude greater than the
number of features. In this work, we start from 311 training samples and 38
raw features, so we would like to reduce the number of features in use. Our
process for feature selection is fully automatic, described as follows.
We first combine several raw features to form a set of combined, normalized features, which carry more information than their individual parts. For example, instead of reporting raw branch hit and miss counts, we use the branch miss rate. Next, we remove raw features whose information is already captured by the chosen features. To find which features are closely correlated, we construct a correlation coefficient matrix using the Pearson correlation coefficient [21]. The closer the coefficient between two features is to +/-1, the stronger the correlation between them. We remove any feature whose correlation coefficient with an already-chosen feature is greater than 0.7 in absolute value. Examples of such closely correlated features are the number of executed instructions and the number of E-stage cycles that were successfully completed.
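A minimal sketch of this pruning step, assuming the candidate raw features are held in a pandas DataFrame with one column per feature:

```python
# Drop every feature whose absolute Pearson correlation with an
# already-kept feature exceeds the 0.7 threshold.
import pandas as pd

def prune_correlated(df: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    corr = df.corr(method='pearson').abs()
    kept = []
    for col in corr.columns:
        if all(corr.loc[col, k] <= threshold for k in kept):
            kept.append(col)
    return df[kept]
```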
Our feature selection process reduces the number of features to 10 for XeonPhi (see Table I) and 10 for the NVIDIA GTX 1080 Ti GPU (see Table II), where some features are shared. Since our feature selection approach is automatic, it can be applied to other sets of candidate features. Note that feature selection is also performed using cross-validation (see also Section 5.2).
Table I: Chosen features for the XeonPhi performance model.

| Feature | Description |
| --- | --- |
| loop nest | level at which the outermost parallelizable loop lies |
| loop count | # of parallel loop iterations |
| #xfer_mem | # of host-device transfer API calls |
| dts | total host-device transfer size |
| redundant transfer size | host-device transfer size among overlapping tasks |
| max blocks | maximum number of tasks of the application |
| min task unit | minimum task granularity for a partition |
| # instructions | total number of instructions of the kernel |
| branch miss | branch miss rate |
| L1 DCR | L1 data cache miss rate |
Table II: Chosen features for the GPU performance model.

| Feature | Description |
| --- | --- |
| Access type 1 | # of array accesses whose fastest-varying index is an affine function of the block id |
| Access type 2 | # of array accesses whose second or higher dimensional index is an affine function of the block id |
| #xfer_mem | # of host-device transfer API calls |
| host to device transfer size | total host-to-device transfer size |
| device to host transfer size | total device-to-host transfer size |
| redundant transfer size | host-device transfer size among overlapping tasks |
| max blocks | maximum number of tasks |
| # instructions | total number of instructions of the kernel |
| divergent branches | # of divergent branches |
| L2 read miss rate | L2 cache read miss rate |
#### 3.2.2 Feature Standardization
Supervised learning typically requires the feature values to lie in a certain range. Therefore, we scale the value of each of our features to the range of 0 to 1. We record the maximum and minimum value of each feature found during the training phase, and use these values to scale features extracted from a new application after deployment. If a value seen during deployment falls outside the minimum/maximum range seen during training, we truncate it. Note that we use the same approach to normalize the model predictions (speedups) to the range of 0 to 1. In this work, we also apply Z-score standardization to the training data; Section 6.6.2 quantifies the impact of these feature engineering methods.
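The scaling applied at deployment can be sketched as follows (the recorded min/max vectors are illustrative):

```python
# Scale each feature to [0, 1] with the min/max recorded during training,
# truncating values that fall outside the training range.
import numpy as np

def scale_features(x, train_min, train_max):
    x = np.clip(x, train_min, train_max)  # truncate unseen extremes
    return (x - train_min) / (train_max - train_min)

train_min = np.array([0.0, 10.0, 1.0])    # recorded at training time
train_max = np.array([1.0, 500.0, 224.0])
print(scale_features(np.array([0.5, 800.0, 64.0]), train_min, train_max))
```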
Figure 6: Feature importance on (a) XeonPhi and (b) NVIDIA GPU.
#### 3.2.3 Feature Importance
To understand the usefulness of each feature (Section 6.6.4 gives a further breakdown of the impact of individual features on the model performance on a per-benchmark basis), we apply a factor analysis technique called Varimax rotation [22] to the feature space transformed by principal component analysis (PCA). This technique quantifies the contribution of each feature to the overall variance in each of the PCA dimensions. Intuitively, the more variance a feature brings to the space, the more useful information the feature carries.
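A minimal sketch of this analysis, combining scikit-learn's PCA with a standard iterative Varimax rotation (the feature matrix and the treatment of the loadings are simplified for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, max_iter=20, tol=1e-6):
    # Standard iterative SVD-based Varimax rotation of a loading matrix.
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (
            L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
        R = u @ vt
        d, d_old = np.sum(s), d
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

X = np.random.rand(311, 10)                # standardized feature matrix (illustrative)
pca = PCA(n_components=9).fit(X)
rotated = varimax(pca.components_.T)       # one row of loadings per feature
importance = np.sum(rotated ** 2, axis=1)  # variance each feature contributes
```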
As an example, Figure 6 shows the top features chosen for XeonPhi and NVIDIA
GPU architectures. For the XeonPhi platform, features that capture the
parallelism degree (e.g. max blocks), host-device communication (e.g.
redundant transfer size), and computation (e.g. #instructions) are found to be
important. Other features such as L1 DCR and loop nest are useful, but are
less important compared to others. On the NVIDIA GPU platform, we note that
the parallelism degree is important, and the other features are equally useful
(Figure 6b). This figure shows that accurate prediction can be achieved by drawing upon a small set of aggregated feature values.
### 3.3 Runtime Deployment
Once we have built and trained our performance model as described above, we can use it as a cost function to search for the best stream configuration for any _new_, _unseen_ program. Feature values are extracted from the single-
stream version of the code. Static code features (such as loop count) are
extracted from the program source at compile time. Dynamic features (such as
branch miss) are extracted by profiling the program without partitioning for a
few loop iterations (which typically translate to several microseconds). After
feature collection, we feed the feature values to the search engine to rank
all candidate configurations using the performance model. The top-ranked
stream configuration is then used for the target program. In Section 4.4, we
provide further details on how the performance model can be integrated with
the host code generation process.
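A minimal sketch of this search step; `model` and `features` stand for the trained regressor and the scaled feature vector of the target program, and the candidate ranges shown are the XeonPhi ones used during training:

```python
# Rank every candidate stream configuration with the performance model
# and return the top-ranked one.
import itertools
import numpy as np

def best_configuration(model, features):
    partitions = range(1, 225)           # 1..224 partitions on XeonPhi
    tasks = [2 ** i for i in range(9)]   # candidate task counts up to 256
    candidates = list(itertools.product(partitions, tasks))
    X = np.array([list(features) + [p, t] for p, t in candidates])
    predicted = model.predict(X)         # predicted speedup per candidate
    return candidates[int(np.argmax(predicted))]
```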
#### 3.3.1 Adapt to Changing Program Phases
Our current implementation chooses a configuration for each kernel and does
not change the configuration throughout the kernel execution. Therefore, it
can adapt to different behaviors across kernels because predictions are
performed on a per-kernel basis. We found that this strategy is sufficient for
many data-parallel kernels targeted in this work.
Our approach can be extended to adapt to phase or program behavior changes within a kernel. One way of doing this is to first partition the input data into
groups and then perform configuration selection before launching the kernel
that performs on an input data group. To reduce the prediction and
configuration overhead, we can sample periodically to see if the performance
counter readings are significantly different from the ones used for the
current prediction to trigger re-configuration. Dynamic re-configuration of a
running kernel will require extending the underlying runtime (e.g., hStreams
or CUDA) to adjust thread mapping and having hardware support to stop and
resume the execution contexts. We leave this as future work.
## 4 OpenMP to Streamed Code Generator
Figure 7: Work flow for translating OpenMP programs to streamed programs using
our automatic code generator.
Currently, there are very few publicly available benchmarks for utilizing the
streaming capability of heterogeneous many-core architectures, in particular,
XeonPhi. To evaluate our approach on a diverse set of benchmarks, we have
developed a compiler-based code generator, autostreamer, to automatically translate OpenMP programs into streamed code for the target architecture. Our code generator is open source (available at https://github.com/wisdom-moon/autostreamer). Our implementation currently supports converting OpenMP code to hStreams, CUDA and OpenCL programs. While we do not claim novelty here, as several works on source-to-source translation from OpenMP to CUDA [23, 24, 25, 26] or OpenCL [20, 27] exist, we
believe the tool could serve as a useful utility for translating OpenMP
programs to exploit multi-stream performance on heterogeneous many-core
architectures.
### 4.1 Code Generator Overview
Figure 7 depicts our source to source code generator for translating OpenMP
code to streamed programs. We use LLVM’s Clang front-end to convert OpenMP
code into the abstract syntax tree (AST). We then traverse the AST to obtain
the information to generate candidate streamed kernels and host-device
management code. The generated kernel and host code make use of existing programming models for kernel launching and communication management. We use
hStreams for XeonPhi and CUDA or OpenCL for GPUs.
Our current implementation supports the translation of OpenMP parallel loops,
i.e., loops annotated with omp for or omp for reduction constructs. For each
parallel loop, we outline the loop body and translate it into an individual
kernel function. We then replace the original loop body with a function call
(running on the host CPU) to launch the generated kernel. We also generate
management code for streaming context initialization, data partitioning, data
movements between the host and the accelerator, etc.
Our code generator relies on the native host/device compiler to optimize the
generated code. We have also compared our automatically generated code against
the manually translated code used in our prior work [16] and found that there
is little difference in performance for the set of OpenMP benchmarks used in
this work.
### 4.2 Preprocessing
As an example, Figure 8 illustrates how an OpenMP parallel loop can be
translated into hStreams code for XeonPhi. Note that a similar code generation
process is implemented for GPUs, using CUDA for NVIDIA GPU architectures and
OpenCL for other GPU platforms.
For each OpenMP parallel loop, we extract information of loop iterations from
the loop head. In this work, partitioning is achieved by splitting the loop
iteration space. Furthermore, we collect all the variables needed by the
hStreams kernel. Because hStreams requires kernel parameters to be passed as uint64_t values (lines 1-2 of Figure 8b), the kernel parameters are cast into this type. They also need to be packed into an array (line 21 in Figure 8c); the hStreams library then unpacks the parameters from the array and passes them to the kernel function.
During the preprocessing stage, we also extract the static code feature values
of each target parallel loop. The code feature values will be encoded into the
source code during host code generation. Note that our approach can easily be applied to existing hStreams programs: we first gather feature values from an hStreams kernel, and then store the extracted information in an auxiliary file or in the source code through a compiler front-end pass.
```c
1  // An OpenMP C code for vector addition
2  float * hostOutput = (float *) malloc(inputLength*sizeof(float));
3  ...
4  #pragma omp parallel for
5  for(int i=0; i<inputLength; i++)
6  {
7      hostOutput[i] = hostInput1[i] + hostInput2[i];
8  }
9  ...
```
(a) OpenMP code.
```c
1   COINATIVELIBEXPORT
2   void kernel (uint64_t arg0, uint64_t arg1, ... uint64_t arg5)
3   {
4       int _start = (int) arg0;
5       ...
6       float *hostInput2 = (float *) arg5;
7
8       #pragma omp parallel for
9       for(int i = _start; i < _end; i++)
10          hostOutput[i] = hostInput1[i] + hostInput2[i];
11  }
```
(b) hStreams kernel code.
```c
1   // output buffer
2   float * hostOutput = (float *) malloc(inputLength*sizeof(float));
3
4   // Feature update and prediction
5   Stream config;
6
7   conf_search(&config, &kernel_1_features, kernel_1_profile_runs);
8   int partitions = config.partitions;
9   int tasks = config.tasks;
10
11  // hStreams initialization
12  hStreams_app_init(partitions, 1);
13  ...
14  hStreams_app_create_buf((float *)hostInput1, ...);
15  ...
16
17  // Work partition
18  int sub_blocks = inputLength / tasks;
19  int remain_index = inputLength % tasks;
20
21  // Initialize kernel arguments
22  uint64_t args[6]; args[2] = (uint64_t) inputLength;
23  ...
24  for (int idx = 0; idx < tasks; idx++) {
25      args[0] = (uint64_t) _start;
26      _end = _start + sub_blocks;
27      if (idx < remain_index)
28      {
29          _end ++;
30      }
31      args[1] = (uint64_t) _end;
32      hStreams_app_xfer_memory(&hostInput1[_start], &hostInput1[_start], (_end-_start)*sizeof(float), idx % partitions, HSTR_SRC_TO_SINK, NULL);
33      hStreams_app_xfer_memory(&hostInput2[_start], ...);
34
35      // Kernel launch
36      hStreams_EnqueueCompute(idx % partitions, "kernel_1", 3, 3, args, ...);
37
38      // Read back results
39      hStreams_app_xfer_memory(&hostOutput[_start], ...);
40      _start = _end;
41  }
42  ...
43  // hStreams cleanup code
44  hStreams_app_fini();
```
(c) hStreams host code.
Figure 8: A running example of translating (a) an OpenMP parallel loop to (b)
hStreams kernel and (c) host management code.
### 4.3 Kernel Code Generation
Generating a streamed kernel function is straightforward as much of the OpenMP
code can be re-used. Figure 8b gives an example of the automatically generated
kernel for the OpenMP loop given in Figure 8a for hStreams kernels.
For the example given in Figure 8, an hStreams kernel starts with a pre-
processor macro COINATIVELIBEXPORT (lines 1-2 in Figure 8b). The number and
the type of the kernel parameters are loop-specific and are automatically
determined by our code generator. Within the generated kernel, all the
function parameters are cast from uint64_t into an appropriate type before
they are used. Note that the OpenMP parallel for pragmas are kept in the
generated kernel code per hStreams requirement (line 8 in Figure 8b).
With our code generator, the original outer-most loop iteration space will be
partitioned among parallel streams. The amount of work given to a specific
stream is determined by the _start and _end variables, which define which part
of the loop iteration space a stream instance will work on. A similar kernel
code generation approach is implemented for GPUs using CUDA or OpenCL.
### 4.4 Host Code Generation
To generate host code, we replace the original OpenMP parallel loop with a function call to invoke the generated kernel (e.g., hStreams_EnqueueCompute in Figure 8c), together with additional code to initialize the host context and to manage data transfers.
#### 4.4.1 Feature Value Collection
Static code features, extracted by our code generator, will be encoded as a
feature vector of real values. The feature vector will be passed to our
configuration search engine to find the optimal stream configuration at
runtime. Dynamic feature values are automatically collected by running the
generated streamed kernel for 5 iterations under the single-stream
configuration. As some loop bounds depend on the input, we might be unable to determine certain feature values at compile time. Such features are represented as symbolic pre-computations of loop bound variables, which are resolved using concrete values at runtime.
#### 4.4.2 Setting Stream Configurations
To partition tasks among streams, we break the loop iterations into a number of equally sized chunks (subtasks). We then group the hardware processor cores into partitions, where each partition contains a fixed set of streams.
Processor partitioning and streams creation are achieved by calling the
hStreams_app_init (line 12 in Figure 8c) function for XeonPhi (and
cudaStreamCreate and clCreateCommandQueue for CUDA and OpenCL programs
respectively) by passing the stream configuration given by our search engine.
To overlap host-device communications, we further split the input/output data
arrays to multiple data blocks (lines 32-39 in Figure 8c) where each task
operates on one block at a time while another data block is transferring
between the host and the accelerator. The number of data blocks is determined
by the stream configuration chosen at program runtime. The amount of work per
task and the size of transferred data can be determined with kernel
parameters. For example, in _for-loop_ at line 24 of Figure 8c, we calculate
them with the starting position (_start) and the block size (sub_block).
Thereafter, we schedule tasks and transfer the corresponding data blocks onto
streams in a round-robin fashion.
#### 4.4.3 Runtime Prediction
When a streamed (e.g., hStreams or CUDA) kernel is invoked, the configuration
selection engine library will choose a stream configuration (line 7 in Figure
8c) for the kernel. It uses the performance model to rank the candidate stream
configurations and returns the optimal configuration (_#partitions_ and
_#tasks_ for the example shown in Figure 8). The returned values are then used
to initialize the streamed context (lines 8-9 of Figure 8c). The overhead of
prediction is negligible (a few milliseconds) and is included in the results.
#### 4.4.4 Supporting OpenMP Constructs
OpenMP variables may have additional type information specified by directives,
including default, share, private, firstprivate, lastprivate, copyin and
threadprivate. Our generator uses these directives to map data onto the
accelerator memory space. Each variable with the share or default directive
will be translated into a global variable shared by all parallel threads.
Variables declared as private and threadprivate are translated such that there
is a private copy for each streamed kernel; no memory transfer between the
host and the accelerator is needed. For each variable specified as copyin or firstprivate, we create a private copy for each streamed kernel but initialize each copy using explicit memory transfers before its first use. Similarly, we create a private copy of a lastprivate variable, and the original variable is updated by the stream that executes the last iteration.
Our implementation also supports a number of synchronization and thread
constructs. Structured blocks identified with master, and single directives
are executed by one thread on the host multi-core. barrier is implemented by
splitting up the parallel loop into smaller tasks to create synchronization
points among multiple streams. critical is implemented by using a mutex lock
to restrict the execution of the associated structured blocks to a single
thread at a time. The atomic and flush directives are already supported by
hStreams, CUDA or OpenCL.
#### 4.4.5 Host-Accelerator Communication Optimization
For each buffer that is used by both the host and the accelerator, we manage
two copies: one on the host memory and the other on the accelerator memory.
Our runtime records the status of each variable and checks whether the copy on
a device memory space is valid or not. No memory transfer is needed as long as
the copy in the target memory space is valid. We currently use a conservative
approach: if an element of a buffer has been updated, the entire buffer needs to be synchronized before it can be used by threads running on a different device. We also avoid unnecessary device-to-host data transfers by tracking the
data dependence between the kernel and the host program. For example, when
there are data-dependencies between two kernels but the host does not access
this data in between the two kernels, we directly pass the memory address of
the buffer to the later kernel (without moving the data back to the host).
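This bookkeeping can be sketched as follows (written in Python for brevity; `copy_to_device` and `copy_to_host` stand in for the underlying hStreams/CUDA transfer calls):

```python
# Track which copy of a host/device buffer pair is valid, transferring
# only when the copy in the target memory space is stale.
class ManagedBuffer:
    def __init__(self):
        self.host_valid = True     # the host copy starts as the valid one
        self.device_valid = False

    def for_device(self, copy_to_device):
        if not self.device_valid:  # transfer only when stale
            copy_to_device()
            self.device_valid = True

    def for_host(self, copy_to_host):
        if not self.host_valid:
            copy_to_host()
            self.host_valid = True

    def written_on_device(self):
        # Conservative policy: any device-side write invalidates the
        # entire host copy of the buffer.
        self.host_valid = False

    def written_on_host(self):
        self.device_valid = False
```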
## 5 Experimental Setup
### 5.1 Hardware, Systems Software and Benchmarks
Table III: Our evaluation platforms.

| | CPU-XeonPhi | CPU-GPU |
| --- | --- | --- |
| CPU | 8-core Xeon CPU @ 2.6 GHz | Core i7-8700K CPU @ 3.7 GHz |
| Accelerator | Intel Xeon 31SP Phi | NVIDIA GeForce GTX 1080 Ti GPU |
Platforms. We evaluate our approach on two heterogeneous many-core platforms:
one is a CPU-XeonPhi platform and the other is a CPU-GPU platform. Table III
gives details of our hardware platforms.
Systems software. On the CPU-XeonPhi platform, the host CPU and the
accelerator are connected through PCIe. The host runs Redhat Linux v7.0 (with
kernel v3.10). The coprocessor runs a customized uOS (v2.6.38.8). We use
Intel’s MPSS (v3.6) to communicate between the host and the coprocessor. We
use the Intel hStreams library (v3.6) and Intel ICC (v16.0.3) for compilation
(with -O3 as the compiler option). The CPU-GPU platform runs Ubuntu 16.04
(with kernel v4.15). We use CUDA v10.0 and gcc v5.3 as the host compiler with
option “-O3”.
Benchmarks. We use our code generator to translate 37 OpenMP applications from
commonly used benchmark suites into hStreams and CUDA programs. We have
excluded benchmarks where the data transfer cannot be overlapped with the
kernel execution, which do not benefit from streamed parallelization. Table IV
gives the full list of these benchmarks. Among them, convolutionFFT2d and
convolutionSeparable have algorithm-dependent parameters, which are regarded
as different benchmarks in the experiments. This setting gives us a total of
39 programs. We run the majority of the programs using over 25 different
datasets, except for some applications where we used around 10 datasets
because the algorithmic constraints prevent us from using a larger number of
inputs.
Table IV: Streamed benchmarks used in our experiments.

| Suite | Name | Acronym | Name | Acronym |
| --- | --- | --- | --- | --- |
| NVIDIA SDK | convol.Separable | convsepr1(8) | dotProduct | dotprod |
| | convolutionFFT2d | fftx1y1(4y3) | fwt | fwt |
| | MonteCarlo | montecarlo | matVecMul | mvmult |
| | scalarProd | scalarprod | transpose | transpose |
| | vectorAdd | vecadd | | |
| AMD SDK | binomial | binomial | BlackScholes | blackscholes |
| | dct | dct | prefixSum | prefix |
| Parboil | bfs | bfs | histo | histo |
| | lbm | lbm | mri-q | mri-q |
| | mri-gridding | mri-gridding | sad | sad |
| | sgemm | sgemm | spmv | spmv |
| POLYBENCH | 2mm | 2mm | 3mm | 3mm |
| | adi | adi | correlation | correlation |
| | covariance | covariance | deriche | deriche |
| | gemm | gemm | gemver | gemver |
| | gesummv | gesummv | heat-3d | heat-3d |
| | jacobi-1d | jacobi-1d | jacobi-2d | jacobi-2d |
| | mvt | mvt | syr2k | syr2k |
| | syrk | syrk | | |
### 5.2 Competitive Approaches
We compare our regression-based approach against our preliminary work that
employs an SVM-based classifier to predict the optimal stream configuration
[16]. We denote our prior approach as SVM-classifier. We also compare our
approach against two recent models for predicting the optimal stream
configuration on GPUs. As it is currently not possible to configure the number
of processor partitions on GPUs, the relevant GPU models can only predict the
number of tasks.
_Liu et al._ In [12], Liu _et al._ use linear regression models to search for the optimal number of tasks for GPU programs. The approach employs several analytical models, described as follows.
For a task with an input data size of $m$, the transferring time between the
CPU and the accelerator, $T_{t}$, is determined as $T_{t}=\alpha\cdot
m+\beta$, and the computation time, $T_{c}$, is calculated as:
$T_{c}=\eta\cdot m+\gamma$ where the model coefficients, $\alpha$, $\beta$,
$\eta$ and $\gamma$, are determined through empirical experiments. For a given kernel with $N$ input data elements running using $n$ streams, this approach partitions the computation into $n$ tasks, where the data size for each task, $m$, is equal to $N/n$. For programs where the kernel time dominates, the total execution time, $T_{total}$, can be determined by:
$T_{total}=T_{t}+nT_{c}=\alpha\cdot m+\frac{N\gamma}{m}+N\eta+\beta$
For programs where data transfer dominates:
$T_{total}=\alpha\cdot N+2\frac{N}{m}\beta$
By calculating the first and second-order partial derivatives of $T_{total}$ with respect to $m$, we can obtain the optimal task granularity as $m=\sqrt{\frac{N\gamma}{\alpha}}$. From this we can calculate the number of tasks ($n$).
Note that $m=N/2$ is the optimal parameter for programs dominated by data transfer, i.e., the optimal number of tasks is 2. Another limitation of this model is that it does not consider scenarios where communications in different directions (i.e., host to device and device to host) can overlap with each other. Note that we set the _#partitions_ to be the same as $n$ for XeonPhi.
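For the kernel-dominated case, the model's choice follows directly from the fitted coefficients, as in this sketch (coefficient values are illustrative):

```python
# Optimal task granularity from d(T_total)/dm = alpha - N*gamma/m^2 = 0.
import math

def optimal_tasks(N, alpha, gamma):
    m = math.sqrt(N * gamma / alpha)  # optimal per-task data size
    return max(1, round(N / m))       # number of tasks n = N / m

print(optimal_tasks(N=1 << 20, alpha=2e-6, gamma=5e-4))
```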
_Werkhoven et al._ The work presented by Werkhoven _et al._ models the
performance of data transfers between the CPU and the GPU [10]. They use the
LogGP model to estimate the host-device data transfer time. Specifically, the
model estimates the data transfer time using five parameters: the
communication latency ($L$), overhead ($o$), the gap ($g$), the number of
processors ($P$), and the PCIe bandwidth ($G$).
Let $B_{hd}$ denote the amount of data transferred from the host to the device, $B_{dh}$ the amount transferred in the opposite direction, and $T_{kernel}$ the kernel execution time. For the transfer-dominated scenario, the optimal number of tasks (i.e., _#tasks_), $N_{s}$, can be estimated by solving the following equation:
$B_{dh}\cdot G_{dh}+g\cdot(N_{s}-1)=\begin{cases}\frac{T_{kernel}}{N_{s}}+\frac{B_{dh}}{N_{s}}\cdot G_{dh},&\text{if } B_{dh}>B_{hd}\\ \frac{B_{hd}}{N_{s}}\cdot G_{hd}+\frac{T_{kernel}}{N_{s}},&\text{otherwise}\end{cases}$
This model does not consider the kernel-dominated scenario, as it assumes that the kernel execution time increases with the number of streams and therefore cannot model the kernel execution time. Here, we use the same equation to calculate the optimal number of tasks. For this model, we also set the _#partitions_ to be equal to the optimal $N_{s}$ value on XeonPhi.
### 5.3 Evaluation Methodology
#### 5.3.1 Model Evaluation
We use cross-validation to evaluate our machine learning models. To test the
portability of our approach, we apply _leave-one-out_ cross-validation,
described as follows. We exclude the target program for predictions from the
training program set, and learn a model using the _remaining_ programs. We
then apply the learned model to the testing program. We repeat this process
until each benchmark is tested once. This is a standard evaluation
methodology, providing an estimate of the generalization ability of a machine
learned model in predicting _unseen_ data. Note that we exclude both
convolutionFFT2d and convolutionSeparable from the training set when one of
the two is evaluated, and we make sure all approaches are trained on the same
benchmarks for fair comparisons.
#### 5.3.2 Performance Report
We run each program under a stream configuration multiple times and report the
_geometric mean_ of the runtime. Compared to the arithmetic mean, the
geometric mean is often considered as a more suitable metric for reporting
program performance, as it can better minimize the impact of outliers [28]. To
determine how many runs are needed, we calculated the confidence range using a
95% confidence interval and make sure that the difference between the upper
and lower confidence bounds is smaller than 5%.
## 6 Experimental Results
In this section, we first present the overall performance of our approach on both platforms. We then compare our approach to schemes that use fixed stream configurations, to two prior analytical models, and to our previous work. We further discuss where the benefits of streaming parallelism come from and how our approach works. Finally, we describe the tuning process of our model.
### 6.1 Overall Performance
Figure 9: Overall performance of our approach over a single-stream version on
XeonPhi (a) and NVIDIA GPU (b). Our approach achieves, on average, 93.7% and
97.9% of the oracle performance on XeonPhi and NVIDIA GPU, respectively. The
min-max bars show the range of performance achieved across different inputs.
In this experiment, we exhaustively profiled each application with all
possible stream configurations and report the best-found performance as the
_Oracle_ performance. The Oracle gives an indication of how close our approach
is to a _theoretically perfect_ solution. The baseline used to calculate the
speedup is running the application using a single-stream without processor
core or task partitioning.
The overall result is shown in Figure 9. The min-max bar on the diagram shows
the range of speedups per application across all evaluated inputs. Overall,
our approach achieves an average speedup of 1.57$\times$ and 1.1$\times$ over
the single-stream configuration on XeonPhi and the GPU respectively. This
translates to 93.7% and 97.9% of the Oracle performance on XeonPhi and the GPU
respectively.
On XeonPhi, the performance improvement of our approach comes from two
factors. First, by predicting the right processor partition size, our approach
allows effective overlapping of the host-device communication and computation.
Second, by matching task parallelism to the number of available processor
cores, our approach can reduce the overhead of thread management, compared to
the single-stream execution. When the host-device communication time dominates
the streaming process, the performance improvement mainly comes from
computation-communication overlapping and the speedup from streaming is
consistently less than 2$\times$. When the kernel execution time dominates the streaming process, the application can benefit from the reduced thread management overhead. In this case, the speedup can be as large as 5$\times$. We
provide a further discussion on this later in Section 6.5.1.
On the GPU, we can exploit bidirectional data transfer between the host and the device by using pinned memory, which is not supported by hStreams. The support for bidirectional data transfer allows us to obtain further performance gains by overlapping host-device data transfer and computation. The theoretical upper-bound speedup on the GPU platform is 3$\times$, reached when data transfer is perfectly overlapped with computation. A representative sample is fftx4y3 with the largest dataset: the data transfer times in the two directions are the same, and the kernel execution time is 1.5 times the data transfer time. The oracle speedup is 2.3$\times$, and our approach achieves a speedup of 2.2$\times$. On the other hand, because the current GPU implementation does not support processor core partitioning, the kernel execution time benefits less from using multiple streams. Programs dominated by kernel execution time, such as bfs and montecarlo, see no speedup from using multiple streams.
### 6.2 Comparison to Fixed Stream Configurations
Our approach predicts from a wide range of stream configurations, which
configuration is likely to give the best performance for a given program and
dataset. A natural question to ask is: is there a fixed stream configuration that gives reasonably good performance across benchmarks and datasets? To answer this question, we compare our predictive modeling based approach to two specific configurations on each of our evaluation platforms. Our justification for selecting the fixed configurations is as follows. On XeonPhi, our initial results in Section 2 indicate that the stream configuration of $(4,16)$, i.e., partitioning the cores into 4 groups and running 4 tasks on each partition (16 tasks in total), gives good performance. The statistics obtained from the training data suggest that the configuration of $(17,85)$ gives the best average performance across training
samples. On the GPU, several programs support a maximum of 4 tasks. Thus we
select the two configurations $(2,2)$ and $(4,4)$. The results are shown in
Figure 10.
Figure 10: Comparing the performance with two fixed configurations on XeonPhi (a) and NVIDIA GPU (b): config. $(4,16)$ of 4 partitions and 4 tasks per partition, config. $(17,85)$ of 17 partitions and 5 tasks per partition, config. $(2,2)$ of 2 partitions and 1 task per partition, and config. $(4,4)$ of 4 partitions and 1 task per partition.
Figure 11: Violin plot showing the distribution of speedups per scheme across
benchmarks and datasets on XeonPhi (a) and GPU (b). The shape of the violin
corresponds to the speedup distribution to the oracle performance. The thick
black line shows where 50% of the data lies.
#### 6.2.1 XeonPhi
On XeonPhi, we observe improved performance under both configurations for several benchmarks such as mri-gridding, transpose, and sad, but slower performance for dotprod, vecadd, blackscholes, lbm, and mri-q (Figure 10a). For prefix, configuration $(17,85)$ delivers improved performance while configuration $(4,16)$ leads to a slowdown. Overall, neither of the two fixed configurations gives improved performance on average. Our approach, by contrast, outperforms the two fixed configurations by an average factor of 1.4, and delivers consistently improved performance across benchmarks and datasets.
The violin plot in Figure 11a shows how far each of the three schemes is from the Oracle performance across benchmarks and datasets. Our approach not only delivers the performance closest to the Oracle, but also has the largest number of samples whose performance is close to the Oracle. By contrast, the performance given by the fixed configurations is, for many samples, farther from the Oracle performance.
#### 6.2.2 GPU
On the GPU, the performance of configuration $(2,2)$ is in most cases moderate: not great, but not much worse than the single-stream version, leading to an average speedup of 1.03$\times$ (Figure 10b). By contrast, although configuration $(4,4)$ performs poorly on two programs, it delivers a slightly larger average speedup of 1.04$\times$. By choosing the stream configuration on a per-program basis, our approach outperforms the two fixed configurations, achieving an average speedup of 1.10$\times$. Only on four programs does our approach deliver slightly worse performance, and by a small margin.
The violin plot in Figure 11b also confirms the strengths of our approach by presenting the distribution of performance improvement. The results in the diagram are normalized to the Oracle (best-available) performance. For most of the programs, the two fixed configurations deliver 80% to 100% of the Oracle performance. However, configuration $(4,4)$ can lead to rather poor performance (less than 40% of the best available performance) on some programs. Compared to the fixed configurations, the performance distribution of our approach is concentrated in the range between 90% and 100%, where most programs fall. Furthermore, our approach has fewer performance outliers, and those outliers suffer less serious slowdowns. Therefore, our approach delivers consistently better performance than the fixed configurations.
#### 6.2.3 Summary
This experiment confirms that a fixed configuration fails to deliver improved performance across applications and datasets; selecting the right stream configuration on a per-program, per-dataset basis is thus required.
### 6.3 Comparison to Analytical Models
Figure 12: Comparing against _Liu et al._ and _Werkhoven et al._ on XeonPhi
(a) and NVIDIA GPU (b).
In this experiment, we compare our approach to the two recent analytical
models described in Section 5.2. The results are shown in Figures 12 and 13.
On XeonPhi, both competitive models prefer using $2$ tasks across benchmarks and datasets. This is because many programs are kernel dominated, while the analytical models simply assume that task partitioning has no effect on kernel performance and do not consider the thread management overhead. On the GPU, the model proposed by _Liu et al._ tends to use $2$ tasks across benchmarks and datasets. This is because most programs are data transfer dominated and this model ignores the overlap of the bidirectional data transfers between the host and the device.
XeonPhi. Figure 12a demonstrates that our approach gives better performance for nearly all programs on XeonPhi. For the remaining handful of programs, all three approaches deliver comparable performance. Comparing with the results in Figure 10, we find that the performance of the analytical models is similar to that of the fixed stream configurations. This is because the performance of seven programs, such as binomial, changes dramatically with different stream configurations (see also Figure 2), while the performance of the remaining programs is not sensitive to the variation of stream configurations. From Figure 13a, we can further see that _Liu et al._ and _Werkhoven et al._ deliver a speedup within a range of 20% to 80%, while the performance of our approach is concentrated in the range between 80% and 100%. Thus, our approach delivers consistently better performance than the alternative models.
GPU. Figure 12b shows that our approach delivers better performance for around 75% of the programs on the GPU. Since _Werkhoven et al._ and _Liu et al._ are manually tuned for GPUs, they give better performance than our approach on some benchmarks. However, our approach has the advantage of being automatically learned from training data, with little expert involvement. The performance of our approach can be further improved by using more training examples to better cover the program space. Figure 13b shows that _Liu et al._ and _Werkhoven et al._ deliver a speedup within a range of 5% to 80% and 70% to 100%, respectively. By contrast, the performance of our approach is concentrated in the range between 90% and 100% for more programs. Therefore, overall, our approach delivers better average performance than the alternative models.
Figure 13: Violin plots showing the distribution of speedups across benchmarks
and datasets on XeonPhi (a) and GPU (b).
### 6.4 Comparison to Classification-based Approach
Figure 14: Comparing against a classification based approach on XeonPhi (a)
and NVIDIA GPU (b).
Our prior work uses an SVM classifier to predict the configurations [16]. Compared with it, the regression-based model presented in this article has several advantages.
A classification model predicts which of a set of predefined labels the input belongs to. Using this strategy, we would need to label each unique stream configuration. This leads to a total of 175 labels for the 311 profiling samples on XeonPhi, and 11 labels on the GPU. On XeonPhi, the ratio of samples to labels is too small to build an accurate model. As a result, we had to merge labels in our prior work [16] at the cost of losing accuracy. Classification is a constrained optimization problem where the model has to know all the possible configurations during training. Our new regression-based approach avoids this pitfall by directly modeling the impact of the stream configuration; it can thereby be used on any stream configuration, as the configuration is part of the model's input.
Figure 14a presents the results obtained on XeonPhi. Our regression-based approach outperforms the SVM-classifier on 21 of the 39 programs and achieves over 5% performance improvement for 13 programs. Note that the overhead for ranking stream configurations is included in the experimental results. Overall, our regression-based approach improves upon the SVM-classifier by, on average, 3% (up to 46%). Unlike on XeonPhi, on the GPU we were able to obtain sufficient training samples per label (because the optimization space is smaller) to build a more accurate classification model. As can be seen from Figure 14b, the average speedups of the SVM-classifier and the regression-based approach are comparable.
Compared to a classifier, our regression-based approach has the advantage that it can be applied to configurations that were not seen during the training phase. Therefore, our approach has better generalization ability.
### 6.5 Further Analysis of Performance Results
We now take a closer look into the performance results, using XeonPhi as a
case study.
Figure 15: Reduction of kernel computation time over a single-stream execution
on XeonPhi. The performance improvement comes from the reduction of the
threading overhead. A stream configuration is annotated as (_#partitions_ ,
_#tasks_).
#### 6.5.1 High Speedup Cases
On XeonPhi, bidirectional data transfers between the host and the accelerator cannot be overlapped, i.e., we can issue a data transfer from the host to the device or vice versa at any one time, but not both simultaneously. As a result, the theoretical upper-bound speedup for overlapping computation and communication is 2$\times$, reached when the computation is perfectly overlapped with the data transfer time. It is thus interesting to observe that several benchmarks achieve a speedup of over 2$\times$ on XeonPhi (see Figure 9a). On closer investigation, we found that such performance is attributable to a reduction in the kernel execution time in addition to the overlapping of communication and computation.
To quantify the benefit of kernel time reduction, we measure the kernel
execution time with and without multiple streams and calculate the speedup
between them. Note that we _exclude the host-device communication time in this
case_ to isolate the contributing factors. The kernel time improvement for
transpose, binomial, and fftx1y1 is shown in Figure 15. As can be seen from the diagram, choosing a good stream configuration can lead to a more than 4$\times$ reduction in the kernel execution time. This is because these benchmarks are implemented by parallelizing the inner loop within a nested loop. At runtime, the parallel threads working on the inner loop are created, synchronized, or destroyed for each outer loop iteration. Such threading overhead can be significant when the outer loop iterates a large number of times. With multiple streams, we divide the whole outer loop iteration space into multiple smaller iteration spaces. This allows multiple groups of threads to be managed simultaneously, leading to a significant decrease in threading overhead and a faster kernel execution time. On the other hand, using too many streams and partitions leads to a performance decrease, because stream management also comes at a cost, which increases as the number of partitions grows. Nonetheless, for applications where the kernel computation dominates the program execution time, reducing the kernel time can yield additional improvement, resulting in speedups of more than 2$\times$.
Figure 16: Violin plot showing the distribution of speedups per benchmark
across datasets on XeonPhi (a) and NVIDIA GPU (b). The shape of the violin
corresponds to the speedup distribution. The thick black line shows where 50%
of the data lies.
#### 6.5.2 Speedup Distribution
Figure 16 gives the speedup per benchmark across datasets on XeonPhi and the
GPU. The shape of the violin plot corresponds to the speedup distribution.
On XeonPhi, we see that the speedups of montecarlo and prefix are distributed fairly uniformly, while the distributions of fftx1y1 and fftx4y3 are bimodal (i.e., they have two peaks). Further, the input datasets have little impact on the behavior of fwt and lbm, so their speedups remain constant across datasets.
On the GPU, the speedups of dotprod, vecadd, blackscholes and mri-q are distributed fairly uniformly, while the distributions of convsepr1, convsepr8, fftx1y1, fftx4y3 and dct are unimodal (i.e., they have one peak). Furthermore, the input datasets have only a slight impact on the performance behavior of montecarlo, scalarprod, transpose and binomial; thus, their speedups remain constant across datasets.
To conclude, the streaming speedups of some applications are sensitive to their input datasets whereas others are not. Moreover, the distribution of speedups on the GPU is more concentrated than on XeonPhi. This is because the current GPU implementation does not support processor core partitioning, so the kernel execution time benefits less from multiple streams than on XeonPhi.
Figure 17: The relation between computation-communication ratio and the
speedup. The computation-communication ratio is normalized using the natural
logarithm function. Thus, the kernel computation time equals the host-device
communication time when $ratio=0$. In general, a higher computation-
communication ratio leads to a better speedup.
#### 6.5.3 Correlation Analysis
Figure 17 shows the relation between the computation-communication ratio and
the achieved speedup when using heterogeneous streams across all benchmarks
and datasets on XeonPhi. We see that the computation-communication ratio
varies over the benchmarks and the speedup changes accordingly, but in
general, a higher computation-to-communication ratio leads to a greater
speedup. As explained in Section 6.5.1, in addition to overlapping computation
and communication, our approach can also reduce the kernel computation time by
choosing the right stream configuration. Therefore, benchmarks with a high
computation-communication ratio also benefit from a reduction in the kernel
computation time.
To quantify the relation between the computation-communication ratio and the
speedup, we calculate the Pearson correlation coefficient of the two
variables. The calculation gives a correlation coefficient of 0.7, indicating
that the two variables (the computation-communication ratio and the speedup)
have a strong linear correlation. By carefully selecting the stream
configuration, our approach tries to maximize the overlap between
communication and computation, which thus leads to favourable performance.
#### 6.5.4 Impact of Streaming Parallelism
Figure 18: Breakdown of program execution time ($T$), host-device data
transfer time ($T_{m}$), kernel execution time ($T_{k}$), hStreams context
initialization overhead ($T_{c}$) and communication-computation overlapping
time ($T_{o}$) for single and best-performing multi-stream configurations.
Our earlier experiments show that by carefully exploiting streaming
parallelism, we can significantly improve application performance. We now take
a closer look at three representative benchmarks, fftx1y1, fwt and gesummv, to
get a better understanding of streaming performance on XeonPhi. These
benchmarks represent different degrees of benefits obtained from streamed
parallelism (with a speedup of 2$\times$, 1.5$\times$ and 1$\times$,
respectively).
We use the following analytical model to break down the execution time of a multi-stream program:
$T=T_{m}+T_{k}+T_{c}-T_{o}$ (1)
where $T_{m}$ is host-device data transfer time, $T_{k}$ is kernel execution
time, $T_{c}$ is the overhead for initializing the context, and $T_{o}$ is
overlapping time between data transfer and kernel execution. We measure $T$,
$T_{m}$, $T_{k}$, and $T_{c}$, and use the measurements to calculate $T_{o}$.
Figure 18 gives the breakdown for the five components in Equation 1. For each
testing program, we compare the single-stream configuration against the best-
performing multi-stream configuration. The host-device data transfer time,
$T_{m}$, is nearly constant among a single and a multiple stream
configuration, but multi-streaming can reduce the kernel execution time,
$T_{k}$, by exploiting the spatial sharing of processing resources among
computation tasks. The overhead of initializing the hStreams context, $T_{c}$,
depends on the kernel execution time. For fftx1y1 and fwt, whose kernels run
for a sufficiently long time, this one-off runtime overhead is negligible.
However, for gesummv, this overhead cannot be ignored due to the relatively
short kernel running time. The contribution for overlapping host-device
communications with kernel execution, $T_{o}$, varies across programs. For fftx1y1 and fwt, it accounts for around 50% of $T_{m}$, suggesting that exploiting temporal sharing to overlap communication with kernel execution can amortize the host-device communication overhead. For gesummv, $T_{o}$ is small
due to little alignment between data transfer and kernel execution. As such,
there is little benefit for exploiting temporal sharing for this program.
This experiment gives a more detailed analysis for the benefits of exploiting
multiple streams. The results reinforce our claim that the benefit for
streaming parallelism depends on the computation kernel and hence an adaptive
scheme for choosing the optimal stream configuration is needed. Our work aims
to offer such a capability.
### 6.6 Analysis of Predictive Modeling Techniques
In this section, we analyse the working mechanism of our predictive model,
using XeonPhi as an evaluation platform.
#### 6.6.1 Comparison to Alternative Modeling Techniques
We compare our MLP-based model against four widely used regression methods: DCT (Decision Tree), RF (Random Forest), XGB (eXtreme Gradient Boosting) and SVM (Support Vector Machine), as well as four classification models: SVM, DCT, MLP and KNN (K-Nearest Neighbors). We use the radial basis function kernel for the SVM models.
For each technique, we follow the same training methodology and use the same
features and training examples to build a model. For classification models, we
apply the label merging process described in our prior work [16] to improve
the prediction accuracy. Table V compares the training overhead, average
prediction time and achieved average speedup of each model. We note that training a regression-based SVM model has the largest overhead. Although training a DCT has less overhead than our MLP-based regression model, MLP
gives better prediction performance. The RF and XGB models are based on DCT,
but they do not yield a better performance. Compared to regression models, a
classification model takes less time to train and make predictions. However,
classification models give worse performance than regression models as they
require more training data to cover the optimization space. Overall, we choose
to use a regression-based approach and employ MLP because it gives the best
overall prediction performance and has modest training overhead.
Table V: Comparison to alternative modeling techniques.

| Technique | Training time | Avg. pred. time | Avg. speedup |
| --- | --- | --- | --- |
| SVM (regression) | 100 hours | 2280 ms | 1.56 |
| DCT (regression) | 65.57 seconds | 0.74 ms | 1.51 |
| RF (regression) | 317.89 seconds | 11.94 ms | 1.51 |
| XGB (regression) | 28.46 seconds | 0.74 ms | 1.49 |
| MLP (regression, ours) | 245.8 seconds | 0.76 ms | 1.57 |
| SVM (classifier) | 1.28 seconds | 0.10 ms | 1.53 |
| DCT (classifier) | 0.79 seconds | 0.05 ms | 1.38 |
| MLP (classifier) | 46.45 seconds | 0.05 ms | 1.41 |
| KNN (classifier) | 0.22 seconds | 0.23 ms | 1.43 |
#### 6.6.2 Feature Engineering
Feature engineering has a significant impact on the performance of a machine
learning model (Section 3.2). Here we quantify the impact of feature
engineering methods. In this work, we consider three standard feature
engineering approaches including standardization, normalization and dimension
reduction.
Standardization converts all feature values to a common range, e.g., between 0 and 1. The idea is to prevent the value range of a feature from dominating the importance of that feature. In this work we apply a commonly used standardization method called _Z-score_ [29] to standardize the raw feature values and the speedups (i.e., prediction targets) in the training data. We found that feature standardization improves the achieved speedup by 3% on average, and speedup standardization improves the achieved speedup by 5% on average.
Normalization rescales the feature values to make them more closely follow a normal distribution. We tested a range of normalization methods, including the square root, the reciprocal of the square root, and the natural logarithm transformations. However, we found that normalization does not improve our model's prediction accuracy.
Dimension reduction reduces the number of features, which is often useful when the number of training examples is small relative to the number of feature dimensions. In this work, we apply factor analysis (FA) [30] and principal component analysis (PCA) [31] to the raw features. Applying PCA with 9 components gives the best overall result, improving the average speedup by 17%. PCA outperforms FA, which gives only an average 3% improvement in the achieved speedup.
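A minimal sketch combining the steps that performed best (Z-score standardization and PCA with 9 components) with the MLP regressor in a single scikit-learn pipeline:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

pipeline = make_pipeline(
    StandardScaler(),      # Z-score: zero mean, unit variance
    PCA(n_components=9),   # dimension reduction
    MLPRegressor(hidden_layer_sizes=(9, 9, 9),
                 activation='tanh', solver='adam'))
# pipeline.fit(X_train, y_train) then trains the full model.
```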
#### 6.6.3 MLP Parameters Tuning
We now discuss the impact of the MLP parameter choices. There are four
configurable parameters for an MLP model: the activation function, the number
of hidden layers, the number of neurons, and the learning algorithm (i.e., the
solver). For activation functions, we consider identity, logistic, tanh and
relu. For hidden layers and neurons, we vary the number of hidden layers from
1 to 5 and the number of neurons per layer from 3 to 100. For the solver, we consider three commonly used weight optimizers: lbfgs, sgd and adam. We use the scikit-learn implementations of the activation functions and solvers. Our experimental results suggest that the best-performing activation function and solver are tanh and adam, respectively, and that using three hidden layers with 9 neurons per layer gives the best overall results on our training data. Overall, tuning the MLP model parameters improves the average speedup by 5% over the default parameter setting.
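This tuning process can be sketched as a cross-validated grid search; for brevity the sketch fixes 9 neurons per layer rather than sweeping 3 to 100:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

param_grid = {
    'activation': ['identity', 'logistic', 'tanh', 'relu'],
    'solver': ['lbfgs', 'sgd', 'adam'],
    'hidden_layer_sizes': [(9,) * n for n in range(1, 6)],  # 1-5 layers
}
search = GridSearchCV(MLPRegressor(max_iter=2000), param_grid, cv=5)
# search.fit(X_train, y_train); search.best_params_ gives the chosen setting.
```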
Figure 19: A Hinton diagram showing the impact of each feature used by the performance model on the resultant application performance. The larger the box, the greater the impact a feature is likely to have on the performance of the respective benchmark.
#### 6.6.4 Impact of Individual Feature
In this experiment, we consider the impact of each specific feature on the resultant performance. Figure 19 presents a Hinton diagram illustrating how much a feature contributes to the performance model's prediction accuracy (which in turn affects the resulting application performance). The larger the box, the more significant a feature is for a given program's performance. Here, the x-axis denotes the programs, and the y-axis denotes the features used by our performance model. The impact of a feature is quantified by measuring how much speedup improvement is obtained when that feature is used by the performance model. Note that this is a post-hoc analysis and, in general, we cannot know in advance the importance of a feature for _unseen_ programs.
Figure 19 shows that all the features are important for the set of benchmarks
targeted in the work, but the importance of features varies across programs.
This diagram illustrates how hard it is to develop an analytical model to
capture the diverse behaviors and characteristics of streaming programs.
## 7 Related Work
Our work builds upon the following areas of prior work, while qualitatively differing from each.
Task Scheduling. There is considerable work on distributing work across
heterogeneous processors to improve application performance [32, 33, 34].
Prior work in the area typically assumes that the processor configuration is
fixed and relies on the operating system to schedule parallel tasks across
processing units. Recent studies show that by partitioning the processing
units into groups it is possible to significantly improve the application
performance by overlapping the host-device communication and computation on
coprocessors like Intel XeonPhi [14, 6]. However, existing approaches rely on
static tuning to find the processor partition and the best number of streams
to run within a partition; as a result, they cannot adapt to changes in
program inputs. As a departure from prior work, we develop an automatic
approach to dynamically adjust the processor partition and task granularity at
runtime, taking into account the characteristics of the application and its
input dataset; our approach can therefore adapt to changes in program inputs.
Domain-specific Optimizations. There is considerable work on domain-specific
optimization on Intel XeonPhi. Cheng _et al._ [35] and Jha _et al._ [36] show
that in-memory database applications suffer from under-utilization of
processor resources and hence a fine-grained tuning approach is required.
Mrphi is a framework for optimizing MapReduce workloads on the XeonPhi [37]. It
employs a set of techniques to improve the resource utilization to obtain
higher application performance. Other works look at performance optimization
for numerical solvers [38], sparse matrix vector multiplication [39, 40], and
dynamic stochastic economic models [39]. Ferrão _et al._ [41] and Memeti _et
al._ [42] develop a stream processing framework for XeonPhi to increase the
programming productivity. The runtime can automatically distribute workloads
across CPUs and accelerator devices. These approaches improve processor
utilization by adjusting the algorithmic design, which is complementary to our
work on tuning multi-streaming parallelism for data-parallel applications.
Multiple Streams Modeling. Gomez-Luna _et al._ [11] develop a set of models to
estimate the asynchronous data transfer overhead on different GPU
architectures. The models can be used to estimate the optimal number of
streams to use on a given GPU platform. Werkhoven _et al._ [10] present an
analytical model to determine when to apply an overlapping method on GPUs. Liu
_et al._ [12] also develop an analytical approach to determine the optimal
number of streams to use on GPUs. However, none of these approaches
considers the processor partition. As we have shown in Section 6.3, ignoring
the processor partitioning parameter can lead to poor performance on Intel
XeonPhi. Furthermore, these hand-crafted models have the drawback of not being
portable across architectures, as the model is tightly coupled to a specific
GPU architecture. Our work advances prior work by employing machine learning
to automatically learn the optimal processor partition and the number of
streams/tasks to use. Since our models are automatically learned from
empirical observations, one can easily re-learn a model for a new
architecture.
Predictive Modeling. Recent studies have shown that machine learning based
predictive modeling is effective in code optimization [43, 44], performance
predicting [45, 46], parallelism mapping [47, 48, 20, 49, 50], and task
scheduling [51, 52, 53, 54, 55, 56]. Its great advantage is its ability to
adapt to the ever-changing platforms as it has no prior assumption about their
behavior. The work presented by Wen _et al._ [57] employs SVMs to develop a
binary classifier to predict whether a given OpenCL kernel can achieve a large
speedup. Our work differs from [57] in that it targets a different
architecture and programming model, and it predicts from a larger number of
configurations instead of making a binary prediction. Our prior work developed
an SVM-based classifier to predict the optimal stream configuration for Intel
XeonPhi [16]. However, it requires sufficient training data samples to cover
all possible stream configurations. Our approach improves on the prior work by
directly modeling the impact of the stream configuration. As a result, our
approach can make predictions for any stream configuration (even those not
seen in the training data).
Autotuning Parallel Programs. Our approach is closely related to autotuning
that searches for the best-performing optimization configuration [58, 59].
This technique is demonstrated to be effective for choosing algorithmic
choices [60], tuning GPU code [61, 62, 63], optimizing structured parallel
programs [64, 65, 66] and non-uniform memory access (NUMA) architectures [67],
and more recently for deep neural networks [68]. Many of the prior works in
this area employ an evolutionary-based approach by applying and profiling
candidate optimization options to choose a good option to use. One of the key
challenges of autotuning is avoiding the profiling overhead, which can be
prohibitively expensive. We do so by using a performance model to quickly
evaluate the profitability of a candidate optimization option. We show that
our approach has low runtime overhead, which permits us to apply it at runtime
to best match the optimization strategy to the program input. Furthermore, our
work is the first to tune streaming parallelism on heterogeneous many-cores
(XeonPhis and GPUs).
Automatic Generation of Parallel Programs. The OpenMPC compiler [69]
translates OpenMP to CUDA programs. Wang _et al._ [24, 20, 70] translate
OpenMP to OpenCL programs and use machine learning to select the most suitable
device, between the host CPU and the GPU, to run the code. Rawat _et al._
present an automatic approach to generate GPU code from a domain-specific
language (DSL) for stencil programs [71]. All of the above approaches target
GPUs and do not utilize the multi-streaming strategy.
## 8 Conclusion
This article has presented an automatic approach to exploit streaming
parallelism on heterogeneous many-cores. Central to our approach is a machine
learning-based model that predicts the resulting performance when running the
target application under a given streamed configuration. The performance
predictor is then used as a cost function to quickly rank candidate
configurations at runtime, to determine which stream configuration should be
used on a per-program per-dataset basis. We have evaluated our approach on an
Intel XeonPhi and an NVIDIA GTX 1080 Ti GPU, with 39 representative
benchmarks. Experimental results show that our approach delivers an average
speedup of 1.6x and 1.1x on XeonPhi and the GPU, respectively. These results
translate to over 93% of the best-available performance.
## Acknowledgment
This work was partially funded by the National Key Research and Development
Program of China under Grant No. 2018YFB0204301, the National Natural Science
Foundation of China under Grant agreements 61972408, 61602501 and 61872294.
For any correspondence, please contact Jianbin Fang (Email:
[email protected]).
## References
* [1] J. D. Owens _et al._ , “Gpu computing,” _Proceedings of the IEEE_ , 2008.
* [2] A. Li _et al._ , “Exploring and analyzing the real impact of modern on-package memory on HPC scientific kernels,” in _SC_ , 2017.
* [3] C. Chen _et al._ , “LU factorization on heterogeneous systems: an energy-efficient approach towards high performance,” _Computing_ , 2017.
* [4] M. R. Meswani _et al._ , “Modeling and predicting performance of high performance computing applications on hardware accelerators,” _IJHPCA_ , 2013.
* [5] J. Fang _et al._ , “A comprehensive performance comparison of CUDA and opencl,” in _ICPP_ , 2011.
* [6] C. J. Newburn _et al._ , “Heterogeneous streaming,” in _IPDPSW_ , 2016.
* [7] _CUDA C Best Practices Guide Version 7.0_ , NVIDIA Inc., 2015.
* [8] The Khronos OpenCL Working Group, “OpenCL - The open standard for parallel programming of heterogeneous systems,” http://www.khronos.org/opencl/, 2016.
* [9] _hStreams Architecture for MPSS 3.5_ , Intel Inc., 2015.
* [10] B. Van Werkhoven _et al._ , “Performance models for cpu-gpu data transfers,” in _CCGrid_ , 2014.
* [11] J. Gómez-Luna _et al._ , “Performance models for asynchronous data transfers on consumer graphics processing units,” _JPDC_ , 2012.
* [12] B. Liu _et al._ , “Software pipelining for graphic processing unit acceleration: Partition, scheduling and granularity,” _IJHPCA_ , 2016.
* [13] Z. Li _et al._ , “Streaming applications on heterogeneous platforms,” in _NPC_ , 2016.
* [14] J. Fang _et al._ , “Evaluating multiple streams on heterogeneous platforms,” _Parallel Processing Letters_ , 2016.
* [15] Z. Li _et al._ , “Evaluating the performance impact of multiple streams on the mic-based heterogeneous platform,” in _IPDPSW_ , 2016.
* [16] P. Zhang _et al._ , “Auto-tuning streamed applications on intel xeon phi,” in _IPDPS_ , 2018.
* [17] C. Lattner and V. Adve, “LLVM: A compilation framework for lifelong program analysis & transformation,” in _CGO_ , 2004.
* [18] F. Pedregosa _et al._ , “Scikit-learn: Machine learning in python,” _Journal of machine learning research_ , 2011.
* [19] G. Fursin _et al._ , “Milepost gcc: machine learning based research compiler,” in _GCC summit_ , 2008.
* [20] Z. Wang _et al._ , “Automatic and portable mapping of data parallel programs to opencl for gpu-based heterogeneous systems,” _ACM TACO_ , 2015.
* [21] S. Boslaugh, _Statistics in a Nutshell, 2nd Edition_ , 2nd ed. O’Reilly Media, 2012.
* [22] B. F. Manly, _Multivariate statistical methods: a primer_. CRC Press, 2004.
* [23] S. Lee _et al._ , “Openmp to gpgpu: a compiler framework for automatic translation and optimization,” _ACM Sigplan Notices_ , 2009.
* [24] D. Grewe _et al._ , “Portable mapping of data parallel programs to opencl for heterogeneous systems,” in _CGO_ , 2013.
* [25] D. Mikushin _et al._ , “Kernelgen–the design and implementation of a next generation compiler platform for accelerating numerical models on gpus,” in _IPDSW_ , 2014.
* [26] T. Grosser and T. Hoefler, “Polly-acc transparent compilation to heterogeneous hardware,” in _Supercomputing_ , 2016.
* [27] R. Sotomayor _et al._ , “Automatic cpu/gpu generation of multi-versioned opencl kernels for c++ scientific applications,” _International Journal of Parallel Programming_ , 2017.
* [28] W. Ertel, “On the definition of speedup,” in _International Conference on Parallel Architectures and Languages Europe_ , 1994.
* [29] E. Kreyszig, _Advanced Engineering Mathematics, 10th Eddition_ , 2009.
* [30] R. L. Gorsuch, _Factor Analysis, 2nd Edition_. Routledge, 2014.
* [31] H. Hotelling, “Analysis of a complex of statistical variables into principal components.” _Journal of educational psychology_ , 1933.
* [32] S. Mittal and J. S. Vetter, “A survey of CPU-GPU heterogeneous computing techniques,” _ACM Computing Surveys (CSUR)_ , vol. 47, no. 4, p. 69, 2015.
* [33] C.-K. Luk _et al._ , “Qilin: exploiting parallelism on heterogeneous multiprocessors with adaptive mapping,” in _MICRO_ , 2009.
* [34] J. Shen _et al._ , “Workload partitioning for accelerating applications on heterogeneous platforms,” _IEEE TPDS_ , 2016.
* [35] X. Cheng _et al._ , “Many-core needs fine-grained scheduling: A case study of query processing on intel xeon phi processors,” _JPDC_ , 2018.
* [36] S. Jha _et al._ , “Improving main memory hash joins on intel xeon phi processors: An experimental approach,” _PVLDB_ , 2015.
* [37] M. Lu _et al._ , “Mrphi: An optimized mapreduce framework on intel xeon phi coprocessors,” _IEEE TPDS_ , 2015.
* [38] A. Lastovetsky _et al._ , “Model-based optimization of eulag kernel on intel xeon phi through load imbalancing,” _IEEE TPDS_ , 2017.
* [39] W. T. Tang _et al._ , “Optimizing and auto-tuning scale-free sparse matrix-vector multiplication on intel xeon phi,” in _CGO_ , 2015.
* [40] M. E. Guney _et al._ , “Optimizing matrix multiplication on intel xeon phi x200 architecture,” in _ARITH_ , 2017.
* [41] P. Ferrão _et al._ , “Stream processing on hybrid cpu/intel® xeon phi systems,” in _European Conference on Parallel Processing_ , 2018.
* [42] S. Memeti and S. Pllana, “Hstream: A directive-based language extension for heterogeneous stream computing,” in _CSE_ , 2018.
* [43] C. Cummins _et al._ , “End-to-end deep learning of optimization heuristics,” in _PACT_ , 2017.
* [44] Z. Wang and M. O’Boyle, “Machine learning in compiler optimisation,” _Proc. IEEE_ , 2018.
* [45] J. Zhao _et al._ , “Predicting cross-core performance interference on multicore processors with regression analysis,” _IEEE TPDS_ , 2016.
* [46] Z. Wang and M. F. O’boyle, “Using machine learning to partition streaming programs,” _ACM TACO_ , 2013.
* [47] G. Tournavitis _et al._ , “Towards a holistic approach to auto-parallelization: integrating profile-driven parallelism detection and machine-learning based mapping,” _ACM Sigplan Notices_ , 2009.
* [48] Z. Wang and M. F. O’Boyle, “Partitioning streaming parallelism for multi-cores: a machine learning based approach,” in _PACT_ , 2010.
* [49] Z. Wang _et al._ , “Integrating profile-driven parallelism detection and machine-learning-based mapping,” _ACM TACO_ , 2014.
* [50] B. Taylor _et al._ , “Adaptive optimization for opencl programs on embedded heterogeneous systems,” in _LCTES_ , 2017.
* [51] M. K. Emani _et al._ , “Smart, adaptive mapping of parallelism in the presence of external workload,” in _CGO_ , 2013.
* [52] V. S. Marco _et al._ , “Improving spark application throughput via memory aware task co-location: A mixture of experts approach,” in _Middleware_ , 2017.
* [53] J. Ren _et al._ , “Optimise web browsing on heterogeneous mobile platforms: a machine learning based approach,” in _INFOCOM_ , 2017.
* [54] ——, “Proteus: Network-aware web browsing on heterogeneous mobile systems,” in _CoNEXT ’18_ , 2018.
* [55] L. Yuan _et al._ , “Using machine learning to optimize web interactions on heterogeneous mobile systems,” _IEEE Access_ , 2019.
* [56] B. Taylor _et al._ , “Adaptive deep learning model selection on embedded systems,” _ACM SIGPLAN Notices_ , 2018.
* [57] Y. Wen _et al._ , “Smart multi-task scheduling for opencl programs on cpu/gpu heterogeneous platforms,” in _HiPC_ , 2014.
* [58] K. Datta _et al._ , “Stencil computation optimization and auto-tuning on state-of-the-art multicore architectures,” in _Supercomputing_ , 2008.
* [59] J. Ansel _et al._ , “Opentuner: An extensible framework for program autotuning,” in _PACT_ , 2014.
* [60] J. Ragan-Kelley _et al._ , “Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines,” in _PLDI_ , 2013.
* [61] A. Nukada and S. Matsuoka, “Auto-tuning 3-d fft library for cuda gpus,” in _SC_ , 2009.
* [62] P. Tillet and D. Cox, “Input-aware auto-tuning of compute-bound hpc kernels,” in _SC_ , 2017.
* [63] T. T. Dao and J. Lee, “An auto-tuner for opencl work-group size on gpus,” _IEEE TPDS_ , 2018.
* [64] U. Dastgeer _et al._ , “Auto-tuning skepu: a multi-backend skeleton programming framework for multi-gpu systems,” in _IWMSE_ , 2011.
* [65] J. J. Thiagarajan _et al._ , “Bootstrapping parameter space exploration for fast tuning,” in _Supercomputing_ , 2018.
* [66] D. Chen _et al._ , “Optimizing sparse matrix–vector multiplications on an armv8-based many-core architecture,” _International Journal of Parallel Programming_ , 2019.
* [67] T. Katagiri _et al._ , “Auto-tuning on numa and many-core environments with an fdm code,” in _IPDPSW_ , 2017.
* [68] L. Liao _et al._ , “Uhcl-darknet: An opencl-based deep neural network framework for heterogeneous multi-/many-core clusters,” in _ICPP_ , 2018.
* [69] S. Lee and R. Eigenmann, “Openmpc: Extended openmp programming and tuning for gpus,” in _SC_ , 2010.
* [70] Z. Wang _et al._ , “Exploitation of gpus for the parallelisation of probably parallel legacy code,” in _CC ’14_ , 2014.
* [71] P. S. Rawat _et al._ , “Domain-specific optimization and generation of high-performance gpu code for stencil computations,” _Proceedings of the IEEE_ , 2018.
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-EP-2020-020 LHCb-PAPER-2019-043 9 March 2020
Search for the lepton flavour violating decay
${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ using $B_{s2}^{*0}$
decays
LHCb collaboration†
†Authors are listed at the end of this paper.
A search is presented for the lepton flavour violating decay
$B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+}$ using a sample of proton–proton
collisions at centre-of-mass energies of 7, 8, and $13\,\mathrm{TeV}$,
collected with the LHCb detector and corresponding to a total integrated
luminosity of $9\,\mathrm{fb}^{-1}$. The $\tau$ leptons are selected
inclusively, primarily via decays with a single charged particle. The
four-momentum of the $\tau$ lepton is determined by using $B^{+}$ mesons from
$B_{s2}^{*0}\rightarrow B^{+}K^{-}$ decays. No significant excess is observed,
and an upper limit is set on the branching fraction
$$\mathcal{B}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+}) < 3.9\times10^{-5}\ \text{at 90\% confidence level}.$$
The obtained limit is comparable to the world-best limit.
Submitted to JHEP
© 2024 CERN for the benefit of the LHCb collaboration. CC BY 4.0 licence.
## 1 Introduction
A number of experimental hints of lepton flavour universality violation in the
semileptonic transitions $b\\!\rightarrow s\ell^{+}\ell^{-}$ [1, 2, 3] and
$b\\!\rightarrow c\ell^{-}{\overline{\nu}}_{\ell}$ [4, 5, 6, 7, 8, 9] have
recently been found (the inclusion of charge-conjugate processes is implied
throughout). In general, physics beyond the Standard Model that generates
lepton flavour non-universality is likely to also produce direct lepton
flavour violation [10]. Theoretical models seeking to simultaneously explain
all these anomalies, for example with a vector leptoquark, often lead to
relatively large branching fractions for the decays
${B}\\!\rightarrow{K}\mu^{\pm}\tau^{\mp}$ [11, 12, 13, 14, 15, 16].
The branching fractions for the two $\mu\tau$ charge combinations are not in
general the same, as they depend on the details of the physics mechanism
producing the decay. In this paper, we present a search for the decay
${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$. From an experimental
point of view, this combination is preferred over
${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{+}}{\tau^{-}}$ as it has a lower
background from semileptonic $B\!\rightarrow\overline{D}X\mu^{+}\nu_{\mu}$
decays, because Cabibbo-favoured decays of the charm meson are likely to lead
to kaons of the same charge as the muon. An upper limit on the branching
fraction for the signal decay has been previously set by the BaBar
collaboration [17],
$\mathcal{B}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+})<2.8\times10^{-5}$
at 90% confidence level (CL).
We reconstruct the full four-momentum of the $\tau$ lepton using ${B}^{+}$
mesons from the decay ${B_{s2}^{*0}}\\!\rightarrow{{B}^{+}}{{K}^{-}}$, which
amounts to about 1% of ${B}^{+}$ production. By reconstructing the decay
vertex of the ${B}^{+}$ meson from the ${{K}^{+}}{\mu^{-}}$ pair and the
momentum of the ${K}^{-}$ meson, it is possible to determine the momentum of
the ${B}^{+}$ meson up to a quadratic ambiguity by imposing mass constraints
on the $B_{s2}^{*0}$ and ${B}^{+}$ mesons [18]. This technique was first used
to study relative branching fractions in
$B^{+}\!\rightarrow\overline{D}{}^{0}X\mu^{+}\nu$ decays [19]. We
then search for a peak in the missing-mass squared distribution corresponding
to the $\tau$ mass squared, $m_{\tau}^{2}$. Even signal ${B}^{+}$ mesons not
coming from a $B_{s2}^{*0}$ decay show a peak at $m_{\tau}^{2}$. We account
for the contribution of these non-$B_{s2}^{*0}$ candidates in the analysis.
The $\tau$ leptons are selected inclusively, as we only require one additional
charged track near the ${{K}^{+}}{\mu^{-}}$ pair to help discriminate against
background. To normalise the branching fraction, we use the decay
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$, with
${{J\mskip-3.0mu/\mskip-2.0mu\psi}}\\!\rightarrow{\mu^{+}}{\mu^{-}}$. The
normalisation channel is also used to quantify the contributions from
$B_{s2}^{*0}$ decays, as well as non-$B_{s2}^{*0}$ candidates with nearby
kaons.
In addition to providing the missing-mass discriminating variable, this method
allows us to study the control sample composed of same-sign
${{B}^{+}}{{K}^{+}}$ decays, which does not include any $B_{s2}^{*0}$
component. We use this sample to optimise the signal selection, and motivate
our description of the background missing-mass shape.
## 2 Detector, data samples, and simulation
The LHCb detector [20, 21] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$, designed for the study of particles
containing $b$ or $c$ quarks. The detector includes a high-precision tracking
system consisting of a silicon-strip vertex detector surrounding the $pp$
interaction region, a large-area silicon-strip detector located upstream of a
dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three
stations of silicon-strip detectors and straw drift tubes placed downstream of
the magnet. The tracking system provides a measurement of the momentum, $p$,
of charged particles with a relative uncertainty that varies from 0.5% at low
momentum to 1.0% at 200 GeV (natural units with $c=1$ are used throughout).
The minimum distance of a track to a primary $pp$ interaction vertex (PV), the
impact parameter, is measured with a resolution of
$(15+29/p_{\mathrm{T}})\,\upmu\text{m}$, where $p_{\mathrm{T}}$ is the
component of the momentum transverse to the beam, in GeV. Different types of
charged hadrons are distinguished using information from two ring-imaging
Cherenkov detectors. Muons are identified by a system composed of alternating
layers of iron and multiwire proportional chambers. The online event selection
is performed by a trigger, which consists of a hardware stage, based on
information from the calorimeter and muon systems, followed by a software
stage, which applies a full event reconstruction. At the hardware trigger
stage, events are required to have a muon with high $p_{\mathrm{T}}$ or a
hadron, photon or electron with high transverse energy deposited in the
calorimeters. The software trigger requires a two-, three- or four-track
secondary vertex with a significant displacement from any primary vertex.
We use data samples collected from 2011 to 2018, at centre-of-mass energies of
7, 8, and $13\,\mathrm{TeV}$, corresponding to an integrated luminosity of
$9\,\mathrm{fb}^{-1}$. We model signal and
normalisation decays using simulation. In the simulation, $pp$ collisions are
generated using Pythia [22, *Sjostrand:2007gs] with a specific LHCb
configuration [24]. Decays of hadronic particles are described by EvtGen [25].
The interaction of the generated particles with the detector, and its
response, are implemented using the Geant4 toolkit [26, *Agostinelli:2002hh]
as described in Ref. [28].
For the signal, we consider both a phase space model and variations of the
decay kinematics with effective operators for the $b\\!\rightarrow
s{\mu^{+}}{\tau^{-}}$ interaction and their corresponding Wilson coefficients
using the distributions from Ref. [29] and the form factors from Ref. [30].
The branching fraction limit is determined for various hypotheses: for the
phase-space decay, for a decay via the vector or axial-vector operators
$\mathcal{O}_{9}^{(^{\prime})}$ or $\mathcal{O}_{10}^{(^{\prime})}$, and for a
decay using the scalar or pseudoscalar operators
$\mathcal{O}^{(^{\prime})}_{S}$ or $\mathcal{O}^{(^{\prime})}_{P}$ [29].
## 3 Selection and missing mass calculation
The selection of ${B}^{+}$ candidates begins with a ${{K}^{+}}{\mu^{-}}$ pair
with an invariant mass $m_{K^{+}\mu^{-}}>1800\,\mathrm{MeV}$ to reduce
background from semileptonic charm decays. The ${K}^{+}$ and $\mu^{-}$
candidates are formed from high-quality tracks consistent with kaon and muon
hypotheses and inconsistent with being produced at any PV in the event. The
${{K}^{+}}{\mu^{-}}$ vertex must be of high quality and well separated from
any PV.
To better separate signal candidates with $\tau$ leptons from background, we
require an additional track, labelled $t^{+}$, with charge opposite to that of
the muon. By adding this third track, we also fully reconstruct the
normalisation mode
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$, with
${{J\mskip-3.0mu/\mskip-2.0mu\psi}}\\!\rightarrow{\mu^{+}}{\mu^{-}}$. Many
background candidates are expected to come from $B$-meson decays of the form
$B\!\rightarrow\overline{D}(\rightarrow K^{+}X\mu^{-})K^{+}Y$,
where $X$ and $Y$ refer to any number of additional particles. In these cases
the kaon originating from the $\overline{D}$ meson is assigned as the
additional track. Since only approximately 2% of
$\tau$ decays contain a charged kaon, we apply particle identification
requirements so that the track is unlikely to be a charged kaon. Events in
which a candidate
${\tau^{+}}\\!\rightarrow{{\pi}^{+}}{{\pi}^{-}}{{\pi}^{+}}{{\overline{\nu}}_{\tau}}$
decay is found are not used in this search to avoid overlap with ongoing
searches at LHCb exclusively using this decay channel. In addition, events in
which we find multiple candidates are not used in this analysis. These
requirements remove some signal with multi-prong $\tau$ decays, with an
overall loss of less than 3%; multiple-candidate events, however, are more
likely to come from background. We split the data samples into signal and normalisation
regions based on the invariant mass of the ${{K}^{+}}{\mu^{-}}t^{+}$ triple,
using the muon hypothesis for the third track. Candidates with
$m_{K\mu\mu}<4800\,\mathrm{MeV}$ fall into the signal region, while candidates
with $5180<m_{K\mu\mu}<5380\,\mathrm{MeV}$ and
$|m_{\mu\mu}-m_{J/\psi}|<40\,\mathrm{MeV}$
fall into the normalisation region.
The ${B}^{+}$ candidate direction is estimated using the PV and
${{K}^{+}}{\mu^{-}}$ vertex positions. We next consider prompt tracks, _i.e._
those that are consistent with being produced at that PV. Those tracks
identified as kaons, with a charge opposite to that of the kaon in the
${{K}^{+}}{\mu^{-}}$ pair and a small perpendicular momentum relative to the
${B}^{+}$ candidate direction, are combined with the ${B}^{+}$ candidates to
form $B_{s2}^{*0}$ candidates. We refer to this sample as the opposite-sign
kaon (OS$K$) sample. Additionally, we select a control sample, referred to as
same-sign kaon (SS$K$) sample, by adding prompt kaons of the same sign as the
kaon in the ${{K}^{+}}{\mu^{-}}$ pair.
From Ref. [19], the two $B$-meson energy solutions are
$$E_{B}=\frac{\Delta^{2}}{2E_{K}}\,\frac{1}{1-(p_{K}/E_{K})^{2}\cos^{2}\theta}\left[1\pm\sqrt{d}\right],\quad\text{where}\tag{1}$$
$$d=\frac{p_{K}^{2}}{E_{K}^{2}}\cos^{2}\theta-\frac{4m_{B}^{2}p_{K}^{2}\cos^{2}\theta}{\Delta^{4}}\left(1-\frac{p_{K}^{2}}{E_{K}^{2}}\cos^{2}\theta\right),\tag{2}$$
$$\Delta^{2}=m_{BK}^{2}-m_{B}^{2}-m_{K}^{2},\tag{3}$$
where $m_{BK}=m_{{B_{s2}^{*0}}}$ is the assumed ${{B}^{+}}{{K}^{-}}$ mass,
$p_{K}$ and $E_{K}$ are the reconstructed prompt kaon momentum and energy, and
$\theta$ is the laboratory frame angle between the prompt kaon and $B$-meson
directions. The missing four-momentum of the $\tau$ lepton, $P_{\text{miss}}$,
is then reconstructed as $P_{B}-P_{{{K}^{+}}{\mu^{-}}}$, where $P_{B}$ and
$P_{{{K}^{+}}{\mu^{-}}}$ are the four-momenta of the $B$ meson and
${{K}^{+}}{\mu^{-}}$ pair. The missing mass squared is calculated using the
lowest energy, real solution for which the resulting missing energy is greater
than the reconstructed energy of the third track under a pion mass hypothesis.
With this choice, we correctly reconstruct the energy of signal decays in
simulation in more than 75% of cases. About 9% of all signal decays have no
such solution and are lost. Both signal and normalisation candidates, as well
as the SS$K$ control-sample candidates, are required to pass this procedure.
Candidates in the signal region are additionally required to have the residual
missing mass squared, defined as the four-momentum difference of the $B$ meson
and the $K^{+}\mu^{-}t^{+}$ triple, $(P_{B}-P_{K^{+}\mu^{-}}-P_{t})^{2}$,
greater than $-0.5\,\mathrm{GeV}^{2}$. This requirement removes background
while rejecting only poorly reconstructed signal candidates, which do not peak
at the $\tau$ mass squared. The minimum mass difference, defined in Ref. [19]
as
$$\Delta m_{\mathrm{min}}=\sqrt{m_{B}^{2}+m_{K}^{2}+2m_{B}\sqrt{p_{K}^{2}\sin^{2}\theta+m_{K}^{2}}}-m_{B}-m_{K},\tag{4}$$
is required to be greater than $30\,\mathrm{MeV}$. This removes contributions
from $B_{s1}^{0}$ and $B_{s2}^{*0}\!\rightarrow B^{*+}K^{-}$ decays, as well
as background in which a kaon from the $B$ decay is wrongly associated to the
primary vertex.
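For concreteness, Eqs. (1)-(4) can be evaluated numerically as in the following illustrative sketch, which is not the analysis code. It assumes all masses, momenta and energies in GeV, with $\theta$ the laboratory-frame angle defined above; the mass constants are approximate known values.

```python
# Illustrative implementation of Eqs. (1)-(4). Masses in GeV; p_k is the
# prompt-kaon momentum and theta the lab-frame angle between the prompt kaon
# and the B-meson direction.
import math

M_B, M_K, M_BS2 = 5.27934, 0.493677, 5.8399   # approximate B+, K, Bs2*0 masses

def b_energy_solutions(p_k, theta, m_bk=M_BS2):
    """Return the sorted real B-meson energy solutions of Eq. (1)."""
    e_k = math.hypot(p_k, M_K)                           # prompt-kaon energy
    c2 = (p_k / e_k) ** 2 * math.cos(theta) ** 2
    delta2 = m_bk ** 2 - M_B ** 2 - M_K ** 2             # Eq. (3)
    d = c2 - (4 * M_B ** 2 * p_k ** 2 * math.cos(theta) ** 2
              / delta2 ** 2) * (1 - c2)                  # Eq. (2)
    if d < 0:
        return []                                        # no real solution
    pref = delta2 / (2 * e_k) / (1 - c2)
    return sorted(pref * (1 + s * math.sqrt(d)) for s in (-1, 1))

def delta_m_min(p_k, theta):
    """Minimum mass difference of Eq. (4)."""
    return (math.sqrt(M_B ** 2 + M_K ** 2 + 2 * M_B
                      * math.sqrt((p_k * math.sin(theta)) ** 2 + M_K ** 2))
            - M_B - M_K)
```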
Missing-mass distributions for the signal simulation and the full data sample
after the above selection are shown in Fig. 1. All signal decays, whether they
come from a $B_{s2}^{*0}$ meson or not, peak at the known $m_{\tau}^{2}$,
however the non-$B_{s2}^{*0}$ candidates have a much wider peak than the
$B_{s2}^{*0}$ ones. The data distributions are shown for both the OS$K$ and
SS$K$ samples. They have similar shapes with a broad hump centred near
$5\,\mathrm{GeV}^{2}$. We note that the OS$K$ sample
has a higher yield than the SS$K$; this excess has been observed in both fully
and partially reconstructed decays [31, 19].
Figure 1: Missing mass squared, $m_{\mathrm{miss}}^{2}$, distributions for
(left) simulated signal ${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$
decays and (right) all selected candidates in data before applying the signal
optimisation described in Sect. 5.
## 4 Normalisation
We determine the yield of the normalisation decay, as well as the relative
efficiency of the signal modes with respect to the normalisation mode,
separately for each data-taking year. For the normalisation mode, we determine
the inclusive yield of
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$ decays,
whether or not they originate from a $B_{s2}^{*0}$ meson, by a binned maximum-
likelihood fit to the ${{K}^{+}}{\mu^{-}}t^{+}$ mass distribution, where we
assign the muon mass hypothesis to the third track. The signal is described
with a Gaussian distribution, and the background with a linear model.
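A binned maximum-likelihood fit of this form can be sketched as follows. This is a schematic stand-in for the actual fit, assuming a Poisson likelihood per bin, a Gaussian signal shape and a background linear over the fit range, with placeholder histogram inputs.

```python
# Schematic binned maximum-likelihood fit: Gaussian signal on a normalised
# linear background. `counts` and `edges` are placeholder histogram inputs;
# masses are in MeV. A sketch only, not the analysis fit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def nll(params, counts, edges):
    n_sig, mu, sigma, n_bkg, slope = params
    lo, hi = edges[0], edges[-1]
    centres = 0.5 * (edges[:-1] + edges[1:])
    widths = np.diff(edges)
    # expected signal per bin: yield times Gaussian integral over the bin
    sig = n_sig * (norm.cdf(edges[1:], mu, sigma)
                   - norm.cdf(edges[:-1], mu, sigma))
    # linear background normalised to unit integral over [lo, hi]
    bkg = n_bkg * widths * (1.0 / (hi - lo)
                            + slope * (centres - 0.5 * (lo + hi)))
    expected = np.clip(sig + bkg, 1e-9, None)
    return np.sum(expected - counts * np.log(expected))  # Poisson NLL

# usage sketch:
# result = minimize(nll, x0=[4000.0, 5279.0, 20.0, 1000.0, 0.0],
#                   args=(counts, edges), method="Nelder-Mead")
```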
We determine the fraction of the normalisation candidates coming from
$B_{s2}^{*0}$ decays using a ${{K}^{+}}{\mu^{-}}t^{+}$ mass fit for the
combined-years data sample using the same model as the separated-years
samples, along with a binned maximum-likelihood fit to the measured mass-
difference distribution $m_{{{B}^{+}}{{K}^{-}}}-m_{{{B}^{+}}}-m_{{{K}^{-}}}$
around the $B_{s2}^{*0}$ peak. For the latter fit, we describe the signal peak
with a Gaussian core that transitions to an exponential tail on each side, and
we model the background with a third-degree polynomial. The results of these
fits are shown in Fig. 2. The total data sample contains $4240\pm 70$
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$ decays;
the fraction originating from $B_{s2}^{*0}$ decays is
$f_{B_{s2}^{*0}}=(25.4\pm1.8)\%$, where the
uncertainty combines the statistical and systematic uncertainties from the
choice of fit function. The year-to-year variation is not found to be
statistically significant, so we use the value obtained from the combined
dataset for all years.
Figure 2: Distributions of normalisation candidates in (left) mass,
$m_{{{K}^{+}}{\mu^{-}}{\mu^{+}}}$, and (right) the mass difference,
$m_{{{B}^{+}}{{K}^{-}}}-m_{{{B}^{+}}}-m_{{{K}^{-}}}$. The result of each fit
is shown as a solid line, with the background component as a dashed line.
The relative efficiency of the signal and normalisation modes is determined
using simulation with corrections from data. For $B_{s2}^{*0}$ decays the
relative efficiencies in different years average around 30%, with an absolute
year-to-year variation of less than 3%. Different signal decay models change
the relative efficiency by approximately 10%, with the decays via scalar and
pseudoscalar operators having a lower overall efficiency. Signal events in
which the ${B}^{+}$ meson does not originate from a $B_{s2}^{*0}$ decay have a
lower selection efficiency, primarily because fewer of these candidates pass
the residual missing-mass requirement and fall into the missing-mass fit
range. Using simulation, we derive an additional efficiency factor for this
signal component of $r_{\text{non-{$B_{s2}^{*0}$}}}=0.849\pm 0.007$.
## 5 Multivariate signal selection
We further improve the signal selection using a Boosted Decision Tree (BDT)
classifier trained with the AdaBoost algorithm [32]. The BDT inputs are primarily
chosen to distinguish additional tracks coming from signal $\tau$ lepton
decays from various sources of background. Some examples are semileptonic
$b$-hadron decays to charm where the charm hadron produces a kaon with charge
opposite that of the muon, or $b$-hadron decays where the muon is produced in
the semileptonic decay of a child charm hadron. The background training sample
is taken from the SS$K$ sample in the $m_{\mathrm{miss}}^{2}$ region around
$m_{\tau}^{2}$. This focuses the training on the sources of background which
fall near the signal peak. We describe the signal with simulation samples that
include only $B_{s2}^{*0}$ decays; the effect of the BDT on non-$B_{s2}^{*0}$
signal simulation is then estimated separately. The training makes use of
different topological reconstructions of the ${{K}^{+}}{\mu^{-}}t^{+}$ triple:
in addition to the signal selection, we also first combine either the kaon and
the track or the muon and the track into a pair before adding the third
particle. The pair masses and the flight distance of the pair in each topology
help to distinguish the signal from background, for instance when the pair
comes from a charm hadron decay. We also include the flight distance of the
$\tau$, which we reconstruct as the distance along the $\tau$ trajectory found
in the missing-mass calculation from the ${{K}^{+}}{\mu^{-}}$ vertex to the
point of closest approach of the third track.
The result of a separate isolation discriminant is included to reduce
background with additional charged tracks; this discriminant is trained to
distinguish additional tracks belonging to the same $b$-hadron decay from
other tracks in the event based on kinematic and topological variables. We
perform the rest of the analysis in four bins of the signal optimisation BDT
output, keeping about 70% of all simulated $B_{s2}^{*0}$ signal candidates and
about 40% of non-$B_{s2}^{*0}$ signal candidates. The bins are chosen by
optimising the expected upper limit using a number of background events
derived from the OS$K$ and SS$K$ $m_{\mathrm{miss}}^{2}$ sidebands.
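A minimal sketch of this training step is given below, assuming the scikit-learn AdaBoost implementation over shallow decision trees; the tree depth, estimator count and placeholder feature matrices are illustrative assumptions, not the tuned analysis values.

```python
# Hedged sketch of the BDT selection: AdaBoost over decision trees, trained
# on signal simulation versus the SSK background sample. Feature matrices,
# tree depth and estimator count are illustrative placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X_sig = np.random.rand(500, 8)   # stand-in for Bs2*0 signal simulation
X_bkg = np.random.rand(500, 8)   # stand-in for SSK data near m_tau^2

X = np.vstack([X_sig, X_bkg])
y = np.hstack([np.ones(len(X_sig)), np.zeros(len(X_bkg))])

bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=200)
bdt.fit(X, y)
scores = bdt.decision_function(X)   # BDT output, later split into four bins
```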
## 6 Background studies
The background in this analysis is composed of a large number of different
partially reconstructed $b$-hadron decays. None of them, however, produce a
narrow peak in $m_{\mathrm{miss}}^{2}$. Only ${B}^{+}$ mesons produced from
$B_{s2}^{*0}$ decays have a resolution comparable to the signal. Furthermore,
if there is more than one missing particle then the true missing-mass
distribution will be much wider than the expected signal peak. Charm hadrons
have masses close to the $\tau$ mass; however, there is no Standard Model decay
${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{{D}^{+}}$. Because of their low
branching fraction, we are not sensitive to decays such as
${{B}^{+}}\\!\rightarrow{{K}^{+}}{{\pi}^{-}}{{D}^{+}}$, where the pion is
misidentified as a muon. We expect that the missing-mass distribution, summed
over many different background components, is smooth, and we model it as a
polynomial.
These assumptions are tested using simulation and data. We produce fast
simulation samples with RapidSim [33] of a number of potential exclusive
background sources from $B^{+}$, $B^{0}$, $B^{0}_{s}$, and $\Lambda^{0}_{b}$
hadrons; the true missing-mass distributions for these
decays are smeared to estimate their shapes in data. No sign of any sharply
peaking component is found. In data we consider a number of different control
samples, namely all possible $K\mu t$ charge combinations in both OS$K$ and
SS$K$ samples, excluding the signal selection of ${{K}^{+}}{\mu^{-}}t^{+}$ in
the OS$K$ sample. There is no sign of any narrow peak in any of the
distributions, even after applying a tight requirement on the BDT output.
Maximum-likelihood fits to the SS$K$ sample using polynomials of different
degrees in the restricted $m_{\mathrm{miss}}^{2}$ range from 1 to
$6\,\mathrm{GeV}^{2}$ are used to study the
background shape in more detail. The optimal number of free polynomial
parameters in the most signal-like BDT output bin, based on the best-fit value
of $-2\log\mathcal{L}$, penalised by one for each additional parameter, is
four. We further study the effect of background modelling by performing a
large number of pseudoexperiments, both background-only and with injected
signal at branching fractions of $1\times10^{-5}$ and
$2\times10^{-5}$. In these studies, we first fit a background model
of some polynomial degree to one of the control samples. From this background
model we generate many pseudodatasets that we fit with a model of a different
degree. Based on these studies, we take into account the systematic
uncertainty due to the background modelling by reporting the weakest limit
using background descriptions of third, fourth, or fifth degree polynomials,
all of which well describe the background shapes in the pseudoexperiments.
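The degree-selection criterion described above amounts to minimising the best-fit $-2\log\mathcal{L}$ penalised by one unit per free parameter. A hedged sketch, with the fit itself left as a placeholder, is:

```python
# Sketch of the background-degree selection: minimise the best-fit -2*log(L)
# penalised by one unit per free parameter. `fit_poly_nll(deg)` is a
# placeholder for the actual maximum-likelihood polynomial fit, returning
# the best-fit -2*log(L) for a polynomial of the given degree.
def best_degree(fit_poly_nll, degrees=range(2, 7)):
    def penalised(deg):
        n_free = deg + 1      # free coefficients; adapt if one is constrained
        return fit_poly_nll(deg) + n_free
    return min(degrees, key=penalised)
```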
## 7 Fit description
We search for the ${{K}^{+}}{\mu^{-}}{\tau^{+}}$ missing-mass peak with an
unbinned maximum-likelihood fit, carried out simultaneously in four bins of
BDT output in the OS$K$ $K^{+}\mu^{-}t^{+}$ signal channel. The fit is
performed in the missing-mass range
$1<m_{\mathrm{miss}}^{2}<6\,\mathrm{GeV}^{2}$. The
parameter of interest is the branching fraction
${\mathcal{B}}\quantity({{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}})$.
We describe the $m_{\mathrm{miss}}^{2}$ shape for the signal component with a
generalized hyperbolic distribution with shape parameters obtained from
simulation. Two signal shapes are used: one for $B_{s2}^{*0}$ decays, and one
for the wider non-$B_{s2}^{*0}$ contribution. We determine the shapes
separately in each bin of BDT response. The signal decay model does not
significantly affect the signal missing-mass shape. The background is
described by polynomial functions which vary independently in each BDT output
bin.
We base the normalisation of the signal components on the yields of the
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$ decays
determined in data year-by-year. We combine this together with the relative
efficiencies, $\varepsilon_{\text{rel}}$; the known
${{B}^{+}}\\!\rightarrow{{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}}$ with
${{J\mskip-3.0mu/\mskip-2.0mu\psi}}\\!\rightarrow{\mu^{+}}{\mu^{-}}$ combined
branching fraction, abbreviated as
${\mathcal{B}}\quantity({{J\mskip-3.0mu/\mskip-2.0mu\psi}}{{K}^{+}})$; and the
parameter of interest to derive a total number of
${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$ signal decays. This
total is divided between $B_{s2}^{*0}$ and non-$B_{s2}^{*0}$ decays based on
the observed fraction in the normalisation channel, and then distributed
across the four BDT bins. This gives yields in each BDT bin $j$ of
$$N_{j}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+}\,|\,B_{s2}^{*0})=\varepsilon_{B_{s2}^{*0},j}\,\frac{\mathcal{B}(K^{+}\mu^{-}\tau^{+})}{\mathcal{B}(J/\psi K^{+})}\,f_{B_{s2}^{*0}}\sum_{i\in\text{years}}\varepsilon_{\text{rel},i}\,N_{i}(J/\psi K^{+}),\tag{5}$$
$$N_{j}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+}\,|\,\text{non-}B_{s2}^{*0})=\varepsilon_{\text{non-}B_{s2}^{*0},j}\,\frac{\mathcal{B}(K^{+}\mu^{-}\tau^{+})}{\mathcal{B}(J/\psi K^{+})}\,(1-f_{B_{s2}^{*0}})\,r_{\text{non-}B_{s2}^{*0}}\sum_{i\in\text{years}}\varepsilon_{\text{rel},i}\,N_{i}(J/\psi K^{+}),\tag{6}$$
where $\varepsilon_{{B_{s2}^{*0}},j}$ and
$\varepsilon_{\text{non-{$B_{s2}^{*0}$}},j}$ are the separate efficiencies for
each signal component to be found in BDT bin $j$. The main parameters of the
fit are thus the ${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$
branching fraction, four parameters for the background normalisation in each
BDT bin, and up to five parameters describing the polynomial background shapes
in each BDT bin.
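Numerically, Eqs. (5) and (6) turn a test branching fraction into expected signal yields per BDT bin. The sketch below illustrates only the bookkeeping: all inputs other than $f_{B_{s2}^{*0}}$ and $r_{\text{non-}B_{s2}^{*0}}$ (quoted earlier in the text) are placeholders, and the normalisation branching fraction uses approximate world-average values.

```python
# Hedged sketch of the yield bookkeeping in Eqs. (5)-(6). Efficiencies and
# yields are placeholders; f_bs2 and r_non are the values quoted in the text.
# B(B+ -> J/psi K+) x B(J/psi -> mu mu): approximate world-average values.
BR_JPSIK = 1.02e-3 * 5.96e-2

def signal_yields(br_test, eff_bs2_bins, eff_non_bins,
                  eff_rel_by_year, n_jpsik_by_year,
                  f_bs2=0.254, r_non=0.849):
    norm = sum(e * n for e, n in zip(eff_rel_by_year, n_jpsik_by_year))
    scale = br_test / BR_JPSIK
    n_bs2 = [e * scale * f_bs2 * norm for e in eff_bs2_bins]            # Eq. (5)
    n_non = [e * scale * (1 - f_bs2) * r_non * norm
             for e in eff_non_bins]                                     # Eq. (6)
    return n_bs2, n_non
```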
The largest systematic uncertainty comes from the choice of background model.
The fifth-degree background description obtains the weakest limit among the
tested background models. We include the effects of other systematic
uncertainties using Gaussian-constrained nuisance parameters. These nuisance
parameters modify the normalisation yield, the relative efficiency of the
signal and normalisation channels, the signal yield in each BDT bin, and the
signal shapes. The largest effects come from the modelling of the kinematics
of $B_{s2}^{*0}$ decays in simulation, which results in 5% changes in the
relative efficiency and in the signal fractions in each bin of BDT response.
The relative statistical uncertainty of the $B_{s2}^{*0}$ fraction taken from
the normalisation channel is also approximately 5%. Altogether, the total
effect of these systematic uncertainties on the final limit is small, at the
$10^{-6}$ level.
## 8 Results and conclusion
The result at the best fit point is shown in Fig. 3. The obtained value for
the signal branching fraction from the maximum-likelihood fit is
$(1.9\pm1.5)\times10^{-5}$. No significant excess is observed,
and we set upper limits on the branching fraction using the CLs method [34].
We perform a scan in the signal branching fraction, obtaining the signal and
background $p$-values from the distributions of a one-sided profile-
likelihood-ratio test statistic obtained with pseudoexperiments in which we
vary the constraints on the systematic uncertainties. The scan used to
determine the observed limits, compared to the expected one, is shown in Fig.
4. The expected upper limit at 90% CL is $2.3\times10^{-5}$. The
observed 90% and 95% CL limits, assuming a phase space signal decay model,
are:
$$\mathcal{B}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+})<3.9\times10^{-5}\ \text{at 90\% CL},\qquad <4.5\times10^{-5}\ \text{at 95\% CL}.$$
An identical limit is obtained when the decay is generated from the effective
operators $\mathcal{O}_{9}^{(^{\prime})}$ or $\mathcal{O}_{10}^{(^{\prime})}$.
If instead it is produced from $\mathcal{O}^{(^{\prime})}_{S}$ or
$\mathcal{O}^{(^{\prime})}_{P}$, the obtained limit is
$\mathcal{B}(B^{+}\!\rightarrow K^{+}\mu^{-}\tau^{+})<4.4\times10^{-5}$
at 90% CL and $<5.0\times10^{-5}$ at 95% CL.
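For reference, the CLs construction behind these limits is the standard ratio of confidence levels; a minimal sketch, assuming the confidence levels have already been obtained from the pseudoexperiment distributions of the test statistic, is:

```python
# Minimal sketch of the CLs construction: a scanned branching-fraction point
# is excluded at 90% CL when CLs = CL_{s+b} / CL_b falls below 0.1. The
# confidence levels themselves come from pseudoexperiment distributions of
# the profile-likelihood-ratio test statistic.
def cls(cl_sb, cl_b):
    return cl_sb / cl_b

def excluded_at_90cl(cl_sb, cl_b):
    return cls(cl_sb, cl_b) < 0.10
```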
Figure 3: Fits to the missing-mass-squared distribution of the OS$K$ signal
sample in each bin of BDT output included in the final fit. The best fit is
overlaid. BDT bin 1 is the most background-like. The fit is performed using a
fifth-degree polynomial description of the background.
Figure 4: Scan of the $p$-value as a function of the signal branching
fraction, used to determine the CLs upper limits, compared to the expected
one. The horizontal red line shows a $p$-value of 0.1, used to define the 90%
CL upper limit.
This is the first result from the LHCb experiment for the lepton-flavour
violating decay ${{B}^{+}}\\!\rightarrow{{K}^{+}}{\mu^{-}}{\tau^{+}}$. By
studying ${B}^{+}$ mesons from $B_{s2}^{*0}$ decays, we are able to make the
first analysis at LHCb of a $B$ hadron decay using inclusive $\tau$ decays.
This provides complementary information to searches for lepton-flavour
violation at LHCb with three-prong $\tau$ decays, for example
$B_{(s)}^{0}\\!\rightarrow\tau^{\pm}\mu^{\mp}$ decays [35]. We observe no
significant signal, and set an upper limit slightly above that obtained by the
BaBar collaboration [17].
## Acknowledgements
We express our gratitude to our colleagues in the CERN accelerator departments
for the excellent performance of the LHC. We thank the technical and
administrative staff at the LHCb institutes. We acknowledge support from CERN
and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST
and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN
(Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE
(Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC
(United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources
that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN
(Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and
Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-
GRID (Poland) and OSC (USA). We are indebted to the communities behind the
multiple open-source software packages on which we depend. Individual groups
or members have received support from AvH Foundation (Germany); EPLANET, Marie
Skłodowska-Curie Actions and ERC (European Union); ANR, Labex P2IO and OCEVU,
and Région Auvergne-Rhône-Alpes (France); Key Research Program of Frontier
Sciences of CAS, CAS PIFI, and the Thousand Talents Program (China); RFBR, RSF
and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society
and the Leverhulme Trust (United Kingdom).
## References
* [1] LHCb collaboration, R. Aaij et al., _Test of lepton universality with ${{B}^{0}}\\!\rightarrow{{K}^{*0}}\ell^{+}\ell^{-}$ decays_, JHEP 08 (2017) 055, arXiv:1705.05802
* [2] LHCb collaboration, R. Aaij et al., _Search for lepton-universality violation in ${{{B}^{+}}}\\!\rightarrow{{K}^{+}}\ell^{+}\ell^{-}$ decays_, Phys. Rev. Lett. 122 (2019) 191801, arXiv:1903.09252
  * [3] LHCb collaboration, R. Aaij et al., _Test of lepton universality using $\Lambda_{b}^{0}\!\rightarrow pK^{-}\ell^{+}\ell^{-}$ decays_, arXiv:1912.08139, submitted to JHEP
* [4] BaBar collaboration, J. P. Lees et al., _Measurement of an excess of $\bar{B}\rightarrow D^{(*)}\tau^{-}\bar{\nu}_{\tau}$ decays and implications for charged Higgs bosons_, Phys. Rev. D88 (2013) 072012, arXiv:1303.0571
* [5] Belle collaboration, M. Huschle et al., _Measurement of the branching ratio of $\bar{B}\rightarrow D^{(\ast)}\tau^{-}\bar{\nu}_{\tau}$ relative to $\bar{B}\rightarrow D^{(\ast)}\ell^{-}\bar{\nu}_{\ell}$ decays with hadronic tagging at Belle_, Phys. Rev. D92 (2015) 072014, arXiv:1507.03233
  * [6] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of branching fractions $\mathcal{B}({{B}_{c}^{+}}\!\rightarrow{J/\psi}{\tau^{+}}\nu_{\tau})/\mathcal{B}({{B}_{c}^{+}}\!\rightarrow{J/\psi}{\mu^{+}}\nu_{\mu})$_, Phys. Rev. Lett. 120 (2018) 121801, arXiv:1711.05623
* [7] LHCb collaboration, R. Aaij et al., _Test of lepton flavor universality by the measurement of the ${{B}^{0}}\\!\rightarrow D^{\ast-}{\tau^{+}}\nu_{\tau}$ branching fraction using three-prong $\tau$ decays_, Phys. Rev. D97 (2018) 072013, arXiv:1711.02505
* [8] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of the $\mathcal{B}({{B}^{0}}\\!\rightarrow D^{\ast-}{\tau^{+}}\nu_{\tau})$ and $\mathcal{B}({{B}^{0}}\\!\rightarrow D^{\ast-}{\mu^{+}}\nu_{\mu})$ branching fractions using three-prong $\tau$-lepton decays_, Phys. Rev. Lett. 120 (2018) 171802, arXiv:1708.08856
  * [9] LHCb collaboration, R. Aaij et al., _Measurement of the ratio of branching fractions ${\mathcal{B}}(\overline{B}{}^{0}\!\rightarrow{{D}^{*+}}{\tau^{-}}{{\overline{\nu}}_{\tau}})/{\mathcal{B}}(\overline{B}{}^{0}\!\rightarrow{{D}^{*+}}{\mu^{-}}{{\overline{\nu}}_{\mu}})$_, Phys. Rev. Lett. 115 (2015) 111803, Publisher's Note ibid. 115 (2015) 159901, arXiv:1506.08614
* [10] S. L. Glashow, D. Guadagnoli, and K. Lane, _Lepton flavor violation in $B$ decays?_, Phys. Rev. Lett. 114 (2015) 091801, arXiv:1411.0565
* [11] R. Barbieri, G. Isidori, A. Pattori, and F. Senia, _Anomalies in $B$-decays and $U(2)$ flavour symmetry_, Eur. Phys. J. C76 (2016) 67, arXiv:1512.01560
* [12] M. Bordone, C. Cornella, J. Fuentes-Martín, and G. Isidori, _A three-site gauge model for flavor hierarchies and flavor anomalies_ , Phys. Lett. B779 (2018) 317, arXiv:1712.01368
* [13] M. Bordone, C. Cornella, J. Fuentes-Martín, and G. Isidori, _Low-energy signatures of the $\mathrm{PS}^{3}$ model: from $B$-physics anomalies to LFV_, JHEP 10 (2018) 148, arXiv:1805.09328
* [14] M. Duraisamy, S. Sahoo, and R. Mohanta, _Rare semileptonic $B\rightarrow K(\pi)l_{i}^{-}l_{j}^{+}$ decay in a vector leptoquark model_, Phys. Rev. D95 (2017) 035022, arXiv:1610.00902
* [15] L. Di Luzio, A. Greljo, and M. Nardecchia, _Gauge leptoquark as the origin of B-physics anomalies_ , Phys. Rev. D96 (2017) 115011, arXiv:1708.08450
* [16] L. Di Luzio et al., _Maximal Flavour Violation: a Cabibbo mechanism for leptoquarks_ , JHEP 11 (2018) 081, arXiv:1808.00942
* [17] BaBar collaboration, J. P. Lees et al., _A search for the decay modes $B^{\pm}\rightarrow h^{\pm}\tau\ell$_, Phys. Rev. D86 (2012) 012004, arXiv:1204.2852
  * [18] S. Stone and L. Zhang, _Method of studying $\Lambda_{b}^{0}$ decays with one missing particle_, Adv. High Energy Phys. 2014 (2014) 931257, arXiv:1402.4205
  * [19] LHCb collaboration, R. Aaij et al., _Measurement of the relative ${{{B}^{-}}}\!\rightarrow{{D}^{0}}/{{D}^{*0}}/D^{**0}{\mu^{-}}{{\overline{\nu}}_{\mu}}$ branching fractions using ${{B}^{-}}$ mesons from $\overline{B}{}_{s2}^{*0}$ decays_, Phys. Rev. D99 (2019) 092009, arXiv:1807.10722
* [20] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005
* [21] LHCb collaboration, R. Aaij et al., _LHCb detector performance_ , Int. J. Mod. Phys. A30 (2015) 1530022, arXiv:1412.6352
* [22] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_ , JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [23] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_ , Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820
* [24] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_ , J. Phys. Conf. Ser. 331 (2011) 032047
* [25] D. J. Lange, _The EvtGen particle decay simulation package_ , Nucl. Instrum. Meth. A462 (2001) 152
* [26] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_ , IEEE Trans. Nucl. Sci. 53 (2006) 270
* [27] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_ , Nucl. Instrum. Meth. A506 (2003) 250
* [28] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023
* [29] D. Bečirević, O. Sumensari, and R. Zukanovich Funchal, _Lepton flavor violation in exclusive $b\rightarrow s$ decays_, Eur. Phys. J. C76 (2016) 134, arXiv:1602.00881
* [30] P. Ball and R. Zwicky, _New results on $B\rightarrow\pi,K,\eta$ decay form factors from light-cone sum rules_, Phys. Rev. D71 (2005) 014015, arXiv:hep-ph/0406232
* [31] LHCb collaboration, R. Aaij et al., _First observation of the decay $B_{s2}^{*}(5840)^{0}\\!\rightarrow B^{*+}{{K}^{-}}$ and studies of excited ${B}^{0}_{s}$ mesons_, Phys. Rev. Lett. 110 (2013) 151803, arXiv:1211.5994
* [32] Y. Freund and R. E. Schapire, _A decision-theoretic generalization of on-line learning and an application to boosting_ , J. Comput. Syst. Sci. 55 (1997) 119
* [33] G. A. Cowan, D. C. Craik, and M. D. Needham, _RapidSim: an application for the fast simulation of heavy-quark hadron decays_ , Comput. Phys. Commun. 214 (2017) 239, arXiv:1612.07489
* [34] A. L. Read, _Presentation of search results: The CL s technique_, J. Phys. G28 (2002) 2693
* [35] LHCb collaboration, R. Aaij et al., _Search for the lepton-flavour-violating decays ${{B}^{0}_{s}}\\!\rightarrow{\tau^{\pm}}{\mu^{\mp}}$ and ${{B}^{0}}\\!\rightarrow{\tau^{\pm}}{\mu^{\mp}}$_, Phys. Rev. Lett. 123 (2019) 211801, arXiv:1905.06614
LHCb collaboration
R. Aaij31, C. Abellán Beteta49, T. Ackernley59, B. Adeva45, M. Adinolfi53, H.
Afsharnia9, C.A. Aidala80, S. Aiola25, Z. Ajaltouni9, S. Akar66, P.
Albicocco22, J. Albrecht14, F. Alessio47, M. Alexander58, A. Alfonso Albero44,
G. Alkhazov37, P. Alvarez Cartelle60, A.A. Alves Jr45, S. Amato2, Y. Amhis11,
L. An21, L. Anderlini21, G. Andreassi48, M. Andreotti20, F. Archilli16, A.
Artamonov43, M. Artuso67, K. Arzymatov41, E. Aslanides10, M. Atzeni49, B.
Audurier11, S. Bachmann16, J.J. Back55, S. Baker60, V. Balagura11,b, W.
Baldini20,47, A. Baranov41, R.J. Barlow61, S. Barsuk11, W. Barter60, M.
Bartolini23,47,h, F. Baryshnikov77, J.M. Basels13, G. Bassi28, V.
Batozskaya35, B. Batsukh67, A. Battig14, A. Bay48, M. Becker14, F. Bedeschi28,
I. Bediaga1, A. Beiter67, L.J. Bel31, V. Belavin41, S. Belin26, V. Bellee48,
K. Belous43, I. Belyaev38, G. Bencivenni22, E. Ben-Haim12, S. Benson31, S.
Beranek13, A. Berezhnoy39, R. Bernet49, D. Berninghoff16, H.C. Bernstein67, C.
Bertella47, E. Bertholet12, A. Bertolin27, C. Betancourt49, F. Betti19,e, M.O.
Bettler54, Ia. Bezshyiko49, S. Bhasin53, J. Bhom33, M.S. Bieker14, S.
Bifani52, P. Billoir12, A. Bizzeti21,u, M. Bjørn62, M.P. Blago47, T. Blake55,
F. Blanc48, S. Blusk67, D. Bobulska58, V. Bocci30, O. Boente Garcia45, T.
Boettcher63, A. Boldyrev78, A. Bondar42,x, N. Bondar37, S. Borghi61,47, M.
Borisyak41, M. Borsato16, J.T. Borsuk33, T.J.V. Bowcock59, C. Bozzi20, M.J.
Bradley60, S. Braun16, A. Brea Rodriguez45, M. Brodski47, J. Brodzicka33, A.
Brossa Gonzalo55, D. Brundu26, E. Buchanan53, A. Büchler-Germann49, A.
Buonaura49, C. Burr47, A. Bursche26, A. Butkevich40, J.S. Butter31, J.
Buytaert47, W. Byczynski47, S. Cadeddu26, H. Cai72, R. Calabrese20,g, L.
Calero Diaz22, S. Cali22, R. Calladine52, M. Calvi24,i, M. Calvo Gomez44,m, P.
Camargo Magalhaes53, A. Camboni44,m, P. Campana22, D.H. Campora Perez31, A.F.
Campoverde Quezada5, L. Capriotti19,e, A. Carbone19,e, G. Carboni29, R.
Cardinale23,h, A. Cardini26, I. Carli6, P. Carniti24,i, K. Carvalho Akiba31,
A. Casais Vidal45, G. Casse59, M. Cattaneo47, G. Cavallero47, S. Celani48, R.
Cenci28,p, J. Cerasoli10, M.G. Chapman53, M. Charles12,47, Ph. Charpentier47,
G. Chatzikonstantinidis52, M. Chefdeville8, V. Chekalina41, C. Chen3, S.
Chen26, A. Chernov33, S.-G. Chitic47, V. Chobanova45, S. Cholak48, M.
Chrzaszcz33, A. Chubykin37, P. Ciambrone22, M.F. Cicala55, X. Cid Vidal45, G.
Ciezarek47, F. Cindolo19, P.E.L. Clarke57, M. Clemencic47, H.V. Cliff54, J.
Closier47, J.L. Cobbledick61, V. Coco47, J.A.B. Coelho11, J. Cogan10, E.
Cogneras9, L. Cojocariu36, P. Collins47, T. Colombo47, A. Comerma-Montells16,
A. Contu26, N. Cooke52, G. Coombs58, S. Coquereau44, G. Corti47, C.M. Costa
Sobral55, B. Couturier47, D.C. Craik63, J. Crkovská66, A. Crocombe55, M. Cruz
Torres1,ab, R. Currie57, C.L. Da Silva66, E. Dall’Occo14, J. Dalseno45,53, C.
D’Ambrosio47, A. Danilina38, P. d’Argent47, A. Davis61, O. De Aguiar
Francisco47, K. De Bruyn47, S. De Capua61, M. De Cian48, J.M. De Miranda1, L.
De Paula2, M. De Serio18,d, P. De Simone22, J.A. de Vries31, C.T. Dean66, W.
Dean80, D. Decamp8, L. Del Buono12, B. Delaney54, H.-P. Dembinski15, A.
Dendek34, V. Denysenko49, D. Derkach78, O. Deschamps9, F. Desse11, F.
Dettori26,f, B. Dey7, A. Di Canto47, P. Di Nezza22, S. Didenko77, H.
Dijkstra47, V. Dobishuk51, F. Dordei26, M. Dorigo28,y, A.C. dos Reis1, L.
Douglas58, A. Dovbnya50, K. Dreimanis59, M.W. Dudek33, L. Dufour47, G.
Dujany12, P. Durante47, J.M. Durham66, D. Dutta61, M. Dziewiecki16, A.
Dziurda33, A. Dzyuba37, S. Easo56, U. Egede69, V. Egorychev38, S.
Eidelman42,x, S. Eisenhardt57, R. Ekelhof14, S. Ek-In48, L. Eklund58, S.
Ely67, A. Ene36, E. Epple66, S. Escher13, S. Esen31, T. Evans47, A.
Falabella19, J. Fan3, N. Farley52, S. Farry59, D. Fazzini11, P. Fedin38, M.
Féo47, P. Fernandez Declara47, A. Fernandez Prieto45, F. Ferrari19,e, L.
Ferreira Lopes48, F. Ferreira Rodrigues2, S. Ferreres Sole31, M. Ferrillo49,
M. Ferro-Luzzi47, S. Filippov40, R.A. Fini18, M. Fiorini20,g, M. Firlej34,
K.M. Fischer62, C. Fitzpatrick47, T. Fiutowski34, F. Fleuret11,b, M.
Fontana47, F. Fontanelli23,h, R. Forty47, V. Franco Lima59, M. Franco
Sevilla65, M. Frank47, C. Frei47, D.A. Friday58, J. Fu25,q, Q. Fuehring14, W.
Funk47, E. Gabriel57, A. Gallas Torreira45, D. Galli19,e, S. Gallorini27, S.
Gambetta57, Y. Gan3, M. Gandelman2, P. Gandini25, Y. Gao4, L.M. Garcia
Martin46, J. García Pardiñas49, B. Garcia Plana45, F.A. Garcia Rosales11, L.
Garrido44, D. Gascon44, C. Gaspar47, D. Gerick16, E. Gersabeck61, M.
Gersabeck61, T. Gershon55, D. Gerstel10, Ph. Ghez8, V. Gibson54, A.
Gioventù45, O.G. Girard48, P. Gironella Gironell44, L. Giubega36, C.
Giugliano20, K. Gizdov57, V.V. Gligorov12, C. Göbel70, D. Golubkov38, A.
Golutvin60,77, A. Gomes1,a, P. Gorbounov38,6, I.V. Gorelov39, C. Gotti24,i, E.
Govorkova31, J.P. Grabowski16, R. Graciani Diaz44, T. Grammatico12, L.A.
Granado Cardoso47, E. Graugés44, E. Graverini48, G. Graziani21, A. Grecu36, R.
Greim31, P. Griffith20, L. Grillo61, L. Gruber47, B.R. Gruberg Cazon62, C.
Gu3, E. Gushchin40, A. Guth13, Yu. Guz43,47, T. Gys47, P. A. Günther16, T.
Hadavizadeh62, G. Haefeli48, C. Haen47, S.C. Haines54, P.M. Hamilton65, Q.
Han7, X. Han16, T.H. Hancock62, S. Hansmann-Menzemer16, N. Harnew62, T.
Harrison59, R. Hart31, C. Hasse14, M. Hatch47, J. He5, M. Hecker60, K.
Heijhoff31, K. Heinicke14, A.M. Hennequin47, K. Hennessy59, L. Henry46, J.
Heuel13, A. Hicheur68, D. Hill62, M. Hilton61, P.H. Hopchev48, J. Hu16, W.
Hu7, W. Huang5, W. Hulsbergen31, T. Humair60, R.J. Hunter55, M. Hushchyn78, D.
Hutchcroft59, D. Hynds31, P. Ibis14, M. Idzik34, P. Ilten52, A. Inglessi37, K.
Ivshin37, R. Jacobsson47, S. Jakobsen47, E. Jans31, B.K. Jashal46, A.
Jawahery65, V. Jevtic14, F. Jiang3, M. John62, D. Johnson47, C.R. Jones54, B.
Jost47, N. Jurik62, S. Kandybei50, M. Karacson47, J.M. Kariuki53, N. Kazeev78,
M. Kecke16, F. Keizer54,47, M. Kelsey67, M. Kenzie55, T. Ketel32, B. Khanji47,
A. Kharisova79, K.E. Kim67, T. Kirn13, V.S. Kirsebom48, S. Klaver22, K.
Klimaszewski35, S. Koliiev51, A. Kondybayeva77, A. Konoplyannikov38, P.
Kopciewicz34, R. Kopecna16, P. Koppenburg31, M. Korolev39, I. Kostiuk31,51, O.
Kot51, S. Kotriakhova37, L. Kravchuk40, R.D. Krawczyk47, M. Kreps55, F.
Kress60, S. Kretzschmar13, P. Krokovny42,x, W. Krupa34, W. Krzemien35, W.
Kucewicz33,l, M. Kucharczyk33, V. Kudryavtsev42,x, H.S. Kuindersma31, G.J.
Kunde66, T. Kvaratskheliya38, D. Lacarrere47, G. Lafferty61, A. Lai26, D.
Lancierini49, J.J. Lane61, G. Lanfranchi22, C. Langenbruch13, O. Lantwin49, T.
Latham55, F. Lazzari28,v, C. Lazzeroni52, R. Le Gac10, R. Lefèvre9, A.
Leflat39, O. Leroy10, T. Lesiak33, B. Leverington16, H. Li71, L. Li62, X.
Li66, Y. Li6, Z. Li67, X. Liang67, R. Lindner47, V. Lisovskyi14, G. Liu71, X.
Liu3, D. Loh55, A. Loi26, J. Lomba Castro45, I. Longstaff58, J.H. Lopes2, G.
Loustau49, G.H. Lovell54, Y. Lu6, D. Lucchesi27,o, M. Lucio Martinez31, Y.
Luo3, A. Lupato27, E. Luppi20,g, O. Lupton55, A. Lusiani28,t, X. Lyu5, S.
Maccolini19,e, F. Machefert11, F. Maciuc36, V. Macko48, P. Mackowiak14, S.
Maddrell-Mander53, L.R. Madhan Mohan53, O. Maev37,47, A. Maevskiy78, D.
Maisuzenko37, M.W. Majewski34, S. Malde62, B. Malecki47, A. Malinin76, T.
Maltsev42,x, H. Malygina16, G. Manca26,f, G. Mancinelli10, R. Manera
Escalero44, D. Manuzzi19,e, D. Marangotto25,q, J. Maratas9,w, J.F. Marchand8,
U. Marconi19, S. Mariani21, C. Marin Benito11, M. Marinangeli48, P. Marino48,
J. Marks16, P.J. Marshall59, G. Martellotti30, L. Martinazzoli47, M.
Martinelli24,i, D. Martinez Santos45, F. Martinez Vidal46, A. Massafferri1, M.
Materok13, R. Matev47, A. Mathad49, Z. Mathe47, V. Matiunin38, C. Matteuzzi24,
K.R. Mattioli80, A. Mauri49, E. Maurice11,b, M. McCann60, L. Mcconnell17, A.
McNab61, R. McNulty17, J.V. Mead59, B. Meadows64, C. Meaux10, G. Meier14, N.
Meinert74, D. Melnychuk35, S. Meloni24,i, M. Merk31, A. Merli25, M.
Mikhasenko47, D.A. Milanes73, E. Millard55, M.-N. Minard8, O. Mineev38, L.
Minzoni20,g, S.E. Mitchell57, B. Mitreska61, D.S. Mitzel47, A. Mödden14, A.
Mogini12, R.D. Moise60, T. Mombächer14, I.A. Monroy73, S. Monteil9, M.
Morandin27, G. Morello22, M.J. Morello28,t, J. Moron34, A.B. Morris10, A.G.
Morris55, R. Mountain67, H. Mu3, F. Muheim57, M. Mukherjee7, M. Mulder47, D.
Müller47, K. Müller49, C.H. Murphy62, D. Murray61, P. Muzzetto26, P. Naik53,
T. Nakada48, R. Nandakumar56, T. Nanut48, I. Nasteva2, M. Needham57, N.
Neri25,q, S. Neubert16, N. Neufeld47, R. Newcombe60, T.D. Nguyen48, C. Nguyen-
Mau48,n, E.M. Niel11, S. Nieswand13, N. Nikitin39, N.S. Nolte47, C. Nunez80,
A. Oblakowska-Mucha34, V. Obraztsov43, S. Ogilvy58, D.P. O’Hanlon53, R.
Oldeman26,f, C.J.G. Onderwater75, J. D. Osborn80, A. Ossowska33, J.M. Otalora
Goicochea2, T. Ovsiannikova38, P. Owen49, A. Oyanguren46, P.R. Pais48, T.
Pajero28,t, A. Palano18, M. Palutan22, G. Panshin79, A. Papanestis56, M.
Pappagallo57, L.L. Pappalardo20,g, C. Pappenheimer64, W. Parker65, C.
Parkes61, G. Passaleva21,47, A. Pastore18, M. Patel60, C. Patrignani19,e, A.
Pearce47, A. Pellegrino31, M. Pepe Altarelli47, S. Perazzini19, D. Pereima38,
P. Perret9, L. Pescatore48, K. Petridis53, A. Petrolini23,h, A. Petrov76, S.
Petrucci57, M. Petruzzo25,q, B. Pietrzyk8, G. Pietrzyk48, M. Pili62, D.
Pinci30, J. Pinzino47, F. Pisani19, A. Piucci16, V. Placinta36, S. Playfer57,
J. Plews52, M. Plo Casasus45, F. Polci12, M. Poli Lener22, M. Poliakova67, A.
Poluektov10, N. Polukhina77,c, I. Polyakov67, E. Polycarpo2, G.J. Pomery53, S.
Ponce47, A. Popov43, D. Popov52, S. Poslavskii43, K. Prasanth33, L.
Promberger47, C. Prouve45, V. Pugatch51, A. Puig Navarro49, H. Pullen62, G.
Punzi28,p, W. Qian5, J. Qin5, R. Quagliani12, B. Quintana8, N.V. Raab17, R.I.
Rabadan Trejo10, B. Rachwal34, J.H. Rademacker53, M. Rama28, M. Ramos
Pernas45, M.S. Rangel2, F. Ratnikov41,78, G. Raven32, M. Reboud8, F. Redi48,
F. Reiss12, C. Remon Alepuz46, Z. Ren3, V. Renaudin62, S. Ricciardi56, D.S.
Richards56, S. Richards53, K. Rinnert59, P. Robbe11, A. Robert12, A.B.
Rodrigues48, E. Rodrigues64, J.A. Rodriguez Lopez73, M. Roehrken47, S.
Roiser47, A. Rollings62, V. Romanovskiy43, M. Romero Lamas45, A. Romero
Vidal45, J.D. Roth80, M. Rotondo22, M.S. Rudolph67, T. Ruf47, J. Ruiz Vidal46,
A. Ryzhikov78, J. Ryzka34, J.J. Saborido Silva45, N. Sagidova37, N. Sahoo55,
B. Saitta26,f, C. Sanchez Gras31, C. Sanchez Mayordomo46, R. Santacesaria30,
C. Santamarina Rios45, M. Santimaria22, E. Santovetti29,j, G. Sarpis61, A.
Sarti30, C. Satriano30,s, A. Satta29, M. Saur5, D. Savrina38,39, L.G.
Scantlebury Smead62, S. Schael13, M. Schellenberg14, M. Schiller58, H.
Schindler47, M. Schmelling15, T. Schmelzer14, B. Schmidt47, O. Schneider48, A.
Schopper47, H.F. Schreiner64, M. Schubiger31, S. Schulte48, M.H. Schune11, R.
Schwemmer47, B. Sciascia22, A. Sciubba30,k, S. Sellam68, A. Semennikov38, A.
Sergi52,47, N. Serra49, J. Serrano10, L. Sestini27, A. Seuthe14, P. Seyfert47,
D.M. Shangase80, M. Shapkin43, L. Shchutska48, T. Shears59, L. Shekhtman42,x,
V. Shevchenko76,77, E. Shmanin77, J.D. Shupperd67, B.G. Siddi20, R. Silva
Coutinho49, L. Silva de Oliveira2, G. Simi27,o, S. Simone18,d, I. Skiba20, N.
Skidmore16, T. Skwarnicki67, M.W. Slater52, J.G. Smeaton54, A. Smetkina38, E.
Smith13, I.T. Smith57, M. Smith60, A. Snoch31, M. Soares19, L. Soares Lavra9,
M.D. Sokoloff64, F.J.P. Soler58, B. Souza De Paula2, B. Spaan14, E. Spadaro
Norella25,q, P. Spradlin58, F. Stagni47, M. Stahl64, S. Stahl47, P. Stefko48,
O. Steinkamp49, S. Stemmle16, O. Stenyakin43, M. Stepanova37, H. Stevens14, S.
Stone67, S. Stracka28, M.E. Stramaglia48, M. Straticiuc36, S. Strokov79, J.
Sun26, L. Sun72, Y. Sun65, P. Svihra61, K. Swientek34, A. Szabelski35, T.
Szumlak34, M. Szymanski47, S. Taneja61, Z. Tang3, T. Tekampe14, F. Teubert47,
E. Thomas47, K.A. Thomson59, M.J. Tilley60, V. Tisserand9, S. T’Jampens8, M.
Tobin6, S. Tolk47, L. Tomassetti20,g, D. Tonelli28, D. Torres Machado1, D.Y.
Tou12, E. Tournefier8, M. Traill58, M.T. Tran48, E. Trifonova77, C. Trippl48,
A. Trisovic54, A. Tsaregorodtsev10, G. Tuci28,47,p, A. Tully48, N. Tuning31,
A. Ukleja35, A. Usachov31, A. Ustyuzhanin41,78, U. Uwer16, A. Vagner79, V.
Vagnoni19, A. Valassi47, G. Valenti19, M. van Beuzekom31, H. Van Hecke66, E.
van Herwijnen47, C.B. Van Hulse17, M. van Veghel75, R. Vazquez Gomez44,22, P.
Vazquez Regueiro45, C. Vázquez Sierra31, S. Vecchi20, J.J. Velthuis53, M.
Veltri21,r, A. Venkateswaran67, M. Vernet9, M. Veronesi31, M. Vesterinen55,
J.V. Viana Barbosa47, D. Vieira64, M. Vieites Diaz48, H. Viemann74, X.
Vilasis-Cardona44,m, A. Vitkovskiy31, A. Vollhardt49, D. Vom Bruch12, A.
Vorobyev37, V. Vorobyev42,x, N. Voropaev37, R. Waldi74, J. Walsh28, J. Wang3,
J. Wang72, J. Wang6, M. Wang3, Y. Wang7, Z. Wang49, D.R. Ward54, H.M. Wark59,
N.K. Watson52, D. Websdale60, A. Weiden49, C. Weisser63, B.D.C. Westhenry53,
D.J. White61, M. Whitehead13, D. Wiedner14, G. Wilkinson62, M. Wilkinson67, I.
Williams54, M. Williams63, M.R.J. Williams61, T. Williams52, F.F. Wilson56, W.
Wislicki35, M. Witek33, L. Witola16, G. Wormser11, S.A. Wotton54, H. Wu67, K.
Wyllie47, Z. Xiang5, D. Xiao7, Y. Xie7, H. Xing71, A. Xu4, L. Xu3, M. Xu7, Q.
Xu5, Z. Xu4, Z. Yang3, Z. Yang65, Y. Yao67, L.E. Yeomans59, H. Yin7, J.
Yu7,aa, X. Yuan67, O. Yushchenko43, K.A. Zarebski52, M. Zavertyaev15,c, M.
Zdybal33, M. Zeng3, D. Zhang7, L. Zhang3, S. Zhang4, W.C. Zhang3,z, Y.
Zhang47, A. Zhelezov16, Y. Zheng5, X. Zhou5, Y. Zhou5, X. Zhu3, V.
Zhukov13,39, J.B. Zonneveld57, S. Zucchelli19,e.
1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
3Center for High Energy Physics, Tsinghua University, Beijing, China
4School of Physics State Key Laboratory of Nuclear Physics and Technology,
Peking University, Beijing, China
5University of Chinese Academy of Sciences, Beijing, China
6Institute Of High Energy Physics (IHEP), Beijing, China
7Institute of Particle Physics, Central China Normal University, Wuhan, Hubei,
China
8Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy,
France
9Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France
10Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France
11Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France , Orsay,
France
12LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3,
Paris, France
13I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany
14Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
15Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
16Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
17School of Physics, University College Dublin, Dublin, Ireland
18INFN Sezione di Bari, Bari, Italy
19INFN Sezione di Bologna, Bologna, Italy
20INFN Sezione di Ferrara, Ferrara, Italy
21INFN Sezione di Firenze, Firenze, Italy
22INFN Laboratori Nazionali di Frascati, Frascati, Italy
23INFN Sezione di Genova, Genova, Italy
24INFN Sezione di Milano-Bicocca, Milano, Italy
25INFN Sezione di Milano, Milano, Italy
26INFN Sezione di Cagliari, Monserrato, Italy
27INFN Sezione di Padova, Padova, Italy
28INFN Sezione di Pisa, Pisa, Italy
29INFN Sezione di Roma Tor Vergata, Roma, Italy
30INFN Sezione di Roma La Sapienza, Roma, Italy
31Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands
32Nikhef National Institute for Subatomic Physics and VU University Amsterdam,
Amsterdam, Netherlands
33Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of
Sciences, Kraków, Poland
34AGH - University of Science and Technology, Faculty of Physics and Applied
Computer Science, Kraków, Poland
35National Center for Nuclear Research (NCBJ), Warsaw, Poland
36Horia Hulubei National Institute of Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
37Petersburg Nuclear Physics Institute NRC Kurchatov Institute (PNPI NRC KI),
Gatchina, Russia
38Institute of Theoretical and Experimental Physics NRC Kurchatov Institute
(ITEP NRC KI), Moscow, Russia, Moscow, Russia
39Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow,
Russia
40Institute for Nuclear Research of the Russian Academy of Sciences (INR RAS),
Moscow, Russia
41Yandex School of Data Analysis, Moscow, Russia
42Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia
43Institute for High Energy Physics NRC Kurchatov Institute (IHEP NRC KI),
Protvino, Russia, Protvino, Russia
44ICCUB, Universitat de Barcelona, Barcelona, Spain
45Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de
Santiago de Compostela, Santiago de Compostela, Spain
46Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia -
CSIC, Valencia, Spain
47European Organization for Nuclear Research (CERN), Geneva, Switzerland
48Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL),
Lausanne, Switzerland
49Physik-Institut, Universität Zürich, Zürich, Switzerland
50NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
51Institute for Nuclear Research of the National Academy of Sciences (KINR),
Kyiv, Ukraine
52University of Birmingham, Birmingham, United Kingdom
53H.H. Wills Physics Laboratory, University of Bristol, Bristol, United
Kingdom
54Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
55Department of Physics, University of Warwick, Coventry, United Kingdom
56STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
57School of Physics and Astronomy, University of Edinburgh, Edinburgh, United
Kingdom
58School of Physics and Astronomy, University of Glasgow, Glasgow, United
Kingdom
59Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
60Imperial College London, London, United Kingdom
61Department of Physics and Astronomy, University of Manchester, Manchester,
United Kingdom
62Department of Physics, University of Oxford, Oxford, United Kingdom
63Massachusetts Institute of Technology, Cambridge, MA, United States
64University of Cincinnati, Cincinnati, OH, United States
65University of Maryland, College Park, MD, United States
66Los Alamos National Laboratory (LANL), Los Alamos, United States
67Syracuse University, Syracuse, NY, United States
68Laboratory of Mathematical and Subatomic Physics , Constantine, Algeria,
associated to 2
69School of Physics and Astronomy, Monash University, Melbourne, Australia,
associated to 55
70Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de
Janeiro, Brazil, associated to 2
71Guangdong Provencial Key Laboratory of Nuclear Science, Institute of Quantum
Matter, South China Normal University, Guangzhou, China, associated to 3
72School of Physics and Technology, Wuhan University, Wuhan, China, associated
to 3
73Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia,
associated to 12
74Institut für Physik, Universität Rostock, Rostock, Germany, associated to 16
75Van Swinderen Institute, University of Groningen, Groningen, Netherlands,
associated to 31
76National Research Centre Kurchatov Institute, Moscow, Russia, associated to
38
77National University of Science and Technology “MISIS”, Moscow, Russia,
associated to 38
78National Research University Higher School of Economics, Moscow, Russia,
associated to 41
79National Research Tomsk Polytechnic University, Tomsk, Russia, associated to
38
80University of Michigan, Ann Arbor, United States, associated to 67
aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil
bLaboratoire Leprince-Ringuet, Palaiseau, France
cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS),
Moscow, Russia
dUniversità di Bari, Bari, Italy
eUniversità di Bologna, Bologna, Italy
fUniversità di Cagliari, Cagliari, Italy
gUniversità di Ferrara, Ferrara, Italy
hUniversità di Genova, Genova, Italy
iUniversità di Milano Bicocca, Milano, Italy
jUniversità di Roma Tor Vergata, Roma, Italy
kUniversità di Roma La Sapienza, Roma, Italy
lAGH - University of Science and Technology, Faculty of Computer Science,
Electronics and Telecommunications, Kraków, Poland
mDS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain
nHanoi University of Science, Hanoi, Vietnam
oUniversità di Padova, Padova, Italy
pUniversità di Pisa, Pisa, Italy
qUniversità degli Studi di Milano, Milano, Italy
rUniversità di Urbino, Urbino, Italy
sUniversità della Basilicata, Potenza, Italy
tScuola Normale Superiore, Pisa, Italy
uUniversità di Modena e Reggio Emilia, Modena, Italy
vUniversità di Siena, Siena, Italy
wMSU - Iligan Institute of Technology (MSU-IIT), Iligan, Philippines
xNovosibirsk State University, Novosibirsk, Russia
yINFN Sezione di Trieste, Trieste, Italy
zSchool of Physics and Information Technology, Shaanxi Normal University
(SNNU), Xi’an, China
aaPhysics and Micro Electronic College, Hunan University, Changsha City, China
abUniversidad Nacional Autonoma de Honduras, Tegucigalpa, Honduras
# GenNet: Reading Comprehension with Multiple Choice Questions using
Generation and Selection model
Vaishali Ingale
Department of Information Technology
Army Institute of Technology, Pune
<EMAIL_ADDRESS>
Pushpender Singh
Department of Information Technology
Army Institute of Technology, Pune
<EMAIL_ADDRESS>
###### Abstract
Multiple-choice machine reading comprehension is a difficult task: it
requires a machine to select the correct option from a set of candidate
answers using a given passage and question. In the Reading Comprehension with
Multiple Choice Questions task, a human (or machine) must read a given
passage-question pair and select the best option out of n given options.
There are two ways to select the correct answer from the given passage:
by selecting the best-matching answer, or by eliminating the worst-matching
answers. Here we propose GenNet, a neural network-based model. In this model
we first generate an answer to the question from the passage and then match
the generated answer with the given options; the best-matching option is
chosen as the answer. For answer generation we use the S-Net model (Tan et
al., 2017) trained on SQuAD, and to evaluate our model we use the large-scale
RACE dataset (ReAding Comprehension Dataset From Examinations) (Lai et al.,
2017).
## 1 Introduction
Reading comprehension is one of the fundamental skills for humans, one which
we learn systematically from elementary school onward. Reading comprehension
gives humans the ability to read texts, understand their meanings, and answer
questions with the help of the given context. When machines are required to
comprehend texts, they first need to understand the unstructured text and do
reasoning based on the given text (Chen et al., 2016; Wang et al., 2018b).
Answering questions based on a passage requires a unique skill set: the
ability to perform basic mathematical and logical operations (e.g., to answer
questions like "How many times did Amit visit the sweet shop?"), look-up
ability, the ability to deduce, and the ability to gather information
contained in multiple sentences and passages. This diverse and unique skill
set makes question answering a challenging task. There are several variants
of this task. For example, given a passage and a question, the answer could
(i) be generated from the passage, (ii) match some span in the passage, or
(iii) be one of n given candidate answers. The last variant is widely used in
middle school, high school, and various competitive examinations, and is
generally referred to as Reading Comprehension with Multiple Choice Questions
(RC-MCQ). In Figure 1 we have a passage, a question, and 4 candidate answers;
the task is to find the most suitable answer to the given question from the
passage. While answering such Multiple Choice Questions (MCQs, Figure 1),
humans typically use a combination of option elimination and option
selection, or sometimes they find the answer in the passage: they generate an
answer to the question from the passage, match the generated answer with the
given options, and choose the closest candidate as the correct answer.
Here we propose a model that mimics this human process of answer generation
followed by matching. First, the span of the passage where a possible answer
lies is computed: we compute a question-aware representation of the passage
(which essentially tries to retain the portions of the passage that are
relevant to the question). Then we generate the answer using the
state-of-the-art S-Net model (Tan et al., 2017), which extracts and generates
the answer (Figure 2). Once we have an answer generated from the passage, we
score every given candidate option and select the best-matching option. That
best-matching option is our answer (Figure 3).
Figure 1: An example multiple-choice reading comprehension question. Figure
2: Overview of S-Net.(Tan et al.,, 2017) Figure 3: Overview of option matching
and selection.
## 2 Related Work
Datasets have played an important role in machine reading comprehension, and
different types of datasets have been designed to address different variants
of the task. The SQuAD dataset (Rajpurkar et al., 2016) was designed for
reading comprehension that aims to answer a question with an exact text span
in a passage. Later, the MS MARCO dataset (Nguyen et al., 2016) was designed
for multi-passage reading comprehension. The CNN/Daily Mail (Chen et al.,
2016) and Who-did-What (Onishi et al., 2016) datasets were designed for the
cloze variant of reading comprehension. The MCTest (Richardson et al., 2013)
and RACE (Lai et al., 2017) datasets were released for the multiple-choice
question variant of reading comprehension.
Related work on the multiple-choice variant of reading comprehension includes
the Hierarchical Attention Flow model (Zhu et al., 2018), in which the
candidate options are leveraged to model the interaction among question,
options, and passage; this is an option-selection model that picks the
correct option from the given candidates. Also related is the
option-elimination model (Parikh et al., 2019), which eliminates wrong
answers from the candidate set. The Multi-Matching Network (Tang et al.,
2019) models the interactions among passage, question, and candidate answers,
taking different matching paradigms into account. The Option Comparison
Network (Ran et al., 2019) compares options at the word level and identifies
correlations between them to help build up logic and reasoning. The
co-matching model (Wang et al., 2018a) matches the question and each answer
against the passage, capturing the relationship of the question-answer pair
with the passage. A bidirectional co-matching based model (Zhang et al.,
2019) matches passage, question, and answer bidirectionally. The
Convolutional Spatial Attention (CSA) model (Chen et al., 2019) forms an
enriched representation by fully extracting the mutual information among the
passage, question, and candidates.
Several models exist for answer generation. QANet (Yu et al., 2018) combines
local convolution with global self-attention, and its encoder consists
exclusively of convolution and self-attention. The Bidirectional Attention
Flow model (Seo et al., 2016) is used to focus on large spans of the passage;
the BiDAF network is a multi-stage hierarchical process that uses
bidirectional attention flow to obtain a query-aware context representation.
We chose S-Net as our answer-generation model because S-Net not only finds
the answer in the passage but can also synthesize the passage when required.
Some questions are tricky, and their answers lie in different spans of the
passage; in such situations S-Net is useful, since its GRU building blocks
allow it to remember past context over longer ranges.
## 3 Proposed model
Two tasks need to be performed in this model: first answer extraction and
answer synthesis/generation, and then option selection. Answer extraction and
generation are done using the state-of-the-art S-Net model (Tan et al.,
2017). S-Net first pulls out evidence snippets by matching the question and
passage, and then generates the answer from the question, passage, and
evidence snippets. Consider a passage $P=[p_{1},p_{2},p_{3},...p_{p}]$ of
word length P, a question $Q=[Q_{1},Q_{2},Q_{3},...Q_{q}]$ of word length Q,
and n options $Z_{n}=[z_{1},z_{2},z_{3},...z_{k}]$, where n > 1, each of word
length k. We first convert the words to their word-level and character-level
embeddings using GloVe (Pennington et al., 2014). The embedding and encoding
layers take in a series of tokens and represent it as a series of vectors.
The character-level embeddings are obtained by taking the final hidden states
of a bi-directional GRU applied to the embeddings of the characters in the
token. A bi-directional Gated Recurrent Unit is then used to produce new
representations $u_{1}^{p},u_{2}^{p},u_{3}^{p},...u_{p}^{p}$ for the passage,
$u_{1}^{q},u_{2}^{q},u_{3}^{q},...u_{q}^{q}$ for the question, and
$u_{1}^{z},u_{2}^{z},u_{3}^{z},...u_{z}^{z}$ for the options. The embedding
matrix is initialized only once and is not trained during the learning
process. As shown in Figure 4, S-Net uses a sequence-to-sequence model to
incorporate the extracted evidence as features when synthesizing the answer.
It first produces representations $h_{t}^{p}$ and $h_{t}^{q}$ of all words in
the passage and question, respectively. To produce the answer representation,
it merges the basic word embedding $e_{t}^{p}$ with two added features
$f_{t}^{s}$ and $f_{t}^{e}$ that indicate the start and end positions of the
evidence snippet given by the evidence extraction model: $f_{t}^{s}=1$ and
$f_{t}^{e}=1$ mean that position t is the start and end of the evidence span,
respectively.
$h_{t}^{p}=BiGRU(h_{t-1}^{p},[e_{t}^{p},f_{t}^{s},f_{t}^{e}])$ (1)
$h_{t}^{q}=BiGRU(h_{t-1}^{q},e_{t}^{q})$ (2)
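As a rough illustration of Eqs. (1) and (2), the PyTorch sketch below
concatenates the evidence start/end indicator features with the passage word
embeddings before a bi-directional GRU; the class name, dimensions, and
interface are our own assumptions, not the original S-Net code.

```python
import torch
import torch.nn as nn

class EvidenceAwareEncoder(nn.Module):
    """BiGRU encoder; passage embeddings are concatenated with the
    start/end evidence indicator features f_s and f_e (Eq. 1)."""

    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.passage_gru = nn.GRU(emb_dim + 2, hidden,
                                  bidirectional=True, batch_first=True)
        self.question_gru = nn.GRU(emb_dim, hidden,
                                   bidirectional=True, batch_first=True)

    def forward(self, e_p, f_s, f_e, e_q):
        # e_p: (batch, P, emb_dim) passage word embeddings
        # f_s, f_e: (batch, P) 0/1 evidence start/end indicators
        # e_q: (batch, Q, emb_dim) question word embeddings
        p_in = torch.cat([e_p, f_s.unsqueeze(-1), f_e.unsqueeze(-1)], dim=-1)
        h_p, _ = self.passage_gru(p_in)     # Eq. (1)
        h_q, _ = self.question_gru(e_q)     # Eq. (2)
        return h_p, h_q
```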
On top of the encoder, S-Net uses a GRU with attention as the decoder to
produce the answer. At each decoding time step t, the GRU reads the previous
word embedding $w_{t-1}$ and the previous context vector $c_{t-1}$, producing
the answer word by word.
Figure 4: Answer Synthesis/Generation Model(Tan et al.,, 2017)
The produced answer is stored in an answer vector
$A_{n}=[a_{1},a_{2},a_{3},...a_{a}]$, where a is the length of the answer.
Figure 3 shows an overview of the selection module. The selection module
takes the refined answer representation $a_{t}$ and computes its bi-linear
similarity with each option representation:
$score(i)=a_{t}W_{att}z_{t_{i}}$ (3)
where i indexes the options, $a_{t}$ is the generated answer vector,
$z_{t_{i}}$ is the option vector, and $W_{att}$ is a matrix that needs to be
learned. We select the option with the highest score as computed above. We
train the model using the cross-entropy loss, first normalizing the above
scores (using softmax) to obtain a probability distribution.
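A minimal sketch of this selection module, assuming the generated answer and
each option have already been pooled into fixed-size vectors; only the
bilinear matrix $W_{att}$ is learned here, and training applies softmax
cross-entropy over the option scores exactly as described above.

```python
import torch
import torch.nn as nn

class OptionSelector(nn.Module):
    """Bilinear scoring of a generated-answer vector against each option
    (Eq. 3), trained with cross-entropy over softmax-normalised scores."""

    def __init__(self, dim=256):
        super().__init__()
        self.w_att = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.w_att)
        self.loss_fn = nn.CrossEntropyLoss()  # applies softmax internally

    def forward(self, answer, options, gold=None):
        # answer: (batch, dim), options: (batch, n_options, dim)
        scores = torch.einsum("bd,de,bne->bn", answer, self.w_att, options)
        loss = self.loss_fn(scores, gold) if gold is not None else None
        return scores.argmax(dim=-1), scores, loss
```

The einsum contracts the answer vector with $W_{att}$ and every option at
once, so the prediction is simply the argmax over the n option scores.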
## 4 Experimental Setup
Here we discuss the dataset used to evaluate our model, the training
procedure, the results, and future work.
### 4.1 Dataset
We evaluate our model on the RACE dataset (Lai et al., 2017). RACE is a
large-scale reading comprehension dataset with more than 28,000 passages and
nearly 100,000 questions, collected from English examinations in China that
are designed for middle school and high school students. Each passage is
stored as a JSON file with the following fields: (i) article: a string
containing the passage; (ii) questions: a list of strings, each of which is a
query; there are two types of questions, interrogative sentences and
cloze-style queries whose placeholder is represented by _; (iii) options: a
list of option lists, where each option list contains 4 strings, the
candidate options; (iv) answers: a list containing the gold label of each
query; (v) id: a unique id for each passage in the dataset. RACE has a wide
variety of question types, such as summarization, inference, deduction, and
context matching.
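To make the record layout concrete, the short sketch below reads one
RACE-style JSON file and pairs each question with its options and gold label;
the file path is a placeholder, and we assume (as in the released RACE data)
that the gold labels are the letters A-D.

```python
import json

# Hypothetical path to a single RACE passage file (the files are JSON).
with open("RACE/train/middle/1.txt") as f:
    record = json.load(f)

passage = record["article"]                  # (i) the passage text
for question, options, answer in zip(record["questions"],
                                     record["options"],
                                     record["answers"]):
    gold = options["ABCD".index(answer)]     # map letter label to option
    print(question, "->", gold)
```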
Figure 5: Statistics on reasoning types in the RACE dataset
### 4.2 Training Procedure and Hyper-parameters
We integrate two different models into one. First we train the S-Net
component; to do so, we process the dataset differently, considering only the
passage, the question, and the correct option. We then pass the result to the
next stage of our model, where we train using the generated answer and all
candidate options. To train the model, we use stochastic gradient descent
with the ADAM optimizer (Kingma and Ba, 2014) and an initial learning rate of
0.005. Gradients are clipped in L2-norm to no larger than 10. A mini-batch of
32 samples is used to update the model parameters at each step. We create a
vocabulary from the top 65k words in the passages and questions; any
out-of-vocabulary (OOV) word encountered is replaced by a special token UNK.
We use the same vocabulary for the passage, question, and option embeddings.
We tune all our models based on the accuracy achieved on the validation set.
We use 300-dimensional GloVe embeddings (Pennington et al., 2014) for word
embedding and word and character encoding, and we experiment with both
fine-tuning and freezing these embeddings. We train all our models for up to
80 epochs, as we do not see any benefit from training beyond 80 epochs. The
hidden state size of all GRU networks is 128. We apply dropout (Srivastava et
al., 2014) to the word embeddings and the BiGRU outputs with a drop rate of
0.45.
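A compact sketch of this training configuration, assuming a PyTorch-style
`model` whose forward pass returns the cross-entropy loss; both `model` and
`train_loader` are placeholders, not the original implementation.

```python
import torch

# Hyper-parameters as stated above.
EPOCHS, BATCH_SIZE = 80, 32
LEARNING_RATE, GRAD_CLIP, DROP_RATE = 0.005, 10.0, 0.45
VOCAB_SIZE, EMB_DIM, HIDDEN_SIZE = 65_000, 300, 128  # top-65k vocab + UNK

def train(model, train_loader):
    # Stochastic gradient descent with the ADAM optimizer.
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    for epoch in range(EPOCHS):
        for batch, gold in train_loader:      # mini-batches of 32 samples
            optimizer.zero_grad()
            loss = model(batch, gold)         # softmax cross-entropy loss
            loss.backward()
            # Clip gradients to an L2-norm of at most 10.
            torch.nn.utils.clip_grad_norm_(model.parameters(), GRAD_CLIP)
            optimizer.step()
```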
### 4.3 Results and Future Work
Model | RACE-Mid | RACE-High | RACE
---|---|---|---
Random* | 24.6 | 25.0 | 24.9
Sliding Window* | 37.3 | 30.4 | 32.2
GA Reader (100D)* | 43.7 | 44.2 | 44.1
Stanford AR (100D)* | 44.2 | 43.0 | 43.3
GenNet | 79.6 | 75.4 | 77.3
Table 1: Accuracy on the test sets of RACE-M, RACE-H, and RACE. * indicates
results from (Lai et al., 2017), trained with 100D pre-trained GloVe word
embeddings.
The human ceiling performance reported by CMU on the RACE dataset is 94.2.
Our model achieves an accuracy of 79.6% on RACE-M, 75.4% on RACE-H, and 77.3%
on RACE-FULL, which outperforms several other models. Since this model first
generates an answer and then selects an option, it can also be used for
multiple-choice questions whose correct answer is not present among the
options, such as MCQs with "none of the above" or "no answer" choices.
## 5 Conclusion
In this paper, we present the GenNet model for multiple-choice reading
comprehension. Specifically, the model uses a combination of generation and
selection to arrive at the correct option. This is achieved by first
generating an answer to the question from the passage and then matching the
generated answer with the options. The proposed model achieves overall
state-of-the-art accuracy on RACE and significantly outperforms neural
network baselines on RACE-M, RACE-H, and RACE-FULL. As future work, we would
like to address unanswerable questions and questions where no option matches.
## References
* Chen et al., (2016) Chen, D., Bolton, J., and Manning, C. D. (2016). A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858.
* Chen et al., (2019) Chen, Z., Cui, Y., Ma, W., Wang, S., and Hu, G. (2019). Convolutional spatial attention model for reading comprehension with multiple-choice questions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6276–6283.
* Kingma and Ba, (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
* Lai et al., (2017) Lai, G., Xie, Q., Liu, H., Yang, Y., and Hovy, E. (2017). Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683.
* Nguyen et al., (2016) Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., and Deng, L. (2016). Ms marco: a human-generated machine reading comprehension dataset.
* Onishi et al., (2016) Onishi, T., Wang, H., Bansal, M., Gimpel, K., and McAllester, D. (2016). Who did what: A large-scale person-centered cloze dataset. arXiv preprint arXiv:1608.05457.
* Parikh et al., (2019) Parikh, S., Sai, A. B., Nema, P., and Khapra, M. M. (2019). Eliminet: A model for eliminating options for reading comprehension with multiple choice questions. arXiv preprint arXiv:1904.02651.
* Pennington et al., (2014) Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
* Rajpurkar et al., (2016) Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
* Ran et al., (2019) Ran, Q., Li, P., Hu, W., and Zhou, J. (2019). Option comparison network for multiple-choice reading comprehension. arXiv preprint arXiv:1903.03033.
* Richardson et al., (2013) Richardson, M., Burges, C. J., and Renshaw, E. (2013). Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203.
* Seo et al., (2016) Seo, M., Kembhavi, A., Farhadi, A., and Hajishirzi, H. (2016). Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.
* Srivastava et al., (2014) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958.
* Tan et al., (2017) Tan, C., Wei, F., Yang, N., Du, B., Lv, W., and Zhou, M. (2017). S-net: From answer extraction to answer generation for machine reading comprehension. arXiv preprint arXiv:1706.04815.
* Tang et al., (2019) Tang, M., Cai, J., and Zhuo, H. H. (2019). Multi-matching network for multiple choice reading comprehension. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7088–7095.
* (16) Wang, S., Yu, M., Chang, S., and Jiang, J. (2018a). A co-matching model for multi-choice reading comprehension. arXiv preprint arXiv:1806.04068.
* (17) Wang, Y., Li, R., Zhang, H., Tan, H., and Chai, Q. (2018b). Using sentence-level neural network models for multiple-choice reading comprehension tasks. Wireless Communications and Mobile Computing, 2018.
* Yu et al., (2018) Yu, A. W., Dohan, D., Luong, M.-T., Zhao, R., Chen, K., Norouzi, M., and Le, Q. V. (2018). Qanet: Combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541.
* Zhang et al., (2019) Zhang, S., Zhao, H., Wu, Y., Zhang, Z., Zhou, X., and Zhou, X. (2019). Dual co-matching network for multi-choice reading comprehension. arXiv preprint arXiv:1901.09381.
* Zhu et al., (2018) Zhu, H., Wei, F., Qin, B., and Liu, T. (2018). Hierarchical attention flow for multiple-choice reading comprehension. In Thirty-Second AAAI Conference on Artificial Intelligence.
# Not all disadvantages are equal: Racial/ethnic minority students have
largest disadvantage of all demographic groups in both STEM and non-STEM GPA
Kyle M. Whitcomb Department of Physics and Astronomy, University of
Pittsburgh, Pittsburgh, PA, 15260 Chandralekha Singh Department of Physics
and Astronomy, University of Pittsburgh, Pittsburgh, PA, 15260
###### Abstract
An analysis of institutional data to understand the outcome of the many
obstacles faced by students from historically disadvantaged backgrounds is
important in order to work towards promoting equity and inclusion for all
students. We use 10 years of institutional data at a large public research
university to investigate the grades earned (both overall and in STEM courses
only) by students categorized on four demographic characteristics: gender,
race/ethnicity, low-income status, and first-generation college student
status. We find that on average across all years of study and for all clusters
of majors, underrepresented minority students experience a larger penalty to
their mean overall and STEM GPA than even the most disadvantaged non-URM
students. Moreover, the underrepresented minority students with additional
disadvantages due to socioeconomic status or parental education level were
even further penalized in their average GPA. Furthermore, we also find that
while women in all demographic groups had a higher average overall GPA, these
gender differences are almost completely non-existent in STEM GPA except among
the most privileged students. These findings suggest that there is need to
provide support to bridge the gaps that emanate from historical disadvantages
to certain groups.
## I Introduction and Theoretical Framework
The importance of evidence-based approaches to improving student learning and
ensuring that all students have the opportunity to excel regardless of their
background is becoming increasingly recognized by Science, Technology,
Engineering, and Mathematics (STEM) departments across the US Johnson (2012);
Johnson _et al._ (2017); Maltese and Tai (2011); Borrego _et al._ (2008);
Borrego and Bernhard (2011); Borrego and Henderson (2014); Henderson and Dancy
(2008); Dancy and Henderson (2010); Henderson _et al._ (2012). With advances
in digital technology in the past few decades, institutions have been keeping
increasingly large digital databases of student records. We have now reached
the point where there is sufficient data available for robust statistical
analyses using data analytics that can provide valuable information useful for
transforming learning for all students Baker and Inventado (2014); Papamitsiou
and Economides (2014). This has led to many recent studies utilizing many
years of institutional data to perform analyses that were previously limited
by statistical power Lord _et al._ (2009, 2015); Ohland and Long (2016); Matz
_et al._ (2017); Witherspoon and Schunn (2019). Therefore, here we focus on
harnessing institutional data to investigate the obstacles faced by students
from disadvantaged backgrounds in their pursuit of higher education.
The theoretical framework for this study has two main foundations: critical
theory and intersectionality. Critical theories of race, gender, etc. identify
historical sources of inequity within society, that is, societal norms that
perpetuate obstacles to the success of certain groups of disadvantaged people
Crenshaw _et al._ (1995); Kellner (2003); Yosso (2005); Gutiérrez (2009);
Taylor _et al._ (2009); Tolbert _et al._ (2018); Schenkel and Calabrese
Barton (2020). Critical theory tells us that the dominant group in a society
perpetuates these norms, which are born out of their interests, and pushes
back against support systems that seek to subvert these norms Crenshaw _et
al._ (1995); Kellner (2003); Yosso (2005). These highly problematic societal
norms are founded in the historical oppression of various groups of people,
and manifest today in many ways including economic disadvantages, stereotypes
about who can succeed in certain career paths, and racist and/or sexist
barriers to opportunity, including educational advancement. While these norms
are, by definition, specific to a particular culture or even country, they are
nonetheless pervasive and oppressive and demand attention to rectify these
historical wrongs.
Much important work has been done on building critical race and/or gender
theories of STEM education Johnson (2012); Johnson _et al._ (2017); Solorzano
_et al._ (2000); Lewis _et al._ (2009); Bang and Medin (2010); Estrada _et
al._ (2018); Ong _et al._ (2018); Tolbert _et al._ (2018); Green _et al._
(2019); Mutegi _et al._ (2019); Sheth (2019); Schenkel and Calabrese Barton
(2020). In one study, Bancroft (2018) lays out a “critical capital theory,”
using varying forms of capital (economic, social, and cultural) to examine
persistence through graduation in STEM doctoral programs and to contextualize
the mechanisms behind racial inequities in STEM education Bancroft (2018). The
idea that race, gender, or another demographic characteristic alone cannot
fully explain the intricacies of the obstacles that students face is rooted in
the framework of intersectionality Crenshaw (1990); Cho _et al._ (2013);
Mitchell _et al._ (2014); Charleston _et al._ (2014); Morton and Parsons
(2018). In particular, the combination of different aspects of an individual’s
social identity (e.g., gender, race, first-generation college status, and
socioeconomic status) leads to unique levels of disadvantages that cannot be
explained by simply adding together the effects of the individual components
of identity Crenshaw (1990). For example, according to the framework of
intersectionality, in many STEM disciplines where the societal norm expects
that students are white men, the experience of a black woman is not a simple
sum of the experiences of white women and black men Charleston _et al._
(2014); Morton and Parsons (2018).
With an eye toward this intersectional approach to critical theory, we seek to
understand the relationship between four different aspects of student identity
that can lead to obstacles in STEM education: race/ethnicity, gender, low-
income status, and first-generation college student status. The students
disadvantaged by low-income or first-generation status are likely to
experience a lack of resources relative to their more privileged peers Lam
_et al._ (2005); Dika and D’Amico (2016); Katrevich and Aruguete (2017). Women
and underrepresented minority students are susceptible to additional stress
and anxiety from stereotype threat (i.e., the fear of confirming stereotypes
pertaining to their identity) which is not experienced by their majority group
peers Lewis _et al._ (2009); Johnson (2012); Green _et al._ (2019); Mutegi
_et al._ (2019); Sheth (2019); Astin (1993); Cross (1993); Felder _et al._
(1995, 1998); Bianchini _et al._ (2002); Britner and Pajares (2006);
Bianchini (2013); Basile and Lopez (2015); Cheryan _et al._ (2017); Hilts
_et al._ (2018). In summary, the different mechanisms by which students
belonging to each demographic characteristic can be disadvantaged are as
follows.
* •
Race/Ethnicity: Students belonging to underrepresented minority (URM) groups
may experience stereotype threat that causes anxiety and robs the students of
their cognitive resources, particularly during high-stakes testing.
* •
Gender: There are pervasive societal biases against women succeeding in many
STEM disciplines which can result in stereotype threat.
* •
Low-Income Status: Low-Income (LI) students are more likely to need to work to
support themselves, reducing their time and energy available to devote to
their studies, in addition to anxiety due to the financial burden of attending
college. These burdens are in addition to other factors that low-income
students may be more likely to face, such as lower quality preparation for
college.
* •
First-Generation Status: First-Generation (FG) students may lack the resources
of encouragement, advice, and support that are available more readily to
students with degree-holding parents. This lack of resources can make FG
students more susceptible to the stress of the unknown in college.
All of these mechanisms can produce an inequitable learning environment
wherein students belonging to any of these groups are forced to work against
obstacles that their peers do not have. The framework of intersectionality
asserts that for students that belong to more than one of these groups,
complex interactions between these different obstacles can result in
compounded disadvantages that are not a simple sum of the individual effects
Crenshaw (1990); Cho _et al._ (2013); Mitchell _et al._ (2014); Charleston
_et al._ (2014); Morton and Parsons (2018).
In order to measure the long-term effects of these systemic disadvantages, we
will investigate the academic achievement of students belonging to these
various demographic groups over the course of their studies at one large
public research university using 10 years of institutional data. By grouping
students according to their demographic background, we will be able to
investigate how different combinations of obstacles affect student grade point
averages.
## II Research Questions
Our research questions regarding the intersectional relationships between
demographic characteristics and academic achievement are as follows.
1. RQ1.
Are there differences in the overall or STEM grades earned by students
belonging to different demographic groups (i.e., underrepresented minority,
low-income status, and first-generation college student status)?
2. RQ2.
Do any patterns observed in RQ1 differ for men and women?
3. RQ3.
Do grades earned in STEM courses alone exhibit similar demographic patterns as
grades earned in all courses?
4. RQ4.
What are the trends over time in the mean GPA of these different demographic
groups among different clusters of majors (i.e., computer science,
engineering, mathematics, and physical science majors, other STEM majors, and
non-STEM majors)?
## III Methodology
### III.1 Sample
Using the Carnegie classification system, the university at which this study
was conducted is a public, high-research doctoral university, with balanced
arts and sciences and professional schools, and a large, primarily residential
undergraduate population that is full-time and reasonably selective with low
transfer-in from other institutions Indiana University Center for
Postsecondary Research (2018).
With Institutional Review Board approval, the university provided the
de-identified institutional data records of students for analysis. In this study,
we examined these records for $N=24,567$ undergraduate students enrolled in
three colleges within the university: the colleges of Arts and Sciences,
Computing and Information, and Engineering. This sample of students includes
all of those from ten cohorts who met several selection criteria, namely that
the student had first enrolled at the university in a Fall semester from Fall
2005 to Fall 2014, inclusive, and the institutional data on the student was
not missing or unspecified for any of the following measures: gender,
race/ethnicity, parental education level, and family income. This sample of
students is $50\%$ female and has the following racial/ethnic makeup: 79% White,
9% Asian, 7% Black, 3% Hispanic, and 2% other or multiracial. Further, this
sample is $16\%$ first-generation college students and $21\%$ “low-income”
students (to be defined in the following section).
We acknowledge that gender is not a binary construct; however, in self-
reporting their gender to the university, students were given the options of
“male” or “female,” and so those are the two self-reported genders that we are
able to analyze. There were $39$ students who met all other selection criteria
but did not indicate any gender; these students were removed from the sample
and are not included in the reported sample size or any analyses.
### III.2 Measures
#### III.2.1 Demographic Characteristics
Four primary measures are the demographic characteristics mentioned in the
previous section, namely gender, race/ethnicity, parental education level, and
family income. All of these were converted into binary categories intended to
distinguish between the most and least privileged students on each measure.
* •
Gender. Gender was reported as a binary category to begin with (either “male”
or “female”), so no further steps were required.
* •
First-generation. Students for whom both parents had a highest completed level
of education of high school or lower were grouped together as “first-
generation” (FG) college students and correspondingly students for whom at
least one parent had earned a college degree were labeled non-FG.
* •
Low-income. Students whose reported family Adjusted Gross Income (AGI) was at
or below 200% of the federal U.S. poverty line were categorized as “low-
income” (LI), and those above 200% of the poverty line as non-LI Cauthen and
Fass (2007); Jiang _et al._.
* •
Underrepresented minority. All students who identified as any race or
ethnicity other than White or Asian were grouped together as “underrepresented
minority” (URM) students, including multiracial students who selected White
and/or Asian in addition to another demographic option. Students who only
identified as White and/or Asian students were categorized as non-URM
students.
#### III.2.2 Academic Performance
Measures of student academic performance were also included in the provided
data. High school GPA was provided by the university on a weighted scale from
0-5 that includes adjustments to the standard 0-4 scale for Advanced Placement
and International Baccalaureate courses. The data also include the grade
points earned by students in each course taken at the university. Grade points
are on a 0-4 scale with $\text{A}=4$, $\text{B}=3$, $\text{C}=2$,
$\text{D}=1$, $\text{F}=0$, where the suffixes “$+$” and “$-$” add or
subtract, respectively, $0.25$ grade points (e.g. $\text{B}-=2.75$), with the
exception of $\text{A}+$ which is reported as the maximum 4 grade points. The
courses were categorized as either STEM or non-STEM courses, with STEM courses
being those courses taken from any of the following departments: biological
sciences, chemistry, computer science, economics, any engineering department,
geology and environmental science, mathematics, neuroscience, physics and
astronomy, and statistics. We note that for the purposes of this paper, “STEM”
does not include the social sciences other than economics, which has been
included due to its mathematics-intensive content.
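To make the grade-point scale concrete, the following is a minimal sketch of
the conversion described above (our illustration, not the authors' code; the
function name and the string representation of letter grades are assumptions):

```python
# Illustrative only: convert a letter grade to grade points on the 0-4 scale,
# where "+" adds and "-" subtracts 0.25 points, except A+ which is capped at 4.
BASE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def grade_points(letter: str) -> float:
    points = BASE_POINTS[letter[0]]
    if letter.endswith("+"):
        points += 0.25
    elif letter.endswith("-"):
        points -= 0.25
    return min(points, 4.0)  # A+ is reported as the maximum 4 grade points

assert grade_points("B-") == 2.75 and grade_points("A+") == 4.0
```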
#### III.2.3 Year of Study
Finally, the year in which the students took each course was calculated from
the students’ starting term and the term in which the course was taken. Since
the sample only includes students who started in fall semesters, each “year”
contains courses taken in the fall and subsequent spring semesters, with
courses taken over the summer omitted from this analysis. For example, if a
student first enrolled in Fall 2007, then their “first year” occurred during
Fall 2007 and Spring 2008, their “second year” during Fall 2008 and Spring
2009, and so on in that fashion. If a student is missing both a fall and
spring semester during a given year but subsequently returns to the
university, the numbering of those post-hiatus years is reduced accordingly.
If instead a student is only missing one semester during a given year, no
corrections are made to the year numbering. In this study we consider up
through the students’ sixth year of study or the end of their enrollment at
the studied institution, whichever comes first.
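As a sketch of this bookkeeping (our own illustration; the post-hiatus
renumbering described above is omitted for brevity):

```python
# Hypothetical helper: each "year" spans a Fall semester and the following
# Spring, so a Spring course belongs to the academic year that began the
# previous Fall.  Summer courses are omitted, as in the analysis.
def year_of_study(start_fall_year: int, term: str, term_year: int) -> int:
    academic_year = term_year - 1 if term == "Spring" else term_year
    return academic_year - start_fall_year + 1

# A student who first enrolled in Fall 2007:
assert year_of_study(2007, "Fall", 2007) == 1    # first year
assert year_of_study(2007, "Spring", 2008) == 1  # still first year
assert year_of_study(2007, "Fall", 2008) == 2    # second year
```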
### III.3 Analysis
We primarily grouped students in this analysis by their set of binary
demographic categories. This grouping was performed in two
different ways. First, use of all four binary categories (gender, FG, LI, URM)
resulted in sixteen mutually exclusive groups (e.g., “female, FG+URM” or
“male, LI”). Second, use of all categories except gender resulted in eight
mutually exclusive categories.
We calculated each student’s yearly (i.e., not cumulative) grade point average
(GPA) across courses taken in each year of study from the first to sixth
years. In addition, we calculated the student’s yearly STEM GPA, that is, the
GPA in STEM courses alone. Then, using the aforementioned grouping schemes, we
computed the mean GPA in each demographic group as well as the standard error
of the mean separately for each year of study Freedman _et al._ (2007).
Further, in the case of grouping by gender, we computed the effect size of the
gender differences within each demographic group using Cohen’s $d$, which is
typically interpreted using minimum cutoff values for “small” ($d=0.20$),
“medium” ($d=0.50$), and “large” ($d=0.80$) effect sizes Cohen (1988); Neter
_et al._ (2004); Montgomery _et al._ (2012).
All analyses were conducted using R R Core Team (2019), making use of the
package tidyverse Wickham (2017) for data manipulation and plotting.
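The analyses were conducted in R with tidyverse; purely as an illustration, an
equivalent computation in Python/pandas might look like the following sketch
(the column names and function names are our assumptions, not the authors'
code):

```python
import numpy as np
import pandas as pd

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation (Cohen, 1988)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Assumed columns of `df`: 'group' (one of the eight FG/LI/URM combinations),
# 'gender' ('female'/'male'), and 'gpa' (the student's yearly GPA).
def group_summary(df):
    stats = df.groupby("group")["gpa"].agg(["mean", "sem", "count"])
    # Effect size of the gender difference within each group (women - men)
    d = df.groupby("group").apply(
        lambda g: cohens_d(g.loc[g.gender == "female", "gpa"],
                           g.loc[g.gender == "male", "gpa"]))
    return stats.assign(cohens_d=d)
```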
## IV Results
### IV.1 GPA Trends by Demographic Group: “Dinosaur Plots”
In order to answer RQ1, we plotted in Fig. 1 the mean GPA earned by students
in each demographic group, including gender as a grouping characteristic. We
start with overall GPA, rather than STEM GPA alone, in order to provide
context for the results in STEM GPA and identify trends that may or may not be
present when viewing STEM grades alone. Groups are ordered from left to right
first by the ascending number of selected characteristics and then
alphabetically. Mean GPA is plotted separately (i.e., not cumulatively) for
each year of study from the first to sixth year. Setting aside the gender
differences for a moment, we note that the general GPA trends by demographic
group in Fig. 1 follow a shape resembling the neck, back, and tail of a
sauropod, and so accordingly we refer to the plots in Fig. 1 as “dinosaur
plots.” This shape is clearest in the plots for the first through fourth
years, as the sample size drops significantly in the fifth year as the
majority of students graduate.
Looking more closely at Fig. 1, particularly the first four years, we see that
the “neck” is consistently composed of the group of students with the most
privileges, namely those students that are non-FG, non-LI, and non-URM.
Following this, the “back” is relatively flat across the next four groups,
namely students that are FG only, LI only, URM only, or FG and LI. Notably,
the URM group of students typically have the lowest mean GPA within this set
of demographic groups. Finally, the “tail” consists of the final three groups,
FG+URM, LI+URM, and FG+LI+URM. The mean GPA in this set of groups tends to
decrease from left to right in the plots. Notably, the four groups that
contain URM students are consistently in the lowest four or five mean GPAs.
Figure 1: Average GPA of each demographic group. Students are binned into
separate demographic groups based on their status as first-generation (FG),
low-income (LI), and/or underrepresented minority (URM) students. The men and
women in each demographic group are plotted separately. The mean GPA in all
courses taken by students in each demographic group is plotted along with the
standard error on the mean, with a separate plot for each of the (a) first,
(b) second, (c) third, (d) fourth, (e) fifth, and (f) sixth years. The sample
size is reported by each point, and Cohen’s $d$ Cohen (1988) measuring the
effect size of the gender difference in each group is reported.
### IV.2 Intersectionality with Gender
We now turn our attention to the differences between men and women in Fig. 1
in order to answer RQ2. We note in particular that across all demographic
groups women’s mean GPA is roughly 0.2 grade points higher than men’s. The
effect sizes (Cohen’s $d$) of this difference range from small to medium Cohen
(1988). This difference in mean GPA earned is substantial enough to indicate a
change in letter grade, given that the grading system at the studied
university uses increments of 0.25 grade points for letter grades containing
“$+$” or “$-$.” Further, this trend holds in the fifth year (Fig. 1e) and
sixth year (Fig. 1f), with some exceptions in demographic groups with
particularly low sample sizes after the fourth year.
### IV.3 STEM GPA Trends
In order to answer RQ3, Figure 2 plots students’ mean STEM GPA in a similar
manner to Fig. 1. We note that the general “dinosaur” pattern discussed in
Fig. 1 also holds at least for the first and second years (Figs. 2a and 2b,
respectively). In the third year and beyond, the general features of the trend
continue to hold, with the most privileged students having the highest mean
GPA, followed by those with one disadvantage as well as the first-generation
and low-income group, followed by the remaining groups of URM students with
one or more additional disadvantages. However, in these later years, the finer
details of the plots noted before fall away in favor of a sharper mean GPA
decrease for URM students with at least one additional disadvantage in the
third year (Fig. 2c) and a more gradual decrease across all groups in the
fourth year (Fig. 2d) and fifth year (Fig. 2e). When restricting the GPA
calculations to STEM courses, the sample size becomes too small in the sixth
year (Fig. 2f) to draw meaningful conclusions.
Figure 2: Average STEM GPA of each demographic group. Students are binned
into separate demographic groups based on their status as first-generation
(FG), low-income (LI), and/or underrepresented minority (URM) students. The
men and women in each demographic group are plotted separately. The mean GPA
in STEM courses taken by students in each demographic group is plotted along
with the standard error on the mean, with a separate plot for each of the (a)
first, (b) second, (c) third, (d) fourth, (e) fifth, and (f) sixth years.
The sample size is reported by each point, and Cohen’s $d$ Cohen (1988)
measuring the effect size of the gender difference in each group is reported.
We further observe a trend of students earning higher grades on average in
later years, although the rise from the first to the fourth year is somewhat
lower in STEM GPA than in overall GPA. Notably, while in overall GPA this
trend seemed to be somewhat universal across demographic groups, in Fig. 2 we
see a quicker rise in mean STEM GPA over time for the more privileged students
than the less privileged students, particularly comparing the leftmost and
rightmost groups.
Regarding gender differences, Fig. 2 shows smaller gender differences in STEM
GPA than those observed in overall GPA in Fig. 1. While in overall GPA women
earned roughly 0.2 grade points more than men on average, in STEM GPA that
difference is much less consistent and typically ranges from 0 to 0.1 grade
points. For many demographic groups we see no significant differences between
men and women’s mean STEM GPA. We do see that there is still a consistent STEM
GPA gender difference, albeit smaller than in Fig. 1, among the group of the
most privileged students (i.e., those with “None” of the disadvantages). There
is also a STEM GPA gender difference among first-generation low-income but
non-URM students, however this difference is less consistent and in fact
briefly vanishes in the third year.
### IV.4 GPA Trends By Major Over Time
In order to better understand the trends over time in both overall and STEM
GPA and answer RQ4, we plotted the mean GPA by year in Fig. 3 and mean STEM
GPA by year in Fig. 4. In these plots, we have not separated men and women and
instead focus on the other demographic characteristics while further grouping
students into three different groups of majors in order to understand if these
trends differ for students in different areas of study. Further, since the
sample size becomes quite small in years five and six for many of the
demographic groups of interest, we plot only the mean GPA over the first four
years. In Figs. 3a and 4a, we plot the mean overall and STEM GPA,
respectively, of all students. In the other subfigures, we plot the mean GPA
earned by students majoring in different clusters of majors. In particular, we
plot the mean GPA of engineering (including computer science), mathematics,
and physical science (i.e., chemistry and physics) majors in Figs. 3b and 4b,
the remaining STEM majors in Figs. 3c and 4c, and non-STEM majors in Figs. 3d
and 4d.
Figure 3: Students are binned into separate demographic groups as in Fig. 1,
but not separated by gender. The mean GPA in all courses of each group is
plotted over time from year one to four, along with the standard error of the
mean. The plots show this for four subpopulations: (a) all students; (b)
chemistry, computer science, engineering, mathematics, and physics students;
(c) biology, economics, geology, neuroscience, and statistics students; and
(d) non-STEM students including psychology. Figure 4: Students are binned
into separate demographic groups as in Fig. 2, but not separated by gender.
The mean GPA in STEM courses of each group is plotted over time from year one
to four along with the standard error of the mean. The plots show this for
four subpopulations: (a) all students; (b) chemistry, computer science,
engineering, mathematics, and physics students; (c) biology, economics,
geology, neuroscience, and statistics students; and (d) non-STEM students
including psychology.
These plots make clearer some of the trends noted earlier, especially the rise
in mean GPA over time from the first to the fourth year. However, we can now
see that this is not universally true since the first-generation URM students
have a drop in mean GPA in the second year for physical science majors (Fig.
3b), and in the third year for other STEM majors (Fig. 3c). This trend is even
more noticeable in STEM GPA (Fig. 4), where the mean STEM GPA of the group of
first-generation URM students drops in the third year for every subpopulation
by major.
## V Discussion
To start, we consider how much the current system disadvantages students who
are first-generation, low-income, or underrepresented minority, but not a
combination of these categories. Discussing these groups first is helpful in setting
the stage for a more complex discussion of the intersectionality of these
various demographic characteristics. We find in Figs. 1 and 2 that not all of
these disadvantages are equal. In particular, non-URM students who have one
disadvantage, namely the first-generation (but not low-income) and low-income
(but not first-generation) students, still earn slightly higher grades than
even the URM students who are not low-income or first-generation. Notably,
this trend (the “back” of the dinosaur plots) is similar in both overall
grades (Fig. 1) and in STEM grades alone (Fig. 2). The size of this mean grade
difference varies from year to year, but in STEM grades it can reach as high
as about 0.25 grade points, which at the studied institution is the difference
between, for example, a B and B$+$ or B$-$ grade.
The group with grades most similar to these non-first-generation, non-low-
income URM students is the first-generation, low-income, non-URM students, who
earn both overall (Fig. 1) and STEM (Fig. 2) grades similar to or very
slightly higher than the URM students. One explanation could be that the lack
of resources available due to being first-generation or low-income is not as
severe an obstacle as the stereotype threat experienced by URM students.
Turning then to the “tail” in the dinosaur plots, we find that consistently
the most disadvantaged students in both overall grades (Fig. 1) and STEM
grades (Fig. 2) are the URM students with at least one additional obstacle. In
this case, it appears that the intersection of being low-income and URM is the
most disadvantageous combination, with no notable difference in either Fig. 1
or Fig. 2 among these students whether or not they are also first-generation.
Meanwhile, the first-generation URM students who are not low-income sometimes
have a slightly higher mean GPA than the low-income URM students (Fig. 1).
Another avenue to investigate intersectionality is how gender interacts with
the other demographic groups. Interestingly, in overall GPA (Fig. 1), gender
appears to have about the same effect across all demographic groups. That is,
there does not appear to be an intersectional effect of gender identity with
other identities as measured by overall GPA. However, Fig. 2 shows that this
is a context-dependent effect, with the gender gap substantially and unevenly
reduced across all groups in mean STEM GPA. For most demographic groups in
Fig. 2, the higher overall GPA earned by women in Fig. 1 has vanished
completely in STEM GPA. This is consistent with stereotype threat being the
mechanism of disadvantage for women, where stereotypes surrounding STEM
disciplines unfairly cause stress and anxiety for women Astin (1993); Cross
(1993); Felder _et al._ (1995, 1998); Britner and Pajares (2006); Basile and
Lopez (2015); Cheryan _et al._ (2017); Hilts _et al._ (2018). Notably, while
the gender gap is reduced nearly to zero for most groups in Fig. 2, there does
remain a small consistent gender gap favoring women in the most privileged
group of students. In other groups the gender gap in Fig. 2 is inconsistent
across years. One explanation could be that the wealth of resources available
to these most privileged students may help to alleviate the stereotype threat.
Taking a more temporal view of these GPA trends, Fig. 3 (overall GPA) and Fig.
4 (STEM GPA) have grouped men and women together in order to focus on the
other demographic characteristics more closely. In these plots, the most
noteworthy trend is again that, with the sole exception of the first year in
Fig. 3b, the four groups with the lowest mean GPA (Fig. 3) and STEM GPA (Fig.
4) across the first four years are always the four groups containing URM
students. Notably, this trend is true regardless of which group of majors we
investigate. The consistency of this result is particularly striking, showing
that even the most otherwise disadvantaged non-URM students face fewer
obstacles to success than the most privileged URM students.
Focusing further on the STEM GPA of STEM majors in Figs. 4b and 4c, we see
that while non-URM students consistently rise in mean GPA over time, the same
is not true for all URM students. In particular, the first-generation URM
students who major in chemistry, computer science, engineering, mathematics,
or physics (Fig. 4b) experience a steady decline in mean STEM GPA from year
one to two and year two to three. While the standard error of those means is
quite large due to a relatively small sample size, the underrepresentation that
this small sample reflects could itself hinder these students' coursework by
inducing stereotype threat.
Based upon the frameworks of critical theory and intersectionality, the main
implication of these findings is that many students who come from less
privileged backgrounds are not being adequately supported in college in order
to catch up with the privileged students Crenshaw _et al._ (1995); Kellner
(2003); Yosso (2005); Gutiérrez (2009); Taylor _et al._ (2009); Tolbert _et
al._ (2018); Schenkel and Calabrese Barton (2020); Johnson (2012); Johnson
_et al._ (2017); Crenshaw (1990); Cho _et al._ (2013); Mitchell _et al._
(2014); Charleston _et al._ (2014); Morton and Parsons (2018). The
disadvantages of these less privileged students manifest as lower mean overall
and STEM GPA for those demographic groups. In order to promote equity and
inclusion, it is crucial that these students are provided appropriate
mentoring, guidance, scaffolding, and support in college so that these
obstacles can be cleared for students who have been put at a disadvantage
relative to their peers through no fault of their own Birt _et al._ (2019).
We note that these demographic groups with more disadvantages are likely to
consist of students who had K-12 education from schools with fewer resources
and less well-prepared teachers than those of the more privileged students,
with high school being an especially important time for disadvantages related
to STEM learning increasing Bianchini _et al._ (2003); Maltese and Tai
(2011); Means _et al._ (2017); Bottia _et al._ (2018); Daley (2019); Dou
_et al._ (2019). Analyses such as those discussed here can help inform the
allocation of resources to support these students. Clear priorities include
reducing the classroom stereotype threat experienced by URM students and
creating a low-anxiety environment in which all students have a high sense of
belonging and can participate fully without fear of being judged.
Additional resources to support low-income and/or first-generation students,
e.g., financial support and timely advising pertaining to various academic and
co-curricular opportunities, are also important in order to level the playing
field and work towards a goal of all students succeeding in college,
regardless of their race/ethnicity, socioeconomic status, and parental
education history.
## VI Acknowledgments
This research is supported by the National Science Foundation Grant
DUE-1524575 and the Sloan Foundation Grant G-2018-11183.
## References
* Johnson (2012) A. Johnson, Science Education 96, 960 (2012).
* Johnson _et al._ (2017) A. Johnson, M. Ong, L. T. Ko, J. Smith, and A. Hodari, The Physics Teacher 55, 356 (2017).
* Maltese and Tai (2011) A. V. Maltese and R. H. Tai, Science Education 95, 877 (2011).
* Borrego _et al._ (2008) M. Borrego, R. A. Streveler, R. L. Miller, and K. A. Smith, Journal of Engineering Education 97, 147 (2008).
* Borrego and Bernhard (2011) M. Borrego and J. Bernhard, Journal of Engineering Education 100, 14 (2011).
* Borrego and Henderson (2014) M. Borrego and C. Henderson, Journal of Engineering Education 103, 220 (2014).
* Henderson and Dancy (2008) C. Henderson and M. H. Dancy, American Journal of Physics 76, 79 (2008).
* Dancy and Henderson (2010) M. Dancy and C. Henderson, American Journal of Physics 78, 1056 (2010).
* Henderson _et al._ (2012) C. Henderson, M. Dancy, and M. Niewiadomska-Bugaj, Phys. Rev. ST Phys. Educ. Res. 8, 020104 (2012).
* Baker and Inventado (2014) R. S. Baker and P. S. Inventado, in _Learning Analytics_ (Springer, 2014) pp. 61–75.
* Papamitsiou and Economides (2014) Z. Papamitsiou and A. A. Economides, Journal of Educational Technology & Society 17, 49 (2014).
* Lord _et al._ (2009) S. M. Lord, M. M. Camacho, R. A. Layton, R. A. Long, M. W. Ohland, and M. H. Wasburn, Journal of Women and Minorities in Science and Engineering 15, 167 (2009).
* Lord _et al._ (2015) S. M. Lord, R. A. Layton, and M. W. Ohland, IEEE Transactions on Education 58, 141 (2015).
* Ohland and Long (2016) M. W. Ohland and R. A. Long, Advances in Engineering Education 5, 1 (2016).
* Matz _et al._ (2017) R. L. Matz, B. P. Koester, S. Fiorini, G. Grom, L. Shepard, C. G. Stangor, B. Weiner, and T. A. McKay, AERA Open 3, 1 (2017).
* Witherspoon and Schunn (2019) E. B. Witherspoon and C. D. Schunn, Science Education 104, 144 (2019).
* Crenshaw _et al._ (1995) K. Crenshaw, N. Gotanda, G. Peller, and K. Thomas, _Critical Race Theory: The Key Writings that Formed the Movement_ (The New Press, 1995).
* Kellner (2003) D. Kellner, Democracy & Nature 9, 51 (2003).
* Yosso (2005) T. J. Yosso, Race Ethnicity and Education 8, 69 (2005).
* Gutiérrez (2009) R. Gutiérrez, Teaching for Excellence and Equity in Mathematics 1, 4 (2009).
* Taylor _et al._ (2009) E. Taylor, D. Gillborn, and G. Ladson-Billings, _Foundations of critical race theory in education_ (Routledge, 2009).
* Tolbert _et al._ (2018) S. Tolbert, A. Schindel, and A. J. Rodriguez, Science Education 102, 796 (2018).
* Schenkel and Calabrese Barton (2020) K. Schenkel and A. Calabrese Barton, Science Education (2020).
* Solorzano _et al._ (2000) D. Solorzano, M. Ceja, and T. Yosso, Journal of Negro Education 69, 60 (2000).
* Lewis _et al._ (2009) J. L. Lewis, H. Menzies, E. I. Nájera, and R. N. Page, Science Education 93, 961 (2009).
* Bang and Medin (2010) M. Bang and D. Medin, Science Education 94, 1008 (2010).
* Estrada _et al._ (2018) M. Estrada, A. Eroy-Reveles, and J. Matsui, Social Issues and Policy Review 12, 258 (2018).
* Ong _et al._ (2018) M. Ong, J. M. Smith, and L. T. Ko, Journal of Research in Science Teaching 55, 206 (2018).
* Green _et al._ (2019) A. M. Green, B. R. Brand, and G. E. Glasson, Science Education 103, 241 (2019).
* Mutegi _et al._ (2019) J. W. Mutegi, B. Sorge, G. A. Fore, and G. S. Gibau, Science Education 103, 1456 (2019).
* Sheth (2019) M. J. Sheth, Science Education 103, 37 (2019).
* Bancroft (2018) S. F. Bancroft, Science Education 102, 1319 (2018).
* Crenshaw (1990) K. Crenshaw, Stan. L. Rev. 43, 1241 (1990).
* Cho _et al._ (2013) S. Cho, K. W. Crenshaw, and L. McCall, Signs: Journal of Women in Culture and Society 38, 785 (2013).
* Mitchell _et al._ (2014) J. D. Mitchell, C. Y. Simmons, and L. A. Greyerbiehl, _Intersectionality & Higher Education_ (Peter Lang, 2014).
* Charleston _et al._ (2014) L. Charleston, R. P. Adserias, N. M. Lang, and J. F. Jackson, Journal of Progressive Policy & Practice 2, 273 (2014).
* Morton and Parsons (2018) T. R. Morton and E. C. Parsons, Science Education 102, 1363 (2018).
* Lam _et al._ (2005) P. C. Lam, T. Srivatsan, D. Doverspike, J. Vesalo, and P. R. Mawasha, Journal of STEM Education: Innovations and Research 6, 14 (2005).
* Dika and D’Amico (2016) S. L. Dika and M. M. D’Amico, Journal of Research in Science Teaching 53, 368 (2016).
* Katrevich and Aruguete (2017) A. V. Katrevich and M. S. Aruguete, Journal of STEM Education: Innovations and Research 18, 40 (2017).
* Astin (1993) A. W. Astin, _What Matters in College_ , Vol. 9 (Jossey-Bass, 1993).
* Cross (1993) K. P. Cross, Journal of Engineering Education 82, 9 (1993).
* Felder _et al._ (1995) R. M. Felder, G. N. Felder, M. Mauney, C. E. Hamrin Jr., and E. J. Dietz, Journal of Engineering Education 84, 151 (1995).
* Felder _et al._ (1998) R. M. Felder, G. N. Felder, and E. J. Dietz, Journal of Engineering Education 87, 469 (1998).
* Bianchini _et al._ (2002) J. A. Bianchini, D. J. Whitney, T. D. Breton, and B. A. Hilton-Brown, Science Education 86, 42 (2002).
* Britner and Pajares (2006) S. L. Britner and F. Pajares, Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching 43, 485 (2006).
* Bianchini (2013) J. A. Bianchini, Science Education 97, 163 (2013).
* Basile and Lopez (2015) V. Basile and E. Lopez, Science Education 99, 519 (2015).
* Cheryan _et al._ (2017) S. Cheryan, S. A. Ziegler, A. K. Montoya, and L. Jiang, Psychological Bulletin 143, 1 (2017).
* Hilts _et al._ (2018) A. Hilts, R. Part, and M. L. Bernacki, Science Education 102, 744 (2018).
* Indiana University Center for Postsecondary Research (2018) Indiana University Center for Postsecondary Research, _The Carnegie Classification of Institutions of Higher Education_, Tech. Rep. (Indiana University Center for Postsecondary Research, Bloomington, IN, 2018).
* Cauthen and Fass (2007) N. K. Cauthen and S. Fass, _Measuring Income and Poverty in the United States_ , Tech. Rep. (National Center for Children in Poverty, 2007).
* Jiang _et al._ Y. Jiang, M. Ekono, and C. Skinner, _Basic facts about low-income children_ , Tech. Rep. (National Center for Children in Poverty).
* Freedman _et al._ (2007) D. Freedman, R. Pisani, and R. Purves, _Statistics_ , 4th ed. (W. W. Norton & Co., 2007).
* Cohen (1988) J. Cohen, _Statistical Power Analysis for the Behavioral Sciences_ , 2nd ed. (Lawrence Erlbaum Associates, 1988).
* Neter _et al._ (2004) J. Neter, M. H. Kutner, C. J. Nachtsheim, and W. Wasserman, _Applied Linear Statistical Models_ , 5th ed. (McGraw-Hill/Irwin, 2004).
* Montgomery _et al._ (2012) D. C. Montgomery, E. A. Peck, and G. G. Vining, _Introduction to Linear Regression Analysis_ , 4th ed. (John Wiley & Sons, 2012).
* R Core Team (2019) R Core Team, _R: A Language and Environment for Statistical Computing_, R Foundation for Statistical Computing, Vienna, Austria (2019).
* Wickham (2017) H. Wickham, _tidyverse: Easily Install and Load the ‘tidyverse’_ (2017), R package version 1.2.1.
* Birt _et al._ (2019) J. A. Birt, M. Khajeloo, C. C. Rega-Brodsky, M. A. Siegel, T. S. Hancock, K. Cummings, and P. D. Nguyen, Science Education 103, 770 (2019).
* Bianchini _et al._ (2003) J. A. Bianchini, C. C. Johnston, S. Y. Oram, and L. M. Cavazos, Science Education 87, 419 (2003).
* Means _et al._ (2017) B. Means, H. Wang, X. Wei, S. Lynch, V. Peters, V. Young, and C. Allen, Science Education 101, 681 (2017).
* Bottia _et al._ (2018) M. C. Bottia, E. Stearns, R. A. Mickelson, and S. Moller, Science Education 102, 85 (2018).
* Daley (2019) S. G. Daley, Science Education 103, 1306 (2019).
* Dou _et al._ (2019) R. Dou, Z. Hazari, K. Dabney, G. Sonnert, and P. Sadler, Science Education 103, 623 (2019).
|
2024-09-04T02:54:58.547629 | 2020-03-09T20:06:20 | 2003.04389 | {
"authors": "Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26126",
"submitter": "Adam Gaier",
"url": "https://arxiv.org/abs/2003.04389"
} | arxiv-papers | # Discovering Representations for Black-box Optimization
Adam Gaier (Inria, CNRS, Université de Lorraine; Bonn-Rhein-Sieg University of
Applied Sciences), Alexander Asteroth (Bonn-Rhein-Sieg University of Applied
Sciences, Sankt Augustin, Germany), and Jean-Baptiste Mouret (Inria, CNRS,
Université de Lorraine, Nancy, France)
(2020)
The encoding of solutions in black-box optimization is a delicate, handcrafted
balance between expressiveness and domain knowledge — between exploring a wide
variety of solutions, and ensuring that those solutions are useful. Our main
insight is that this process can be automated by generating a dataset of high-
performing solutions with a quality diversity algorithm (here, MAP-Elites),
then learning a representation with a generative model (here, a Variational
Autoencoder) from that dataset. Our second insight is that this representation
can be used to scale quality diversity optimization to higher dimensions — but
only if we carefully mix solutions generated with the learned representation
and those generated with traditional variation operators. We demonstrate these
capabilities by learning a low-dimensional encoding for the inverse
kinematics of a thousand-joint planar arm. The results show that learned
representations make it possible to solve high-dimensional problems with
orders of magnitude fewer evaluations than the standard MAP-Elites, and that,
once solved, the produced encoding can be used for rapid optimization of
novel, but similar, tasks. The presented techniques not only scale up quality
diversity algorithms to high dimensions, but show that black-box optimization
encodings can be automatically learned, rather than hand designed.
Figure 1. Data-Driven Encoding MAP-Elites (DDE-Elites) searches the space of
representations to search for solutions. A data-driven encoding (DDE) is
learned by training a VAE on the MAP-Elites archive. High-fitness solutions,
which increase the bias of the DDE toward performance, are found using the
DDE. Novel solutions, which increase the range of solutions which can be
expressed, are found using mutation operators. UCB1, a bandit algorithm,
balances the mix of these explorative and exploitative operators.
## 1\. Introduction
The method of encoding solutions is one of the most critical design decisions
in optimization, as the representation defines the way an algorithm can move
in the search space (Rothlauf, 2006). Work on representations tends to focus
on encoding priors or innate biases: aerodynamic designs evolved with splines
to encourage smooth forms (Olhofer et al., 2001), Compositional Pattern
Producing Networks (CPPNs) with biases for symmetry and repetition in images
and neural network weight patterns (Stanley, 2007; Stanley et al., 2009),
modularity induced in evolved neural networks (Mouret and Doncieux, 2008; Durr
et al., 2010; Doncieux and Meyer, 2004), or neural network structures which
encode strong enough biases to perform without training (Gaier and Ha, 2019).
The best representations balance a bias for high performing solutions, so they
can easily be discovered, and the ability to express a diversity of potential
solutions, so the search space can be widely explored. At the one extreme, a
representation which only encodes the global optimum is easy to search but
useless for finding any other solution. At the other, a representation which
can encode anything presents a difficult and dauntingly vast search space.
Given a large set of example solutions, representations could be learned from
data instead of being hand-tailored by trial-and-error: a learned
representation would replicate the same biases toward performance and the same
range of expressivity as the source data set. For instance, given a dataset of
face images, a Variational Autoencoder (VAE) (Kingma and Welling, 2014) or a
Generative Adversarial Network (GAN) (Goodfellow et al., 2014) can learn a
low-dimensional latent space, or encoding, that makes it possible to explore
the space of face images. In essence, the decoder which maps the latent space
to the phenotypic space learns the “recipe” of faces. Importantly, the
existence of such a low-dimensional latent space is possible because _the
dataset is a very small part of the set of all possible images_.
However, using a dataset of preselected high-performing solutions “traps” the
search within the distribution of solutions that are already known: a VAE
trained on white faces will never generate a black face. This limits the
usefulness of such data-driven representations for discovering _novel_
solutions to hard problems.
In this paper, we propose the use of the MAP-Elites algorithm (Mouret and
Clune, 2015) to automatically generate a dataset for representations using
only a performance function and a diversity space. Quality diversity (QD)
algorithms (Cully and Demiris, 2018; Pugh et al., 2016) like MAP-Elites are a
good fit for representation discovery: creating archives of diverse high-
performing solutions is precisely their purpose. Using the MAP-Elites archive
as a source of example solutions, we can capture the genetic distribution of
the highest performing solutions, or elites, by training a VAE and obtaining a
latent representation. As the VAE is only trained on elites, this learned
representation, or Data-Driven Encoding (DDE), has a strong bias towards
solutions with high fitness; and because the elites have varying phenotypes,
the DDE is able to express a range of solutions. Though the elites vary along
a phenotypic continuum, they commonly have many genotypic similarities
(Vassiliades and Mouret, 2018), making it more likely to find a well-
structured latent space.
Nonetheless, MAP-Elites will struggle to find high-performing solutions
without an adequate representation. Fortunately, the archive is produced by
MAP-Elites in an iterative, any-time fashion, so there is no “end state” to
wait for before a DDE can be trained — a DDE can be trained during
optimization. The DDE can then be used to enhance optimization. By improving
the quality of the archive the DDE improves the quality of its own source
data, establishing a virtuous cycle of archive and encoding improvement.
A DDE based on an archive will encounter the same difficulty as any learned
encoding: the DDE can only represent solutions that are already in the
dataset. How then, can we discover new solutions? Fundamentally, to search for
an encoding we need to both _exploit the best known representation_ , that is,
create better solutions according to the current best “recipes”, and also
_explore new representations_ — solutions which do not follow any “recipe”.
In this paper, we address this challenge by mixing solutions generated with
the DDE with solutions obtained using standard evolutionary operators. Our
algorithm applies classic operators, such as Gaussian mutation, to create
candidates which could not be captured by the current DDE. At the same time we
leverage the DDE to generalize common patterns across the map and create new
solutions that are likely to be high-performing. To avoid introducing new
hyper-parameters, we tune this exploration/exploitation trade-off optimally
using a multi-armed bandit algorithm (Garivier and Moulines, 2011).
This new algorithm, DDE-Elites, reframes optimization as a search for
representations (Figure 1). Integrating MAP-Elites with a VAE makes it
possible to apply quality diversity to high-dimensional search spaces, and to
find effective representations for future uses. We envision application to
domains that have straightforward but expansive low-level representations, for
instance: joint positions at 20 Hz for a walking robot ($12\times 100=1200$
joint positions for a 5-second gait of a robot with $12$ degrees of freedom),
3D shapes in which each voxel is encoded individually (1000-dimensional for a
$10\times 10\times 10$ grid), images encoded in the pixel-space, etc.
Ideally, the generated DDE will capture the main regularities of the domain.
In robot locomotion, this could correspond to periodic functions, since we
already know that a $36$-dimensional controller based on periodic functions
can produce the numerous joint commands required every second to effectively
drive a 12-joint walking robot in many different ways (Cully et al., 2015). In
many domains the space of possible solutions can be vast, while the inherent
dimensionality of interesting solutions is still compact. By purposefully
seeking out a space of solutions, rather than the solutions themselves, we can
solve high-dimensional problems in a lower dimensional space.
## 2\. Background
### 2.1. Optimization of Representations
In his 30 year perspective on adaptation in evolutionary algorithms, Kenneth
De Jong identified representation adaptation as ”perhaps the most difficult
and least understood area of EA design.” (De Jong, 2007)
Despite the difficulty of creating adaptive encodings, the potential rewards
have lured researchers for decades. Directly evolving genotypes to increase in
complexity has a tradition going back to the eighties (Goldberg et al., 1989;
Altenberg, 1994). The strategy of optimizing a solution at low complexity and
then adding degrees of freedom has proved effective on problems from optimal
control (Gaier and Asteroth, 2014), to aerodynamic design (Olhofer et al.,
2001), to neural networks (Stanley and Miikkulainen, 2002). Evolving the
genome’s structure is particularly important when the structure itself is the
solution, such as in genetic programming (Koza, [n. d.]) or neural
architecture search (Elsken et al., 2019; Miikkulainen et al., 2019; Gaier and
Ha, 2019).
Recent approaches toward representation evolution have focused on genotype-
phenotype mappings (Bongard and Pfeifer, 2003). Neural networks, which map
between inputs and outputs, are a natural choice for such ‘meta-
representations’. These mappings can evolve with the genome (Scott and
Bassett, 2015; Simões et al., 2014), or fix the genome and evolve only the
mapping (Stanley et al., 2009; Stanley, 2007).
Supervised methods have been previously applied to learn encodings. These
approaches require a set of example solutions for training. Where large, well-
curated data sets are available this strategy has proven effective at creating
representations well suited to optimization (Volz et al., 2018; Bontrager et
al., 2018b; Bontrager et al., 2018a), but where a corpus of solutions does not
exist it must be created. In (Scott and De Jong, 2018; Moreno et al., 2018)
these solutions were collected by saving the champion solutions found after
repeatedly running an optimizer on the problem, with the hope that the learned
representation would then be effective in similar classes of problems.
### 2.2. MAP-Elites
MAP-Elites (Mouret and Clune, 2015) is a QD algorithm which uses a niching
approach to produce high-performing solutions which span a continuum of user-
defined phenotypic dimensions. These phenotypic dimensions, or behavior
descriptors, describe the way the problem is solved, and are often orthogonal
to performance. MAP-Elites has been used in such diverse cases as optimizing
the distance traveled by a walking robot using different legs (Cully et al.,
2015), the drag of aerodynamic designs with varied volumes and curvatures
(Gaier et al., 2017), and the win rate of decks composed of different cards in
deck-building games (Fontaine et al., 2019).
MAP-Elites is a steady-state evolutionary algorithm which maintains a
population in a discretized grid or ‘archive’. This grid divides the
continuous space of possible behaviors into bins, or ‘niches’ with each bin
holding a single individual, or ‘elite’. These elites act as parents, and are
mutated to form new individuals. These child individuals are evaluated and
assigned a niche based on their behavior. If the niche is empty the child is
placed inside; if the niche is already occupied, the individual with higher
fitness is stored in the niche and the other discarded. By repeating this
process, increasingly optimal solutions which cover the range of phenotype
space are found. The MAP-Elites algorithm is summarized in Algorithm 1.
Algorithm 1 MAP-Elites
1:function MAP-Elites($fitness()$, $variation()$, $\mathcal{X}_{initial}$)
2: $\mathcal{X}\leftarrow\emptyset$, $\mathcal{F}\leftarrow\emptyset$
$\triangleright$ Map of genomes $\mathcal{X}$, and fitnesses $\mathcal{F}$
3: $\mathcal{X}\leftarrow\mathcal{X}_{initial}$ $\triangleright$ Place initial
solutions in map
4: $\mathcal{F}\leftarrow fitness(\mathcal{X}_{initial})$
5: for iter = $1\to I$ do
6: $\mathbf{x^{\prime}}\leftarrow variation(\mathcal{X})$ $\triangleright$
Create new solution from elites
7: $\mathbf{f^{\prime}},\mathbf{b^{\prime}}\leftarrow
fitness(\mathbf{x^{\prime}})$ $\triangleright$ Get fitness and behavior
8: if $\mathcal{F}(\mathbf{b^{\prime}})=\emptyset$ or
$\mathcal{F}(\mathbf{b^{\prime}})<\mathbf{f^{\prime}}$ then $\triangleright$
Replace if better
9: $\mathcal{F}(\mathbf{b^{\prime}})\leftarrow\mathbf{f^{\prime}}$
10: $\mathcal{X}(\mathbf{b^{\prime}})\leftarrow\mathbf{x^{\prime}}$
11: end if
12: end for
13: return $(\mathcal{X}$, $\mathcal{F})$ $\triangleright$ Return illuminated
map
14:end function
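As a concrete, minimal reading of Algorithm 1, consider the sketch below. It
is our own rendering under assumed interfaces, not the authors'
implementation: `fitness` is assumed to return a fitness value and a behavior
descriptor, and `niche_of` to map a descriptor to a bin of the discretized
archive.

```python
import random

def map_elites(fitness, variation, niche_of, x_initial, iters=10_000):
    X, F = {}, {}  # map of genomes and fitnesses, keyed by niche

    def try_insert(x):
        f, b = fitness(x)           # get fitness and behavior descriptor
        n = niche_of(b)
        if n not in F or F[n] < f:  # replace if the niche is empty or improved
            X[n], F[n] = x, f

    for x in x_initial:             # place initial solutions in the map
        try_insert(x)
    for _ in range(iters):
        parent = X[random.choice(list(X))]  # pick a random elite as parent
        try_insert(variation(parent))       # mutate, evaluate, compete
    return X, F                     # return the illuminated map
```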
Though phenotypically diverse, the elites are often genotypically similar,
existing in an “elite hypervolume”, a high performing region of genotype space
(Vassiliades and Mouret, 2018). Just as in nature, where species as diverse as
fruit flies and humans share nearly 60 percent of their genome (Adams et al.,
2000), the “recipe” for high performance is often composed of many of the same
ingredients.
This insight was leveraged in (Vassiliades and Mouret, 2018) to create a new
variation operator which considers the correlation among elites. Genes which
vary little across the elites, and so are likely common factors that produce
high performance, are also subject to the smallest amount of perturbation —
lowering the chance their children stray from the elite hypervolume. Biasing
mutation in this way ensures that exploration is focused on factors which
induce phenotypic variation without drifting into regions of poor performance.
### 2.3. Variational Autoencoders
Autoencoders (AEs) (Hinton and Salakhutdinov, 2006) are neural networks
designed to perform dimensionality reduction. AEs are composed of two
components: an encoder, which maps the input to a lower dimensional latent
space; and a decoder, which maps the latent space back to the original space.
The decoder is trained to reconstruct the input through this lower dimensional
latent “bottleneck”. The encoder component can be viewed as a generalization
of Principal Component Analysis (Wold et al., 1987), with the latent space
approximating principal components.
Though the AE is able to represent the data at a lower dimensionality, and
reproduce it with minimal loss, it can still be a poor representation for
optimization. An important quality of representations is ‘locality’, that a
small change in the genotype induces a small change in the phenotype
(Rothlauf, 2006). When AEs are trained only to minimize reconstruction error
they may overfit the distribution of the training data and create an irregular
latent space. The low locality of such latent spaces limits their usefulness
in optimization: nearby points in latent space may decode to very different
solutions, meaning even a small mutation could have a large effect.
Variational autoencoders (VAEs) (Kingma and Welling, 2014) are AEs whose
training is regularized to ensure a high-locality latent space. The
architecture is broadly the same: an encoder and decoder mediated by a
bottleneck, but rather than encoding the input as a single point it is encoded
as a normal distribution in the latent space. When training the model a point
from this input distribution is sampled, decoded, and the reconstruction error
computed. By encoding the input as a normal distribution we induce the
distributions produced by the encoder to be closer to normal. VAEs are trained
by minimizing two terms: (1) the reconstruction error, and (2) the Kullback-
Liebler (KL) divergence (Kullback and Leibler, 1951) of the latent space to a
unit Gaussian distribution, giving the loss function:
(1)
$loss=\|x-\hat{x}\|^{2}+KL\left[N\left(\mu_{x},\sigma\right),N(0,1)\right]$
Inducing solutions to be encoded in the form of a normal distribution
structures the latent space in a continuous and overlapping way, creating a
local encoding better suited to optimization.
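For reference, Equation 1 has a closed form when the encoder outputs a
diagonal Gaussian. The sketch below is ours, not the paper's (the framework
choice and function names are assumptions); it assumes the encoder returns a
mean `mu` and a log-variance `log_var`:

```python
import torch

def vae_loss(x, x_hat, mu, log_var):
    # Reconstruction term: ||x - x_hat||^2
    recon = torch.sum((x - x_hat) ** 2)
    # KL[N(mu, sigma) || N(0, 1)] in closed form for a diagonal Gaussian
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```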
## 3\. DDE-Elites
Figure 2. DDE-Elites Algorithm
(1) A VAE is trained on the archive, and used to create a ‘reconstructive
crossover’ operator which creates new solutions by averaging the parameters of
an individual with its own reconstruction; (2) the mix of exploitative and
explorative variation operators predicted to have the most success is chosen
by the multi-armed bandit algorithm UCB1 and used to create new solutions; (3)
the new solutions are added to the archive and the success rate of the applied
variation operator is updated.
Every representation biases optimization in some way, improving optimization
by limiting the range of solutions that can be expressed to those which are
valid or high-performing (Rothlauf, 2006). But finding a balance between
expressivity and bias is an arduous task requiring considerable domain
expertise. Our method, DDE-Elites, automates the process of representation
design and learns new encodings in tandem with search — allowing optimization
and representation learning to improve each other in a self-reinforcing cycle.
DDE-Elites learns an encoding from examples of high performing solutions. To
create these examples we use MAP-Elites, which produces a variety of high
performing solutions rather than converging to a single optima. The variety
produced by MAP-Elites is critical — the expressivity of any learned encoding
is limited by the variety of examples. That MAP-Elites not only produces a
variety of solutions, but allows us to define the nature of that variety,
makes it particularly powerful for crafting useful representations. By
defining the type of variety we want to explore we are defining the biases and
expressivity we encode in our representation.
DDE-Elites is a variant of the MAP-Elites algorithm. The core component of
competition within a niched archive is maintained, but novel methods of
producing child solutions are introduced. Child solutions are created using an
encoding learned from the archive. This encoding is refined as the archive
improves, which in turn improves the optimization process. DDE-Elites
optimizes an archive of varied solutions by reframing optimization as a search
for the best representation, rather than the best solution.
The DDE-Elites algorithm proceeds as follows (see Figure 2 and Algorithm 2):
(1) a DDE and reconstructive crossover operator is created by training a VAE
on the archive; (2) the probability of using each variation operator is
determined by the UCB1 bandit algorithm; (3) MAP-Elites is run with the chosen
variation operator probabilities. The success rate of the variation operators
to create solutions is used to update the bandit and the improved archive is
used to create a new DDE and reconstructive crossover operator.
#### Data-Driven Encoding
The MAP-Elites archive is a record of the highest-performing solutions yet
found in each bin. When the archive is updated the VAE is trained to
reconstruct the individuals in the archive. Reconstruction is a mapping from
one phenotype to another, mediated through latent space; and the mapping from
latent space to phenotype space analogous to a genotype-phenotype mapping,
which we refer to as a Data-Driven Encoding (DDE).
Features common in high performing solutions will be the most successfully
compressed and reconstructed — and features widely shared by high performing
solutions are likely to lead to high performance. Critically, by training the
encoding only on high-performing solutions we bias the space of solutions the
DDE can express to those with high performance.
#### Reconstructive Crossover
By limiting the range of solutions which can be expressed by a representation,
we are able to bias the solutions found during search. When a solution is
reconstructed with the VAE it is mapped onto the restricted space of solutions
expressible by the DDE — a space characterized by high performance.
Reconstructing individuals with the VAE can create new solutions with higher
fitness than the originals, but cannot create novel solutions. Solutions
created by the DDE are based on those already in the archive, so cannot reach
solutions which lie outside of the encoded distribution. At early stages of
optimization when there are few example solutions, using only reconstruction
to create new solutions would doom our encoding to a small region of
expression.
Rather than completely replacing individuals with their reconstructions we
instead shift them closer to forms expressible by the DDE with a new variation
operator, reconstructive crossover. Child solutions are created by performing
crossover with two parents: a parent chosen from the archive and its
reconstruction. Crossover takes the form of an element-wise mean of the
parameter vectors.
(2)
$\mathbf{x}_{i}^{(t+1)}=\frac{1}{2}*(\mathbf{x}_{i}^{(t)}+VAE.Decode(VAE.Encode(\mathbf{x}_{i}^{(t)})))$
The reconstructive crossover operator slows the loss of diversity by only
moving an individual toward the distribution of solutions encoded by the DDE,
not directly into it. By only shifting solutions rather than replacing them,
we allow exploration outside of the distribution to continue. Even when there
is little gain in fitness, solutions that are the result of reconstructive
crossover have a lower inherent dimensionality, on the account of having
parents pass through the compressive bottleneck of the VAE. In this way the
reconstructive crossover operator not only spreads globally advantageous genes
throughout the archive, but also pulls the archive towards more easily
compressed solutions.
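Equation 2 reduces to a one-liner in code. In the sketch below (our notation,
not a specific library's API), `vae.encode` and `vae.decode` stand in for
whatever encoder/decoder interface is used, and `x` is assumed to be a
parameter vector such as a NumPy array:

```python
def reconstructive_crossover(x, vae):
    # Element-wise mean of a parent and its own VAE reconstruction (Eq. 2)
    return 0.5 * (x + vae.decode(vae.encode(x)))
```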
#### Line Mutation
Reconstructive crossover enables effective optimization within the range of
solutions that the DDE can express, but explorative operators are required to
widen the pool of example solutions and improve the DDE. So when creating new
solutions we choose to either produce them through reconstructive crossover,
or through random mutation.
In addition to isometric Gaussian mutation commonly used in MAP-Elites, we
apply the line mutation operator proposed in (Vassiliades and Mouret, 2018).
Line mutation imposes a directional component on the Gaussian perturbations.
During mutation the parent genome is compared to a random genome from the
archive. The variance of mutation in each dimension is then scaled by the
difference in each gene:
(3)
$\mathbf{x}_{i}^{(t+1)}=\mathbf{x}_{i}^{(t)}+\sigma_{1}\mathcal{N}(0,\mathbf{I})+\sigma_{2}\left(\mathbf{x}_{j}^{(t)}-\mathbf{x}_{i}^{(t)}\right)\mathcal{N}(0,1)$
where $\sigma_{1}$ and $\sigma_{2}$ are hyperparameters which define the
relative strength of the isometric and directional mutations. Intuitively,
when two genes have similar values the spread of mutation will be small, when
the values are very different the spread will be large.
In many cases certain parameter values will be correlated to high fitness,
regardless of the individual’s place in behavior space. The line operator is a
simple way of exploiting this similarity, but in contrast to reconstructive
crossover does not limit expressivity – allowing it to be used as a method of
exploring new solutions. Though both the reconstructive crossover and line
mutation operators take advantage of the similarities between high performing
individuals, their differing approaches allow them to be effectively combined
as explorative and exploitative operators.
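A sketch of Equation 3 follows (our own code; the default values of
$\sigma_1$ and $\sigma_2$ are placeholders, not the paper's settings):

```python
import numpy as np

def line_mutation(x_i, x_j, sigma1=0.01, sigma2=0.2):
    iso = sigma1 * np.random.randn(len(x_i))                 # isometric Gaussian
    directional = sigma2 * (x_j - x_i) * np.random.randn()   # scaled per-gene by
    return x_i + iso + directional                           # elite differences
```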
#### Parameter Control
DDE-Elites explores the space of representations with the exploitative
operator of reconstructive crossover, which finds high performing solutions
similar to those already encoded by the DDE, and explorative operators of
mutation, which expand the space of solutions beyond the range of the DDE.
The optimal ratio to use these operators is not only domain dependent, but
dependent on the stage of the algorithm. When the archive is nearly empty, it
makes little sense to base a representation on a few randomly initialized
solutions; once the behavior space has been explored, it is beneficial to
continue optimization through the lens of the DDE; and when the archive is
full of solutions produced by the DDE it is more useful to expand the range of
possible solutions with mutation. These stages are neither predictable nor
clear cut, complicating the decision of when to use each operator.
Faced with a trade-off between exploration and exploitation we frame the
choice of operators as a multi-armed bandit problem (Auer et al., 2002).
Multi-armed bandits imagine sets of actions as levers on a slot machine, each
with their own probability of reward. The goal of a bandit algorithm is to
balance exploration, trying new actions, and exploitation, repeating actions
that yield good rewards. Bandit approaches are straightforward to implement
and have been previously used successfully to select genetic operators
(DaCosta et al., 2008).
We define a set of possible actions as usage ratios between reconstructive
crossover, line mutation, and isometric mutation. The ratio of
$[\frac{1}{4},\frac{3}{4},0]$, for example, would have solutions created by
reconstructive crossover with a probability of $\frac{1}{4}$, line mutation
with a probability of $\frac{3}{4}$, and never with isometric mutation. Each
action is used to create a batch of child solutions and a reward is assigned
in proportion to the number of children who earned a place in the archive. At
each generation a new action is chosen, and the reward earned for that action
recorded.
Actions are chosen based on UCB1 (Auer et al., 2002), a simple and effective
bandit algorithm which minimizes regret. Actions with the greatest potential
reward are chosen, calculated as:
(4) $Q(a)+\sqrt{(2\log t)/(N_{t}(a))}$
where $Q(a)$ is the reward for an action $a$, $t$ is the total number of
actions that have been performed, and $N_{t}(a)$ the number of times that
action has been performed. UCB1 is an optimistic algorithm which rewards
uncertainty — given two actions with the same mean reward, the action which
has been tried fewer times will be chosen.
Our archive is in constant flux, and so the true reward of each mix of
operators changes from generation to generation. To handle the non-stationary
nature of the problem we use a sliding window (Garivier and Moulines, 2011),
basing our predictions only on the most recent generations.
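A minimal sketch of this sliding-window UCB1 selection is given below, with
rewards recorded per action; the class name and bookkeeping are assumptions,
and the window default of 1000 is taken from the hyperparameter table in the
supplementary material. The reward for an action would be, as described
above, proportional to the number of children that earned a place in the
archive.

```python
import math
from collections import deque

class SlidingWindowUCB1:
    """UCB1 over a sliding window of recent (action, reward) outcomes."""

    def __init__(self, n_actions, window=1000):
        self.n_actions = n_actions
        self.history = deque(maxlen=window)   # oldest outcomes fall out

    def select(self):
        counts = [0] * self.n_actions
        totals = [0.0] * self.n_actions
        for action, reward in self.history:
            counts[action] += 1
            totals[action] += reward
        t = max(1, len(self.history))         # total actions in the window

        def ucb(a):
            if counts[a] == 0:                # untried actions are chosen first
                return math.inf
            return totals[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])

        return max(range(self.n_actions), key=ucb)

    def update(self, action, reward):
        self.history.append((action, reward))
```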
Algorithm 2 DDE-Elites
1:function DDE-Elites($fitness()$, $\mathcal{X}_{initial}$)
2: $\mathcal{X}\leftarrow\mathcal{X}_{initial}$
3: $\mathcal{V}$: Possible Variation Operator Probabilities (vector)
4: (e.g., [0,0.5,0.5], [0.8,0.0,0.2], [1.0,0.0,0.0] for [xover,line,iso])
5: successes $\leftarrow zeros(len(\mathcal{V}))$ $\triangleright$ # successes
for each option
6: selection $\leftarrow zeros(len(\mathcal{V}))$ $\triangleright$ #
selections for each option
7: for iter = $1\to I$ do
8: — Train VAE on Current Archive —
9: VAE.Train ($\mathcal{X}$)
10: — Choose Variation Based on UCB1 —
11: $i\leftarrow\arg\max_{s}\left(\frac{\text{successes}[s]}{\text{selection}[s]}+\sqrt{\frac{2\ln(\text{sum}(\text{selection}))}{\text{selection}[s]}}\right)$
12: — Run MAP-Elites Using Chosen Variation —
13: $variation()\leftarrow\mathcal{V}[i]$
14: $\mathcal{X}^{\prime}\leftarrow$ MAP-Elites$(fitness(),variation(),\mathcal{X})$
15: — Track Performance of Chosen Variation —
16: $selection[i]\leftarrow selection[i]+1$
17: $successes[i]\leftarrow
successes[i]+nImproved(\mathcal{X^{\prime}},\mathcal{X})$
18: end for
19: DDE $\leftarrow$ VAE.Decode()
20: return $\mathcal{X}$, DDE
21:end function
1:function Isometric Mutation($\mathcal{X}$)
2: $\mathbf{x}\leftarrow random\\_selection(\mathcal{X})$
3: return $\mathbf{x}+\sigma\mathcal{N}(0,\mathbf{I})$
4:end function
1:function Line Mutation($\mathcal{X}$)
2: $\mathbf{x},\mathbf{y}\leftarrow random\\_selection(\mathcal{X})$
3: return
$\mathbf{x}+\sigma_{1}\mathcal{N}(0,\mathbf{I})+\sigma_{2}(\mathbf{x}-\mathbf{y})\mathcal{N}(0,1)$
4:end function
1:function Reconstructive Crossover($\mathcal{X}$)
2: $\mathbf{x}\leftarrow random\\_selection(\mathcal{X})$
3: $\mathbf{y~{}}\leftarrow VAE.Decode(VAE.Encode(\mathbf{x}))$
$\triangleright$ VAE Reconstruction
4: return $(\mathbf{x}+\mathbf{y})/2$
5:end function
Figure 3. Archive Illumination
Archive illumination performance of MAP-Elites with different variation
operators: standard isometric mutation (MAP-Elites), line mutation (ME-Line),
reconstructive crossover (DDE-XOver) and DDE-Elites, which uses the UCB1
bandit algorithm to choose between the three at every generation. We measure
fitness as the mean fitness of all solutions in the archive; coverage as the
fraction of behavior space bins which contain solutions. Results over 20
replicates with lines indicating medians and quartile bounds shaded. The
median of DDE-Elites, our approach, is additionally noted with black dots. All
final results are significantly different ($p<0.01$ Mann-Whitney U) in fitness
and coverage. Progress is shown in evaluations (0 to 1 million); a batch size
of 100 evaluations per generation was used, so this scale corresponds to
generations from 0 to 10,000.
## 4\. Experiments
#### Planar Arm Inverse Kinematics
(See Figure 5 for a visualization of this domain.)
We demonstrate the effectiveness of DDEs and DDE-Elites on the inverse
kinematics (IK) problem of a 2D robot arm, a common QD benchmark problem
(Cully and Demiris, 2018; Vassiliades and Mouret, 2018). Given target
coordinates, a configuration of joint angles should be found to place the end
effector at the target. To solve this task, a discretized behavior space is
defined over the x,y plane and MAP-Elites finds a configuration of joint
angles which places the end effector in each bin. The location of the end
effector is derived for an arm with $n$ joints with angles $\mathbf{y}$ using the
forward kinematics equation:
$\mathbf{b}(\mathbf{y})=\left[\begin{array}[]{c}{l_{1}\cos(y_{1})+l_{2}\cos(y_{1}+y_{2})+\cdots+l_{n}\cos(y_{1}+\cdots+y_{n})}\\\
{l_{1}\sin(y_{1})+l_{2}\sin(y_{1}+y_{2})+\cdots+l_{n}\sin(y_{1}+\cdots+y_{n})}\end{array}\right]$
There are many solutions to this IK problem, but solutions with lower joint
variance are preferred to allow for smoother transitions between
configurations. We define fitness as the negative joint variance:
$-\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\mu)^{2}$ where $\mu=\frac{1}{n}\sum_{i=1}^{n}y_{i}$.
To summarize: the phenotype is the angle of each joint, the behavior is the
x,y coordinates of the end effector, and the fitness the negative variance of
the joint angles. The difficulty of the problem can be easily scaled up by
increasing the number of joints in the arm: we solve this task with 20, 200,
and 1000 joints. When a DDE is used 10 latent dimensions are used for the 20D
arm, and 32 dimensions for the 200 and 1000D arms. The same archive structure
is used for all domains. A unit circle is divided into 1950 bins, with each
bin defined by the Voronoi cell (Vassiliades et al., 2017) with centers placed
in a ring formation (see supplementary material for a visualization of this
structure).
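For reference, a short sketch of the behavior and fitness functions defined
above; equal link lengths summing to one (so the arm's reach fits the unit
circle) are an assumption of this sketch.

```python
import numpy as np

def arm_behavior(y, lengths=None):
    """End effector (x, y) of a planar arm via the forward kinematics above."""
    y = np.asarray(y, dtype=float)
    if lengths is None:                        # assumed: equal links, total reach 1
        lengths = np.full(len(y), 1.0 / len(y))
    cumulative = np.cumsum(y)                  # y1, y1+y2, ..., y1+...+yn
    return np.array([np.sum(lengths * np.cos(cumulative)),
                     np.sum(lengths * np.sin(cumulative))])

def arm_fitness(y):
    """Negative joint variance: smoother configurations score higher."""
    return -np.var(y)
```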
Figure 4. Archive Recreation with Data-Driven Encoding
Performance of MAP-Elites algorithm when run with direct or data-driven
encoding. When using the direct encoding, MAP-Elites was given one order of
magnitude more evaluations (note logarithmic scale of evaluations). Fitness is
measured as the mean fitness of all solutions in the archive, coverage as the
fraction of behavior space bins which contain solutions. Results over 50
replicates with dotted lines indicating medians and quartile bounds shaded.
### 4.1. Archive Illumination
We first demonstrate the ability of DDE-Elites to scale up illumination to
high-dimensional problems. The performance of DDE-Elites is compared to three
algorithmic variants: the canonical MAP-Elites algorithm using isometric
mutation (MAP-Elites); MAP-Elites using line, or directional, mutation (ME-
Line); and MAP-Elites using the reconstructive crossover (DDE-XOver). Our
proposed approach DDE-Elites uses all operators at a ratio determined by the
UCB1 bandit algorithm. These treatments are summarized in Table 1.
 | Isometric Mutation | Line Mutation | Reconstructive Crossover
---|---|---|---
MAP-Elites | X | |
ME-Line | | X |
DDE-XOver | | | X
DDE-Elites | X | X | X
Table 1. Algorithm variants. DDE-Elites is our approach.
These variants are compared based on the quality of the archive at each
generation (Figure 3). Archives are judged based on two metrics: (1) coverage,
the number of bins filled, and (2) performance, the mean fitness of
solutions. (Sixty-four core machines were used to evaluate 100 individuals in
parallel, requiring $\sim$0.2s, $\sim$0.8s, and $\sim$1.6s for the 20D, 200D,
and 1000D arms respectively. In every case the VAE required $\sim$2.4s to
train on a single CPU core.)
In the 20-dimensional case ME-Line quickly fills the map with high performing
solutions. In only one hundred thousand evaluations ME-Line creates an
archive unmatched by MAP-Elites even after one million evaluations. When only
the reconstructive crossover operator is used, despite promising early
progress, a chronic lack of exploration results in archives which are worse
than the standard MAP-Elites. DDE-Elites, with access to all operators,
explores as quickly as ME-Line and creates archives of similar quality.
When the dimensionality of the arm is scaled up to 200D, we see the
convergence rate of ME-Line slow down considerably. While still reaching high
levels of performance it does so only after one million evaluations, ten
times the evaluations required in the 20D case — suggesting that the
effectiveness of ME-Line scales linearly with the dimensionality of the
problem. In contrast DDE-Elites is barely affected by a ten-fold increase in
parameters — exploration is only slightly slowed, and high-performing
solutions are found from the very earliest iterations. The effects of scaling
can be observed even more clearly in the 1000D case: ME-Line illuminates the
archive only very slowly, while the performance of DDE-Elites is marked by the
same burst of exploration and consistently high fitness solutions that
characterized its performance in lower dimensions.
The line mutation operator is clearly able to leverage the similarities in
high performing solutions across the archive — in every case performing far
better than the isometric mutation operator. The mechanism for doing this,
adjusting the range of parameter mutations, does not appear to scale well
enough to handle very high dimensional problems. The reconstructive crossover
operator is able to rapidly find high-performing solutions even in high-
dimensional spaces, but is poor at exploring. Search with reconstructive
crossover is confined to the distribution of genes that already exist in the
archive; if used exclusively, that distribution of genes is limited to the
initial population. By combining these operators — expanding the range of
genes in the archive with mutation, and spreading high performing genes with
reconstructive crossover — DDE-Elites is able to create high-performing
archives even in high-dimensional problems.
Figure 5. Optimization with Direct and Data-Driven Encodings
CMA-ES is given a set budget to find a solution with a target behavior, and
searches with either a direct encoding or a DDE.
Left: Example solutions for target matching with the direct and data driven
encodings. End effectors in yellow, targets in red.
Top: Optimization over time of median distance (dotted line) to the 18 targets
over 50 replicates (quartiles shaded).
Bottom: The final distance to the targets, and a characteristic of the
solution. These characteristics were not optimized by CMA-ES, but optimized
during the creation of the DDE, biasing the solutions produced.
### 4.2. Archive Recreation
DDE-Elites is as much a method of optimizing representations as solutions. By
learning a representation from the archive, we create an encoding that is
biased towards high performance and has a range of expression matching the
defined behavior space. In these experiments, our DDE encodes smooth joint
configurations which place an arm’s end effector anywhere in its reach. To
demonstrate that DDE-Elites does more than guide search, and in fact learns a
representation, we search the space again, using the found DDE in place of the
direct encoding.
We run the standard MAP-Elites algorithm, with isometric mutation only, using
a learned DDE (the decoder network of the VAE found in the highest coverage
replicate of DDE-Elites) acting as our genome. In the 20D arm this DDE has 10
parameters, in the 200D and 1000D arms the DDE has 32 parameters. No previous
solutions are maintained, only the trained DDE. For reference we compare to
the MAP-Elites algorithm using the direct encoding. An order of magnitude
fewer evaluations were budgeted when using the DDE.
In every case the DDE far outperforms the direct encoding, reaching the same
levels of fitness and coverage with several orders of magnitude fewer
evaluations (Figure 4). The DDE can express the same range of solutions as
were found in the original archive, and finds them rapidly. Archives were
recreated after only 10,000 evaluations — a rate of about 5 evaluations per
bin (10,000 individuals / 1950 bins $\approx$ 5 evaluations per bin discovered).
The found solutions are also high performing.
Such improvement cannot be explained away by the decrease in dimensionality of
the search. In both low and high dimensional cases the bias toward high
performance is also apparent: the mean fitness curve is nearly flat at the
optima, indicating that when new solutions are added to the map they are
already near optimal. The contrast with the direct encoding is stark: where
the direct encoding requires considerable effort to search for good
solutions, the DDE finds little else. DDE-Elites not only produces solutions,
but learns a domain-specific representation.
### 4.3. Optimization with Learned Encodings
Beyond its place in the DDE-Elites optimization loop, the produced DDE is a
powerful representation with high expressivity and built-in biases. Though
created by MAP-Elites, the DDE is not tied to it. Once discovered, a DDE can
be used as a representation for any black box optimization algorithm.
We illustrate this generality by again solving the arm inverse
kinematics problem with the black-box optimizer CMA-ES (Hansen and Ostermeier,
2001). A set of target positions for the end effector is defined (Figure 5,
left), and CMA-ES is used to find a joint configuration which reaches each
target. In one case optimization is performed using the DDE; in the other the
direct encoding is used.
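A minimal sketch of this setup with the third-party `cma` package is shown
below; the `decode` argument (the trained DDE's decoder), the 32-dimensional
latent space, and the equal-link-length kinematics are assumptions carried
over from the earlier sketches.

```python
import numpy as np
import cma  # third-party CMA-ES package (pip install cma)

def end_effector(angles):
    """Forward kinematics, assuming equal link lengths with total reach 1."""
    cum = np.cumsum(angles)
    link = 1.0 / len(angles)
    return np.array([np.sum(link * np.cos(cum)), np.sum(link * np.sin(cum))])

def solve_target(decode, target, latent_dim=32):
    """Find a genome whose end effector reaches `target`, searching the
    DDE's latent space; `decode` is the decoder of a trained DDE."""
    objective = lambda z: float(
        np.linalg.norm(end_effector(decode(np.asarray(z))) - np.asarray(target)))
    es = cma.CMAEvolutionStrategy(latent_dim * [0.0], 0.5)  # start at latent origin
    es.optimize(objective)
    return decode(np.asarray(es.result.xbest))
```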
When optimizing with the DDE, CMA-ES quickly finds solutions to the target
hitting problems with a precision never matched with the direct encoding
(Figure 5, top). Moreover, a bias for how the problem is solved is built into
the representation (Figure 5, bottom). As the DDE was trained only on
solutions with low joint variance, this same property appears in the
solutions produced by CMA-ES with the DDE — even without searching for it.
With the DDE, CMA-ES not only finds solutions to the IK problem; the built-in
priors of the DDE ensure we find the kind of solutions we want.
## 5\. Discussion
Learning representations by combining quality diversity (here, MAP-Elites) and
generative models (here, a VAE) opens promising research avenues for domains
in which optimizations of the same cost function are launched continuously.
This is, for example, the case of Model Predictive Control (Mayne et al.,
2000), in which the sequence of actions for the next seconds is optimized at
every time-step of the control loop, or the case of shape optimization in
interactive design tools (Hoyer et al., 2019; Bendsøe and Sigmund, 1995), in
which each modification by the user requires a novel optimization.
In preliminary experiments, we searched for an encoding to describe action
sequences for a walking robot. The results show that using MAP-Elites to
generate a diversity of sequences, then using a VAE to learn a representation
leads to an encoding that can accelerate future optimizations by several
orders of magnitude. Nevertheless, using the representation during
optimization, as described in this paper, did not accelerate the quality
diversity optimization as much as in the high-dimensional arm used here. One
hypothesis is that the regularities in action sequences are harder to
recognize than in the arm experiments, especially at the beginning of the
process. For instance, it might help to use an auto-encoder that is especially
designed for sequences (Vaswani et al., 2017; Co-Reyes et al., 2018).
For other tasks, appropriate generative models could be explored, for example
convolutional models for tasks with spatial correlations (Salimans et al.,
2015). In addition, though the latent spaces created by VAEs are easier to
navigate than those created by normal autoencoders, even better models offer
opportunities for further improvement. Much work has been done to create
VAEs which have even better organized latent spaces (Higgins et al., 2017;
Burgess et al., 2018; Chen et al., 2018; Kim and Mnih, 2018), ideally with
each dimension responsible for a single phenotypic feature such as the
lighting or color of an image.
A second research avenue is to improve the bandit algorithm used to balance
between operators. In theory, it should ensure that adding new operators can
only aid optimization, since useless or detrimental operators would rarely be
selected. However, we observed that it is not always effective: in some cases,
using only the line mutation outperformed DDE-Elites, whereas with a perfect
bandit DDE-Elites could have reverted to using only line mutation. Our hypothesis is
that this is a sign that “successes” — child solutions which discover new bins
or improve on existing solutions — is not the perfect measure of utility for a
QD algorithm. In the case of our experiments, it may be that reconstructive
crossover consistently improves solutions, but may only do so slightly.
According to the “success” metric, a tiny improvement is worth the same as a
large one. To best utilize the bandit, other methods of judging performance in
QD algorithms should be explored.
Beyond performance advantages, for both the current and future optimizations,
these “disentangled” representations offer even more interesting
opportunities. Reducing the dimensionality of the search space into meaningful
components would allow rapid model-based optimization of single solutions
(Shahriari et al., 2015), or entire archives (Gaier et al., 2018). Engineers
could interactively explore and understand such encodings, laying bare the
underlying properties responsible for performance and variation — and so from
encodings receive, rather than provide, insight and domain knowledge.
## Acknowledgements
This work received funding from the European Research Council (ERC) under the
EU Horizon 2020 research and innovation programme (grant agreement number
637972, project ”ResiBots”) and the German Federal Ministry of Education and
Research (BMBF) under the Forschung an Fachhochschulen mit Unternehmen
programme (grant agreement number 03FH012PX5, project ”Aeromat”).
## Source Code
The source code used to produce the results in this paper is available at
https://github.com/resibots/2020_gaier_gecco
## References
* Adams et al. (2000) Mark D Adams, Susan E Celniker, Robert A Holt, Cheryl A Evans, Jeannine D Gocayne, Peter G Amanatides, Steven E Scherer, Peter W Li, Roger A Hoskins, Richard F Galle, et al. 2000\. The genome sequence of Drosophila melanogaster. Science.
* Altenberg (1994) Lee Altenberg. 1994\. Evolving better representations through selective genome growth. In First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence. IEEE.
* Auer et al. (2002) Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. 2002\. Finite-time analysis of the multiarmed bandit problem. Machine learning.
* Bendsøe and Sigmund (1995) Martin P Bendsøe and Ole Sigmund. 1995. Optimization of structural topology, shape, and material. Vol. 414. Springer.
* Bongard and Pfeifer (2003) Josh C Bongard and Rolf Pfeifer. 2003. Evolving complete agents using artificial ontogeny. In Morpho-functional Machines: The new species. Springer, 237–258.
* Bontrager et al. (2018a) Philip Bontrager, Wending Lin, Julian Togelius, and Sebastian Risi. 2018a. Deep interactive evolution. In International Conference on Computational Intelligence in Music, Sound, Art and Design. Springer.
* Bontrager et al. (2018b) Philip Bontrager, Aditi Roy, Julian Togelius, Nasir Memon, and Arun Ross. 2018b. Deepmasterprints: Generating masterprints for dictionary attacks via latent variable evolution. In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE, 1–9.
* Burgess et al. (2018) Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. 2018. Understanding disentangling in beta-VAE. arXiv preprint arXiv:1804.03599.
* Chen et al. (2018) Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. 2018\. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems.
* Co-Reyes et al. (2018) John D Co-Reyes, YuXuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, and Sergey Levine. 2018\. Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings. In Proceedings of the International Conference on Machine Learning (ICML).
* Cully et al. (2015) Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret. 2015. Robots that can adapt like animals. Nature.
* Cully and Demiris (2018) Antoine Cully and Yiannis Demiris. 2018. Quality and diversity optimization: A unifying modular framework. IEEE Trans. on Evolutionary Computation.
* DaCosta et al. (2008) Luis DaCosta, Alvaro Fialho, Marc Schoenauer, and Michèle Sebag. 2008. Adaptive operator selection with dynamic multi-armed bandits. In 10th annual conference on Genetic and evolutionary computation.
* De Jong (2007) Kenneth De Jong. 2007\. Parameter setting in EAs: a 30 year perspective. In Parameter setting in evolutionary algorithms. Springer.
* Doncieux and Meyer (2004) Stephane Doncieux and Jean-Arcady Meyer. 2004. Evolving modular neural networks to solve challenging control problems. In Fourth International ICSC Symposium on engineering of intelligent systems (EIS 2004). ICSC Academic Press Canada.
* Durr et al. (2010) Peter Durr, Dario Floreano, and Claudio Mattiussi. 2010\. Genetic representation and evolvability of modular neural controllers. IEEE Computational Intelligence Magazine.
* Elsken et al. (2019) Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019\. Neural Architecture Search: A Survey. Journal of Machine Learning Research.
* Fontaine et al. (2019) Matthew C Fontaine, Scott Lee, LB Soros, Fernando De Mesentier Silva, Julian Togelius, and Amy K Hoover. 2019. Mapping hearthstone deck spaces through MAP-elites with sliding boundaries. In Proceedings of The Genetic and Evolutionary Computation Conference.
* Gaier and Asteroth (2014) Adam Gaier and Alexander Asteroth. 2014. Evolution of optimal control for energy-efficient transport. In IEEE Intelligent Vehicles Symposium Proceedings.
* Gaier et al. (2017) Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. 2017\. Aerodynamic design exploration through surrogate-assisted illumination. In 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference.
* Gaier et al. (2018) Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. 2018\. Data-efficient design exploration through surrogate-assisted illumination. Evolutionary computation.
* Gaier and Ha (2019) Adam Gaier and David Ha. 2019. Weight agnostic neural networks. In Advances in Neural Information Processing Systems.
* Garivier and Moulines (2011) Aurélien Garivier and Eric Moulines. 2011. On upper-confidence bound policies for switching bandit problems. In International Conference on Algorithmic Learning Theory. Springer.
* Goldberg et al. (1989) David E Goldberg, Bradley Korb, Kalyanmoy Deb, et al. 1989\. Messy genetic algorithms: Motivation, analysis, and first results. Complex systems.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014\. Generative adversarial nets. In Advances in neural information processing systems.
* Hansen and Ostermeier (2001) Nikolaus Hansen and Andreas Ostermeier. 2001. Completely derandomized self-adaptation in evolution strategies. Evolutionary computation.
* Higgins et al. (2017) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. International Conference on Machine Learning.
* Hinton and Salakhutdinov (2006) Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science.
* Hoyer et al. (2019) Stephan Hoyer, Jascha Sohl-Dickstein, and Sam Greydanus. 2019\. Neural reparameterization improves structural optimization. arXiv preprint arXiv:1909.04240.
* Kim and Mnih (2018) Hyunjik Kim and Andriy Mnih. 2018. Disentangling by Factorising. In International Conference on Machine Learning.
* Kingma and Welling (2014) Diederik P. Kingma and Max Welling. 2014. Auto-Encoding Variational Bayes. In International Conference on Learning Representation (ICLR), Yoshua Bengio and Yann LeCun (Eds.).
* Koza ([n. d.]) John R Koza. [n. d.]. Genetic programming: A paradigm for genetically breeding populations of computer programs to solve problems. Stanford University, Department of Computer Science Stanford, CA.
* Kullback and Leibler (1951) Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics.
* Mayne et al. (2000) David Q Mayne, James B Rawlings, Christopher V Rao, and Pierre OM Scokaert. 2000. Constrained model predictive control: Stability and optimality. Automatica.
* Miikkulainen et al. (2019) Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, et al. 2019\. Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing. Elsevier.
* Moreno et al. (2018) Matthew Andres Moreno, Wolfgang Banzhaf, and Charles Ofria. 2018\. Learning an evolvable genotype-phenotype mapping. In Genetic and Evolutionary Computation Conference. ACM.
* Mouret and Clune (2015) Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909.
* Mouret and Doncieux (2008) Jean-Baptiste Mouret and Stéphane Doncieux. 2008. MENNAG: a modular, regular and hierarchical encoding for neural-networks based on attribute grammars. Evolutionary Intelligence.
* Olhofer et al. (2001) Markus Olhofer, Yaochu Jin, and Bernhard Sendhoff. 2001\. Adaptive encoding for aerodynamic shape optimization using evolution strategies. In 2001 Congress on Evolutionary Computation. IEEE.
* Pugh et al. (2016) Justin K Pugh, Lisa B Soros, and Kenneth O Stanley. 2016\. Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI.
* Rothlauf (2006) Franz Rothlauf. 2006\. Representations for genetic and evolutionary algorithms. In Representations for Genetic and Evolutionary Algorithms. Springer.
* Salimans et al. (2015) Tim Salimans, Diederik Kingma, and Max Welling. 2015\. Markov chain monte carlo and variational inference: Bridging the gap. In International Conference on Machine Learning. 1218–1226.
* Scott and Bassett (2015) Eric O Scott and Jeffrey K Bassett. 2015. Learning genetic representations for classes of real-valued optimization problems. In Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation. ACM.
* Scott and De Jong (2018) Eric O Scott and Kenneth A De Jong. 2018. Toward learning neural network encodings for continuous optimization problems. In Genetic and Evolutionary Computation Conference Companion. ACM.
* Shahriari et al. (2015) Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. 2015. Taking the human out of the loop: A review of Bayesian optimization. Proc. IEEE (2015).
* Simões et al. (2014) Luís F Simões, Dario Izzo, Evert Haasdijk, and Agoston Endre Eiben. 2014. Self-adaptive genotype-phenotype maps: neural networks as a meta-representation. In International Conference on Parallel Problem Solving from Nature. Springer.
* Stanley (2007) Kenneth O Stanley. 2007\. Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines.
* Stanley et al. (2009) Kenneth O Stanley, David B D’Ambrosio, and Jason Gauci. 2009\. A hypercube-based encoding for evolving large-scale neural networks. Artificial life.
* Stanley and Miikkulainen (2002) Kenneth O Stanley and R Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary computation.
* Vassiliades et al. (2017) Vassilis Vassiliades, Konstantinos Chatzilygeroudis, and Jean-Baptiste Mouret. 2017. Using centroidal voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. IEEE Transactions on Evolutionary Computation 22, 4, 623–630.
* Vassiliades and Mouret (2018) Vassilis Vassiliades and Jean-Baptiste Mouret. 2018. Discovering the elite hypervolume by leveraging interspecies correlation. In Genetic and Evolutionary Computation Conference.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998–6008.
* Volz et al. (2018) Vanessa Volz, Jacob Schrum, Jialin Liu, Simon M Lucas, Adam Smith, and Sebastian Risi. 2018\. Evolving mario levels in the latent space of a deep convolutional generative adversarial network. In Genetic and Evolutionary Computation Conference. ACM.
* Wold et al. (1987) Svante Wold, Kim Esbensen, and Paul Geladi. 1987\. Principal component analysis. Chemometrics and intelligent laboratory systems.
## Supplemental Material
### A.. Example Maps
Arm20 | Arm200 | Arm1000
---|---|---
Table 2. Example Maps
Final archives colored by fitness value for each cell for each domain. In the
20D Arm both MAP-Elites and DDE-Elites converge on similar optimal solutions.
In the 200D and 1000D Arm MAP-Elites is unable to reach the levels of
performance of DDE-Elites in any region.
### B.. Hyperparameters of DDE Experiments
Hyperparameter | Value
---|---
Isometric Mutation Strength | 0.003
Line Mutation Strength | 0.1
Batch Size | 100
Bandit Options | [0.00:0.00:1.00], [0.25:0.00:0.75], [0.50:0.00:0.50], [0.75:0.00:0.25], [1.00:0.00:0.00], [0.00:0.25:0.75], [0.00:0.50:0.50], [0.00:0.75:0.25], [0.00:1.00:0.00]
Bandit Window Length | 1000
Generations per VAE Training | 1
Epochs per VAE Training | 5
Mutation Strength when Searching DDE | 0.15
Latent Vector Length [Arm20] | 10
Latent Vector Length [Arm200] | 32
Latent Vector Length [Arm1000] | 32
Vuong M. Ngo
Nhien-An Le-Khac
M-Tahar Kechadi
Ho Chi Minh City Open University, HCMC, Vietnam
University College Dublin, Belfield, Dublin 4, Ireland
# Data Warehouse and Decision Support on
Integrated Crop Big Data
###### Abstract
In recent years, precision agriculture has become very popular. The
introduction of modern information and communication technologies for
collecting and processing agricultural data is revolutionising agricultural
practices. This started a while ago (early 20th century) and is driven by the
low cost of collecting data about everything: from information on fields such
as seed, soil, fertiliser and pest, to weather data, drone and satellite
images. In particular, agricultural data mining today is considered a Big
Data application in terms of volume, variety, velocity and veracity. Hence it
leads to challenges in processing vast amounts of complex and diverse
information to extract useful knowledge for farmers, agronomists, and other
businesses. It is a key foundation for establishing a crop intelligence
platform, which will enable efficient resource management and high quality
agronomy decision making and recommendations. In this paper, we designed and
characterised by its (1) flexible schema; (2) data integration from real
agricultural multi datasets; (3) data science and business intelligent
support; (4) high performance; (5) high storage; (6) security; (7) governance
and monitoring; (8) consistency, availability and partition tolerant; (9)
cloud compatibility. We also evaluate the performance of ADW and present some
complex queries to extract and return necessary knowledge about crop
management.
Data warehouse, decision support, crop Big Data, smart agriculture.
Reference to this paper should be made as follows: Ngo, V.M., Le-Khac, N.A. and Kechadi,
M.T. (2020) ‘Data Warehouse and Decision Support on Integrated Crop Big Data’,
Int. J. Business Process Integration and Management, Vol. 10, No. 1, pp.
17–28.
Vuong M. Ngo received the B.E, M.E and PhD degrees in computer science at HCMC
University of Technology in 2004, 2007 and 2013 respectively. He is currently
a Senior Researcher at UCD and HCMC Open University. His research interests
include information retrieval, sentiment analysis, data mining, graph matching
and data
Nhien-An Le-Khac is currently a Lecturer at the School of Computer Science,
UCD and a Programme Director of MSc programme in forensic computing and
cybercrime investigation. He obtained his PhD in computer science in 2006 at
the Institut National Polytechnique Grenoble, France. His research interest
spans the area of cybersecurity and digital forensics, data mining/distributed
data mining for security, grid and high performance computing.
M-Tahar Kechadi was awarded PhD and Master degrees in computer science from
University of Lille 1, France. He joined the UCD School of Computer Science in
1999. He is currently Professor of Computer Science at UCD. His research
interests span the areas of data mining, data analytics, distributed data
mining, heterogeneous distributed systems, grid and cloud Computing,
cybersecurity, and digital forensics. He is a Principal Investigator at
Insight Centre for Data Analytics and CONSUS project. He is a member of IEEE
and ACM.
## 1 Introduction
Annual world cereal productions were $2,608$ million tons and $2,595$ million
tons in $2017$ and $2018$, respectively (USDA report, 2018; FAO-CSDB report,
2018). However, around $124$ million people in $51$ countries faced food
crises and food insecurity (FAO-FSIN report, 2018). According to the United
Nations (UN document, 2017), we need a $60\%$ increase in cereal production
to meet the needs of $9.8$ billion people by $2050$. To satisfy this huge
increase in demand for food, crop yields must be significantly increased using
modern farming approaches, such as smart farming also called precision
agriculture. As highlighted in the European Commission report (EC report,
2016), precision agriculture is vitally important for the future and can make
a significant contribution to food security and safety.
The precision agriculture’s current mission is to use the decision-support
system (DSS) based on Big Data approaches to provide precise information for
more control of waste and farming efficiency, such as soil nutrient (Rogovska
and et al., 2019), early warning (Rembold and et al., 2019), forecasting
(Bendre and et al., 2015), irrigation systems (Huang and et al., 2013),
evapotranspiration prediction (Paredes and et al., 2014), soil and herbicide,
insecticide optimisation (Ngo and Kechadi, 2020), awareness (Lokers and et
al., 2016), supply chain (Protopop and Shanoyan, 2016) and financial services
(Ruan and et al., 2019). Normally, the DSSs implement a knowledge discovery
process also called data mining process, which consists of data collection and
data modelling, data warehousing, data analysis (using machine learning or
statistical techniques), and knowledge deployment (Dicks and et al., 2014).
Hence, designing and implementing an efficient agricultural data warehouse
(ADW) is one of the key steps of this process, as it defines a uniform data
representation through its schema model and stores the derived datasets so
that they can be analysed to extract useful knowledge. However, currently,
this step has not been given much attention. Therefore, there are very few reports
in the literature that focus on the design of efficient ADWs with the view to
enable Agricultural Big Data analytics and mining. The design of large scale
ADWs is very challenging because the agricultural data is spatial, temporal,
complex, heterogeneous, non-standardised, high dimensional, collected from
multi-sources, and very large. In particular, it has all the features of Big
Data; volume, variety, velocity and veracity. Moreover, the precision
agriculture system can be used by different kinds of users at the same time,
for instance by farmers, policymakers, agronomists, and so on. Every type of
user needs to analyse different information sets, thus requiring specific
analytics.
Unlike other domains (health-care, finance, etc.), the data and its
warehousing in precision agriculture are unique. This is because there are
very complex relationships between the agricultural data dimensions. The data
sources are very diverse and of varying quality. Precision agriculture (PA)
warehousing involves many decision-making processes, each needing different
levels of data access and different kinds of analysis. Finally, there are
many stakeholders involved in data ownership and exploitation. So, the data
has a significant number of uncertainties. For example, the
quality of data collected by farmers depends directly on their knowledge,
routines and frequency of information recording, and support tools, etc. All
these issues make PA data unique when it comes to its storage, access, and
analysis. These issues may exist in other domains, but not at the same scale
as in agricultural practices.
In this research, we firstly analyse real-world agricultural Big Data to build
an effective constellation schema. From this schema, some simple questions
can be easily answered directly from the modelled data. These questions
include: (1) For a given field, what kind of crops are suitable to grow? (2)
Which companies can purchase a specific crop with the highest price in the
past season? (3) List the history of soil texture and applied fertilisers for
a given field; (4) List costs of production for wheat and barley in the last 5
years, and so on. Secondly, the proposed ADW has the main features and
characteristics of a Big Data Warehouse (BDW). These are (1) high storage
capacity, high performance and cloud computing compatibility; (2) flexible
schema and integrated storage structure; (3) data ingestion, monitoring, and
security to deal with the data veracity. Besides, an experimental evaluation
is conducted to study the performance of ADW storage.
The rest of this paper is organised as follows: in the next section, we
review related work on decision support systems and data warehouses in
agriculture. In Sections 3, 4 and 5, we present the Big Data aspects of PA,
our ADW architecture and its modules. In Sections 6, 7, 8 and 9, the quality
criteria, implementation, performance analysis and decision-making
applications of the proposed ADW are presented respectively. Section 10 gives
some concluding remarks and future research directions. Finally, a concrete
example about the ADW and its operational average run-times are shown in the
appendix.
## 2 Related Work
In precision agriculture, DSSs are designed to support different stakeholders
such as farmers, advisers and policymakers to optimise resources, farms’
management and improve business practices (Gutierreza and et al., 2019). For
instance, DSSs were built to 1) manage microbial pollution risks in dairy
farming (Oliver and et al., 2017); 2) analyse nitrogen fertilisation from
satellite images (Lundstrom and Lindblom, 2018); 3) control pest and disease
under uncertainty in climate conditions (Devitt and et al., 2017); 4) manage
drip irrigation and its schedule (Friedman and et al., 2016); 5) predict and
adopt climate risks (Han and et al., 2017). However, the datasets that were
used in the mentioned studies are small. Besides, they focused on using
visualisation techniques to assist end-users understand and interpret their
data.
Recently, many papers have been published on how to exploit intelligent
algorithms on sensor data to improve agricultural economics Pantazi (2016),
Park and et al. (2016), Hafezalkotob and et al. (2018), Udiasa and et al.
(2018) and Rupnik and et al. (2019). In Pantazi (2016), the authors predicted
crop yield by using self-organising-maps; namely supervised Kohonen networks,
counter-propagation artificial networks and XY-fusion. In Park and et al.
(2016), the authors predicted drought conditions by using three rule-based machine
learning; namely random forest, boosted regression trees, and Cubist. To
select the best olive harvesting machine, the authors in Hafezalkotob and et
al. (2018) applied the target-based techniques on the main criteria, which are
cost, vibration, efficiency, suitability, damage, automation, work capacity,
ergonomics, and safety. To provide optimal management of nutrients and water,
the paper Udiasa and et al. (2018) exploited the multi-objective genetic
algorithm to implement an E-Water system. This system enhanced food crop
production at river basin level. Finally, in Rupnik and et al. (2019) the
authors predicted pest population dynamics by using time series clustering and
structural change detection which detected groups of different pest species.
However, the proposed solutions are not scalable enough to handle agricultural
Big Data; they present weaknesses in one of the following aspects: data
integration, data schema, storage capacity, security and performance.
From a Big Data point of view, the papers Kamilaris and et al. (2018) and
Schnase and et al. (2017) have proposed “smart agricultural frameworks”. In
Kamilaris and et al. (2018), the authors used Hive to store and analyse sensor
data about land, water and biodiversity which can help increase food
production with less environmental impact. In Schnase and et al. (2017), the
authors moved toward a notion of climate analytics-as-a-service, by building a
high-performance analytics and scalable data management platform, which is
based on modern cloud infrastructures, such as Amazon web services, Hadoop,
and Cloudera. However, the two papers did not discuss how to build and
implement a DW for precision agriculture.
The proposed approach, inspired by Schulze and et al. (2007), Schuetz and et
al. (2018), Nilakanta and et al. (2008) and Ngo and et al. (2018), introduces
ways of building an agricultural data warehouse (ADW). In Schulze and et al.
(2007), the authors extended the entity-relationship concept to model
operational and analytical data, called the multi-dimensional
entity-relationship model. They also introduced new representation elements
and showed how it can be extended to an analytical schema. In Schuetz and et
al. (2018), a relational database and
an RDF triple store were proposed to model the overall datasets. The data is
loaded into the DW in RDF format, and cached in the RDF triple store before
being transformed into relational format. The actual data used for analysis
was contained in the relational database. However, as the schemas used in
Schulze and et al. (2007) and Schuetz and et al. (2018) were based on entity-
relationship models, they cannot deal with high-performance, which is the key
feature of a data warehouse.
In Nilakanta and et al. (2008), a star schema model was used. All data marts
created by the star schemas are connected via some common dimension tables.
However, a star schema is not enough to present complex agricultural
information and it is difficult to create new data marts for data analytics.
The number of dimensions of the DW proposed in Nilakanta and et al. (2008) is
very small; only 3 dimensions: Species, Location, and Time. Moreover, the DW
concerns livestock farming. Overcoming disadvantages of the star schema, the
authors of Ngo and et al. (2018) and Ngo and Kechadi (2020) proposed a
constellation schema for an agricultural DW architecture in order to satisfy
the quality criteria. However, they did not describe how to design and
implement their DW.
## 3 Crop Big Data
### 3.1 Crop Datasets
The datasets were primarily obtained from an agronomy company, which extracted
them from its operational data storage systems, research results, and field
trials. In particular, we were given real-world agricultural datasets on iFarms,
Business-to-Business (B2B) sites, technology centres and demonstration farms.
These datasets were collected from several European countries and they are
presented in Figures 1 and 2 (Origin report, 2018). These datasets describe
more than $112$ distribution points, $73$ demonstration farms, $32$
formulation and processing facilities, $12.7$ million hectares of direct farm
customer footprint and $60,000$ trial units.
Figure 1: Data from UK and Ireland. Figure 2: Data in Continental Europe.
There is a total of 29 datasets. On average, each dataset contains $18$ tables
and is about $1.4$ GB in size. Each dataset focuses on a few kinds of information that
impact the crop. For instance, the weather dataset includes information on
location of weather stations, temperature, rainfall and wind speed over time.
Meanwhile, soil component information in farm sites, such as mineral, organic
matter, air, water and micro-organisms, were stored in the soil dataset. The
fertiliser dataset contains information about field area and geographic
position, crop name, crop yield, season, fertiliser name and quantity.
### 3.2 Big Data Challenges
Raw and semi-processed agricultural datasets are usually collected through
various sources: Internet of Thing (IoT) devices, sensors, satellites, weather
stations, robots, farm equipment, farmers and agronomists, etc. Besides,
agricultural datasets are very large, complex, unstructured, heterogeneous,
non-standardised, and inconsistent. Hence, they have all the features of Big
Data.
1. 1.
Volume: The amount of agricultural data is increasing rapidly and is
intensively produced by endogenous and exogenous sources. The endogenous data
is collected from operational systems, experimental results, sensors, weather
stations, satellites, and farming equipment. The systems and devices in the
agricultural ecosystem can be connected through IoT. The exogenous data
concerns the external sources, such as government agencies, retail
agronomists, and seed companies. They can help with information about local
pest and disease outbreak tracking, crop monitoring, food security, products,
prices, and knowledge.
2. 2.
Variety: Agricultural data has many different forms and formats, structured
and unstructured data, video, imagery, chart, metrics, geo-spatial, multi-
media, model, equation, text, etc.
3. 3.
Velocity: The collected data increases at a very high rate, as sensing and
mobile devices are becoming more efficient and cheaper. The datasets must be
cleaned, aggregated and harmonised in real-time.
4. 4.
Veracity: Agronomic data tends to be uncertain, inconsistent, ambiguous
and error prone because the data is gathered from heterogeneous sources,
sensors and manual processes.
### 3.3 ADW Schema
Figure 3: A part of ADW schema for Precision Agriculture
The DW uses schema to logically describe the entire datasets. A schema is a
collection of objects, including tables, views, indexes, and synonyms which
consist of some fact and dimension tables (Oracle document, 2017). The DW
schema can be designed based on the model of source data and the user
requirements. There are three kinds of models, namely star, snowflake and
fact constellation. Given its various uses, the ADW schema needs to have more
than one fact table and should be flexible. So, the constellation schema,
also known as the galaxy schema, should be used to design the ADW schema.
Figure 4: Field and Crop dimension tables Figure 5: Soil and Pest dimension
tables
We developed a constellation schema for ADW and it is partially described in
Figure 3. It includes a few fact tables and many dimension tables. FieldFact
fact table contains data about agricultural operations on fields. Order and
Sale fact tables contain data about farmers’ trading operations. The key
dimension tables are connected to their fact table. There are some dimension
tables connected to more than one fact table, such as Crop and Farmer.
Besides, CropState, Inspection, Site, and Weather Reading dimension tables are
not connected to any fact table. The CropState and Inspection tables are used
to support the Crop table, while the Site and Weather Reading tables support
the Field and WeatherStation tables. The FieldFact fact table stores the most
important facts about the field: yield, water volume, fertiliser quantity,
nutrient quantity, spray quantity and pest number. In the Order and Sale
tables, the important facts needed for farm management are quantity and price.
Table 1: Descriptions of other dimension tables No. | Dim. tables | Particular attributes
---|---|---
1 | Business | BusinessID, Name, Address, Phone, Mobile, Email
2 | CropState | CropStateID, CropID, StageScale, Height, MajorStage, MinStage, MaxStage, Diameter, MinHeight, MaxHeight, CropCoveragePercent
3 | Farmer | FarmerID, Name, Address, Phone, Mobile, Email
4 | Fertiliser | FertiliserID, Name, Unit, Status, Description, GroupName
5 | Inspection | InspectionID, CropID, Description, ProblemType, Severity, ProblemNotes, AreaValue, AreaUnit, Order, Date, Notes, GrowthStage
6 | Nutrient | NutrientID, NutrientName, Date, Quantity
7 | Operation Time | OperationTimeID, StartDate, EndDate, Season
8 | Plan | PlanID, PName, RegisNo, ProductName, ProductRate, Date, WaterVolume
9 | Product | ProductID, ProductName, GroupName
10 | Site | SiteID, FarmerID, SiteName, Reference, Country, Address, GPS, CreatedBy
11 | Spray | SprayID, SprayProductName, ProductRate, Area,Date, WaterVol, ConfDuration, ConfWindSPeed, ConfDirection, ConfHumidity, ConfTemp, ActivityType
12 | Supplier | SupplierID, Name, ContactName, Address, Phone, Mobile, Email
13 | Task | TaskID, Desc, Status, TaskDate, TaskInterval, CompDate, AppCode
14 | Trans Time | TransTimeID, OrderDate, DeliverDate, ReceivedDate, Season
15 | Treatment | TreatmentID, TreatmentName, FormType, LotCode, Rate, ApplCode, LevlNo, Type, Description, ApplDesc, TreatmentComment
16 | Weather Reading | WeatherReadingID, WeatherStationID, ReadingDate, ReadingTime, AirTemperature, Rainfall, SPLite, RelativeHumidity, WindSpeed, WindDirection, SoilTemperature, LeafWetness
17 | Weather Station | WeatherStationID, StationName, Latitude, Longitude, Region
The dimension tables contain details on each instance of an object involved in
a crop yield or farm management. Figure 4 describes attributes of Field and
Crop dimension tables. Field table contains information about name, area, co-
ordinates (being longitude and latitude of the centre point of the field),
geometric (being a collection of points to show the shape of the field) and
site identify the site that the field it belongs to. While, Crop table
contains information about name, estimated yield of the crop (estYield), BBCH
Growth Stage Index (BbchScale), harvest equipment and its weight. These
provide useful information for crop harvesting.
Figure 5 describes attributes of Soil and Pest dimension tables. Soil table
contains information about pH value (a measure of the acidity and alkalinity),
minerals (nitrogen, phosphorus, potassium, magnesium and calcium), its texture
(texture label and percentage of Silt, Clay and Sand), cation exchange
capacity (CEC) and organic matter. Besides, information about recommended
nutrients and testing dates was also included in this table. The Pest table
contains name, type, density, coverage and detection dates of pests. For the
remaining dimension tables, their main attributes are described in Table 1.
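To illustrate how such a constellation schema is queried, the following toy
Python sketch joins the FieldFact fact table to two of its dimension tables
and aggregates yield by crop and season. Column names follow Figure 3 and
Table 1; the rows are invented, and pandas merely stands in for the
warehouse's actual query engine.

```python
import pandas as pd

# Toy stand-ins for the FieldFact fact table and two of its dimension
# tables; column names follow Figure 3, the rows are invented.
field_fact = pd.DataFrame({"FieldID": [1, 1, 2], "CropID": [10, 11, 10],
                           "OperationTimeID": [100, 101, 100],
                           "Yield": [8.2, 4.1, 7.9]})
crop = pd.DataFrame({"CropID": [10, 11], "CropName": ["Wheat", "Barley"]})
op_time = pd.DataFrame({"OperationTimeID": [100, 101],
                        "Season": ["2017/18", "2018/19"]})

# A typical constellation-schema query: join the fact table to its
# dimensions, then aggregate a fact (yield) by crop and season.
report = (field_fact.merge(crop, on="CropID")
                    .merge(op_time, on="OperationTimeID")
                    .groupby(["CropName", "Season"])["Yield"].mean())
print(report)
```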
## 4 ADW Architecture
A DW is a federated repository for all the data that an enterprise can collect
through multiple heterogeneous data sources; internal or external. The authors
in Golfarelli and Rizzi (2009) and Inmon (2005) defined DW as a collection of
methods, techniques, and tools used to conduct data analyses, make decisions
and improve information resources. DW is defined around key subjects and
involves data cleaning, data integration and data consolidations. Besides, it
must show its evolution over time and is not volatile.
The general architecture of a typical DW system includes four separate and
distinct modules; Raw Data, Extraction Transformation Loading (ETL),
Integrated Information and Data Mining (Kimball and Ross, 2013), which is
illustrated in Figure 6. Here, the raw data (source data) is originally
stored in various storage systems (e.g. SQL, sheets, flat files, …). The raw
data often requires cleansing, correcting noise and outliers, dealing with
missing values. Then it needs to be integrated and consolidated before loading
it into DW storage through the ETL module.
Figure 6: Agricultural Data Warehouse Architecture.
The Integrated Information module is a logically centralised repository, which
includes the DW storage, data marts, data cubes and OLAP engine. The DW
storage is organised, stored and accessed using a suitable schema defined by
the metadata. It can be either directly accessed or used to create data marts,
which is usually oriented to a particular business function or an enterprise
department. A data mart partially replicates DW storage’s contents and is a
subset of DW storage. Besides, the data is extracted in a form of data cube
before it is analysed in the data mining module. A data cube is a data
structure that allows advanced analysis of data according to multiple
dimensions that define a given problem. The data cubes are manipulated by the
OLAP engine. The DW storage, data marts and data cubes are described by
metadata, which is data used to define other data. Finally,
Data Mining module contains a set of techniques, such as machine learning,
heuristic, and statistical methods for data analysis and knowledge extraction
at multiple levels of abstraction.
## 5 ETL and OLAP
The ETL module contains Extraction, Transformation, and Loading tools that can
merge heterogeneous schemata, extract, cleanse, validate, filter, transform
and prepare the data to be loaded into a DW. The extraction operation allows
to read, retrieve raw data from multiple and different types of data sources
systems and store it in a temporary staging. During this operation, the data
goes through multiple checks – detect and correct corrupted and/or inaccurate
records, such as duplicate data, missing data, inconsistent values and wrong
values. The transformation operation structures, converts or enriches the
extracted data and presents it in a specific DW format. The loading operation
writes the transformed data into the DW storage. The ETL implementation is
complex, and consumes a significant amount of time and resources. Most DW
projects usually use existing ETL tools, which are classified into two groups.
The first is a commercial and well-known group and includes tools such as
Oracle Data Integrator, SAP Data Integrator and IBM InfoSphere DataStage. The
second group is known for its open-source tools, such as Talend, Pentaho and
Apatar.
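To make the three ETL operations concrete, the following Python sketch (a minimal illustration with hypothetical file and column names, not part of the ADW implementation) extracts records from a CSV source, removes duplicates and fills missing values in the transformation step, and loads the cleaned records into a staging file:

```python
import csv

def extract(path):
    """Extraction: read raw records from a CSV source into memory."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transformation: drop duplicate records and fill missing PH values."""
    seen, cleaned = set(), []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:          # duplicate-record check
            continue
        seen.add(key)
        if not row.get("PH"):    # missing-value handling
            row["PH"] = "unknown"
        cleaned.append(row)
    return cleaned

def load(rows, path):
    """Loading: write the transformed records into the DW staging area."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical usage; "soil_raw.csv" and "soil_staged.csv" are placeholders.
load(transform(extract("soil_raw.csv")), "soil_staged.csv")
```

A production ETL tool adds scheduling, logging and schema mapping on top of this basic extract-transform-load pattern.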
OLAP is a category of software technology that provides the insight and
understanding of data in multiple dimensions through fast, consistent,
interactive access, management and analysis of the data. By using roll-up
(consolidation), drill-down, slice-dice and pivot (rotation) operations, OLAP
performs multidimensional analysis in a wide variety of possible views of
information that provides complex calculations, trend analysis and
sophisticated data modelling quickly. The OLAP systems are divided into three
categories: 1) Relational OLAP (ROLAP), which uses a relational or extended-relational database management system to store and manage the data warehouse; 2) Multidimensional OLAP (MOLAP), which uses array-based multidimensional storage engines for multidimensional views of data rather than a relational database, and often requires pre-processing to create data cubes; 3) Hybrid OLAP (HOLAP), which is a combination of both ROLAP and MOLAP. It uses
both relational and multidimensional techniques to inherit the higher
scalability of ROLAP and the faster computation of MOLAP.
In the context of agricultural Big Data, HOLAP is more suitable than both
ROLAP and MOLAP because: 1) ROLAP has quite slow performance and does not meet
all the users’ needs, especially when performing complex calculations; 2)
MOLAP is not capable of handling detailed data and requires all calculations
to be performed during the data cube construction; 3) HOLAP inherits
advantages of both ROLAP and MOLAP, which allow the user to store large data
volumes of detailed information and perform complex calculations within
reasonable response time.
## 6 Quality Criteria
The accuracy of data mining and analysis techniques depends on the quality of
the DW. As mentioned in Adelman and Moss (2000) and Kimball and Ross (2013),
to build an efficient ADW, the quality of the DW should meet the following
important criteria:
1. Making information easily accessible.
2. Presenting consistent information.
3. Integrating data correctly and completely.
4. Adapting to change.
5. Presenting and providing the right information at the right time.
6. Being a secure bastion that protects the information assets.
7. Serving as the authoritative and trustworthy foundation for improved decision making. The analytics tools need to provide the right information at the right time.
8. Achieving benefits, both tangible and intangible.
9. Being accepted by DW users.
The above criteria must be formulated in the form of measurements. For example, the 8th criterion requires determining quality indicators about benefits, such as improved fertiliser management, cost containment, risk reduction, better or faster decisions, and efficient information transactions. For the last criterion, a user satisfaction survey should be used to find out how well a given DW satisfies its users’ expectations.
## 7 ADW Implementation
Currently, there are many popular large-scale database systems that can implement DWs, such as Redshift (Amazon document, 2018), Mesa (Gupta and et al., 2016), Cassandra (Hewitt and Carpenter, 2016; Neeraj, 2015), MongoDB (Chodorow, 2013; Hows and et al., 2015) and Hive (Du, 2018; Lam and et al., 2016). In Ngo and et al. (2019), the authors analysed the most popular NoSQL databases, which fulfil most of the aforementioned criteria. The advantages,
disadvantages, as well as similarities and differences between Cassandra,
MongoDB and Hive were investigated carefully in the context of ADW. It was
reported that Hive is a better choice as it can be paired with MongoDB to
implement the proposed ADW for the following reasons:
1. Hive is based on Hadoop, which is the most powerful cloud computing platform for Big Data. Besides, HQL is similar to SQL, which is familiar to the majority of users. Hive supports high storage capacity, business intelligence and data science better than MongoDB or Cassandra. These Hive features are useful to implement ADW.
2. Hive does not have real-time performance, so it needs to be combined with MongoDB or Cassandra to improve its performance.
3. MongoDB is more suitable than Cassandra to complement Hive because: 1) MongoDB supports join operations, full-text search, ad-hoc queries and secondary indexes, which are helpful to interact with the users, whereas Cassandra does not support these features; 2) MongoDB has the same master-slave structure as Hive, which makes them easy to combine, while the structure of Cassandra is peer-to-peer; 3) Hive and MongoDB are more reliable and consistent, so the combination of both adheres to the CAP theorem.
Figure 7: Agricultural Data Warehouse Implementation
The ADW implementation is illustrated in Figure 7, which contains three modules, namely Integrated Information, Products and Raw Data. The Integrated Information module includes two components: MongoDB and Hive. MongoDB receives real-time data, such as user data, logs, sensor data or queries, from the Products module (e.g. web applications, web portals or mobile apps). Besides, some results which need to be obtained in real time are transferred from MongoDB to the Products module. Hive stores the online data and sends the processed data to MongoDB. Queries involving complex calculations are sent directly to Hive.
In the Raw Data module, almost all data in the Operational Databases or External Data components is loaded into Cassandra. That is, we use Cassandra as the raw data storage. Given the diverse formats of raw data (image, video, natural language and SQL data), Cassandra is better suited to store them than SQL databases. During the idle times of the system, the updated raw data in Cassandra is imported into Hive through the ETL tool. This improves the performance of ETL and helps us deploy ADW on cloud or distributed systems.
## 8 Performance Analysis
The performance analysis was conducted using MySQL 5.7.22, JDK 1.8.0_171, Hadoop 2.6.5 and Hive 2.3.3, which run on Bash on Ubuntu 16.04.2 on Windows 10. All experiments were run on a desktop with an Intel Core i7 CPU (2.40 GHz) and 16 GB memory. We only evaluate the performance of read operations, as ADW is used for reporting and data analysis. The database of ADW is duplicated into MySQL to compare performance. By combining popular HQL/SQL commands, namely Where, Group by, Having, Left (right) Join, Union and Order by, we created 10 groups for testing. Every group has 5 queries and uses one, two or more commands (see Table 2). Moreover, every query uses the operators And, Or, $\geq$, Like, Max, Sum and Count to express complex queries.
Table 2: Command combinations of queries

Group | Commands
---|---
$G_{1}$ | Where
$G_{2}$ | Where, Group by
$G_{3}$ | Where, Left (right) Join
$G_{4}$ | Where, Union
$G_{5}$ | Where, Order by
$G_{6}$ | Where, Left (right) Join, Order by
$G_{7}$ | Where, Group by, Having
$G_{8}$ | Where, Group by, Having, Order by
$G_{9}$ | Where, Group by, Having, Left (right) Join, Order by
$G_{10}$ | Where, Group by, Having, Union, Order by
Figure 8: Runtime ratios ($Times_{q_{i}}$) between MySQL and ADW for every query.
All queries were executed three times and we took the average of their execution times. The difference in runtime between MySQL and ADW for a query $q_{i}$ is calculated as $Times_{q_{i}}=RT^{mysql}_{q_{i}}/RT^{ADW}_{q_{i}}$, where $RT^{mysql}_{q_{i}}$ and $RT^{ADW}_{q_{i}}$ are the average runtimes of query $q_{i}$ on MySQL and ADW, respectively. Moreover, for each group $G_{i}$, the difference in runtime between MySQL and ADW is $Times_{G_{i}}=RT^{mysql}_{G_{i}}/RT^{ADW}_{G_{i}}$, where $RT_{G_{i}}=Average(RT_{q_{i}})$ is the average runtime of group $G_{i}$ on MySQL or ADW.
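As a minimal illustration of how these ratios are computed, the Python sketch below averages three runs per query and forms $Times_{q_{i}}$ and $Times_{G_{i}}$; the runtimes and grouping are hypothetical placeholders, not the measured values behind Figures 8–10:

```python
import statistics

# Hypothetical runtimes (seconds) of three executions per query per system.
mysql_runs = {"q1": [95.1, 99.2, 99.4], "q2": [751.0, 760.3, 753.1]}
adw_runs = {"q1": [3.1, 2.9, 3.0], "q2": [231.9, 235.0, 232.7]}

# Average runtime of each query over its three executions.
rt_mysql = {q: statistics.mean(t) for q, t in mysql_runs.items()}
rt_adw = {q: statistics.mean(t) for q, t in adw_runs.items()}

# Per-query ratio: Times_q = RT^mysql_q / RT^ADW_q.
times_q = {q: rt_mysql[q] / rt_adw[q] for q in rt_mysql}

# Per-group ratio uses the group-averaged runtimes RT_G = Average(RT_q).
groups = {"G1": ["q1", "q2"]}  # hypothetical grouping
times_g = {g: statistics.mean(rt_mysql[q] for q in qs)
              / statistics.mean(rt_adw[q] for q in qs)
           for g, qs in groups.items()}
print(times_q, times_g)
```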
Figure 8 describes the runtime difference between MySQL and ADW for every query. Although running on a single computer with a large data volume, ADW is faster than MySQL on 46 out of 50 queries. MySQL is faster for the three queries $12^{th}$, $13^{th}$ and $18^{th}$, belonging to groups $G_{3}$ and $G_{4}$. The two systems returned the same time for query $24^{th}$ from group $G_{5}$. Within each query group, for a fair performance comparison, the queries combine fact tables and dimension tables at random. This makes complex queries take more time, and the runtime difference is significant. When varying the sizes and structures of the tables, the difference is very significant; see Figure 8.
Figure 9: Runtime ratios ($Times_{G_{i}}$) between MySQL and ADW for every group.
Besides comparing the runtime of every query, we also compare the runtime of every group, presented in Figure 9. Compared to MySQL, ADW is faster by at most 6.24 times at group $G_{1}$, which uses only the Where command, and by at least 1.22 times at group $G_{3}$, which uses the Where and Join commands.
Figure 10: Average runtimes (seconds) of MySQL and ADW for every group.
Figure 10 presents the average runtime of the 10 query groups on MySQL and ADW. On average, the runtime of a read query is 687.8 seconds on MySQL and 216.1 seconds on ADW, meaning that ADW is 3.19 times faster. In the future, by deploying the ADW solution on cloud or distributed systems, we believe that the performance will be even much better than MySQL.
## 9 Application for Decision Making
Having proposed ADW and studied its performance on real agricultural data, we illustrate some query examples to show how to extract information from ADW. These queries incorporate inputs on crop, yield, pest, soil, fertiliser, inspection, farmer, businessman and operation time to reduce labour and fertiliser inputs, improve farmer services and disease treatment, and also increase yields. This information could not be extracted if Origin’s 29 separate datasets had not been integrated into ADW. The data integration through ADW thus improves the value of crop management data over time for better decision-making.
Example 1: List fields, crops in the fields, yield and pests in the field under the conditions: (1) the fields did not use ’urea’ fertiliser; (2) the crops had ’yellow rust’ or ’brown rust’ diseases; (3) the crops were grown in 2015.
select CR.CropName, FI.FieldName, FF.Yield,
PE.CommonName, FF.PestNumber, PE.Description
from FieldFact FF, Crop CR, Field FI, Pest PE,
Fertiliser FE, Inspection INS, OperationTime OP
where FF.CropID = CR.CropID and
FF.FieldID = FI.FieldID and
FF.PestID = PE.PestID and
FF.FertiliserID = FE.FertiliserID and
CR.CropID = INS.CropID and
FF.OperationTimeID = OP.OperationTimeID and
FE.FertiliserName <> ’urea’ and
(INS.Description = ’Yellow Rust’ or
INS.Description = ’Brown Rust’) and
Year(INS.Date) = ’2015’ and
Year(OP.StartDate) = ’2015’ and
Year(OP.EndDate) = ’2015’
Example 2: List farmers and their crop quantities sold through the Ori Agro company in 08/2016.
select FA.FarmerID, FA.FarmerName, CR.CropName,
SF.Unit, SUM(SF.Quantity)
from Salefact SF, business BU, farmer FA, crop CR
where SF.BusinessID = BU.BusinessID and
SF.FarmerID = FA.FarmerID and
SF.CropID = CR.CropID and
Month(SF.SaleDate) = ’08’ and
Year(SF.SaleDate) = ’2016’ and
BU.BusinessName = ’Ori Agro’
group by CR.CropName
Example 3: List crops and their fertiliser and treatment information, where the crops were cultivated and harvested in 2017, the yield was $>$ 10 tons/ha, and the crops were attacked by the ’black twitch’ pest. Besides, the soil in the field has PH $>6$ and Silt $<=50$ mg/l.
Select CR.CropName, FE.FertiliserName,
FF.FertiliserQuantity, TR.TreatmentName,
TR.Rate, TR.TreatmentComment
From FieldFact FF, Crop CR, OperationTime OT,
Soil SO, PEST PE, Fertiliser FE, Treatment TR
Where FF.CropID = CR.CropID and
FF.OperationTimeID = OT.OperationTimeID and
FF.SoildID = SO.SoilID and
FF.PestID = PE.PestID and
FF.FertiliserID = FE.FertiliserID and
FF.TreatmentID = TR.TreatmentID and
Year(OT.StartDate) = ’2017’ and
Year(OT.EndDate) = ’2017’ and
FF.Yield > 10 and
SO.PH > 6 and SO.Silt <= 50 and
PE.CommonName = ’Black twitch’
Example 4: List crops, fertilisers and corresponding fertiliser quantities in spring, 2017 in every field and site of the 10 farmers (crop companies) who used the largest amount of SO3 in spring, 2016.
To execute this request, the query needs to exploit data in the FieldFact fact table and the six dimension tables, namely Crop, Field, Site, Farmer, Fertiliser and OperationTime. The query contains nested subqueries which return the 10 farmers (crop companies) that used the largest amount of SO3 in spring, 2016.
Select FI.FieldName, SI.SiteName, FA.FarmerName,
CR.CropName, FE.FertiliserName,
FF.FertiliserQuantity, FE.Unit, OT.StartDate
From FieldFact FF, Crop CR, Field FI, Site SI,
Farmer FA, Fertiliser FE, Operationtime OT
Where FF.CropID = CR.CropID and
FF.FieldID = FI.FieldID and
FF.FertiliserID = FE.FertiliserID and
FF.OperationTimeID = OT.OperationTimeID and
FI.SiteID = SI.SiteID and
SI.FarmerID = FA.FarmerID and
OT.Season = ’Spring’ and
YEAR(OT.StartDate) = ’2017’ and
FA.FarmerID IN(
Select FarmerID
From
(Select SI.FarmerID as FarmerID,
SUM(FF.FertiliserQuantity) as SumFertiliser
From FieldFact FF, Field FI, Site SI,
Fertiliser FE, OperationTime OT
Where FF.FieldID = FI.FieldID and
FF.FertiliserID = FE.FertiliserID and
FF.OperationTimeID =
OT.OperationTimeID and
SI.SiteID = FI.SiteID and
FE.FertiliserName = ’SO3’ and
OT.Season = ’Spring’ and
YEAR(OT.StartDate) = ’2016’
Group by SI.FarmerID
Order by SumFertiliser DESC
Limit 10
)AS Table1
)
## 10 Conclusion and Future Work
In this paper, we presented a schema optimised for the real agricultural datasets that were made available to us. The schema has been designed as a constellation, so it is flexible to adapt to other agricultural datasets and the quality criteria of agricultural Big Data. Based on some existing popular open-source DWs, we designed and implemented the agricultural DW by combining Hive, MongoDB and Cassandra to exploit their advantages and overcome their limitations. ADW includes the modules necessary for large-scale and efficient analytics on agricultural Big Data. Moreover, on typical read queries using popular HQL/SQL commands, ADW storage outperforms MySQL by far. Finally, we outlined some complex HQL queries that enabled knowledge extraction from ADW to optimise agricultural operations.
In future work, we shall pursue the deployment of ADW on a cloud system and implement more functionalities to exploit this DW. The future developments will include: (1) experimentation with and analysis of the performance of MongoDB and the interaction between MongoDB and Hive; (2) sophisticated data mining and spreading activation algorithms (Ngo, 2014) to determine crop data characteristics and combine them with expected outputs to extract useful knowledge; (3) predictive models based on machine learning algorithms; (4) an intelligent interface and graph representation (Helmer and et al., 2015) for data access; (5) combination with ontologies to extract knowledge (Ngo and et al., 2011; Cao and et al., 2012).
## Appendix
The following are HQL/SQL scripts of 10 queries which are representative of the
10 query groups. The average runtimes of these queries on MySQL and ADW are
shown in Figure 11.
1) The query $5^{th}$ belongs to the group $1^{st}$:
SELECT fieldfact.FieldID, crop.cropname,
fieldfact.yield
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and
SprayQuantity = 7 and
(crop.CropName like ’P%’ or
crop.CropName like ’R%’ or
crop.CropName like ’G%’);
2) The query $10^{th}$ belongs to the group $2^{nd}$:
SELECT soil.PH, count(*)
FROM fieldfact, soil
WHERE fieldfact.SoildID = soil.SoilID and
fieldfact.sprayquantity = 2
GROUP by soil.PH;
Figure 11: Average runtimes (seconds) of MySQL and ADW for 10 typical queries.
3) The query $15^{th}$ belongs to the group $3^{rd}$:
SELECT fieldfact.yield,
fertiliser.fertiliserName,
fertiliser.fertiliserGroupName
FROM fieldfact
RIGHT JOIN fertiliser on
fieldfact.fertiliserID = fertiliser.fertiliserID
WHERE fieldfact.fertiliserQuantity = 10 and
fertiliser.fertiliserName like ’%slurry%’;
4) The query $20^{th}$ belongs to the group $4^{th}$:
SELECT sprayproductname
FROM fieldfact, spray
WHERE fieldfact.sprayid = spray.sprayid and
fieldfact.watervolumn > 5 and
fieldfact.watervolumn < 20
UNION
SELECT productname
FROM product, orderfact
WHERE product.ProductID = orderfact.ProductID
and (orderfact.Quantity = 5 or
orderfact.Quantity = 6);
5) The query $25^{th}$ belongs to the group $5^{th}$:
SELECT fieldfact.fieldID, field.FieldName,
field.FieldGPS, spray.SprayProductName
FROM fieldfact, field, spray
WHERE fieldfact.FieldID = field.FieldID and
fieldfact.SprayID = spray.SprayID and
fieldfact.PestNumber = 6
ORDER BY field.FieldName;
6) The query $30^{th}$ belongs to the group $6^{th}$:
SELECT fieldfact.FieldID, nutrient.NutrientName,
nutrient.Quantity, nutrient.`Year`
FROM fieldfact
RIGHT JOIN nutrient on
fieldfact.NutrientID = nutrient.NutrientID
WHERE fieldfact.NutrientQuantity = 3 and
fieldfact.fertiliserquantity = 3
ORDER BY nutrient.NutrientName
LIMIT 10000;
7) The query $35^{th}$ belongs to the group $7^{th}$:
SELECT crop.cropname,
sum(fieldfact.watervolumn) as sum1
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and
fieldfact.sprayquantity = 8 and
crop.EstYield >= 1 and crop.EstYield <=10
GROUP BY crop.cropname
HAVING sum1 > 100;
8) The query $40^{th}$ belongs to the group $8^{th}$:
SELECT crop.cropname,
sum(fieldfact.fertiliserquantity) as sum1
FROM fieldfact, crop
WHERE fieldfact.cropid = crop.cropid and
fieldfact.nutrientquantity= 5 and
crop.EstYield <=1
GROUP by crop.cropname
HAVING sum1 > 30
ORDER BY crop.cropname;
9) The query $45^{th}$ belongs to the group $9^{th}$:
SELECT nutrient.NutrientName,
sum(nutrient.Quantity) as sum1
FROM fieldfact
LEFT JOIN nutrient on
fieldfact.NutrientID = nutrient.NutrientID
WHERE nutrient.nutrientName like ’%tr%’ and
(fieldfact.pestnumber = 16 or
fieldfact.pestnumber = 15)
GROUP by nutrient.NutrientName
HAVING sum1 <300
ORDER BY nutrient.NutrientName;
10) The query $50^{th}$ belongs to the group $10^{th}$:
SELECT sprayproductname as name1,
sum(fieldfact.watervolumn) as sum1
FROM fieldfact, spray
WHERE fieldfact.sprayid = spray.sprayid and
fieldfact.Yield > 4 and fieldfact.Yield < 8
GROUP by sprayproductname
HAVING sum1 > 210
UNION
SELECT productname as name1,
sum(orderfact.Quantity) as sum2
FROM product, orderfact
WHERE product.ProductID = orderfact.ProductID and
(orderfact.Quantity = 5 or
orderfact.Quantity = 6)
GROUP by productname
HAVING sum2 > 50
ORDER BY name1;
## Acknowledgment
This research is an extended work of Ngo and et al. (2019) being part of the
CONSUS research program. It is funded under the SFI Strategic Partnerships
Programme (16/SPP/3296) and is co-funded by Origin Enterprises Plc.
## References
* Adelman and Moss (2000) Adelman, S. and Moss, L. (2000). Data warehouse project management, 1st edition. Addison-Wesley Professional.
* Amazon document (2018) Amazon document (2018). Amazon Redshift database developer guide. Samurai ML.
* Bendre and et al. (2015) Bendre, M. R. and et al. (2015). Big data in precision agriculture: Weather forecasting for future farming. In International Conference on Next Generation Computing Technologies (NGCT). IEEE.
* Cao and et al. (2012) Cao, T. and et al. (2012). Semantic search by latent ontological features. International Journal of New Generation Computing, Springer, SCI, 30(1):53–71.
* Chodorow (2013) Chodorow, K. (2013). MongoDB: The definitive guide, 2nd edition (powerful and scalable data storage). O’Reilly Media.
* Devitt and et al. (2017) Devitt, S. K. and et al. (2017). A cognitive decision tool to optimise integrated weed management. In Proceedings of International Tri-Conference for Precision Agriculture.
* Dicks and et al. (2014) Dicks, L. V. and et al. (2014). Organising evidence for environmental management decisions: a ‘4s’ hierarchy. Trends in Ecology & Evolution, 29(11):607–613.
* Du (2018) Du, D. (2018). Apache Hive essentials, 2nd edition. Packt Publishing.
* EC report (2016) EC report (2016). Europeans, agriculture and the common agricultural policy. Special Eurobarometer 440, The European Commission.
* FAO-CSDB report (2018) FAO-CSDB report (2018). Global cereal production and inventories to decline but overall supplies remain adequate, release date: December 06, 2018. Cereal Supply and Demand Brief, FAO.
* FAO-FSIN report (2018) FAO-FSIN report (2018). Global report on food crises 2018. Food Security Information Network, FAO.
* Friedman and et al. (2016) Friedman, S. P. and et al. (2016). Didas – user-friendly software package for assisting drip irrigation design and scheduling. Computers and Electronics in Agriculture, 120:36–52.
* Golfarelli and Rizzi (2009) Golfarelli, M. and Rizzi, S. (2009). Data warehouse design: modern principles and methodologies. McGraw-Hill Education.
* Gupta and et al. (2016) Gupta, A. and et al. (2016). Mesa: a geo-replicated online data warehouse for google’s advertising system. Communications of the ACM, 59(7):117–125.
* Gutierreza and et al. (2019) Gutierreza, F. and et al. (2019). A review of visualisations in agricultural decision support systems: An HCI perspective. Computers and Electronics in Agriculture, 163.
* Hafezalkotob and et al. (2018) Hafezalkotob, A. and et al. (2018). A decision support system for agricultural machines and equipment selection: A case study on olive harvester machines. Computers and Electronics in Agriculture, 148:207–216.
* Han and et al. (2017) Han, E. and et al. (2017). Climate-agriculture-modeling and decision tool (camdt): a software framework for climate risk management in agriculture. Environmental Modelling & Software, 95:102–114.
* Helmer and et al. (2015) Helmer, S. and et al. (2015). A similarity measure for weaving patterns in textiles. In In the 38th ACM SIGIR Conference on Research and Development in Information Retrieval, pages 163–172.
* Hewitt and Carpenter (2016) Hewitt, E. and Carpenter, J. (2016). Cassandra: the definitive guide, 2nd edition (distributed data at web scale). O’Reilly Media.
* Hows and et al. (2015) Hows, D. and et al. (2015). The definitive guide to MongoDB, 3rd edition (a complete guide to dealing with big data using MongoDB. Apress.
* Huang and et al. (2013) Huang, Y. and et al. (2013). Estimation of cotton yield with varied irrigation and nitrogen treatments using aerial multispectral imagery. International Journal of Agricultural and Biological Engineering, 6(2):37–41.
* Inmon (2005) Inmon, W. H. (2005). Building the data warehouse. Wiley.
* Kamilaris and et al. (2018) Kamilaris, A. and et al. (2018). Estimating the environmental impact of agriculture by means of geospatial and big data analysis: the case of Catalonia, pages 39–48. Springer.
* Kimball and Ross (2013) Kimball, R. and Ross, M. (2013). The data warehouse toolkit: the definitive guide to dimensional modeling (3rd edition). Wiley.
* Lam and et al. (2016) Lam, C. P. and et al. (2016). Hadoop in action, 2nd edition. Manning.
* Lokers and et al. (2016) Lokers, R. and et al. (2016). Analysis of big data technologies for use in agro-environmental science. Environmental Modelling & Software, 48:494–504.
* Lundstrom and Lindblom (2018) Lundstrom, C. and Lindblom, J. (2018). Considering farmers’ situated knowledge of using agricultural decision support systems (agridss) to foster farming practices: the case of cropsat. Agricultural Systems, 159:9–20.
* Neeraj (2015) Neeraj, N. (2015). Mastering Apache Cassandra, 2nd edition. Packt Publishing.
* Ngo (2014) Ngo, V. (2014). Discovering latent information by spreading activation algorithm for document retrieval. International Journal of Artificial Intelligence & Applications, 5(1):23–34.
* Ngo and et al. (2011) Ngo, V. and et al. (2011). Discovering latent concepts and exploiting ontological features for semantic text search. In In the 5th Int. Joint Conference on Natural Languag Processing, ACL, pages 571–579.
* Ngo and et al. (2018) Ngo, V. and et al. (2018). An efficient data warehouse for crop yield prediction. In The 14th International Conference Precision Agriculture (ICPA-2018), pages 3:1–3:12.
* Ngo and et al. (2019) Ngo, V. M. and et al. (2019). Designing and implementing data warehouse for agricultural big data. In The 8th International Congress on BigData (BigData-2019), pages 1–17. Springer-LNCS, Vol. 11514.
* Ngo and Kechadi (2020) Ngo, V. M. and Kechadi, M. T. (2020). Crop knowledge discovery based on agricultural big data integration. In The 4th International Conference on Machine Learning and Soft Computing (ICMLSC), pages 1–5. ACM.
* Nilakanta and et al. (2008) Nilakanta, S. and et al. (2008). Dimensional issues in agricultural data warehouse designs. Computers and Electronics in Agriculture, 60(2):263–278.
* Oliver and et al. (2017) Oliver, D. M. and et al. (2017). Design of a decision support tool for visualising e. coli risk on agricultural land using a stakeholder-driven approach. Land Use Policy, 66:227–234.
* Oracle document (2017) Oracle document (2017). Database data warehousing guide. Oracle12c doc release 1.
* Origin report (2018) Origin report (2018). Annual report and accounts. Origin Enterprises plc.
* Pantazi (2016) Pantazi, X. E. (2016). Wheat yield prediction using machine learning and advanced sensing techniques. Computers and Electronics in Agriculture, 121:57–65.
* Paredes and et al. (2014) Paredes, P. and et al. (2014). Partitioning evapotranspiration, yield prediction and economic returns of maize under various irrigation management strategies. Agricultural Water Management, 135:27–39.
* Park and et al. (2016) Park, S. and et al. (2016). Drought assessment and monitoring through blending of multi-sensor indices using machine learning approaches for different climate regions. Agricultural and Forest Meteorology, 216:157–169.
* Protopop and Shanoyan (2016) Protopop, I. and Shanoyan, A. (2016). Big data and smallholder farmers: Big data applications in the agri-food supply chain in developing countries. International Food and Agribusiness Management Review, IFAMA, 19(A):1–18.
* Rembold and et al. (2019) Rembold, F. and et al. (2019). Asap: A new global early warning system to detect anomaly hot spots of agricultural production for food security analysis. Agricultural Systems, 168:247–257.
* Rogovska and et al. (2019) Rogovska, N. and et al. (2019). Development of field mobile soil nitrate sensor technology to facilitate precision fertilizer management. Precision Agriculture, 20(1):40–55.
* Ruan and et al. (2019) Ruan, J. and et al. (2019). A life cycle framework of green iot-based agriculture and its finance, operation, and management issues. IEEE Communications Magazine, 57(3):90–96.
* Rupnik and et al. (2019) Rupnik, R. and et al. (2019). Agrodss: a decision support system for agriculture and farming. Computers and Electronics in Agriculture, 161:260–271.
* Schnase and et al. (2017) Schnase, J. and et al. (2017). Merra analytic services: meeting the big data challenges of climate science through cloud-enabled climate analytics-as-a-service. Computers, Environment and Urban Systems, 161:198–211.
* Schuetz and et al. (2018) Schuetz, C. G. and et al. (2018). Building an active semantic data warehouse for precision dairy farming. Organizational Computing and Electronic Commerce, 28(2):122–141.
* Schulze and et al. (2007) Schulze, C. and et al. (2007). Data modelling for precision dairy farming within the competitive field of operational and analytical tasks. Computers and Electronics in Agriculture, 59(1-2):39–55.
* Udiasa and et al. (2018) Udiasa, A. and et al. (2018). A decision support tool to enhance agricultural growth in the mékrou river basin (west africa). Computers and Electronics in Agriculture, 154:467––481.
* UN document (2017) UN document (2017). World population projected to reach 9.8 billion in 2050, and 11.2 billion in 2100. Department of Economic and Social Affairs, United Nations.
* USDA report (2018) USDA report (2018). World agricultural supply and demand estimates 08/2018. United States Department of Agriculture.
# Entanglements and correlations of one-dimensional quantum spin-1/2 chain
with anisotropic power-law long range interactions
Jie Ren, Department of Physics, Changshu Institute of Technology, Changshu 215500, China
Wen-Long You, College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; School of Physical Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
Xiaoqun Wang, Key Laboratory of Artificial Structures and Quantum Control of MOE, Shenyang National Laboratory for Materials Science, School of Physics and Astronomy, Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Collaborative Innovation Center for Advanced Microstructures, Nanjing University, Nanjing 210093, China; Beijing Computational Science Research Center, Beijing 100084, China
###### Abstract
The correlations, entanglement entropy, and fidelity susceptibility are calculated for a one-dimensional spin-1/2 XXZ chain with anisotropic power-law long-range interactions by employing the density matrix renormalization group method. In particular, the long-range interaction is ferromagnetic for the transversal components, while it can be either ferro- or antiferromagnetic for the longitudinal spin component. Two ground-state phase diagrams are established versus the anisotropy of the interactions, which not only changes the phase boundaries of the counterparts with short-range interactions, but also leads to the emergence of exotic phases. We find that the long-range interaction of the $z$-component results in a Wigner crystal phase, whereas the transversal one may break a continuous symmetry, resulting in a continuous symmetry breaking phase.
###### pacs:
03.67.-a,05.30.Jp
## I Introduction
The quantum phase transition (QPT) and quantum critical phenomena are
generally important in understanding novel properties involved in strongly
correlated systems, such as quantum magnetic materials. Usually, short-range
interactions, e.g., nearest neighbor and next nearest neighbor interactions,
are considered to be sufficient for appropriate descriptions of the major
magnetic properties of those systems Sachdev ; XWang2000 ; Luo2017 ; You19 ;
WN19 ; Luo2019 . However, there actually exist several types of long range
interactions such as the Coulomb interaction $1/r$ Saffman , the dipole-dipole
interaction $1/r^{3}$ Lahaye ; Deng ; Yan , and the van der Waals interaction
$1/r^{6}$ Saffman in some complicated compounds, where relevant electrons are
in higher orbits of atoms with lower symmetries subject to crystal field
effects. Moreover, in recent years, some long-range interactions have been
generated in ultracold atomic systems with the optical lattices or trapped
ions. For instance, a power-law Ising interaction $1/r^{\alpha}$ with an
adjustable exponent $0<\alpha<3$ has been realized in trapped ions Britton ;
Islam ; Gorshkov ; Jurcevic . This kind of experimental progress has greatly
stimulated theoretical studies on possible novel effects particularly
resulting from long-range interactions W ; Koffel ; Sun01 ; Zhu ; gong16 ;
gong17 ; gong17L ; gong16R ; Frerot ; Vanderstraeten . In particular, a transition was revealed by a calculation of the entanglement for a long-range ($\sim r^{-\alpha}$) antiferromagnetic Ising chain Koffel , and was further confirmed by the fidelity susceptibility to be second-order for all $\alpha$ Sun01 ; Zhu . Moreover, by combining linear spin-wave theory, field-theory approaches and the density-matrix renormalization group (DMRG) white ; KWHP ; U01 ; U02 ; McCulloch , the effects of long-range interactions on local correlation functions, entanglement entropy and central charge have been investigated for both spin-1/2 gong17 and spin-1 gong16 chains, awaiting experimental observation. In addition, one also finds that long-range interactions and long-range hopping may lead to drastic effects on many-body localization in a one-dimensional (1D) spinless fermion system Nag2019 , which essentially corresponds to an XY type of long-range spin interaction. In this regard, anisotropic long-range spin interactions can be anticipated to give rise to more effects on quantum transitions.
In this paper, we study a 1D spin-1/2 XXZ system with anisotropic power-law long-range interactions in terms of the entanglement entropy, fidelity susceptibility, and correlation functions by performing DMRG calculations. Phase diagrams are established with respect to the power exponents and the anisotropy of the interactions. In the following, Sec. II presents the Hamiltonian of our studies. The details of the DMRG calculations and the definitions of the calculated quantities are discussed in Sec. III. Numerical results are shown in Sec. IV, with further discussions given in the last section.
## II Hamiltonian
In this paper, we consider the following spin-$1/2$ chain with anisotropic long-range interactions, whose Hamiltonian is given by:
$H=\sum_{j>i}\left\{\frac{J_{xy}}{|i-j|^{\alpha}}\left(S^{x}_{i}S^{x}_{j}+S^{y}_{i}S^{y}_{j}\right)+\frac{J_{z}}{|i-j|^{\beta}}S^{z}_{i}S^{z}_{j}\right\},$ (1)
where $i$ and $j$ are sites of the one-dimensional lattice, and
$S^{\gamma}=\sigma^{\gamma}/2$ with $\gamma=x,y$, or $z$, setting $\hbar=1$
and $\sigma^{\gamma}$ being the Pauli matrices. Interactions between two spins
separated by a distance of $r=|i-j|$ decay as $r^{-\alpha}$ for both $x$ and
$y$ components of spins, but as $r^{-\beta}$ for the $z$ direction. As usual,
the parameters $\alpha,\beta$ are both taken positive, while $J_{xy}=-1$ is set for simplicity, so that $J_{z}$ readily stands for the anisotropy parameter involved in the establishment of the phase diagram.
For this system, in the limit of $\alpha,\beta\rightarrow+\infty$, the
Hamiltonian is reduced to describe a spin-1/2 anisotropic chain with the
nearest-neighbor interaction. It turns out that the system involves a
ferromagnetic (FM) phase for $J_{z}<-1$, whereas a gapful antiferromagnetic
(AFM) phase can be shown for $J_{z}>1$. Furthermore, in the region of
$-1<J_{z}\leq 1$, the system displays an $XY$ phase in which quantum fluctuations exclude the existence of any long-range order, but correlation functions decay as a power law of the distance, characteristic of a Luttinger liquid. For more general values of $\alpha$ and $\beta$, the long-range interactions may result in different features of those phases, which are also expected to be properly characterized by long-distance correlation functions, as exploited below.
## III Measurements and Method
Thanks to the DMRG method white ; KWHP ; U01 , the ground state properties of quasi-one-dimensional systems can be calculated with very high accuracy. For the present studies of Hamiltonian (1), we adopt both infinite-size DMRG (iDMRG) McCulloch and finite-size DMRG, which are based on matrix product states U02 . The number of eigenstates for the reduced density matrix is kept up to $m=400$ in the truncation of bases, which allows the truncation error to be smaller than $10^{-9}$. In our calculations with the finite-size DMRG algorithm, we handle the long-range interaction directly as a summation over matrix products of operators (MPOs), rather than as a summation of a finite number of exponential terms with MPOs Vidal , which would otherwise introduce an additional systematic error. Our codes are mainly based on the iTensor C++ library tesnor .
Since the $z$-component of the total spin commutes with the Hamiltonian (1), the ground-state energy is obtained by comparing the lowest energies in each subspace of fixed $S^{z}_{t}=\sum_{i=1}^{L} S^{z}_{i}$. We found that the ground state resides in the sector of either $S^{z}_{t}=0$ or $S^{z}_{t}=L/2$. To examine the reliability of our numerics, we also perform finite-size DMRG with a varying number of states in the truncated bases. Once the ground-state energy and the corresponding ground state are identified accurately, the first excited state and the corresponding energy (gap) can be determined similarly by targeting states orthonormalized to the ground state.
For a quantum many-body system, the entanglement entropy (EE) can be extracted
from the ground state wavefunction $|\psi_{0}\rangle$ properly to characterize
the quantum phase transition induced by the interaction or external fields.
Usually, one may separate a given system into two subsystems $A$ and $B$, and compute the reduced density matrix of part $A$ by partially tracing over the degrees of freedom of subsystem $B$, which can be written formally as
$\rho_{A}=\textrm{Tr}_{B}(|\psi_{0}\rangle\langle\psi_{0}|).$
Then, the entanglement entropy measuring the entanglement between parts $A$
and $B$ is given by
$\displaystyle S_{A}=-\textrm{Tr}(\rho_{A}\ln\rho_{A}).$ (2)
which is readily evaluated in terms of the eigenvalues of $\rho_{A}$ in DMRG calculations. For a one-dimensional short-range interacting system with an
open boundary condition (OBC), the conformal field theory (CFT) suggests that
the entanglement entropy for the subsystem $A$ with size $l$ possesses the
following finite-size $L$ scaling behavior Cardy
$S_{l}=\frac{c}{6}\ln\left[\frac{L}{\pi}\sin\left(\frac{\pi l}{L}\right)\right]+S_{0},$ (3)
where $c$ is the central charge which usually has different values for
different phases and $S_{0}$ is a non-universal constant. This scaling behavior has been employed to explore the critical entanglement of defects Zhao2006 and the Gaussian transition Hu2011 . In this paper, we will show that
this scaling behavior is applicable to a case associated with long-range
interactions.
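As a sketch of how Eq. (2) and the fit of Eq. (3) can be evaluated in practice, the Python fragment below computes the von Neumann entropy from the Schmidt spectrum of a bipartitioned state vector and extracts an effective central charge by a linear fit. It assumes access to the full wavefunction (an exact-diagonalization scale); in DMRG the entropy instead comes directly from the eigenvalues of $\rho_{A}$:

```python
import numpy as np

def entanglement_entropy(psi, l, L):
    """Von Neumann entropy of the left l sites of an L-site spin-1/2 state."""
    # Reshape the state vector into a (2^l) x (2^(L-l)) matrix; its singular
    # values are the Schmidt coefficients of the bipartition A|B.
    m = psi.reshape(2**l, 2**(L - l))
    s = np.linalg.svd(m, compute_uv=False)
    p = s**2                  # eigenvalues of the reduced density matrix
    p = p[p > 1e-12]          # drop numerical zeros before taking the log
    return -np.sum(p * np.log(p))

def fit_central_charge(entropies, L):
    """Fit Eq. (3), S_l = (c/6) ln[(L/pi) sin(pi l/L)] + S0, for l = 1..L-1."""
    ls = np.arange(1, L)
    x = np.log(L / np.pi * np.sin(np.pi * ls / L))
    slope, s0 = np.polyfit(x, np.asarray(entropies), 1)
    return 6 * slope, s0
```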
## IV Results
### IV.1 $1/\alpha=0$
Now we first consider the case of $\alpha=\infty$, which implies that only the nearest-neighbor term of the $xy$-interaction survives. It turns out that the long-range interaction of the $z$-component governed by $\beta$ may result in novel properties in competition with the $xy$-components. In this case, Hamiltonian (1) can be recast to describe a one-dimensional interacting spinless fermionic chain via the Jordan-Wigner transformation:
$S^{z}_{i}=\frac{1}{2}-c_{i}^{\dagger}c_{i},\qquad S^{+}_{i}=e^{i\pi\sum_{j=1}^{i-1}c_{j}^{\dagger}c_{j}}\,c_{i},\qquad S^{-}_{i}=e^{i\pi\sum_{j=1}^{i-1}c_{j}^{\dagger}c_{j}}\,c_{i}^{\dagger},$
where $S^{\pm}_{i}=S^{x}_{i}\pm iS^{y}_{i}$ are the raising and lowering spin operators. The ferromagnetic $J_{xy}$-term thus simply represents the hopping of fermions, while the $J_{z}$-term stands for the density-density interaction of fermions, which can be either attractive for $J_{z}<0$ or repulsive for $J_{z}>0$. One may expect this density-density interaction to result in quantum transitions for different $\alpha$ and $\beta$.
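For very small chains, the structure of Hamiltonian (1) and its correlators can be cross-checked by exact diagonalization; the following Python sketch (dense matrices, feasible only up to $L\sim 12$, and not the DMRG used for the results below) builds the Hamiltonian and evaluates a longitudinal correlation in the ground state:

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    ops = [id2] * L
    ops[i] = op
    return reduce(np.kron, ops)

def hamiltonian(L, Jxy, Jz, alpha, beta):
    """Dense matrix of Hamiltonian (1) with power-law couplings."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        for j in range(i + 1, L):
            r = j - i
            H += (Jxy / r**alpha) * (site_op(sx, i, L) @ site_op(sx, j, L)
                                     + site_op(sy, i, L) @ site_op(sy, j, L))
            H += (Jz / r**beta) * (site_op(sz, i, L) @ site_op(sz, j, L))
    return H

# alpha = inf keeps only the nearest-neighbor xy-term
# (1**inf = 1 while r**inf = inf for r >= 2, so those couplings vanish).
L = 8
H = hamiltonian(L, Jxy=-1.0, Jz=1.0, alpha=float("inf"), beta=2.0)
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]
# Longitudinal end-to-end correlation <S^z_0 S^z_{L-1}> in the ground state.
czz = np.real(gs.conj() @ site_op(sz, 0, L) @ site_op(sz, L - 1, L) @ gs)
print(czz)
```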
To explore this, we compute the correlation functions between two spins at $i$ and $j$ at a distance $r=|i-j|$ for $\beta=2$ using the iDMRG algorithm. Figure 1 shows results for $r=99$. One can see that when $J_{z}<-0.636$, the transverse correlation $\langle S^{+}_{i}S^{-}_{i+99}\rangle=0$ and the longitudinal correlation $\langle S^{z}_{i}S^{z}_{i+99}\rangle=1/4$, implying that the system is in the FM phase; $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ then suddenly jumps to a positive value at $J_{z}=-0.636$ while $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ drops to zero simultaneously. This discontinuity indicates that the ground state undergoes a first-order transition from the FM phase into the $XY$ phase. This discontinuous feature is thus utilized here to determine the critical values of $\beta$ and $J_{z}$ for the quantum phase transition between the $XY$ and FM phases.
Figure 1: (Color online) Correlation functions $\langle
S^{+}_{i}S^{-}_{i+r}\rangle$ and $\langle S^{z}_{i}S^{z}_{i+r}\rangle$ are
plotted as a function of $z$-component interaction $J_{z}$ for
$\alpha=\infty$, $\beta=2$ and $r=99$. Inset: a log-log plot for $\langle
S^{+}_{i}S^{-}_{i+r}\rangle$ as a function of $r$ when $J_{z}=\pm 0.5$. Figure
2: (Color online) (a) Entanglement entropies are plotted as a function of
$z$-component interaction $J_{z}$ for various system sizes with
$\alpha=\infty$ and $\beta=2$. (b) The peak positions of $S_{L/2}$ versus
system sizes $L$.
Moreover, as $J_{z}$ further increases, the transverse correlation $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ gradually reduces to zero, while the longitudinal correlation $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ turns negative for $J_{z}\gtrsim 3/2$, which signals that the system is driven into an AFM phase. A little scrutiny reveals that the transverse correlation $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ satisfies a power-law decay with the distance $r$ gong17 , as manifested in the inset of Fig. 1. To determine the critical point of the transition between the $XY$ phase and the AFM phase more precisely, we also calculate the von Neumann entropy, i.e. the entanglement entropy, between the two halves of the chain using the finite-size DMRG algorithm. The entanglement entropy is shown in Fig. 2 as a function of $J_{z}$ with $\beta=2$ for different sizes of the chain. With increasing $J_{z}$, the EE first increases and then declines. The peak becomes more pronounced for a larger size $L$ and its location moves to a lower value of $J_{z}$, characterizing a transition between the $XY$ phase and the AFM phase Wang . According to finite-size scaling theory Fisher ; Barber83 , it is expected that the position of the pseudo-critical point for a finite-size system approaches the true critical point as $L\to\infty$. For relevant operators in the driving Hamiltonian on sufficiently large systems, i.e., $\nu d<2$, where $\nu$ is the critical exponent of the correlation length and $d$ the dimensionality of the system, the leading term in the expansion of the pseudo-critical point obeys
$\displaystyle|J_{z}^{c}(L)-J_{z}^{c}(\infty)|\propto L^{-1/\nu},$ (4)
where $J_{z}^{c}(\infty)$ is the critical value for the thermodynamic limit.
Such algebraic convergence can be accelerated considerably by some elaborate strategies Roncaglia . We obtain $J_{z}^{c}=1.520$ and $\nu=1.695$ for the present case, consistent with the inflection point of the correlations
shown in Fig. 1. We note that the scaling behavior of Eq. (4) with $L$ is also
valid for the maximum of fidelity susceptibility defined in Eq.(6) You2011
(see below).
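As a sketch, the extrapolation of Eq. (4) amounts to a standard nonlinear fit; the pseudo-critical values below are illustrative placeholders, not our data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative pseudo-critical points J_z^c(L) from the peaks of S_{L/2}.
Ls = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
jc_L = np.array([1.62, 1.59, 1.57, 1.56, 1.55])

def scaling(L, jc_inf, a, nu):
    # Eq. (4): J_z^c(L) = J_z^c(inf) + a * L**(-1/nu)
    return jc_inf + a * L**(-1.0 / nu)

popt, _ = curve_fit(scaling, Ls, jc_L, p0=(1.5, 1.0, 1.7))
jc_inf, a, nu = popt
print(f"J_z^c(inf) = {jc_inf:.3f}, nu = {nu:.3f}")
```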
Figure 3: (Color online) Finite-size scaling of the energy gap $\Delta$ for various $\beta$ and $J_{z}$. Symbols show numerical results obtained by DMRG calculations and solid lines are fits of the data by quadratic polynomials in $1/L$. The results for $J_{z}=0$ are also plotted for comparison.
Low-lying excitation energies often reveal distinctive features of different phases in quantum many-body interacting systems. As mentioned previously, the system involves the gapless $XY$ phase for $-1<J_{z}\leq 1$ in the limit of $\beta=\infty$, which has the central charge $c_{\rm eff}=1$ owing to conformal symmetry Vidal03 . In the Jordan-Wigner representation of the Hamiltonian (1), the interacting spinless fermions have a linear $1/L$-dependence of the finite-size energy gap, reflecting the relativistic spectrum at the Fermi point, i.e., the low-lying property of the spectrum of the Luttinger liquid. When $\beta\neq\infty$, however, it is clearly of great interest whether such an $XY$ phase can be robust against a strong long-range repulsive interaction. For $J_{z}=1$ and $\beta=1$ Schulz ; Li , it was suggested that the ground state would be a quasi-Wigner crystal (WC), which results from the dominance of the long-range repulsive interaction over the kinetic energy. We calculated the finite-size energy gap $\Delta(L)$ between the ground state and the first excited state as a function of system size for various cases, as illustrated in Fig. 3. One can see that the energy gap $\Delta(\beta,J_{z})$ can be either zero, including the case of $J_{z}=1$ and $\beta=1$, or finite in the thermodynamic limit, which can be assigned to the $XY$ and gapped quasi-WC phases, respectively. However, for given $J_{z}$, when $\beta$ approaches its critical value $\beta_{c}$ from either the $XY$ phase or the WC phase, where $\Delta(L)=\Delta(\beta,J_{z})+A_{1}/L+O(1/L^{2})$ You14 , it becomes rather difficult to accurately determine the phase boundary between these two phases due to the limited precision on tiny values of $\Delta(L)$. Instead, we adopt the effective central charge $c_{\rm eff}$ deduced from the scaling behavior of the entanglement entropy given in Eq. (3), which enables us to allocate the phase boundary more accurately. We note that this scaling behavior is valid in the presence of the long-range interaction, as demonstrated numerically in Fig. 4, although it was originally derived for short-range interacting cases with conformal symmetries Cardy ; Laflorencie .
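A minimal sketch of this gap extrapolation, fitting $\Delta(L)=\Delta(\beta,J_{z})+A_{1}/L+A_{2}/L^{2}$ as a quadratic polynomial in $1/L$ (the gap values are illustrative placeholders):

```python
import numpy as np

# Illustrative finite-size gaps Delta(L); in practice these come from DMRG.
Ls = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
gaps = np.array([0.131, 0.094, 0.075, 0.064, 0.056])

# polyfit in 1/L returns [A2, A1, Delta_inf]; the constant term is the
# extrapolated thermodynamic-limit gap Delta(beta, J_z).
coeffs = np.polyfit(1.0 / Ls, gaps, 2)
delta_inf = coeffs[-1]
print(f"extrapolated gap: {delta_inf:.4f}")
```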
Figure 4: (Color online) The scaling behavior of the entanglement entropy versus
$\ln(x)=\ln[L/\pi\sin(\pi l/L)]$ for different values of $\beta^{-1}$ with
$L=300$. Inset shows the fitted coefficients as a function of $\beta^{-1}$ for
system sizes $L=200$ (square) and $L=300$ (circle).
Figure 4 shows the entanglement entropy as a function of $\ln[L/\pi\sin(\pi l/L)]$ for various values of $\beta$ and positive $J_{z}$. It is instructive that the entanglement entropy still follows the scaling behavior of Eq. (3), although conformal symmetries are not yet known here in general. Subsequently, the slope of the linear behavior gives rise to an effective central charge $c_{\rm eff}$, which varies with $\beta$ as illustrated for system sizes $L=200$ and $300$ at $J_{z}=1$ in the inset of Fig. 4. One can see that the finite-size effect for small $1/\beta$ is small but still visible, resulting in a correction to $c^{0}_{\rm eff}=1$ for the thermodynamic limit, but it diminishes with increasing $1/\beta$. The curves for these two sizes cross a horizontal line corresponding to $c_{\rm eff}=c^{0}_{\rm eff}$ at $1/\beta_{c}=0.756$, where irrelevant corrections to Eq. (3) vanish. The finite-size effect then becomes negligible for $\beta\leq\beta_{c}$. This provides an alternative way, with higher accuracy, to determine the transition points between the $XY$ (critical) and WC (noncritical) phases Alet ; gong16 ; gong17 .
Figure 5: (Color online) Phase diagram of Hamiltonian (1) as a function of the interaction $J_{z}$ and $1/\beta$ with $\alpha\rightarrow+\infty$.
In addition, we note that the FM phase is formed owing to the instability of the effectively attractive density-density interaction for $J_{z}\leq 0$ upon changing $1/\beta$. Accordingly, the central charge is zero for the FM phase, but it takes the value 3/2 on its phase boundary with the $XY$ phase in the thermodynamic limit Chen ; Olalla ; Alba . To this end, the phase diagram is depicted in Fig. 5 for $\alpha=\infty$. One can see that the critical points between the $XY$ phase and the FM phase asymptotically approach $J_{z}=0$, while the critical points between the AFM phase and the $XY$ phase mount up with increasing $1/\beta$. Moreover, it is worthwhile to mention that at $\beta=0$ with $J_{z}>0$, the $J_{z}$ term effectively results in a sort of long-range frustration with the same strength for all sites, whose diagonal elements cancel each other in the ground state, corresponding to the $S^{z}_{total}=0$ subspace Zerobeta . In this case, the ground state again becomes gapless and the central charge equals one. In particular, the energy gap $\Delta(L)$ scales to zero in the limit of $L\rightarrow\infty$ independently of $J_{z}$, as illustrated for both $J_{z}=1$ and $J_{z}=2$ in Fig. 3. Moreover, the entanglement entropy behaves the same for $J_{z}=1$ and $2$, resulting in $c_{\rm eff}\simeq 1.02$, as seen in Fig. 4. Being connected to $J_{z}=0$, the system is naturally considered to be indeed in the $XY$ phase, i.e. the transition between the FM and $XY$ phases takes place at $J_{z}=0$ for $1/\beta=\infty$.
### IV.2 $1/\beta=0$
In this section, we turn to the case of $\beta\rightarrow+\infty$, in which only the nearest-neighbor interaction survives in the $J_{z}$-term of the Hamiltonian (1). The exponent $\alpha$ of the long-range $XY$ interaction can be considered a tunable parameter to explore the quantum phase transition for various values of $J_{z}$.
Figure 6: (Color online) Correlation functions $\langle
S^{+}_{i}S^{-}_{i+99}\rangle$ and $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ are
plotted as a function of the interaction $J_{z}$ for (a) $\alpha=4$ and (b)
$\alpha=2$. Inset: A log-log plot of $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ as
a function of the distance $r$ with $J_{z}=\pm 0.5$.
Figure 6 shows the dependence of two-spin correlations on $J_{z}$ with a
distance of $|i-j|=99$ for different $\alpha$, calculated by using the iDMRG
algorithm. When $J_{z}$ is negatively large enough, $\langle
S^{+}_{i}S^{-}_{i+99}\rangle=0$, $\langle S^{z}_{i}S^{z}_{i+99}\rangle=1/4$,
suggesting that the system is in the FM phase. When $J_{z}$ is sufficiently
large, the transverse correlations remain zero, whereas $\langle
S^{z}_{i}S^{z}_{i+99}\rangle$ becomes negative so that the ground state is a
AFM state. Analogous to the case of $\alpha=\infty$, here we again utilize the
discontinuity of the correlation functions to allocate the critical points for
$\alpha$ and $J_{z}$ at the boundary of the FM phase, while the boundary of
the AFM phase is also determined in terms of the entanglement entropy (see
below).
In the intermediate range of $J_{z}$, one can further see that the transverse correlations $\langle S^{+}_{i}S^{-}_{i+99}\rangle$ are positive but the longitudinal correlations $\langle S^{z}_{i}S^{z}_{i+99}\rangle$ vanish. Interestingly, we find that $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ is a concave function of $J_{z}$ for $\alpha=2$, but becomes a convex one for $\alpha=4$. Moreover, when $J_{z}=\pm 0.5$, $\langle S^{+}_{i}S^{-}_{i+r}\rangle$ behaves as a power law of $1/r$, vanishing in the limit of $r\rightarrow\infty$, as illustrated for $\alpha=4$ in the inset of Fig. 6(a), whereas ${\lim_{r\to+\infty}}\langle S^{+}_{i}S^{-}_{i+r}\rangle$ approaches a finite constant, as seen for $\alpha=2$ in the inset of Fig. 6(b). Therefore, the ground state for $\alpha=2$ in the intermediate range of $J_{z}$ is different from that for $\alpha=4$.
In this range of $J_{z}$, it is natural to assign the large-$\alpha$ phase to the $XY$ phase, since this phase contains the special case of $\alpha=\infty$ and $J_{z}=0$, for which the Hamiltonian (1) reduces to that of a standard $XY$ chain, as already shown in Fig. 5. Moreover, when $\alpha$ is small or even not too large, one can show that a $U(1)$ symmetry of the ground state is spontaneously broken at $J_{z}=0$ by using conformal field analysis and perturbation calculations gong17 . It turns out that one can expect the emergence of a continuous symmetry breaking (CSB) phase with gapless excitations for the small-$\alpha$ phase. It has been shown that a Berezinskii-Kosterlitz-Thouless-like transition happens between the CSB phase and the $XY$ phase at $1/\alpha_{c}\simeq 0.34$, at which the central charge is numerically increased by $4\%$ from unity. However, the criterion of the $4\%$ addition to the central charge might be invalid for the determination of the critical points at general values of $J_{z}$. To address this issue, we calculate the fidelity susceptibility, which has been proposed for the identification of the critical points of continuous quantum phase transitions Gu2010 and even deconfined quantum critical points Sun19 , and successfully applied to various strongly correlated systems You15 ; You17 ; Ren18 ; Luo18 .
As a quantum information metric Gu2010 ; You , the fidelity measures the similarity between the two closest ground states when the parameter $\alpha$ of the Hamiltonian (1) is tuned slightly; it is defined as
$F=|\langle\psi_{0}(\alpha)|\psi_{0}(\alpha+\delta\alpha)\rangle|,$ (5)
where $\delta\alpha$ denotes a tiny deviation. Subsequently, we obtain the
derivatives of interactions $\delta
J_{i,j}=-\frac{J_{xy}}{|i-j|^{\alpha}}\ln|i-j|\delta\alpha$, where $J_{i,j}$
is the interaction strength between two spins at sites $i$ and $j$. The
average derivatives of interactions per site are practically considered as an
effective tuning parameter $\delta J=\frac{\sum_{i<j}\delta J_{i,j}}{L}$.
Therefore, the fidelity susceptibility per site can be calculated numerically
by
$\chi=\lim_{\delta J\rightarrow 0}\frac{-2\ln F}{L(\delta J)^{2}},$ (6)
whose peak is thus used to identify the critical value of $\alpha$ and to
separate the CSB phase from the $XY$ phase for each $J_{z}$.
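The following Python sketch shows how $\delta J$ and $\chi$ of Eqs. (5)-(6) can be evaluated from two ground-state vectors; here gs_a and gs_b stand for $|\psi_{0}(\alpha)\rangle$ and $|\psi_{0}(\alpha+\delta\alpha)\rangle$, obtained from DMRG or, on small chains, exact diagonalization:

```python
import numpy as np

def effective_dJ(L, Jxy, alpha, dalpha):
    """Effective tuning parameter dJ = (1/L) sum_{i<j} dJ_{i,j}, with
    dJ_{i,j} = -(Jxy/|i-j|^alpha) ln|i-j| dalpha (see text)."""
    total = 0.0
    for i in range(L):
        for j in range(i + 1, L):
            r = j - i
            total += -(Jxy / r**alpha) * np.log(r) * dalpha
    return total / L

def fidelity_susceptibility(gs_a, gs_b, L, dJ):
    """chi = -2 ln F / (L dJ^2) with F = |<psi0(alpha)|psi0(alpha+dalpha)>|."""
    F = abs(np.vdot(gs_a, gs_b))   # vdot conjugates its first argument
    return -2.0 * np.log(F) / (L * dJ**2)
```

Scanning $\alpha$ and locating the maximum of $\chi$ then reproduces the peak analysis of Fig. 7.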
In our numerical calculations, we take $\delta\alpha=0.005$. For the case of $L=100$ and $\alpha=3$, the effective tuning parameter is $\delta J\simeq 0.001$. The ground-state fidelity susceptibility per site $\chi$ is shown for $J_{z}=0,1$ as a function of the parameter $\alpha$ for different sizes in Fig. 7 (a) and (b), respectively. For each $J_{z}$, one can see that the peaks of $\chi$ grow with increasing system size, so that a divergent peak would be expected in the $L\rightarrow\infty$ limit, signaling the appearance of a quantum phase transition. In order to locate the quantum critical point $\alpha_{c}$ in the thermodynamic limit, we use finite-size scaling analysis to obtain $\alpha_{c}=2.83$ and $\nu=1$ at $J_{z}=0$, as seen in the inset of Fig. 7(a). This value of $\alpha_{c}$ is in good agreement with that determined by the central charge and the perturbation theory calculation gong17 . Similarly, we can determine the critical points at other values of $J_{z}$ for the boundary between the CSB and $XY$ phases. In particular, the critical value $\alpha_{c}=2.45$ for $J_{z}=1.0$ is obtained from the results shown in Fig. 7(b).
Figure 7: (Color online) Fidelity susceptibility per site is plotted as a
function of parameter $\alpha$ for various system sizes with (a) $J_{z}=0$ and
(b) $J_{z}=1.0$. Inset: Scaling behavior of the fidelity susceptibility peak
points with respect to $1/L$.
Now we turn to quantum phase transitions between the intermediate and AFM
phases, which are characterized by the peaks of the entanglement entropies as
demonstrated for $\alpha=2,4$ in Fig. 8. One can see that the peaks for both
cases in (a) and (c) of Fig. 8 move to lower values of $J_{z}$ when $L$
increases. Fitting the locations of the peaks with formula (4), as shown in (b) and (d) of Fig. 8, one obtains $J_{z}^{c}=1.35$ and $2.21$, respectively. These fitted results agree very well with the inflexion points of
the correlations shown in Fig. 6. In the same manner, we allocate more
critical values of $J_{z}$ and $\alpha$ for the boundary of the AFM phase with
both $XY$ and CSB phases.
Figure 8: (Color online) Entanglement entropy is plotted as a function of the
interaction $J_{z}$ on different system sizes for (a) $\alpha=4$ and (c)
$\alpha=2$. The peak positions of $S_{L/2}$ versus the system size $L$ for (b)
$\alpha=4$ and (d) $\alpha=2$.
Based on the above analysis of the properties of the correlation functions, the fidelity susceptibility and the entanglement entropy, we establish the ground-state phase diagram of the Hamiltonian (1) with $\beta=\infty$, as shown in Fig. 9.
Figure 9: (Color online) Phase diagram of Hamiltonian (1) as a function of
the interaction $J_{z}$ and $\alpha$ with $\beta\rightarrow+\infty$.
## V Discussion
In this paper, we have studied quantum phase transitions of a quantum
spin-$1/2$ chain with anisotropic power-law-decaying long-range interactions,
characterized by the exponents $\alpha$ for the $xy$-term and $\beta$ for the
$z$-term, by employing the density-matrix renormalization-group method. By
numerically analyzing the effects of $\alpha$ and $\beta$ on the spin-spin
correlation functions, the entanglement entropy, the central charge, and the
fidelity susceptibility, we establish two phase diagrams, for $\alpha=\infty$
and $\beta=\infty$, respectively.
Both cases involve a ferromagnetic phase and an antiferromagnetic phase,
corresponding to sufficiently negative and positive $J_{z}$, respectively.
However, in the intermediate regime of $J_{z}$, the former involves not only
the usual $XY$ phase, effectively equivalent to that of a short-range
repulsive density-density interaction, but also a Wigner-crystal phase, which
essentially results from a sufficiently strong long-range $J_{z}$ term; for
the latter, the gapped Wigner-crystal phase is replaced by a continuous
$U(1)$ symmetry-breaking phase. Moreover, it is interesting to note that the
WC and CSB phases reflect two different mechanisms, resulting respectively
from two-body processes of the strong long-range repulsive interaction and
from one-body kinetic processes of the long-range hopping in the fermion
representation.
From this study, we find that the entanglement entropy and the central charge
can be used efficiently to extract the critical values of a quantum phase
transition between two phases when one of them possesses a well-defined
central charge and the other is gapped Luo2019 . However, for a quantum phase
transition between two gapless phases, the fidelity susceptibility provides a
more feasible way to locate the critical points, as applied here to the
transition between the $XY$ and continuous $U(1)$ symmetry-breaking phases.
So far we have focused on the ground-state phase diagrams only for
$\alpha=\infty$ and $\beta=\infty$. There are several important aspects of
the Hamiltonian (1) beyond these two cases, such as the ground-state phase
diagram for $\alpha=\beta$ and $J_{xy}>0$, and extensions to two-leg ladders
and even two dimensions. The emergence of non-trivial gapless phases, the
corresponding novel low-lying excitation spectra or exotic collective
excitations with special symmetries, and the thermodynamic and dynamic
properties in the presence of long-range interactions are all interesting
questions that remain open for future studies.
###### Acknowledgements.
This work is supported by the National Program on Key Research Project (Grant
No. 2016YFA0300501) and the National Natural Science Foundation of China under
Grants No. 11104021, 11474211, 61674110 and 11974244. W.L.Y. acknowledges
support from the start-up fund of Nanjing University of Aeronautics and
Astronautics. X.W. also acknowledges additional support from a Shanghai
talent program.
## References
* (1) S. Sachdev, Quantum Phase Transitions (Cambridge University Press, Cambridge, England, 1999).
* (2) Xiaoqun Wang, Mod. Phys. Lett. B 14, 327 (2000).
* (3) Qiang Luo, Shijie Hu, Bin Xi, Jize Zhao, and Xiaoqun Wang, Phys. Rev. B 95, 165110 (2017).
* (4) Qiang Luo, Jize Zhao and Xiaoqun Wang, Phys. Rev. B 100, 121111(R) (2019).
* (5) T. C. Yi, W. L. You, N. Wu and A. M. Oleś, Phys. Rev. B 100, 024423 (2019).
* (6) N. Wu and W. L. You, Phys. Rev. B 100, 085130 (2019).
* (7) M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010).
* (8) T. Lahaye, C. Menotti, L. Santos, M. Lewenstein, and T. Pfau, Rep. Prog. Phys. 72, 126401 (2009).
* (9) X. L. Deng, D. Porras, and J. I. Cirac, Phys. Rev. A 72, 063407 (2005).
* (10) Bo Yan, Steven A. Moses, Bryce Gadway, Jacob P. Covey, Kaden R. A. Hazzard, Ana Maria Rey, Deborah S. Jin, and Jun Ye, Nature 501, 521 (2013).
* (11) J. W. Britton, B. C. Sawyer, A. C. Keith, C.-C. Joseph Wang, J. K. Freericks, H. Uys, M. J. Biercuk, and J. J. Bollinger, Nature(London) 484, 489 (2012).
* (12) R. Islam, C. Senko, W. C. Campbell, S. Korenblit, J. Smith, A. Lee, E. E. Edwards, C. C. J. Wang, J. K. Freericks, and C. Monroe, Science 340, 583 (2013).
* (13) P. Richerme, Z.-X. Gong, A. Lee, C. Senko, J. Smith, M. FossFeig, S. Michalakis, A. V. Gorshkov, and C. Monroe, Nature (London) 511, 198 (2014).
* (14) P. Jurcevic, B. P. Lanyon, P. Hauke, C. Hempel, P. Zoller, R. Blatt, and C. F. Roos, Nature 511, 202 (2014).
* (15) W. Dür, L. Hartmann, M. Hein, M. Lewenstein, and H. J. Briegel, Phys. Rev. Lett. 94, 097203 (2005).
* (16) T. Koffel, M. Lewenstein, and L. Tagliacozzo, Phys. Rev. Lett. 109, 267203 (2012).
* (17) G. Sun, Phys. Rev. A 96, 043621 (2017).
* (18) Z. Zhu, G. Sun, W. L. You, D. N. Shi, Phys. Rev. A 98, 023607 (2018).
* (19) Z. X. Gong, M. F. Maghrebi, A. Hu, M. L. Wall, M. Foss-Feig, and A. V. Gorshkov, Phys. Rev. B 93, 041102(R) (2016).
* (20) M. F. Maghrebi, Z. X. Gong, and Alexey V. Gorshkov, Phys. Rev. Lett. 119, 023001 (2017).
* (21) Z. X. Gong, M. F. Maghrebi, A. Hu, M. Foss-Feig, P. Richerme, C. Monroe, and A. V. Gorshkov, Phys. Rev. B 93, 205115 (2016).
* (22) Z. X. Gong, Michael Foss-Feig, Fernando G. S. L. Brandão, and Alexey V. Gorshkov, Phys. Rev. Lett. 119, 050501 (2017).
* (23) Irénée Frérot, Piero Naldesi, and Tommaso Roscilde, Phys. Rev. B 95, 245111 (2017).
* (24) Laurens Vanderstraeten, Maarten Van Damme, Hans Peter Büchler, and Frank Verstraete, Phys. Rev. Lett. 121, 090603 (2018).
* (25) S. R. White, Phys. Rev. B 48, 10345 (1993).
* (26) I. Peschel, X. Q. Wang, M. Kaulke, and K. Hallberg, Density Matrix Renormalization, Lecture Notes in Physics Vol. 528 (Springer, Berlin, 1999).
* (27) U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
* (28) I. P. McCulloch, arXiv:0804.2509.
* (29) U. Schollwöck, Ann. Phys. (NY) 326, 96 (2011).
* (30) Sabyasachi Nag and Arti Garg, Phys. Rev. B 99, 224203 (2019).
* (31) G. M. Crosswhite, A. C. Doherty, and G. Vidal, Phys. Rev. B 78, 035116 (2008).
* (32) ITensor library, http://itensor.org/.
* (33) P. Calabrese and J. Cardy, J. Stat. Mech., P06002 (2004). https://doi.org/10.1088/1742-5468/2004/06/P06002.
* (34) Jize Zhao, Ingo Peschel and Xiaoqun Wang, Phys. Rev. B 73, 024417 (2006).
* (35) Shijie Hu, Bruce Normand, Xiaoqun Wang, Lu Yu, Phys. Rev. B 84, 220402(R) (2011).
* (36) B. Wang, M. Feng, Z. Q. Chen, Phys. Rev. A 81, 064301 (2010).
* (37) M. E. Fisher and M. N. Barber, Phys. Rev. Lett. 28, 1516 (1972).
* (38) M. N. Barber, in Phase Transitions and Critical Phenomena, edited by C. Domb and J. L. Lebowitz (Academic, London, 1983), pp. 146-259.
* (39) M Roncaglia, L Campos Venuti and C Degli Esposti Boschi, J. Stat. Mech. (2015) P04005. http://dx.doi.org/10.1088/1742-5468/2015/04/P04005.
* (40) Wen-Long You and Yu-Li Dong, Phys. Rev. B 84, 174426 (2011).
* (41) G. Vidal, J. I. Latorre, E. Rico, A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003).
* (42) H. J. Schulz, Phys. Rev. Lett. 71, 1864 (1993).
* (43) Zhi-Hua Li, J. Phys.: Condens. Matter 31, 255601 (2019).
* (44) W. L. You, G. H. Liu, P. Horsch, and A. M. Oleś, Phys. Rev. B 90, 094413 (2014).
* (45) N. Laflorencie, E. S. Sørensen, M. S. Chang and I. Affleck, Phys. Rev. Lett. 96, 100603 (2006).
* (46) F. Alet, I.P. McCulloch, S. Capponi, M. Mambrini, Phys. Rev. B 82, 094452 (2010).
* (47) Pochung Chen, Zhi-long Xue, I. P. McCulloch, Ming-Chiang Chung, Miguel Cazalilla, S.-K. Yip, J. Stat. Mech., P10007 (2013). https://doi.org/10.1088/1742-5468/2013/10/P10007.
* (48) Olalla. A Castro-Alvaredo and Benjamin Doyon., J. Stat. Mech., P02001 (2011). https://doi.org/10.1088/1742-5468/2011/02/P02001.
* (49) Vincenzo Alba, Masudul Haque, and Andreas. M Läuchli., J. Stat. Mech., P08011 (2012) . https://doi.org/10.1088/1742-5468/2012/08/P08011.
* (50) When $\beta=0$, $J_{z}$ term in Eq. (1) can be written as $J_{z}[(S^{z}_{total})^{2}-L/4]$ under the periodic boundary condition. For small values of $\beta$, whether effective cancelling of those diagonal elements remains is to be further explored.
* (51) Shi-Jian Gu, Int. J. Mod. Phys. B 24, 4371 (2010) and more References therein.
* (52) G. Sun, B. B. Wei, and S. P. Kou, Phys. Rev. B 100, 064427 (2019).
* (53) W. L. You and L. He, J. Phys.: Condens. Matter 27, 205601 (2015).
* (54) W. L. You, C. J. Zhang, W. Ni, M. Gong, and A. M. Oleś, Phys. Rev. B 95, 224404 (2017).
* (55) J. Ren, Y. Wang, and W. L. You, Phys. Rev. A 97, 042318 (2018).
* (56) Qiang Luo, Jize Zhao, and Xiaoqun Wang, Phys. Rev. E 98, 022106 (2018).
* (57) W. L. You, Y. W. Li, and S. J. Gu, Phys. Rev. E 76, 022101 (2007).
|
2024-09-04T02:54:58.599700 | 2020-03-10T03:43:55 | 2003.04520 | {
"authors": "Yongtao Li",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26129",
"submitter": "Yongtao Li",
"url": "https://arxiv.org/abs/2003.04520"
} | arxiv-papers | # Extensions of some matrix inequalities related to trace and partial
traces††thanks: This paper was first announced in March 2020, and was later
published on Linear Algebra and its Applications 639 (2022) 205–224. See
https://doi.org/10.1016/j.laa.2022.01.006. E-mail addresses:
<EMAIL_ADDRESS>(Yǒngtāo Lǐ).
Yongtao Li∗
School of Mathematics, Hunan University
Changsha, Hunan, 410082, P.R. China
###### Abstract
We first present a determinant inequality related to partial traces for
positive semidefinite block matrices. Our result extends a result of Lin
[Czech. Math. J. 66 (2016)] and improves a result of Kuai [Linear Multilinear
Algebra 66 (2018)]. Moreover, we provide a unified treatment of a result of
Ando [ILAS Conference (2014)] and a recent result of Li, Liu and Huang
[Operators and Matrices 15 (2021)]. Furthermore, we also extend some
determinant inequalities involving partial traces to a larger class of
matrices whose numerical ranges are contained in a sector. In addition, some
extensions on trace inequalities for positive semidefinite $2\times 2$ block
matrices are also included.
Dedicated to Prof. Weijun Liu on his 60th birthday
Key words: Partial traces; Trace inequalities; Fiedler and Markham; Numerical
range in a sector;
2010 Mathematics Subject Classification. 15A45, 15A60, 47B65.
## 1 Introduction
Throughout the paper, we use the following standard notation. The set of
$n\times n$ complex matrices is denoted by $\mathbb{M}_{n}(\mathbb{C})$, or
simply by $\mathbb{M}_{n}$, and the identity matrix of order $n$ by $I_{n}$,
or $I$ for short. We write $\lambda_{i}(A)$ and $\sigma_{i}(A)$ for the $i$-th
largest eigenvalue and singular value of $A$, respectively. By convention, if
$A\in\mathbb{M}_{n}$ is positive semidefinite, we write $A\geq 0$. For
Hermitian matrices $A$ and $B$ with the same size, $A\geq B$ means that $A-B$
is positive semidefinite, i.e., $A-B\geq 0$. If $A=[a_{i,j}]$ is of order
$m\times n$ and $B$ is of order $s\times t$, the tensor product of $A$ with
$B$, denoted by $A\otimes B$, is the $ms\times nt$ matrix that is partitioned
into $m\times n$ blocks with the $(i,j)$-block being the $s\times t$ matrix
$a_{i,j}B$. In this paper, we are interested in complex block matrices. Let
$\mathbb{M}_{n}(\mathbb{M}_{k})$ be the set of complex matrices partitioned
into $n\times n$ blocks with each block being $k\times k$. The element of
$\mathbb{M}_{n}(\mathbb{M}_{k})$ is usually written as
${H}=[H_{i,j}]_{i,j=1}^{n}$, where $H_{i,j}\in\mathbb{M}_{k}$ for all $i,j$.
Now we introduce the definition of partial traces, which comes from Quantum
Information Theory [32, p. 12]. For $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$, the
first partial trace (map) $H\mapsto\mathrm{tr}_{1}H\in\mathbb{M}_{k}$ is
defined as the adjoint map of the embedding map $X\mapsto I_{n}\otimes
X\in\mathbb{M}_{n}\otimes\mathbb{M}_{k}$. Correspondingly, the second partial
trace (map) $H\mapsto\mathrm{tr}_{2}H\in\mathbb{M}_{n}$ is defined as the
adjoint map of the embedding map $Y\mapsto Y\otimes
I_{k}\in\mathbb{M}_{n}\otimes\mathbb{M}_{k}$. Therefore, we have
$\langle I_{n}\otimes X,H\rangle=\langle
X,\mathrm{tr}_{1}H\rangle,\quad\forall X\in\mathbb{M}_{k},$ (1)
and
$\langle Y\otimes I_{k},H\rangle=\langle
Y,\mathrm{tr}_{2}H\rangle,\quad\forall Y\in\mathbb{M}_{n},$
where $\langle\cdot,\cdot\rangle$ stands for the Hilbert-Schmidt inner
product, i.e., $\langle A,B\rangle={\rm tr}(A^{*}B)$. The above definition of
partial traces is implicit. If $H=[H_{i,j}]_{i,j=1}^{n}$ is an $n\times n$
block matrix with $H_{i,j}\in\mathbb{M}_{k}$, an explicit and equivalent
version of the partial traces is given in [4, pp. 120–123] as
$\mathrm{tr}_{1}{H}=\sum\limits_{i=1}^{n}H_{i,i},$ (2)
and
$\mathrm{tr}_{2}{H}=\bigl{[}\mathrm{tr}H_{i,j}\bigr{]}_{i,j=1}^{n}.$
It is easy to see that both ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$ are positive
semidefinite whenever ${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ is positive
semidefinite; see, e.g., [36, p. 237] or [37] for more details. The first and
the second partial traces are a rich source of matrix inequalities and have
been extensively studied in recent years; see [2, 7, 11, 22, 29] for related
topics.
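For readers who wish to experiment with these maps, here is a minimal numpy sketch that computes both partial traces of an $nk\times nk$ block matrix via the explicit form (2); the final assertions check that both partial traces preserve the trace of $H$.

```python
import numpy as np

def partial_traces(H, n, k):
    # View H as an n x n array of k x k blocks: B[i, :, j, :] is H_{i,j}.
    B = H.reshape(n, k, n, k)
    tr1 = sum(B[i, :, i, :] for i in range(n))   # tr_1 H = sum of diagonal blocks
    tr2 = np.trace(B, axis1=1, axis2=3)          # tr_2 H = [tr H_{i,j}]
    return tr1, tr2

# Sanity check on a random positive semidefinite H = M M^*:
n, k = 3, 2
M = np.random.randn(n * k, n * k) + 1j * np.random.randn(n * k, n * k)
H = M @ M.conj().T
tr1, tr2 = partial_traces(H, n, k)
assert np.isclose(np.trace(tr1), np.trace(H))
assert np.isclose(np.trace(tr2), np.trace(H))
```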
Let $A=[A_{i,j}]_{i,j=1}^{n}$ be an $n\times n$ block matrix with each block
being a $k\times k$ matrix. The usual transpose of $A$ is defined as
$A^{T}=[A_{j,i}^{T}]_{i,j=1}^{n}$. We define the partial transpose of $A$ by
$A^{\tau}=[A_{j,i}]_{i,j=1}^{n}$; that is, the partial transpose of $A$ is
obtained by transposing the block structure of $A$ while leaving each block
unchanged. More precisely,
$A^{T}=\begin{bmatrix}A_{1,1}^{T}&\cdots&A_{n,1}^{T}\\\
\vdots&\ddots&\vdots\\\
A_{1,n}^{T}&\cdots&A_{n,n}^{T}\end{bmatrix}~{}~{}\text{and}~{}~{}A^{\tau}=\begin{bmatrix}A_{1,1}&\cdots&A_{n,1}\\\
\vdots&\ddots&\vdots\\\ A_{1,n}&\cdots&A_{n,n}\end{bmatrix}.$
Although $A$ and $A^{\tau}$ have the same trace, they may have different
eigenvalues, so they are not necessarily similar. Moreover, it is known that
$A\geq 0$ does not necessarily imply $A^{\tau}\geq 0$. For example, consider
$A=\begin{bmatrix}A_{1,1}&A_{1,2}\\\
A_{2,1}&A_{2,2}\end{bmatrix}=\left[\begin{array}[]{cc;{2pt/2pt}cc}1&0&0&1\\\
0&0&0&0\\\ \hdashline[2pt/2pt]0&0&0&0\\\ 1&0&0&1\end{array}\right].$ (3)
We can see from the definition that
$A^{\tau}=\begin{bmatrix}A_{1,1}&A_{2,1}\\\
A_{1,2}&A_{2,2}\end{bmatrix}=\left[\begin{array}[]{cc;{2pt/2pt}cc}1&0&0&0\\\
0&0&1&0\\\ \hdashline[2pt/2pt]0&1&0&0\\\ 0&0&0&1\end{array}\right].$
One could easily observe that $A$ is positive semidefinite, but $A^{\tau}$ is
not positive semidefinite since it contains a principal submatrix
$\left[\begin{smallmatrix}0&1\\\ 1&0\end{smallmatrix}\right]\ngeq 0$.
Moreover, the eigenvalues of $A$ are $2,0,0,0$, and the eigenvalues of
$A^{\tau}$ are $1,1,1,-1$, so $A$ and $A^{\tau}$ are not similar. In addition,
replacing $A_{1,1}$ in the above matrix by $\left[\begin{smallmatrix}1&0\\\
0&1\end{smallmatrix}\right]$ also gives such an example. From this discussion,
we say that $A$ is positive partial transpose (or PPT for short) if both $A$
and $A^{\tau}$ are positive semidefinite. We recommend [10, 19, 26, 27] for
recent progress.
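A quick numpy check of the example in (3) confirms the eigenvalues quoted above. The partial transpose is formed by swapping the blocks $A_{i,j}$ and $A_{j,i}$; for this real symmetric $A$, this coincides with transposing each block in place.

```python
import numpy as np

A = np.array([[1., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.],
              [1., 0., 0., 1.]])               # the matrix A of Eq. (3)

B = A.reshape(2, 2, 2, 2)                      # B[i, :, j, :] is the block A_{i,j}
A_tau = B.transpose(2, 1, 0, 3).reshape(4, 4)  # (A^tau)_{i,j} = A_{j,i}

print(np.linalg.eigvalsh(A))      # [0, 0, 0, 2]  -> A is positive semidefinite
print(np.linalg.eigvalsh(A_tau))  # [-1, 1, 1, 1] -> A^tau is not
```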
The paper is organized as follows. In Section 2, we shall review some
preliminaries for a class of matrices whose numerical ranges are contained in
a sector (known as the sector matrices). This is a natural extension of the
class of positive definite matrices. In Section 3, we shall study the recent
results involving the Fiedler–Markham inequality. We provide an extension of a
result of Lin [29], and our result is also an improvement of a result of Kuai
[17]; see Theorem 3.5. Moreover, we shall extend a result of Choi [6] to the
so-called sector matrices; see Theorem 3.7. In Section 4, we give a unified
treatment of a result of Ando [2] (or see [30]) as well as a recent result of
Li, Liu and Huang [22]. Our new treatment is more concise than the original proofs.
Moreover, we also present some Ando type determinant inequalities for partial
traces, and then we extend these inequalities to sector matrices; see Theorems
4.7 and 4.8. In Section 5, we shall prove some inequalities for positive
semidefinite $2\times 2$ block matrices; see Theorems 5.2, 5.3 and 5.4. Our
results slightly extend the recent elegant work on trace inequalities proved
by Kittaneh and Lin [18] and by Lin [26].
## 2 Preliminaries
Recall that $\sigma_{i}(A)$ denotes $i$-th largest singular value of $A$. When
$A$ is Hermitian, we know that all eigenvalues of $A$ are real numbers, and we
write $\lambda_{i}(A)$ for the $i$-th largest eigenvalue. The numerical range
of $A\in\mathbb{M}_{n}$ is defined by
$W(A)=\\{x^{*}Ax:x\in\mathbb{C}^{n},x^{*}x=1\\}.$
For $\alpha\in[0,{\pi}/{2})$, let $S_{\alpha}$ be the sector on complex plane
defined as
$S_{\alpha}=\\{z\in\mathbb{C}:\Re z>0,|\Im z|\leq(\Re
z)\tan\alpha\\}=\\{re^{i\theta}:r>0,|\theta|\leq\alpha\\}.$
For $A\in\mathbb{M}_{n}$, the Cartesian (Toeplitz) decomposition is given as
$A=\Re A+i\cdot\Im A$, where
$\Re A=\frac{1}{2}(A+A^{*})$ and $\Im A=\frac{1}{2i}(A-A^{*})$.
We know from the definition that if $W(A)\subseteq S_{0}$, then $A$ is
positive definite. Moreover, it is easy to verify that if $W(A)\subseteq
S_{\alpha}$ for some $\alpha\in[0,{\pi}/{2})$, then $\Re(A)$ is positive
definite. Matrices whose numerical ranges are contained in such a sector are
called sector matrices. Clearly, the concept of sector matrices extends that
of positive definite matrices. Over the past few years, various results on
sector matrices have appeared in the
literature; see, e.g., [8, 16, 17, 28, 34, 38].
Before stating our results, we summarize the following lemmas.
###### Lemma 2.1
[28] Let $0\leq\alpha<{\pi}/{2}$ and $A\in\mathbb{M}_{n}$ with $W(A)\subseteq
S_{\alpha}$. Then
$|\det A|\leq(\sec\alpha)^{n}\det(\Re A).$
###### Lemma 2.2
[14, p. 510] Let $X$ be an $n$-square complex matrix. Then
$\lambda_{i}(\Re X)\leq\sigma_{i}(X),\quad i=1,2,\ldots,n.$
Moreover, if $\Re X$ is positive definite, then
$\det\Re X+|\det\Im X|\leq|\det X|.$
The following lemma is called the Fischer inequality, which gives an upper
bound for the determinant of a positive semidefinite block matrix in terms of
the determinants of its principal diagonal blocks. In particular, when all
blocks have order $1\times 1$, this inequality is also known as the Hadamard
inequality; see, e.g., [14, p. 506] and [36, p. 217].
###### Lemma 2.3
Let $H=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive
semidefinite. Then
$\det H\leq\prod_{i=1}^{n}\det H_{i,i}.$
###### Lemma 2.4
If $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ satisfies $W(H)\subseteq
S_{\alpha}$, then $W({\rm tr}_{1}H)\subseteq S_{\alpha}$ and $W({\rm
tr}_{2}H)\subseteq S_{\alpha}$, i.e., if $H$ is a sector matrix with angle
$\alpha\in[0,\pi/2)$, then so are ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$.
We remark that this lemma was partially proved in [17, Proposition 3.2] for
the case ${\rm tr}_{2}H$. Motivated by [17], we here include a detailed proof
for the remaining case ${\rm tr}_{1}H$.
Proof. Consider the Cartesian decomposition $H=\Re H+i\cdot\Im H$, then
${\rm tr}_{1}H={\rm tr}_{1}(\Re H)+i\cdot{\rm tr}_{1}(\Im H).$
For every $x\in\mathbb{C}^{k}$ with $x^{*}x=1$, as $\Re H$ is positive
definite, we get
$\Re\bigl{(}x^{*}({\rm tr}_{1}H)x\bigr{)}=x^{*}\bigl{(}\Re({\rm
tr}_{1}H)\bigr{)}x=x^{*}\bigl{(}{\rm tr}_{1}(\Re H)\bigr{)}x>0.$
On the other hand, by a direct computation,
$\frac{\left|\Im\bigl{(}x^{*}({\rm
tr}_{1}H)x\bigr{)}\right|}{\Re\bigl{(}x^{*}({\rm
tr}_{1}H)x\bigr{)}}=\frac{\left|x^{*}({\rm tr}_{1}(\Im H))x\right|}{x^{*}({\rm
tr}_{1}(\Re H))x}=\frac{\left|\langle xx^{*},{\rm tr}_{1}(\Im
H)\rangle\right|}{\langle xx^{*},{\rm tr}_{1}(\Re H)\rangle}.$
Note that $I_{n}\otimes(xx^{*})$ is positive semidefinite. We consider the
spectral decomposition
$I_{n}\otimes(xx^{*})=\sum_{i=1}^{nk}\lambda_{i}u_{i}u_{i}^{*},$
where $\lambda_{i}\geq 0$ and $u_{i}$ are unit vectors in $\mathbb{C}^{nk}$.
By the definition in (1), it follows that
$\displaystyle\frac{\left|\langle xx^{*},{\rm tr}_{1}(\Im
H)\rangle\right|}{\langle xx^{*},{\rm tr}_{1}(\Re H)\rangle}$
$\displaystyle=\frac{\left|\langle I_{n}\otimes(xx^{*}),\Im
H\rangle\right|}{\langle I_{n}\otimes(xx^{*}),\Re
H\rangle}=\frac{\left|\sum_{i=1}^{nk}\lambda_{i}\langle u_{i}u_{i}^{*},\Im
H\rangle\right|}{\sum_{i=1}^{nk}\lambda_{i}\langle u_{i}u_{i}^{*},\Re
H\rangle}$
$\displaystyle\leq\frac{\sum_{i=1}^{nk}\lambda_{i}\left|u_{i}^{*}(\Im
H)u_{i}\right|}{\sum_{i=1}^{nk}\lambda_{i}u_{i}^{*}(\Re
H)u_{i}}\leq\max_{1\leq i\leq nk}\frac{\left|u_{i}^{*}(\Im
H)u_{i}\right|}{u_{i}^{*}(\Re H)u_{i}}=\max_{1\leq i\leq
nk}\frac{\left|\Im(u_{i}^{*}Hu_{i})\right|}{\Re(u_{i}^{*}Hu_{i})}.$
Since each $u_{i}$ is a unit vector, $u_{i}^{*}Hu_{i}\in W(H)\subseteq
S_{\alpha}$, so the last quantity is at most $\tan\alpha$. Hence $W({\rm
tr}_{1}H)\subseteq S_{\alpha}$, which completes the proof.
Remark. Based on the second equivalent definition (2), one could also give
other ways to prove Lemma 2.4. We leave the details for the interested reader.
## 3 Extensions on Fiedler–Markham’s inequality
Let ${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive
semidefinite. Recall that both ${\rm tr}_{1}H$ and ${\rm tr}_{2}H$ are
positive semidefinite; see, e.g., [37]. In 1994, Fiedler and Markham [9,
Corollary 1] proved a celebrated determinant inequality involving the second
partial trace.
###### Theorem 3.1
[9] Let ${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be
positive semidefinite. Then
$\left(\frac{\det\bigl{(}{\rm tr}_{2}H\bigr{)}}{k}\right)^{k}\geq\det{H}.$
In 2016, Lin [29] revisited this inequality using some terminology from
quantum information theory, and gave an alternative proof of Theorem 3.1 by
applying an important identity connecting ${\rm tr}_{2}H$ and $H$. Moreover, a
natural question is whether an analogous result corresponding to the
Fiedler–Markham inequality holds for ${\rm tr}_{1}H$. Lin [29] answered this
question and proved the following counterpart.
###### Theorem 3.2
[29] Let ${H}=[H_{ij}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be
positive semidefinite. Then
$\left(\frac{\det({\rm tr}_{1}H)}{n}\right)^{n}\geq\det H.$
It is clear that in the proof of both Theorem 3.1 and Theorem 3.2, Fiedler and
Markham, and Lin used the superadditivity of the determinant functional, which
states that
$\det\left(\sum_{i=1}^{n}H_{i,i}\right)\geq\sum_{i=1}^{n}\det H_{i,i}\geq
n\left(\prod_{i=1}^{n}\det H_{i,i}\right)^{1/n}.$
This inequality can be improved by the Fan-Ky determinant inequality (see [14,
p. 488]), i.e., the log-concavity of the determinant over the cone of positive
semidefinite matrices:
$\det\left(\frac{1}{n}\sum_{i=1}^{n}H_{i,i}\right)\geq\left(\prod_{i=1}^{n}\det
H_{i,i}\right)^{1/n}.$ (4)
In addition, we mention here that a careful examination of the new proof of
Theorem 3.1 in [29] can also reveal this improvement. This improvement was
also pointed out in [6, 31]. Next, we state the strong version of Theorem 3.1
and Theorem 3.2.
###### Theorem 3.3
Let ${H}=[H_{ij}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive
semidefinite. Then
$\left(\frac{\det\bigl{(}{\rm tr}_{2}H\bigr{)}}{k^{n}}\right)^{k}\geq\det{H},$
and
$\left(\frac{\det({\rm tr}_{1}H)}{n^{k}}\right)^{n}\geq\det H.$
We observe in Theorem 3.3 that the second inequality seems easier to prove
than the first inequality because it is more convenient to build inequalities
on ${\rm tr}_{1}H=\sum_{i=1}^{n}H_{i,i}$. In [20], the authors showed that
the two inequalities can be deduced from each other.
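Both inequalities of Theorem 3.3 are easy to test numerically; the following is a minimal sketch on a random positive definite $H$, with the partial traces computed blockwise as in Section 1.

```python
import numpy as np

n, k = 3, 2
rng = np.random.default_rng(3)
M = rng.standard_normal((n * k, n * k))
H = M @ M.T + 0.1 * np.eye(n * k)           # random positive definite H

B = H.reshape(n, k, n, k)
tr1 = sum(B[i, :, i, :] for i in range(n))  # tr_1 H (k x k)
tr2 = np.trace(B, axis1=1, axis2=3)         # tr_2 H (n x n)

print((np.linalg.det(tr2) / k ** n) ** k >= np.linalg.det(H))  # first inequality
print((np.linalg.det(tr1) / n ** k) ** n >= np.linalg.det(H))  # second inequality
```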
In 2018, Kuai [17] (or see [34]) further extended Theorem 3.3 to sector
matrices and showed that if $0\leq\alpha<{\pi}/{2}$ and
$H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ satisfies $W(H)\subseteq S_{\alpha}$,
then
$\left|\frac{\det({\rm tr}_{2}H)}{k^{n}}\right|^{k}\geq(\cos\alpha)^{nk}|\det
H|,$ (5)
and
$\left|\frac{\det({\rm tr}_{1}H)}{n}\right|^{n}\geq(\cos\alpha)^{(3n-2)k}|\det
H|.$ (6)
Our first goal in this section is to improve Kuai’s result (6). The key step
in our improvement is the following identity connecting ${\rm tr}_{1}(H)$ and
$H$, which has found applications in quantum information theory, e.g., in the
subadditivity of $q$-entropies. This identity can be found in [15, eq.(26)] or
[5, Lemma 2].
###### Lemma 3.4
Let $X$ and $Y$ be generalized Pauli matrices on $\mathbb{C}^{n}$; these
operators act as $Xe_{j}=e_{j+1}$ and $Ye_{j}=e^{2\pi j\sqrt{-1}/n}e_{j}$,
where $e_{j}$ is the $j$-th column of the identity matrix $I_{n}$ and
$e_{n+1}=e_{1}$. Then
$\frac{1}{n}\sum_{l,j=1}^{n}(X^{l}Y^{j}\otimes I_{k})H(X^{l}Y^{j}\otimes
I_{k})^{*}=I_{n}\otimes({\rm tr}_{1}H).$
Remark. The identity in this lemma can yield an alternative proof of Lemma
2.4. Moreover, the analogous identity for ${\rm tr}_{2}H$ can be seen in [15]
or [33, eq.(14)].
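Lemma 3.4 can also be checked numerically. The sketch below constructs the shift matrix $X$ and clock matrix $Y$, forms the twirl over all $n^{2}$ operators $X^{l}Y^{j}$, and verifies the identity on a random positive semidefinite block matrix; the exact phase convention chosen for $Y$ is immaterial here, since the sum runs over the full set of operators.

```python
import numpy as np

n, k = 3, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n * k, n * k)) + 1j * rng.standard_normal((n * k, n * k))
H = M @ M.conj().T                                  # random PSD block matrix

X = np.roll(np.eye(n), 1, axis=0)                   # shift: X e_j = e_{j+1} (mod n)
Y = np.diag(np.exp(2j * np.pi * np.arange(n) / n))  # clock matrix
Ik = np.eye(k)

S = np.zeros_like(H)
for l in range(n):
    for j in range(n):
        U = np.kron(np.linalg.matrix_power(X, l) @ np.linalg.matrix_power(Y, j), Ik)
        S += U @ H @ U.conj().T
S /= n

tr1H = sum(H[i * k:(i + 1) * k, i * k:(i + 1) * k] for i in range(n))
assert np.allclose(S, np.kron(np.eye(n), tr1H))     # the identity of Lemma 3.4
```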
Now, we are ready to present an improvement on inequality (6).
###### Theorem 3.5
Let $0\leq\alpha<{\pi}/{2}$ and $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be such
that $W(H)\subseteq S_{\alpha}$. Then
$\left|\frac{\det({\rm tr}_{1}H)}{n^{k}}\right|^{n}\geq(\cos\alpha)^{nk}|\det
H|.$
Proof. Note that both $X$ and $Y$ in Lemma 3.4 are unitary, so are
$X^{l}Y^{j}\otimes I_{k}$ for all $l,j$. Moreover, we have $\Re(UHU^{*})=U(\Re
H)U^{*}$ for every unitary $U$. Thus,
$\displaystyle|\det H|$ $\displaystyle=\prod_{l,j=1}^{n}\left|\det\bigl{(}(X^{l}Y^{j}\otimes I_{k})H(X^{l}Y^{j}\otimes I_{k})^{*}\bigr{)}\right|^{1/n^{2}}$ (7)
$\displaystyle\overset{\text{Lemma 2.1}}{\leq}(\sec\alpha)^{nk}\prod_{l,j=1}^{n}\left(\det\bigl{(}(X^{l}Y^{j}\otimes I_{k})(\Re H)(X^{l}Y^{j}\otimes I_{k})^{*}\bigr{)}\right)^{1/n^{2}}$
$\displaystyle\overset{\text{Fan-Ky ineq. (4)}}{\leq}(\sec\alpha)^{nk}\det\Bigg{(}\frac{1}{n^{2}}\sum_{l,j=1}^{n}(X^{l}Y^{j}\otimes I_{k})(\Re H)(X^{l}Y^{j}\otimes I_{k})^{*}\Bigg{)}$
$\displaystyle\overset{\text{Lemma 3.4}}{=}(\sec\alpha)^{nk}\det\left(\frac{1}{n}\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}\right)$
$\displaystyle=\frac{(\sec\alpha)^{nk}}{n^{nk}}\det\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}.$
Clearly, we have ${\rm tr}_{1}(\Re H)=\Re({\rm tr}_{1}H)$. For
$X\in\mathbb{M}_{n}$ and $Y\in\mathbb{M}_{k}$, it is well-known that
$\det(X\otimes Y)=(\det X)^{k}(\det Y)^{n}$; see, e.g., [35, Chapter 2]. It
follows that
$\det\Bigl{(}I_{n}\otimes{\rm tr}_{1}(\Re H)\Bigr{)}=(\det
I_{n})^{k}\bigl{(}\det({\rm tr}_{1}\Re H)\bigr{)}^{n}=\bigl{(}\det\Re({\rm
tr}_{1}H)\bigr{)}^{n}.$
By Lemma 2.4, we have $W({\rm tr}_{1}H)\subseteq S_{\alpha}$, which
implies that $\Re({\rm tr}_{1}H)$ is positive definite. Therefore, by Lemma
2.2, we get
$\bigl{(}\det\Re({\rm tr}_{1}H)\bigr{)}^{n}\leq\bigl{(}|\det({\rm
tr}_{1}H)|-|\det\Im({\rm tr}_{1}H)|\bigr{)}^{n}\leq|\det({\rm tr}_{1}H)|^{n},$
which together with (7) yields the desired result.
Remark. By applying the techniques from [20], we know that Kuai’s inequality
(5) can also be deduced from the inequality in Theorem 3.5 and vice versa.
In the sequel, we shall focus our attention on some recent results which are
similar to the Fiedler–Markham inequality. Let
${H}=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be a block matrix
with $H_{i,j}=[h_{l,m}^{i,j}]_{l,m=1}^{k}$. We define an $n\times n$ matrix
$G_{l,m}$ as below.
$G_{l,m}:=\bigl{[}h_{l,m}^{i,j}\bigr{]}_{i,j=1}^{n}\in\mathbb{M}_{n}.$
A direct computation yields
${\rm
tr}_{1}H=\sum_{i=1}^{n}H_{i,i}=\sum_{i=1}^{n}\bigl{[}h_{l,m}^{i,i}\bigr{]}_{l,m=1}^{k}=\left[\begin{matrix}\sum\limits_{i=1}^{n}h_{l,m}^{i,i}\end{matrix}\right]_{l,m=1}^{k}=\bigl{[}{\rm
tr}\,G_{l,m}\bigr{]}_{l,m=1}^{k}.$
For notational convenience, we denote
$\widetilde{H}=\bigl{[}G_{l,m}\bigr{]}_{l,m=1}^{k}\in\mathbb{M}_{k}(\mathbb{M}_{n}).$
We can see that $\widetilde{H}$ is obtained from $H$ by rearranging the
entries in an appropriate order. The above observation yields ${\rm
tr}_{1}H={\rm tr}_{2}\widetilde{H}$. Moreover, it is not hard to check that
$\widetilde{H}$ and $H$ are unitarily similar; see, e.g., [6, Theorem 7] or
[20, Theorem 4]. Motivated by these relations, Choi [6] recently introduced
the definition of partial determinants corresponding to partial traces. For
$H=[H_{i,j}]_{i,j=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{k})$, the partial
determinants are defined as
$\mathrm{det}_{1}H:=\bigl{[}\det G_{l,m}\bigr{]}_{l,m=1}^{k},$
and
$\mathrm{det}_{2}H:=\bigl{[}\det H_{i,j}\bigr{]}_{i,j=1}^{n}.$
To some extent, the partial determinants share common properties with the
partial traces. For instance, it is easy to see that if
${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ is positive semidefinite, then both
$\mathrm{det}_{1}H$ and $\mathrm{det}_{2}H$ are positive semidefinite; see,
e.g. [36, p. 221]. Moreover, it was proved in [6] that
$\mathrm{det}({\rm tr}_{1}H)\geq{\rm tr}(\mathrm{det}_{2}H),$
and
$\mathrm{det}({\rm tr}_{2}H)\geq{\rm tr}(\mathrm{det}_{1}H).$
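The partial determinants are as easy to compute as the partial traces. The sketch below builds $\mathrm{det}_{1}H$ and $\mathrm{det}_{2}H$ for a block matrix stored as an $nk\times nk$ array and numerically checks the first of the two inequalities above.

```python
import numpy as np

def partial_determinants(H, n, k):
    B = H.reshape(n, k, n, k)                  # B[i, :, j, :] is the block H_{i,j}
    det2 = np.array([[np.linalg.det(B[i, :, j, :]) for j in range(n)]
                     for i in range(n)])       # det_2 H = [det H_{i,j}]
    det1 = np.array([[np.linalg.det(B[:, l, :, m]) for m in range(k)]
                     for l in range(k)])       # det_1 H = [det G_{l,m}]
    return det1, det2

n, k = 3, 2
M = np.random.randn(n * k, n * k)
H = M @ M.T                                    # random positive semidefinite H
det1, det2 = partial_determinants(H, n, k)
tr1 = sum(H[i * k:(i + 1) * k, i * k:(i + 1) * k] for i in range(n))
print(np.linalg.det(tr1) >= np.trace(det2))    # det(tr_1 H) >= tr(det_2 H)
```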
Additionally, Choi [6] proved two analogues of Theorem 3.1 and Theorem 3.2 for
partial determinants.
###### Theorem 3.6
[6] Let ${H}\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be positive semidefinite. Then
$\left(\frac{{\rm tr}(\mathrm{det}_{1}H)}{k}\right)^{k}\geq\det H,$
and
$\left(\frac{{\rm tr}(\mathrm{det}_{2}H)}{n}\right)^{n}\geq\det H.$
Next, we will extend Theorem 3.6 to sector matrices. We write $|A|$ for the
nonnegative matrix whose entries are the absolute values of the entries of $A$. This
notation is only used in the following theorem.
###### Theorem 3.7
Let $0\leq\alpha<{\pi}/{2}$ and $H\in\mathbb{M}_{n}(\mathbb{M}_{k})$ be such
that $W(H)\subseteq S_{\alpha}$. Then
$\left(\frac{{\rm
tr}|\mathrm{det}_{1}H|}{k}\right)^{k}\geq(\cos\alpha)^{nk}|\det H|,$
and
$\left(\frac{{\rm
tr}|\mathrm{det}_{2}H|}{n}\right)^{n}\geq(\cos\alpha)^{nk}|\det H|.$
Proof. First of all, we shall prove the second inequality. We observe that
$\Re H_{1,1}$, $\ldots,\Re H_{n,n}$ are the diagonal block matrices of $\Re
H$. By Lemma 2.1 and Lemma 2.3, we obtain
$\displaystyle|\det H|$ $\displaystyle\leq(\sec\alpha)^{nk}\det(\Re
H)\leq(\sec\alpha)^{nk}\prod_{i=1}^{n}\det(\Re H_{i,i})$
$\displaystyle\leq(\sec\alpha)^{nk}\prod_{i=1}^{n}|\det
H_{i,i}|\leq(\sec\alpha)^{nk}\left(\frac{1}{n}\sum_{i=1}^{n}|\det
H_{i,i}|\right)^{n},$
where the third inequality follows from Lemma 2.2 and the last one follows
from the arithmetic mean-geometric mean inequality.
We now prove the first desired inequality by employing the relations between
$\mathrm{det}_{1}$ and $\mathrm{det}_{2}$. Recall that
$\widetilde{H}=[G_{l,m}]_{l,m=1}^{k}\in\mathbb{M}_{k}(\mathbb{M}_{n})$ and
$\mathrm{det}_{1}H=\mathrm{det}_{2}\widetilde{H}$. Since $\widetilde{H}$ and
$H$ are unitarily similar, we can get $\det\widetilde{H}=\det H$ and
$W(\widetilde{H})\subseteq S_{\alpha}$. Moreover, $\widetilde{H}$ is also
positive semidefinite. By applying the second inequality to $\widetilde{H}$,
we get
$\left(\frac{{\rm tr}|\mathrm{det}_{1}H|}{k}\right)^{k}=\left(\frac{{\rm
tr}|\mathrm{det}_{2}\widetilde{H}|}{k}\right)^{k}\geq(\cos\alpha)^{kn}|\det\widetilde{H}|=(\cos\alpha)^{kn}|\det
H|.$
This completes the proof.
## 4 Extensions on Ando’s inequality
To make our statements more transparent and compatible with previous works in
the literature, in this section we assume that $A$ is an $m\times m$ block
matrix with each block being an $n\times n$ matrix. Let
${A}=[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive
semidefinite. We know that both ${\rm tr}_{1}A$ and ${\rm tr}_{2}A$ are
positive semidefinite; see, e.g., [36, p. 237] and [37, Theorem 2.1]. To some
degree, these two partial traces are closely related and mutually affect each
other. We write $\lVert
A\rVert_{q}=\left(\sum_{i}\sigma_{i}(A)^{q}\right)^{1/q}$ for the Schatten
$q$-norm of $A$. In 2007, Audenaert [1] proved the following norm inequality,
${\rm tr}\,A+\lVert A\rVert_{q}\geq\lVert{\rm tr}_{1}A\rVert_{q}+\lVert{\rm
tr}_{2}A\rVert_{q}.$ (8)
A straightforward argument exploiting Audenaert’s result leads to a proof of
the subadditivity of $q$-entropies (Tsallis entropies) for finite-dimensional
bipartite quantum states; see [1, 5] and references therein. In 2014, Ando [2]
(or see [30, Proposition 2.2] for an alternative proof) established the
following remarkable inequality in the sense of the Löwner ordering.
###### Theorem 4.1
[2, 30] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite.
Then
$({\rm tr}A)I_{mn}+A\geq
I_{m}\otimes(\mathrm{tr}_{1}A)+(\mathrm{tr}_{2}A)\otimes I_{n}.$
Ando’s result clearly reveals the interplay between the first and second
partial trace. Equivalently, this inequality can be rewritten as
$({\rm tr}A)I_{mn}-(\mathrm{tr}_{2}A)\otimes I_{n}\geq
I_{m}\otimes(\mathrm{tr}_{1}A)-A.$ (9)
We observe that the positivity of $A$, together with the identity ${\rm
tr}\,A=\sum_{i=1}^{m}{\rm tr}A_{i,i}={\rm tr}({\rm tr}_{2}A)$, leads to $({\rm
tr}A)I_{m}\geq\lambda_{\max}({\rm tr}_{2}A)I_{m}\geq{\rm tr}_{2}A$, which
guarantees that in (9) the left hand side $({\rm
tr}A)I_{mn}-(\mathrm{tr}_{2}A)\otimes I_{n}$ is positive semidefinite.
However, the two matrices of the right hand side in (9) might be incomparable.
For instance, the matrix $A$ in (3) gives an example. Motivated by this
observation, Li, Liu and Huang [22] presented a further generalization.
###### Theorem 4.2
[22] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then
$({\rm tr}A)I_{mn}-({\rm tr}_{2}A)\otimes I_{n}\geq A-I_{m}\otimes({\rm
tr}_{1}A),$
and
$({\rm tr}A)I_{mn}+({\rm tr}_{2}A)\otimes I_{n}\geq A+I_{m}\otimes({\rm
tr}_{1}A).$
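These Löwner-order inequalities are straightforward to test. The sketch below draws a random positive semidefinite $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ (stored with the outer block index of size $m$) and checks that the differences appearing in Theorem 4.1 and Theorem 4.2 have nonnegative smallest eigenvalue.

```python
import numpy as np

m, n = 3, 2
rng = np.random.default_rng(1)
M = rng.standard_normal((m * n, m * n)) + 1j * rng.standard_normal((m * n, m * n))
A = M @ M.conj().T

B = A.reshape(m, n, m, n)
tr1A = sum(B[i, :, i, :] for i in range(m))   # first partial trace (n x n)
tr2A = np.trace(B, axis1=1, axis2=3)          # second partial trace (m x m)

t = np.trace(A).real * np.eye(m * n)          # (tr A) I_{mn}
T1 = np.kron(np.eye(m), tr1A)                 # I_m (x) tr_1 A
T2 = np.kron(tr2A, np.eye(n))                 # (tr_2 A) (x) I_n

def is_psd(X):
    return np.linalg.eigvalsh(X).min() >= -1e-9

print(is_psd(t + A - T1 - T2))   # Theorem 4.1
print(is_psd(t - T2 - A + T1))   # Theorem 4.2, first inequality
print(is_psd(t + T2 - A - T1))   # Theorem 4.2, second inequality
```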
A map (not necessarily linear) $\Phi:\mathbb{M}_{n}\to\mathbb{M}_{k}$ is
called positive if it maps positive semidefinite matrices to positive
semidefinite matrices. A map $\Phi:\mathbb{M}_{n}\to\mathbb{M}_{k}$ is said to
be $m$-positive if for every $m\times m$ block matrix
$[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$,
$[A_{i,j}]_{i,j=1}^{m}\geq 0\Rightarrow[\Phi(A_{i,j})]_{i,j=1}^{m}\geq 0.$
Clearly, being $1$-positive is equivalent to being positive. The map $\Phi$ is
said to be completely positive if it is $m$-positive for every integer $m\geq
1$. It is well-known that both the trace map and determinant map are
completely positive; see, e.g., [36, p. 221, p. 237] or [37]. On the other
hand, a map $\Phi$ is said to be $m$-copositive if for every
$[A_{i,j}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$,
$[A_{i,j}]_{i,j=1}^{m}\geq 0\Rightarrow[\Phi(A_{j,i})]_{i,j=1}^{m}\geq 0,$
and $\Phi$ is said to be completely copositive if it is $m$-copositive for
every integer $m\geq 1$. Furthermore, a map $\Phi$ is called completely PPT if
it is both completely positive and completely copositive; see [26, 10, 39] for
related topics.
Both Theorem 4.1 and Theorem 4.2 illustrate the implicit connection between
the first and the second partial trace. The proof of Theorem 4.1 depends
mainly on the 2-copositivity of $\Psi(X)=({\rm tr}X)I-X$; see e.g., [2] and
[30] for more details. Correspondingly, the proof of Theorem 4.2 relies
similarly on the 2-copositivity of $\Phi(X)=({\rm tr}X)I+X$; see [22]. For
more applications of these two maps, we refer readers to [26, 21].
In this section, we give a unified treatment of both Theorem 4.1 and Theorem
4.2. Our treatment is more concise than the original proofs. We need to use a
recent result of Choi [6, 7], which investigates more relations between the
partial traces and the partial transpose.
###### Lemma 4.3
[6, 7] Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then
$({\rm tr}_{2}A^{\tau})\otimes I_{n}\geq\pm A^{\tau}$,
and
$I_{m}\otimes{\rm tr}_{1}A^{\tau}\geq\pm A^{\tau}.$
Now, we present a unified treatment of Theorems 4.1 and 4.2 as well.
New proof of Theorem 4.1. We define the map
$\Phi_{2}^{-}:\mathbb{M}_{m}(\mathbb{M}_{n})\to\mathbb{M}_{m}(\mathbb{M}_{n})$ as
$\Phi_{2}^{-}(X):=({\rm tr}_{2}X^{\tau})\otimes I_{n}-X^{\tau}.$
On the other hand, we define
$\Phi_{1}^{-}(X):=I_{m}\otimes{\rm tr}_{1}X^{\tau}-X^{\tau}.$
Lemma 4.3 implies that both $\Phi_{2}^{-}$ and $\Phi_{1}^{-}$ are positive
linear maps on $\mathbb{M}_{m}(\mathbb{M}_{n})$. Let $A$ be a positive
semidefinite block matrix. Thus, we have
$\Phi_{2}^{-}(A)=({\rm tr}_{2}A^{\tau})\otimes I_{n}-A^{\tau}\geq 0.$
Applying the map $\Phi_{1}^{-}$ to the matrix $\Phi_{2}^{-}(A)$, we obtain
$\Phi_{1}^{-}\bigl{(}\Phi_{2}^{-}(A)\bigr{)}=I_{m}\otimes{\rm
tr}_{1}{\Phi_{2}^{-}(A)}^{\tau}-{\Phi_{2}^{-}(A)}^{\tau}\geq 0.$ (10)
By a direct computation, we get ${\Phi_{2}^{-}(A)}^{\tau}=({\rm
tr}_{2}A)\otimes I_{n}-A$ and
${\rm tr}_{1}{\Phi_{2}^{-}(A)}^{\tau}={\rm tr}_{1}\bigl{(}({\rm
tr}_{2}A)\otimes I_{n}-A\bigr{)}=\sum_{i=1}^{m}({\rm tr}A_{i,i})I_{n}-{\rm
tr}_{1}A=({\rm tr}A)I_{n}-{\rm tr}_{1}A.$
Therefore, inequality (10) yields the desired result in Theorem 4.1.
$\blacksquare$
Remarks. In the above proof, we can see that Theorem 4.1 is just a direct
consequence of Lemma 4.3. To our surprise, Theorem 4.1 can also be proved by
using the positivity of $\Phi_{1}^{-}$ first, and then applying the positivity
of $\Phi_{2}^{-}$ later. More precisely, we first derive
${\Phi_{1}^{-}(A)}\geq 0$, and then we have $\Phi_{2}^{-}(\Phi_{1}^{-}(A))\geq
0$. Upon simplification, one can immediately get Theorem 4.1 again. We
summarize this observation as the following proposition.
###### Proposition 4.4
For every $X\in\mathbb{M}_{m}(\mathbb{M}_{n})$, we have
$\Phi_{1}^{-}(\Phi_{2}^{-}(X))=\Phi_{2}^{-}(\Phi_{1}^{-}(X))$.
Correspondingly, we can present an alternative proof of Theorem 4.2.
New proof of Theorem 4.2. We define the maps $\Phi_{2}^{+}$ and $\Phi_{1}^{+}$
on $\mathbb{M}_{m}(\mathbb{M}_{n})$ as
$\Phi_{2}^{+}(X):=({\rm tr}_{2}X^{\tau})\otimes I_{n}+X^{\tau},$
and
$\Phi_{1}^{+}(X):=I_{m}\otimes{\rm tr}_{1}X^{\tau}+X^{\tau}.$
We can see from Lemma 4.3 that both $\Phi_{2}^{+}$ and $\Phi_{1}^{+}$ are
positive linear maps. Similar to the lines of the previous proof, we get
$\Phi_{1}^{-}(\Phi_{2}^{+}(A))=\Phi_{2}^{+}(\Phi_{1}^{-}(A))\geq 0$, which
leads to
$({\rm tr}A)I_{mn}-({\rm tr}_{2}A)\otimes I_{n}\geq A-I_{m}\otimes({\rm
tr}_{1}A).$
Moreover, we have
$\Phi_{1}^{+}(\Phi_{2}^{-}(A))=\Phi_{2}^{-}(\Phi_{1}^{+}(A))\geq 0$. It
follows that
$({\rm tr}A)I_{mn}+({\rm tr}_{2}A)\otimes I_{n}\geq A+I_{m}\otimes({\rm
tr}_{1}A).$
We mention that the positivity of $\Phi_{1}^{+}(\Phi_{2}^{+}(A))$ yields a
trivial result. $\blacksquare$
In the remainder of this section, we shall pay attention to determinant
inequalities for sector matrices involving partial traces. Motivated by
Audenaert’s result (8), Lin [30] recently obtained a determinantal inequality
for partial traces, which states that if $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$
is positive semidefinite, then
$({\rm tr}A)^{mn}+\det A\geq\det({\rm tr}_{1}A)^{m}+\det({\rm tr}_{2}A)^{n}.$
(11)
We remark here that Fu, Lau and Tam [11, Corollary 2.2] recently improved (11)
when $A$ is a density matrix, i.e., a positive semidefinite matrix with trace
equal to $1$.
The key step in the proof of (11) is Theorem 4.1 together with the following
interesting lemma. It is worth noting that Lemma 4.5 is elegant and useful in
deriving matrix inequalities; see, e.g., [23, 24, 25] for applications to
Oppenheim type inequalities.
###### Lemma 4.5
[30] Let $X,Y,W$ and $Z$ be positive semidefinite matrices of the same order.
If $X\geq W,X\geq Z$ and $X+Y\geq W+Z$, then
$\det X+\det Y\geq\det W+\det Z.$
Remark. We observe that Lemma 4.5 implies the determinant inequality:
$\det(A+B+C)+\det C\geq\det(A+C)+\det(B+C),$
where $A,B$ and $C$ are positive semidefinite matrices.
With the help of Lemma 4.5, we can easily present two analogues of (11).
###### Proposition 4.6
Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then
$({\rm tr}A)^{mn}+\det({\rm tr}_{1}A)^{m}\geq\det A+\det({\rm tr}_{2}A)^{n},$
and
$({\rm tr}A)^{mn}+\det({\rm tr}_{2}A)^{n}\geq\det A+\det({\rm tr}_{1}A)^{m}.$
Proof. We prove the first inequality only, since the second one can be proved
in exactly the same way. Let $X=({\rm tr}A)I_{mn},Y=I_{m}\otimes({\rm
tr}_{1}A),W=A$ and $Z=({\rm tr}_{2}A)\otimes I_{n}$. It is easy to see that
$({\rm tr}A)I_{m}=\sum_{i=1}^{m}({\rm tr}A_{i,i})I_{m}=\bigl{(}{\rm tr}({\rm
tr}_{2}A)\bigr{)}I_{m}\geq\lambda_{\max}({\rm tr}_{2}A)I_{m}\geq{\rm
tr}_{2}A,$
which implies that $X\geq Z\geq 0$, and clearly $X\geq W\geq 0$. Moreover,
Theorem 4.2 says that $X+Y\geq W+Z$. That is, all conditions in Lemma 4.5 are
satisfied. Therefore,
$\displaystyle({\rm tr}A)^{mn}+\det\bigl{(}I_{m}\otimes({\rm
tr}_{1}A)\bigr{)}\geq\det A+\det\bigl{(}({\rm tr}_{2}A)\otimes I_{n}\bigr{)}.$
It is well-known [35, p. 37] that for every $X\in\mathbb{M}_{m}$ and
$Y\in\mathbb{M}_{n}$,
$\det(X\otimes Y)=(\det X)^{n}(\det Y)^{m}.$
Thus, we complete the proof of the required result.
We next give an improvement on Proposition 4.6.
###### Theorem 4.7
Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. Then
$({\rm tr}A)^{mn}+\det({\rm tr}_{1}A)^{m}\geq m^{nm}\bigl{(}\det A+\det({\rm
tr}_{2}A)^{n}\bigr{)},$
and
$({\rm tr}A)^{mn}+\det({\rm tr}_{2}A)^{n}\geq n^{mn}\bigl{(}\det A+\det({\rm
tr}_{1}A)^{m}\bigr{)}.$
Proof. We only prove the second inequality. Invoking Theorem 3.3, we get
$\left(\frac{\det({\rm tr}_{2}A)}{n^{m}}\right)^{n}\geq\det A.$
Equivalently, we have $\det({\rm tr}_{2}A)^{n}\geq n^{mn}\det A$. It suffices
to show that
$({\rm tr}A)^{n}\geq n^{n}\det({\rm tr}_{1}A).$
Note that
${\rm tr}A=\sum_{i=1}^{m}{\rm tr}(A_{i,i})={\rm
tr}\left(\sum_{i=1}^{m}A_{i,i}\right)={\rm tr}({\rm tr}_{1}A).$
We denote $X:={\rm tr}_{1}A$, which is a positive semidefinite matrix of order
$n$. So we need to prove that $({\rm tr}X)^{n}\geq n^{n}\det X$. This is
equivalent to showing
$\left(\sum_{i=1}^{n}\lambda_{i}(X)\right)^{n}\geq
n^{n}\prod_{i=1}^{n}\lambda_{i}(X),$
which is a direct consequence of the AM-GM inequality.
Surprisingly, the proof of Theorem 4.7 seems simpler than that of Proposition
4.6 since it does not rely on Theorem 4.2 and Lemma 4.5. Yet it provides a
great improvement on Proposition 4.6 whenever $m,n$ are large integers.
In the sequel, we shall denote $|A|=(A^{*}A)^{1/2}$, which is called the
modulus of $A$. We remark that this notation is different from that in Theorem
3.7. Note that $|A|$ is positive semidefinite, and the eigenvalues of $|A|$
are called the singular values of $A$. In 2019, Yang, Lu and Chen [34]
extended (11) to sector matrices: if $W(A)\subseteq S_{\alpha}$, then
$({\rm tr}|A|)^{mn}+\det|A|\geq(\cos\alpha)^{mn}|\det({\rm
tr}_{1}A)|^{m}+(\cos\alpha)^{mn}|\det({\rm tr}_{2}A)|^{n}.$
Now, we are ready to present an extension on Theorem 4.7.
###### Theorem 4.8
Let $A\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be such that $W(A)\subseteq
S_{\alpha}$. Then
$({\rm tr}|A|)^{mn}+\left|{\det({\rm
tr}_{1}A)}\right|^{m}\geq(m\cos\alpha)^{mn}\bigl{(}\det|A|+|\det({\rm
tr}_{2}A)|^{n}\bigr{)},$
and
$({\rm tr}|A|)^{mn}+\left|{\det({\rm
tr}_{2}A)}\right|^{n}\geq(n\cos\alpha)^{mn}\bigl{(}\det|A|+|\det({\rm
tr}_{1}A)|^{m}\bigr{)}.$
Proof. We only prove the first inequality. According to the definition of
$S_{\alpha}$, if $W(A)\subseteq S_{\alpha}$, then $\Re A$ is positive definite
and its trace is positive. By Lemma 2.2, we have
${\rm tr}|A|=\sum_{i=1}^{mn}\sigma_{i}(A)\geq\sum_{i=1}^{mn}\lambda_{i}(\Re
A)={\rm tr}(\Re A)\geq 0.$
By Lemma 2.4, we have $W({\rm tr}_{1}A)\subseteq S_{\alpha}$ and
$W({\rm tr}_{2}A)\subseteq S_{\alpha}$. Clearly, we have $\Re({\rm
tr}_{1}A)={\rm tr}_{1}(\Re A)$ and $\Re({\rm tr}_{2}A)={\rm tr}_{2}(\Re A)$.
By setting $X={\rm tr}_{1}A$ in Lemma 2.2, we get
$|\det({\rm tr}_{1}A)|\geq\det\bigl{(}\Re({\rm
tr}_{1}A)\bigr{)}=\det\bigl{(}{\rm tr}_{1}(\Re A)\bigr{)}.$
Note that $\Re A$ is positive semidefinite. By applying Theorem 4.7, we can
obtain
$\displaystyle({\rm tr}|A|)^{mn}+\left|{\det({\rm tr}_{1}A)}\right|^{m}$
$\displaystyle\geq({\rm tr}\,\Re A)^{mn}+\bigl{(}{\det{\rm tr}_{1}(\Re
A)}\bigr{)}^{m}$ $\displaystyle\geq m^{nm}\bigl{(}\det(\Re
A)+\bigl{(}\det\Re({\rm tr}_{2}A)\bigr{)}^{n}\bigr{)}$
$\displaystyle\geq(m\cos\alpha)^{mn}|\det A|+(m\cos\alpha)^{mn}|\det({\rm
tr}_{2}A)|^{n},$
where the last inequality holds from Lemma 2.2 by setting $X=A$ and ${\rm
tr}_{2}A$ respectively.
## 5 Trace inequalities for two by two block matrices
Positive semidefinite $2\times 2$ block matrices have been extensively
studied, as such a partition yields a great number of versatile and elegant
matrix inequalities;
see, e.g., [13, 18, 21, 12] for details. Recently, Kittaneh and Lin [18] (or
see [26]) proved the following trace inequalities.
###### Theorem 5.1
[18, 26] Let $\begin{bmatrix}A&B\\\
B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive
semidefinite. Then
${\rm tr}A\,{\rm tr}C-{\rm tr}B^{*}\,{\rm tr}B\geq\bigl{|}{\rm tr}AC-{\rm
tr}B^{*}B\bigr{|},$
and
${\rm tr}A\,{\rm tr}C+{\rm tr}B^{*}\,{\rm tr}B\geq{\rm tr}AC+{\rm tr}B^{*}B.$
In this section, we present some inequalities related to trace for $2\times 2$
block matrices, which are slight extensions of the results of Kittaneh and
Lin. We first introduce some notation. Let
$\otimes^{r}A:=A\otimes\cdots\otimes A$ be the $r$-fold tensor power of $A$.
###### Theorem 5.2
Let $\begin{bmatrix}A&B\\\
B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive
semidefinite. Then for $r\in\mathbb{N}^{*}$,
$(\mathrm{tr}A\,\mathrm{tr}C)^{r}-({\rm
tr}B^{*}\,\mathrm{tr}B)^{r}\geq\bigl{|}(\mathrm{tr}AC)^{r}-(\mathrm{tr}B^{*}B)^{r}\bigr{|},$
and
$(\mathrm{tr}A\,\mathrm{tr}C)^{r}+({\rm
tr}B^{*}\,\mathrm{tr}B)^{r}\geq(\mathrm{tr}AC)^{r}+(\mathrm{tr}B^{*}B)^{r}.$
Proof. Note that $\begin{bmatrix}\otimes^{r}A&\otimes^{r}B\\\
\otimes^{r}B^{*}&\otimes^{r}C\end{bmatrix}$ is a principal submatrix of
$\otimes^{r}\begin{bmatrix}A&B\\\ B^{*}&C\end{bmatrix}$. Thus
$\begin{bmatrix}\otimes^{r}A&\otimes^{r}B\\\
\otimes^{r}B^{*}&\otimes^{r}C\end{bmatrix}$
is again positive semidefinite. By applying Theorem 5.1 to this block matrix,
we get
$\bigl{|}{\rm tr}\bigl{(}(\otimes^{r}A)(\otimes^{r}C)\bigr{)}-{\rm
tr}\bigl{(}(\otimes^{r}B^{*})(\otimes^{r}B)\bigr{)}\bigr{|}\leq{\rm
tr}(\otimes^{r}A)\,{\rm tr}(\otimes^{r}C)-{\rm tr}(\otimes^{r}B^{*})\,{\rm
tr}(\otimes^{r}B),$
and
${\rm tr}\bigl{(}(\otimes^{r}A)(\otimes^{r}C)\bigr{)}+{\rm
tr}\bigl{(}(\otimes^{r}B^{*})(\otimes^{r}B)\bigr{)}\leq{\rm
tr}(\otimes^{r}A)\,{\rm tr}(\otimes^{r}C)+{\rm tr}(\otimes^{r}B^{*})\,{\rm
tr}(\otimes^{r}B).$
Invoking the well-known facts [35, Chapter 2]:
$(\otimes^{r}X)(\otimes^{r}Y)=\otimes^{r}(XY)$ and ${\rm
tr}(\otimes^{r}X)=({\rm tr}\,X)^{r}$, the desired inequalities follow
immediately.
Remark. Theorem 5.2 was proved in the first version of our manuscript
(announced on March 10, 2020, arXiv: 2003.04520v1). We remark that this result
was recently and independently rediscovered by Fu and Gumus in [12] using a
quite different method.
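A quick numerical check of Theorem 5.2 is given below; it uses that ${\rm tr}(B^{*})\,{\rm tr}(B)=|{\rm tr}B|^{2}$ and that ${\rm tr}(AC)$ and ${\rm tr}(B^{*}B)$ are nonnegative reals for a positive semidefinite block matrix.

```python
import numpy as np

k, r = 3, 4
rng = np.random.default_rng(2)
M = rng.standard_normal((2 * k, 2 * k)) + 1j * rng.standard_normal((2 * k, 2 * k))
P = M @ M.conj().T                               # PSD 2 x 2 block matrix
A, B, C = P[:k, :k], P[:k, k:], P[k:, k:]

tAtC = np.trace(A).real * np.trace(C).real       # tr(A) tr(C)
tBB = abs(np.trace(B)) ** 2                      # tr(B*) tr(B) = |tr B|^2
tAC = np.trace(A @ C).real                       # tr(AC) >= 0
tBsB = np.trace(B.conj().T @ B).real             # tr(B*B) >= 0

print(tAtC ** r - tBB ** r >= abs(tAC ** r - tBsB ** r))  # first inequality
print(tAtC ** r + tBB ** r >= tAC ** r + tBsB ** r)       # second inequality
```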
Let $e_{t}(X)$ denote the $t$-th elementary symmetric function of the
eigenvalues of the square matrix $X$.
$e_{t}(X):=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{t}\leq
n}\prod_{j=1}^{t}\lambda_{i_{j}}(X).$
In particular, we know that $e_{1}(X)={\rm tr}(X)$. We can get the following
theorem.
###### Theorem 5.3
Let $\begin{bmatrix}A&B\\\
B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive
semidefinite. Then for $t\in\\{1,2,\ldots,k\\}$,
$e_{t}(A)e_{t}(C)-e_{t}(B^{*})e_{t}(B)\geq|e_{t}(AC)-e_{t}(B^{*}B)|,$
and
$e_{t}(A)e_{t}(C)+e_{t}(B^{*})e_{t}(B)\geq e_{t}(AC)+e_{t}(B^{*}B).$
Proof. The first inequality can be found in [18, Corollary 2.7]. We next give
the outline of the proof of the second one. Note that
$\begin{bmatrix}\otimes^{t}A&\otimes^{t}B\\\
\otimes^{t}B^{*}&\otimes^{t}C\end{bmatrix}$ is positive semidefinite. By
restricting this block matrix to the antisymmetric class of the tensor
product (see, e.g., [3, pp. 16–20]), we know that
$\begin{bmatrix}\wedge^{t}A&\wedge^{t}B\\\
\wedge^{t}B^{*}&\wedge^{t}C\end{bmatrix}$
is still positive semidefinite. Note that $e_{t}(X)={\rm tr}(\wedge^{t}X)$ and
$(\wedge^{t}X)(\wedge^{t}Y)=\wedge^{t}(XY)$. Applying Theorem 5.1 to this
block matrix yields the required result.
Let $s_{t}(X)$ be the $t$-th complete symmetric polynomial of eigenvalues of
$X$, i.e.,
$s_{t}(X):=\sum_{1\leq i_{1}\leq i_{2}\leq\cdots\leq i_{t}\leq
n}\prod_{j=1}^{t}\lambda_{i_{j}}(X).$
Clearly, we have $s_{1}(X)={\rm tr}(X)$. We can get the following slight
extension similarly.
###### Theorem 5.4
Let $\begin{bmatrix}A&B\\\
B^{*}&C\end{bmatrix}\in\mathbb{M}_{2}(\mathbb{M}_{k})$ be positive
semidefinite. Then for $t\in\\{1,2,\ldots,k\\}$,
$s_{t}(A)s_{t}(C)-s_{t}(B^{*})s_{t}(B)\geq|s_{t}(AC)-s_{t}(B^{*}B)|,$
and
$s_{t}(A)s_{t}(C)+s_{t}(B^{*})s_{t}(B)\geq s_{t}(AC)+s_{t}(B^{*}B).$
Proof. Note that $\begin{bmatrix}\otimes^{t}A&\otimes^{t}B\\\
\otimes^{t}B^{*}&\otimes^{t}C\end{bmatrix}$ is positive semidefinite. By
restricting this block matrix to the symmetric class of the tensor product
(see, e.g., [3, pp. 16–20]), we know that
$\begin{bmatrix}\vee^{t}A&\vee^{t}B\\\
\vee^{t}B^{*}&\vee^{t}C\end{bmatrix}$
is still positive semidefinite. Similarly, we know that ${\rm
tr}(\vee^{t}X)=s_{t}(X)$ and $(\vee^{t}X)(\vee^{t}Y)=\vee^{t}(XY)$. Applying
Theorem 5.1 to this block matrix leads to the desired result.
## Acknowledgments
This paper is dedicated to Prof. Weijun Liu (Central South University) on his
60th birthday, October 22 of the lunar calendar in 2021. I would like to thank
Prof. Yuejian Peng for reading carefully through an earlier version of this
paper. This work was supported by NSFC (Grant No. 11931002).
## References
* [1] K.M.R. Audenaert, Subadditivity of $q$-entropies for $q>1$, J. Math. Phys. 48 (2007), no. 8, 083507.
* [2] T. Ando, Matrix inequalities involving partial traces, ILAS Conference, 2014.
* [3] R. Bhatia, Matrix Analysis, GTM 169, Springer-Verlag, New York, 1997.
* [4] R. Bhatia, Positive Definite Matrices, Princeton University Press, Princeton, 2007.
* [5] A. Desenyei, D. Petz, Partial subadditivity of entropies, Linear Algebra Appl. 439 (2013) 3297–3305.
* [6] D. Choi, Inequalities related to trace and determinant of positive semidefinite block matrices, Linear Algebra Appl. 532 (2017) 1–7.
* [7] D. Choi, Inequalities about partial transpose and partial traces, Linear Multilinear Algebra 66 (2018) 1619–1625.
* [8] D. Choi, T.-Y. Tam, P. Zhang, Extension of Fischer’s inequality, Linear Algebra Appl. 569 (2019) 311–322.
* [9] M. Fiedler, T.L. Markham, On a theorem of Everitt, Thompson and de Pillis, Math. Slovaca 44 (1994) 441–444.
* [10] X. Fu, P.-S. Lau, T.-Y. Tam, Linear maps of positive partial transpose matrices and singular value inequalities, Math. Inequal. Appl. 23 (4) (2020) 1459–1468.
* [11] X. Fu, P.-S. Lau, T.-Y. Tam, Inequalities on partial traces of positive semidefinite block matrices, Canad. Math. Bull. 64 (4) (2021) 964–969.
* [12] X. Fu, M. Gumus, Trace inequalities involving positive semi-definite block matrices, Linear Multilinear Algebra (2021) https://doi.org/10.1080/03081087.2021.1942418.
* [13] M. Gumus, J. Liu, S. Raouafi, T.-Y. Tam, Positive semi-definite $2\times 2$ block matrices and norm inequalities, Linear Algebra Appl. 551 (2018) 83–91.
* [14] R.A. Horn, C.R. Johnson, Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, 2013.
* [15] A. Jenčová, M.B. Ruskai, A unified treatment of convexity of relative entropy and related trace functions, with conditions for equality, Rev. Math. Phys. 22 (2010) 1099–1121.
* [16] X. Jiang, Y. Zheng, X. Chen, Extending a refinement of Kotelianskii’s inequality, Linear Algebra Appl. 574 (2019) 252–261.
* [17] L. Kuai, An extension of the Fiedler–Markham determinant inequality, Linear Multilinear Algebra 66 (2018) 547–553.
* [18] F. Kittaneh, M. Lin, Trace inequalities for positive semidefinite block matrices, Linear Algebra Appl. 524 (2017) 153–158.
* [19] E.-Y. Lee, The off-diagonal block of a PPT matrix, Linear Algebra Appl. 486 (2015), 449–453.
* [20] Y. Li, L. Feng, Z. Huang, W. Liu, Inequalities regarding partial trace and partial determinant, Math. Inequal. Appl. 23 (2020) 477–485.
* [21] Y. Li, Y. Huang, L. Feng, W. Liu, Some applications of two completely copositive maps, Linear Algebra Appl. 590 (2020) 124–132.
* [22] Y. Li, W. Liu, Y. Huang, A new matrix inequality involving partial traces, Operators and Matrices 15 (2021), no. 3, 1189–1199.
* [23] Y. Li, L. Feng, An Oppenheim type determinantal inequality for the Khatri–Rao product, Operators and Matrices 15 (2021), no. 2, 693–701.
* [24] Y. Li, Y. Peng, An Oppenheim type inequality for positive definite block matrices, Linear Multilinear Algebra (2021) https://doi.org/10.1080/03081087.2021.1882370.
* [25] M. Lin, An Oppenheim type inequality for a block Hadamard product, Linear Algebra Appl. 452 (2014) 1–6.
* [26] M. Lin, A completely PPT map, Linear Algebra Appl. 459 (2014) 404–410.
* [27] M. Lin, Inequalities related to $2\times 2$ block PPT matrices, Operators and Matrices 9 (2015), no.4, 917–924.
* [28] M. Lin, Extension of a result of Haynsworth and Hartfiel, Arch. Math. 104 (2015) 93–100.
* [29] M. Lin, A treatment of a determinant inequality of Fiedler and Markham, Czech. Math. J. 66 (2016) 737–742.
* [30] M. Lin, A determinantal inequality involving partial traces, Canad. Math. Bull. 59 (2016) 585–591.
* [31] M. Lin, P. Zhang, Unifying a result of Thompson and a result of Fiedler and Markham on block positive definite matrices, Linear Algebra Appl. 533 (2017) 380–385.
* [32] D. Petz, Quantum Information Theory and Quantum Statistics. Theoretical and Mathematical Physics, Springer, Berlin, 2008.
* [33] A.E. Rastegin, Relations for symmetric norms and anti-norms before and after partial trace, J. Stat. Phys. 148 (2012) 1040–1053.
* [34] J. Yang, L. Lu, Z. Chen, Schatten $q$-norms and determinantal inequalities for matrices with numerical ranges in a sector, Linear Multilinear Algebra 67 (2019) 221–227.
* [35] X. Zhan, Matrix Theory, Graduate Studies in Mathematics, vol. 147, Amer. Math. Soc., Providence, RI, 2013.
* [36] F. Zhang, Matrix Theory: Basic Results and Techniques, 2nd ed., Springer, New York, 2011.
* [37] F. Zhang, Positivity of matrices with generalized matrix functions. Acta Math. Sin. (Engl. Ser.) 28 (2012) 1779–1786.
* [38] P. Zhang, Extension of Matic’s results, Linear Algebra Appl. 486 (2015) 328–334.
* [39] P. Zhang, On some inequalities related to positive block matrices, Linear Algebra Appl. 576 (2019) 258–267.
|
2024-09-04T02:54:58.633886 | 2020-03-10T10:46:12 | 2003.04628 | {
"authors": "Ygor Gallina, Florian Boudin, B\\'eatrice Daille",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26130",
"submitter": "Ygor Gallina",
"url": "https://arxiv.org/abs/2003.04628"
} | arxiv-papers | # Large-Scale Evaluation of Keyphrase Extraction Models
Ygor Gallina <EMAIL_ADDRESS> LS2N, Université de Nantes, Nantes, France;
Florian Boudin <EMAIL_ADDRESS> LS2N, Université de Nantes, Nantes, France;
and Béatrice Daille beatrice.daille@univ-nantes.fr LS2N, Université de
Nantes, Nantes, France
(2020)
###### Abstract.
Keyphrase extraction models are usually evaluated under different, not
directly comparable, experimental setups. As a result, it remains unclear how
well proposed models actually perform, and how they compare to each other. In
this work, we address this issue by presenting a systematic large-scale
analysis of state-of-the-art keyphrase extraction models involving multiple
benchmark datasets from various sources and domains. Our main results reveal
that state-of-the-art models are in fact still challenged by simple baselines
on some datasets. We also present new insights about the impact of using
author- or reader-assigned keyphrases as a proxy for gold standard, and give
recommendations for strong baselines and reliable benchmark datasets.
Keyphrase generation, natural language processing, evaluation
## 1. Introduction
Keyphrases are single or multi-word lexical units that represent the main
concepts in a document (Evans and Zhai, 1996). They are particularly useful
for indexing, searching and browsing digital libraries (Barker et al., 1972;
Zhai, 1997; Gutwin et al., 1999; Witten et al., 2009), and have proven
themselves as effective features in many downstream natural language
processing tasks (Hulth and Megyesi, 2006; Litvak and Last, 2008; Berend,
2011). Still, most documents do not have assigned keyphrases, and manual
annotation is simply not a feasible option (Mao and Lu, 2017). There is
therefore a great need for automated methods to assign relevant keyphrases to
documents.
Automatic keyphrase extraction (also referred to as keyphrase generation or keyphrase annotation) – that is, the task of extracting keyphrases either from the content of the document or from a controlled vocabulary – has received
much attention from the research community (Kim et al., 2010; Gollapalli et
al., 2015; Augenstein et al., 2017). Thus, many keyphrase extraction models
were proposed over the last years, ranging from early statistics-based models
(Witten et al., 1999), to popular graph-based ranking models (Mihalcea and
Tarau, 2004), and recent neural models (Meng et al., 2017). However, because
of the great discrepancies in experimental setups among past studies, it is
very difficult to compare and contrast the effectiveness of these models, and
even more so to assess the progress of the field as a whole.
More specifically, we observe striking differences in how models are
parameterized, evaluated and compared in previous work. To name just a few
examples, experiments are most often conducted on different benchmark
datasets, all of which differ in domain, size, language or quality of the gold
standard (that is, reference keyphrases supplied by authors, readers or
professional indexers). This not only makes the reported results hard to
contrast, but also has a profound impact on trained model performance (Gallina
et al., 2019). In addition, and since there is no consensus as to which
evaluation metric is most reliable for keyphrase extraction (Zesch and
Gurevych, 2009; Hussey et al., 2012; Hasan and Ng, 2014), diverse measures are
commonly seen in the literature, thus preventing any further direct
comparisons. Moreover, the evaluation of missing keyphrases – that is, gold
keyphrases that do not occur in the content of the document – is still an open
question and there is little agreement on whether they should be included or
not (Kim et al., 2010).
We strongly believe that this lack of empirical rigor is a real hindrance to
progress on keyphrase extraction, and that a systematic comparison of existing
models under the same conditions is needed to fully understand how they
actually perform. In this work, we resolve this issue by conducting the first
large-scale study on automatic keyphrase extraction. More precisely, we
present an extensive comparative analysis of state-of-the-art keyphrase
extraction models involving 9 benchmark datasets from various domains. To
ensure controlled, fair and reliable experiments, we embarked upon the difficult process of re-implementing all of the models presented in this paper and pre-processing the datasets in a unified and systematic way (links to the code and datasets will appear here after the review period).
Using these new large-scale experimental results, we seek to better understand
how well state-of-the-art models perform across sources, domains and
languages. We also go further than prior work and investigate the following
research questions:
1. How much progress have we made on keyphrase extraction since early models?
2. What is the impact of using non-expert gold standards, that is, author- or reader-assigned keyphrases, when training and evaluating keyphrase extraction models?
3. Which baselines and benchmark datasets should be included in future work for a better understanding of the pros and cons of a newly proposed model?
## 2\. Benchmark Datasets
Benchmark datasets for evaluating automatic keyphrase extraction cover a wide
range of sources ranging from scientific articles and web pages to twitter and
email messages. We collected 9 of the most widely used datasets which we
believe are representative of the different sources and domains found in
previous work. Detailed statistics for each selected dataset are shown in
Table 2. They are grouped into three categories that are outlined below:
Scientific articles:
Among the selected datasets, three are composed of full-text scientific
publications: ACM (Krapivin et al., 2009) and SemEval (Kim et al., 2010) about
computer science, and PubMed (Schutz, 2008) from the medical domain. Not
surprisingly, they contain only a small number of documents due to copyright
reasons. These datasets provide author-assigned keyphrases which serve as a
reasonable, but far from perfect, proxy for expert annotations. In the case of
SemEval, student annotators were hired to extend gold annotation labels.
Paper abstracts:
Scientific abstracts, often referred to as bibliographic records, are arguably
the most prevalent documents for benchmarking keyphrase extraction. They are
readily available in great quantities and come with author-assigned keyphrases
that can be used as gold standard. We gathered three datasets, all dealing
with the computer science domain: Inspec (Hulth, 2003), WWW (Caragea et al.,
2014) and KP20k (Meng et al., 2017). It is worth noting that with more than
half a million documents, KP20k is the largest dataset to date and one of the
few that is large enough to train neural models.
News articles:
News texts are the last source of documents present among the collected
datasets. Similar to paper abstracts, online news are available in large
quantities and can be easily mined from the internet. We selected the
following three datasets: DUC-2001 (Wan and Xiao, 2008), 500N-KPCrowd (Marujo
et al., 2012) and KPTimes (Gallina et al., 2019). The first two datasets
provide reader-assigned keyphrases, while KPTimes supplies indexer-assigned keyphrases extracted from metadata and initially intended for search engines.
It is interesting to observe that only two datasets in our study, namely
Inspec and KPTimes, provide gold keyphrases annotated by professional
indexers.
Dataset | Ann. | Train | Test | #words | #kp | %abs
---|---|---|---|---|---|---
PubMed (Schutz, 2008) | $A$ | - | 1 320 | 5 323 | 5.4 | 16.9
ACM (Krapivin et al., 2009) | $A$ | - | 2 304 | 9 198 | 5.3 | 16.3
SemEval (Kim et al., 2010) | $A\cup R$ | 144 | 100 | 7 961 | 14.7 | 19.7
Scientific articles (avg.) | | | | 7 494 | 8.5 | 17.6
Inspec (Hulth, 2003) | $I$ | 1 000 | 500 | 135 | 9.8 | 22.4
WWW (Caragea et al., 2014) | $A$ | - | 1 330 | 164 | 4.8 | 52.0
KP20k (Meng et al., 2017) | $A$ | 530K | 20K | 176 | 5.3 | 42.6
Paper abstracts (avg.) | | | | 158 | 6.6 | 39.0
DUC-2001 (Wan and Xiao, 2008) | $R$ | - | 308 | 847 | 8.1 | 3.7
KPCrowd (Marujo et al., 2012) | $R$ | 450 | 50 | 465 | 46.2 | 11.2
KPTimes (Gallina et al., 2019) | $I$ | 260K | 10K | 921 | 5.0 | 54.7
News articles (avg.) | | | | 744 | 19.8 | 23.2
Table 1. Statistics of the datasets. Gold annotation is supplied by authors
($A$), readers ($R$) or professional indexers ($I$). The number of documents
in the training and testing splits are shown. The average number of keyphrases
(#kp) and words (#words) per document, and the ratio of missing keyphrases
(%abs) are computed on the test set.
Datasets containing scientific articles or abstracts rely primarily on author-
assigned keyphrases as gold standard. They therefore exhibit similar
properties for the average number of ground truth keyphrases per document
($\approx 5$). On the other hand, articles are on average significantly longer
than abstracts ($\approx 7500$ words vs. $\approx 160$ words respectively) and
consequently reveal a much smaller fraction of missing keyphrases ($\approx
18\%$ vs. $\approx 39\%$ respectively). Datasets with reader-assigned
keyphrases exhibit the lowest numbers of missing keyphrases, which can be
explained by the fact that readers appear to produce gold-standard annotations
in an extractive fashion (Wang et al., 2015). We also confirmed this
empirically by computing the ratio of missing keyphrases in the author-
assigned ($24\%$) and reader-assigned ($17.5\%$) gold annotations of the
SemEval dataset.
In contrast, the opposite trend is observed for KPTimes, which comes with gold standards annotated by professional indexers and shows the highest percentage of missing keyphrases ($54.7\%$). This indicates the more abstractive nature of indexer-assigned keyphrases. Put differently, it is
known that non-expert annotations are less constrained and may include seldom-
used variants or misspellings (Sood et al., 2007), whereas indexers strive to
rely on a consistent terminology and assign the same keyphrase to all
documents for a given topic, even when it does not occur in these documents.
To investigate this further, we looked at how many variants of an index term,
in this case “artificial neural network”, could be found in the author-
assigned keyphrases of KP20k. All in all, we found dozens of variants for this
term, including “neural network”, “neural network (nns)”, “neural net”,
“artificial neural net” or “nn”. This apparent lack of annotation consistency
intuitively has two consequences: 1) it makes it harder for supervised
approaches to learn a good model, 2) it makes automatic evaluation much less
reliable as it is based on exact string matching.
It is important to stress that datasets containing scientific articles may
contain noisy texts. Indeed, most articles were automatically converted from
PDF format to plain text and thus are likely to contain irrelevant pieces of
text (e.g. muddled sentences, equations). Previous work shows that noisy inputs undermine the overall performance of keyphrase extraction models (Boudin et
al., 2016). In this study, we do not insist on a perfect input and we are
aware that reported results may be improved with an increase in pre-processing
effort.
## 3\. Models
Roughly speaking, previous works on keyphrase extraction can be divided into
two groups depending on whether they adopt a supervised learning procedure or
not. This section starts by introducing the baselines we will use in our
experiments, and then proceeds to describe the state-of-the-art keyphrase
extraction models we re-implemented sorted into the aforementioned two groups.
### 3.1. Baselines
Having strong baselines to compare with is a prerequisite for contrasting the
results of proposed models. In previous studies, various baselines were
considered, complicating the analysis and interpretation of the reported
results. Our stance here is to establish three baselines, each associated with
a particular feature that is commonly used in keyphrase extraction models. All
baselines are also unsupervised, allowing their use and performance analysis on any of the benchmark datasets.
Keyphrase position is a strong signal for both unsupervised and supervised
models, simply because texts are usually written so that the most important
ideas go first (Marcu, 1997). In single document summarization for example,
the lead baseline –that is, the first sentences from the document–, while
incredibly simple, is still a competitive baseline (Kedzie et al., 2018).
Similar to the lead baseline, we propose the FirstPhrases baseline that
extracts the first $N$ keyphrase candidates from a document. We are not aware
of any previous work reporting that baseline, yet, as we will see in §5, it
achieves remarkably good results.
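For illustration, the baseline reduces to a few lines. This is a minimal sketch assuming a `candidates` list of (phrase, first character offset) pairs, as produced by the candidate selection step of Section 4.1; both names are illustrative.

```python
# Sketch of the FirstPhrases baseline: rank keyphrase candidates by the
# position of their first occurrence and keep the earliest n.
def first_phrases(candidates, n=10):
    ranked = sorted(candidates, key=lambda c: c[1])  # earliest offset first
    return [phrase for phrase, _ in ranked[:n]]
```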
Graph-based ranking models for keyphrase extraction are, perhaps, the most
popular models in the literature. Therefore, as a second baseline, we use
TextRank (Mihalcea and Tarau, 2004), which weights keyphrase candidates using
a random walk over a word-graph representation of the document. In a nutshell,
TextRank defines the importance of a word in terms of how it relates to other
words in the document, and ranks candidates according to the words they
contain.
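The following sketch illustrates this two-step procedure with networkx; the co-occurrence window, the noun/adjective filter and all names are illustrative choices, not necessarily the original settings.

```python
# Illustrative TextRank sketch: a random walk over a word co-occurrence
# graph, then candidates scored by the sum of their word scores.
import networkx as nx

def textrank_word_scores(tagged_tokens, window=2):
    """tagged_tokens: list of (word, POS) pairs; returns word -> score."""
    keep = {w for w, pos in tagged_tokens if pos.startswith(("NN", "JJ"))}
    words = [w for w, _ in tagged_tokens]
    graph = nx.Graph()
    for i, w in enumerate(words):
        if w in keep:
            for v in words[i + 1 : i + 1 + window]:  # co-occurrence window
                if v in keep and v != w:
                    graph.add_edge(w, v)
    return nx.pagerank(graph)  # random walk over the word graph

def rank_candidates(candidates, word_scores):
    """Score each candidate phrase by the sum of its word scores."""
    return sorted(candidates,
                  key=lambda c: sum(word_scores.get(w, 0.0) for w in c.split()),
                  reverse=True)
```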
The third baseline, TF$\times$IDF (Salton and Buckley, 1988), has been repeatedly used in previous comparative studies (Kim et al., 2010; Meng et al., 2017, inter alia). In contrast with the other two baselines that do not require any resources whatsoever (beyond the document itself), TF$\times$IDF
makes use of the statistics collected from unlabelled data to weight keyphrase
candidates. As such, it often gives better results, in some cases even on par
with state-of-the-art models (Ye and Wang, 2018).
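A minimal sketch of this weighting is given below, assuming document-frequency counts `df` over `n_docs` unlabelled documents (as computed in Section 4.1); the IDF smoothing is one common convention, not necessarily the exact one used here.

```python
# Illustrative TF×IDF candidate weighting.
import math

def tfidf_rank(candidates, text, df, n_docs):
    scores = {}
    for phrase in candidates:
        tf = text.count(phrase)                        # crude term frequency
        idf = math.log(n_docs / (1 + df.get(phrase, 0)))  # smoothed IDF
        scores[phrase] = tf * idf
    return sorted(scores, key=scores.get, reverse=True)
```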
### 3.2. Unsupervised models
Annotated data are not always available or easy to obtain, which motivates the
further development of unsupervised models for keyphrase extraction. Besides,
looking back at previous work, most attempts to address this problem employ
unsupervised approaches. In this study, we selected three recent state-of-the-
art models based on their reported performance.
The first model we investigate is PositionRank (Florescu and Caragea, 2017), a
graph-based model that incorporates two features (position and frequency) into
a biased PageRank algorithm. This model operates at the word level, and
assigns a score to each candidate using the sum of its individual word scores.
As such, it suffers from over-generation errors (Hasan and Ng, 2014) – these errors occur when a model correctly outputs a keyphrase because it contains an important word, but at the same time erroneously predicts other keyphrases because they contain the same word – but still achieves good performance on short texts.
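The position bias itself is simple to state: each word's restart probability in the biased random walk is proportional to the sum of the inverse positions of its occurrences. A hedged sketch follows (graph construction as in the TextRank sketch above; details such as edge weights are omitted).

```python
# Illustrative PositionRank bias: earlier occurrences contribute more mass
# to a word's restart probability in the biased PageRank.
from collections import defaultdict
import networkx as nx

def positionrank_scores(graph, words):
    bias = defaultdict(float)
    for position, w in enumerate(words, start=1):
        if w in graph:
            bias[w] += 1.0 / position        # earlier occurrences weigh more
    total = sum(bias.values())
    if total == 0:
        return nx.pagerank(graph)            # fall back to the unbiased walk
    personalization = {w: bias[w] / total for w in graph}
    return nx.pagerank(graph, personalization=personalization)
```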
The second model we consider, MPRank (Boudin, 2018), relies on a multipartite
graph representation to enforce topical diversity while ranking keyphrase
candidates. It includes a mechanism to incorporate keyphrase selection
preferences in order to introduce a bias towards candidates occurring first in
the document. MultipartiteRank was shown to consistently outperform other
unsupervised graph-based ranking models.
Both aforementioned models only exploit the document itself to extract
keyphrases. The third model we include, EmbedRank (Bennani-Smires et al.,
2018), leverages sentence embeddings for ranking keyphrase candidates.
Candidates are weighted according to their cosine distance to the document
embedding, while diversity in the selected keyphrases is promoted using
Maximal Marginal Relevance (MMR) (Goldstein and Carbonell, 1998). Despite its
simplicity, this model was shown to outperform other unsupervised models on
short texts (abstracts and news).
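A sketch of the MMR re-ranking step is shown below, assuming precomputed embeddings (`doc_vec`, `cand_vecs`, e.g. from sent2vec) and a hypothetical trade-off parameter `lam`.

```python
# Illustrative MMR selection as used by EmbedRank: greedily pick candidates
# close to the document embedding but far from already selected ones.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def mmr_select(doc_vec, cand_vecs, candidates, k=10, lam=0.5):
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        def mmr(i):
            informativeness = cosine(cand_vecs[i], doc_vec)
            redundancy = max((cosine(cand_vecs[i], cand_vecs[j])
                              for j in selected), default=0.0)
            return lam * informativeness - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]
```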
### 3.3. Supervised models
Supervised models can be further divided into two categories, depending on
whether they rely on a neural network or not.
Traditional supervised models treat the keyphrase extraction problem as a
binary classification task. Here, we include such a model, namely Kea (Witten
et al., 1999), in order to precisely quantify the performance gap with recent
neural-based models. KEA uses a Naive Bayes classifier trained on a set of
only two handcrafted features we have elected as baseline features: the
TF$\times$IDF score of the candidate and the normalized position of its first
occurrence in the document. Previous work has reported confusing and conflicting results for Kea (on SemEval, Meng et al. (2017) report an F@10 score of $2.6$ while Boudin (2016) reports a score of $19.3$), raising questions about how it actually performs.
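For reference, the scoring scheme fits in a few lines. In this sketch, scikit-learn's GaussianNB is a stand-in for the original discretized Naive Bayes, and the feature arrays are assumed to be precomputed.

```python
# Illustrative Kea-style scoring: a Naive Bayes classifier over two
# features, [tfidf, normalized_first_position], labelled 1 for gold
# keyphrases in the training documents.
from sklearn.naive_bayes import GaussianNB

def train_kea(features, labels):
    model = GaussianNB()
    model.fit(features, labels)
    return model

def kea_scores(model, features):
    return model.predict_proba(features)[:, 1]  # rank by P(keyphrase)
```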
Neural models for keyphrase extraction rely on an encoder-decoder architecture
(Cho et al., 2014; Sutskever et al., 2014) with an attention mechanism
(Bahdanau et al., 2014; Luong et al., 2015). Training these models requires large amounts of annotated training data, and is therefore only possible on
the KP20k and KPTimes datasets. The second supervised model we include in this
study is CopyRNN (Meng et al., 2017), an encoder-decoder model that
incorporates a copying mechanism (Gu et al., 2016) in order to be able to
predict phrases that rarely occur. When properly trained, this model was shown
to be very effective in extracting keyphrases from scientific abstracts.
The third supervised model we use, CorrRNN (Chen et al., 2018), extends the
aforementioned model by introducing correlation constraints. It employs a
coverage mechanism (Tu et al., 2016) that diversifies attention distributions
to increase topic coverage, and a review mechanism to avoid generating
duplicates. As such, it produces more diverse and less redundant keyphrases.
Note that only neural models have the ability to generate missing keyphrases,
which in theory gives them a clear advantage over the other models.
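As a schematic of what copying means at the output layer (not the authors' exact parameterization), the final distribution mixes generation from a fixed vocabulary with copying of source tokens, weighted by a learned gate; all names here are illustrative.

```python
# Schematic copy-augmented output distribution: attention mass over the
# source tokens is scattered onto their vocabulary ids and mixed with the
# softmax over the fixed vocabulary, weighted by a gate p_gen.
import torch

def copy_distribution(vocab_logits, attention, src_token_ids, p_gen):
    # vocab_logits: (batch, vocab); attention: (batch, src_len)
    # src_token_ids: (batch, src_len) long; p_gen: (batch, 1) in [0, 1]
    vocab_dist = torch.softmax(vocab_logits, dim=-1) * p_gen
    copy_dist = torch.zeros_like(vocab_dist)
    copy_dist.scatter_add_(1, src_token_ids, attention * (1 - p_gen))
    return vocab_dist + copy_dist   # total probability mass still sums to 1
```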
## 4\. Experimental settings
In addition to the variation in the choice of benchmark datasets and
baselines, there are also major discrepancies in parameter settings and
evaluation metrics between previous studies. For example, there is no point in
contrasting the results in (Meng et al., 2017), (Florescu and Caragea, 2017)
and (Teneva and Cheng, 2017), three papers about keyphrase extraction
published in the same year at ACL, since neither benchmark datasets, parameter
settings nor evaluation metrics are comparable. To address this problem, we
use the same pre-processing tools, parameter settings and evaluation procedure
across all our experiments.
| Scientific articles | Paper abstracts | News articles
---|---|---|---
| PubMed | ACM | SemEval | Inspec | WWW | KP20k | DUC-2001 | KPCrowd | KPTimes
Model | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP | $\text{F}@10$ | MAP
FirstPhrases | 15.4 | 14.7 | 13.6 | 13.5 | 13.8 | 10.5 | 29.3 | 27.9 | 10.2 | 09.8 | 13.5 | 12.6 | 24.6 | 22.3 | 17.1 | 16.5 | 09.2 | 08.4
TextRank | 01.8 | 01.8 | 02.5 | 02.4 | 03.5 | 02.3 | 35.8 | 31.4 | 08.4 | 05.6 | 10.2 | 07.4 | 21.5 | 19.4 | 07.1 | 09.5 | 02.7 | 02.5
TF$\times$IDF | 16.7 | 16.9 | 12.1 | 11.4 | 17.7 | 12.7 | 36.5 | 34.4 | 09.3 | 10.1 | 11.6 | 12.3 | 23.3 | 21.6 | 16.9 | 15.8 | 09.6 | 09.4
PositionRank | 04.9 | 04.6 | 05.7 | 04.9 | 06.8 | 04.1 | 34.2 | 32.2 | 11.6† | 08.4 | 14.1† | 11.2 | 28.6† | 28.0† | 13.4 | 12.7 | 08.5 | 06.6
MPRank | 15.8 | 15.0 | 11.6 | 11.0 | 14.3 | 10.6 | 30.5 | 29.0 | 10.8† | 10.4 | 13.6† | 13.3† | 25.6 | 24.9† | 18.2 | 17.0 | 11.2† | 10.1†
EmbedRank | 03.7 | 03.2 | 02.1 | 02.1 | 02.5 | 02.0 | 35.6 | 32.5 | 10.7† | 07.7 | 12.4 | 10.0 | 29.5† | 27.5† | 12.4 | 12.4 | 04.0 | 03.3
Kea | 18.6† | 18.6† | 14.2† | 13.3 | 19.5† | 14.7† | 34.5 | 33.2 | 11.0† | 10.9† | 14.0† | 13.8† | 26.5† | 24.5† | 17.3 | 16.7 | 11.0† | 10.8†
CopyRNN | 24.2† | 25.4† | 24.4† | 26.3† | 20.3† | 13.8 | 28.2 | 26.4 | 22.2† | 24.9† | 25.4† | 28.7† | 10.5 | 07.2 | 08.4 | 04.2 | 39.3† | 50.9†
CorrRNN | 20.8† | 19.4† | 21.1† | 20.5† | 19.4 | 10.9 | 27.9 | 23.6 | 19.9† | 20.3† | 21.8† | 22.7 | 10.5 | 06.5 | 07.8 | 03.2 | 20.5† | 20.3†
Table 2. Performance of keyphrase extraction models. † indicates significance
over the baselines.
### 4.1. Parameter settings
We pre-process all the texts using the Stanford CoreNLP suite (Manning et al.,
2014) for tokenization, sentence splitting and part-of-speech (POS) tagging.
All non-neural models operate on a set of keyphrase candidates, extracted from
the input document. Selecting appropriate candidates is particularly important
since it determines the upper bound on recall, and the amount of irrelevant
candidates that models will have to deal with. For a fair and meaningful
comparison, we use the same candidate selection heuristic across models. We
follow the recommendation by Wang et al. (2014) and select the sequences of
adjacent nouns with one or more preceding adjectives of length up to five
words. Candidates are further filtered by removing those shorter than 3
characters or containing non-alphanumeric symbols.
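A sketch of this heuristic over CoreNLP-style POS tags follows; the tag prefixes and the regex filter are illustrative choices.

```python
# Illustrative candidate selection: noun sequences optionally preceded by
# adjectives, at most five words, filtered for length and symbols.
import re

def select_candidates(tagged_tokens, max_len=5):
    candidates, phrase = [], []
    def flush():
        if any(p.startswith("NN") for _, p in phrase) and len(phrase) <= max_len:
            candidates.append(" ".join(w for w, _ in phrase))
    for word, pos in tagged_tokens:
        if pos.startswith("NN"):
            phrase.append((word, pos))
        elif pos.startswith("JJ") and not any(p.startswith("NN") for _, p in phrase):
            phrase.append((word, pos))        # leading adjectives only
        else:
            flush()
            phrase = [(word, pos)] if pos.startswith("JJ") else []
    flush()
    return [c for c in candidates
            if len(c) >= 3 and re.fullmatch(r"[\w\s-]+", c)]
```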
We implemented the neural models in PyTorch (Paszke et al., 2017) using
AllenNLP (Gardner et al., 2018), and the non-neural models using the pke
toolkit (Boudin, 2016). As neural models require large amounts of annotated
data to be trained, we trained our models on the KP20k dataset for both
scientific papers and abstracts, and on KPTimes for news texts. We compute
Document Frequency (DF) counts and learn Kea models on training sets. For
datasets without training splits, we apply a leave-one-out cross-validation
procedure on the test sets for calculating DF counts and training models. We
use the optimal parameters suggested by the authors for each model, and leverage pre-trained sentence embeddings (https://github.com/epfml/sent2vec) for EmbedRank. We also found out that the training set of KP20k contains a
non-negligible number of documents from the test sets of other datasets. We
removed those documents prior to training.
### 4.2. Evaluation metrics
Although there is no consensus as to which metric is the most reliable for
keyphrase extraction, a popular evaluation strategy is to compare the top $k$
extracted keyphrases against the gold standard. We adopt this strategy and
report the f-measure at the top 10 extracted keyphrases. In previous work, we
often see differences in how gold standards are handled during evaluation. For
example, some studies evaluate their models on the present and missing
portions of the gold standard separately (Meng et al., 2017; Ye and Wang,
2018; Chen et al., 2018, inter alia), whereas other work use the entire gold
standard (Florescu and Caragea, 2017; Boudin, 2018, inter alia). We chose the
latter because recent models, in addition to extracting keyphrases from the
content of the document, are able to generate missing keyphrases. Following
common practice, gold standard and output keyphrases are stemmed to reduce the
number of mismatches. One issue with the f-measure is that the ranks of the
correct keyphrases are not taken into account. To evaluate the overall ranking
performance of the models, we also report the Mean Average Precision (MAP)
scores of the ranked lists of keyphrases. We use the Student’s paired t-test
to assess statistical significance at the $0.05$ level.
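A sketch of this protocol for a single document is given below; the stemming convention and the normalization of average precision are common choices, not necessarily the exact ones used here.

```python
# Illustrative evaluation: stem, exact-match, then F@k and average precision.
from nltk.stem.porter import PorterStemmer

_stem = PorterStemmer().stem

def normalize(phrase):
    return " ".join(_stem(w) for w in phrase.lower().split())

def f_at_k(predicted, gold, k=10):
    pred = [normalize(p) for p in predicted[:k]]
    gold_set = {normalize(g) for g in gold}
    correct = sum(p in gold_set for p in pred)
    precision = correct / max(len(pred), 1)
    recall = correct / max(len(gold_set), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

def average_precision(predicted, gold):
    gold_set = {normalize(g) for g in gold}
    hits, ap = 0, 0.0
    for rank, p in enumerate(normalize(q) for q in predicted):
        if p in gold_set:
            hits += 1
            ap += hits / (rank + 1)   # precision at each correct rank
    return ap / max(len(gold_set), 1)
```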
### 4.3. Replicability of results
In Table 3, we compare the results of our re-implementations against those
reported in the original papers. We note that all models show comparable
results. We observe the largest differences with original scores for CopyRNN
($+2$) and CorrRNN ($-4.3$) that can be easily explained by minor differences
in training parameters.
Model | Dataset (metric) | Orig. | Ours
---|---|---|---
PositionRank | WWW (F$@$8) | 12.3 | 11.7
MPRank | SemEval-2010 (F$@$10) | 14.5 | 14.3
EmbedRank | Inspec (F$@$10) | 37.1 | 35.6
CopyRNN | KP20k (F$@$10 on present) | 26.2 | 28.2
CorrRNN | ACM (F$@$10 on present) | 27.8 | 23.5
Table 3. Original vs. re-implementation scores.
## 5\. Results
Results are presented in Table 2. First of all, we notice that no model
significantly outperforms the baselines on all datasets. This is rather
surprising, as one would expect that neural models would be consistently
better than a simple TF$\times$IDF model for example. Rather, we see that the
TF$\times$IDF baseline is very competitive on long documents, while the
FirstPhrases baseline performs remarkably well, especially on news texts.
Still, overall, CopyRNN achieves the best performance with, in the case of
KPTimes, MAP scores exceeding 50%. When we look at only unsupervised models,
MPRank achieves the best results across datasets. Also, it comes as no
surprise that Kea exhibits strong performance across datasets because it
combines two effective features, as demonstrated by the results of the
TF$\times$IDF and FirstPhrases baselines. Conversely, despite the addition of
mechanisms for promoting diversity in the output, CorrRNN is almost always
outperformed by CopyRNN, suggesting that the added correlation constraints are
not effective at filtering out spurious keyphrases.
In light of the above, we can now answer the following question: “How much
progress have we made since early models?”. It is clear that neural-based
models are the new state-of-the-art for keyphrase extraction, achieving F@10
scores up to three times that of previous models. That being said, CopyRNN,
which is the best overall model, fails to consistently outperform the
baselines on all datasets. One reason for that is the limited generalization
ability of neural-based models (Meng et al., 2017; Chen et al., 2018; Gallina
et al., 2019), which means that their performance degrades on documents that
differ from the ones encountered during training. This is further confirmed by the extremely low performance of these models on DUC-2001 and KPCrowd. Much
more work needs to be done in tackling this issue if neural models are to
substitute for older supervised models. Perhaps most disappointing is the fact
that state-of-the-art unsupervised models are still challenged by the
TF$\times$IDF baseline. Here, we suspect the reasons are twofold. First, the
models we have investigated do not use in-domain data which may not only limit
their performance, but also, as in the case of EmbedRank that uses out-of-
domain (Wikipedia) data, be detrimental to their performance. Second, unlike
neural generative models, they are not able to produce keyphrases that do not
occur in the source document, further limiting their potential effectiveness.
As outlined in §2, gold standards provided by lay annotators, such as authors
and readers, exhibit strong inconsistency issues. One might therefore wonder
“What is the impact of non-expert annotations on training and evaluating
keyphrase extraction models?”. Intuitively, models evaluated against these
annotations are likely to receive lower scores because they make training more
difficult (that is, assigning different keyphrases to documents about the same
topic may confuse the model) while increasing the number of false negatives
during evaluation. This is exactly what we observe in Table 2 where the best
scores for Inspec and KPTimes, whose gold standards are provided by
professional indexers, are higher in magnitude than those of the other
datasets. Precisely quantifying how much impact lay annotations have on
performance is no easy task as it implies a double-annotation process by both
expert and non-expert annotators. Luckily enough, a small sample of documents
from Inspec are also found in KP20k, allowing us to compare the performance of
keyphrases models between both annotation types. Results are shown in Table 4.
First, we see that overall performance is nearly cut in half when evaluating
against author-provided gold standard, suggesting that reported scores in
previous studies are arguably underestimated. Second, neural models again do
not show their superiority against indexer-assigned keyphrases, which
advocates the need for more experiments on datasets that include expert
annotations.
| $\text{F}@10$ | MAP
---|---|---
Model | I | A | I | A
FirstPhrases | 25.8 | 13.7 | 26.1 | 13.2
TextRank | 33.4 | 12.2 | 29.6 | 09.3
TF$\times$IDF | 34.6 | 14.2 | 33.3 | 16.1
PositionRank | 32.9 | 15.9 | 31.0 | 13.0
MPRank | 26.4 | 13.8 | 27.6 | 13.6
EmbedRank | 34.3 | 15.3 | 31.3 | 11.5
Kea | 32.5 | 15.2 | 31.9 | 15.9
CopyRNN | 33.7 | 28.9‡ | 29.8 | 33.8‡
CorrRNN | 28.6 | 25.3 | 24.2 | 28.2
Avg. | 31.3 | 17.2 | 29.4 | 17.2
Table 4. Results on a subset of 55 documents from Inspec for indexer (I) and
author (A) gold annotations. ‡ indicates significance over every other model.
Figure 1. Average number of keyphrases in common between model outputs.
The third question we want to address in this study is “Which baselines and
benchmark datasets should be included in future work for a better
understanding of the pros and cons of a newly proposed model?”. Having strong
baselines to compare with is of utmost importance, and our results give an
indication of which model is relevant. When properly trained, neural models
drastically outperform all other models and represent the state-of-the-art.
Since CopyRNN achieves the best results, it should be included in future work for comparison. In an unsupervised setting, or in a data-sparse scenario where
neural models can not be applied, the picture is less clear. To help us
understand which model is worth investigating, we conducted an additional set
of experiments aimed at comparing the outputs from all models in a pairwise
manner. The motivation behind these experiments is that including multiple
models that behave similarly is of limited interest. Similarities between
model outputs, viewed in terms of the number of keyphrases in common, are
graphed as a heatmap in Figure 1. Overall, we observe different patterns for
each source of documents. The shorter the document is, the more similar
outputs are, which is mostly due to a smaller search space (that is, a smaller
number of keyphrase candidates). We note that the three best unsupervised
models, namely FirstPhrases, MPRank and TF$\times$IDF, generate very similar
keyphrases (up to 42% identical). Considering this, and given their reported
performances (Table 2), we argue that TF$\times$IDF (or KEA if seed training data is available) should be considered as a strong unsupervised baseline in subsequent work.
These recommendations of baselines also affect the choice of which benchmark
datasets one has to use. As neural models are data-hungry, KP20k and KPTimes
are the default options for paper abstracts and news articles. For scientific
articles, we recommend using SemEval for two reasons: 1) it is widely used by
existing studies; and 2) it provides a double-annotated gold standard (author-
and reader-assigned keyphrases) that alleviates annotation inconsistencies to
some extent.
Our experiments highlight several issues in evaluating keyphrase extraction
models with existing benchmark datasets. Another way of assessing the
effectiveness of these models would be to explore their impact on other tasks
as an extrinsic evaluation. To the best of our knowledge, there is no
previously published research on that matter despite many downstream tasks
that already benefit from keyphrase information such as article recommendation
(Collins and Beel, 2019) or browsing interfaces (Gutwin et al., 1999) in
digital libraries. This points to an interesting future direction that allows
for a deeper understanding of the limitations of current models.
## 6\. Conclusion
This paper presents a large scale evaluation of keyphrase extraction models
conducted on multiple benchmark datasets from different sources and domains.
Results indicate that keyphrase extraction is still an open research question,
with state-of-the-art neural-based models still challenged by simple baselines
on some datasets. We hope that this work will serve as a point of departure
for more rigorous analysis and evaluation of proposed keyphrase extraction
models. We provide all the code and data in a public repository (link will appear here after the review period), as well as a public leaderboard to facilitate the comparison between models.
## References
* (1)
* Augenstein et al. (2017) Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017\. SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications. In _Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)_. Association for Computational Linguistics, Vancouver, Canada, 546–555. http://www.aclweb.org/anthology/S17-2091
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014\. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ (2014).
* Barker et al. (1972) Frances H Barker, Douglas C Veal, and Barry K Wyatt. 1972\. Comparative efficiency of searching titles, abstracts, and index terms in a free-text data base. _Journal of Documentation_ 28, 1 (1972), 22–36.
* Bennani-Smires et al. (2018) Kamil Bennani-Smires, Claudiu Musat, Andreea Hossmann, Michael Baeriswyl, and Martin Jaggi. 2018\. Simple Unsupervised Keyphrase Extraction using Sentence Embeddings. In _Proceedings of the 22nd Conference on Computational Natural Language Learning_. Association for Computational Linguistics, Brussels, Belgium, 221–229. http://www.aclweb.org/anthology/K18-1022
* Berend (2011) Gábor Berend. 2011\. Opinion Expression Mining by Exploiting Keyphrase Extraction. In _Proceedings of 5th International Joint Conference on Natural Language Processing_. Asian Federation of Natural Language Processing, Chiang Mai, Thailand, 1162–1170. http://www.aclweb.org/anthology/I11-1130
* Boudin (2016) Florian Boudin. 2016\. pke: an open source python-based keyphrase extraction toolkit. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations_. The COLING 2016 Organizing Committee, Osaka, Japan, 69–73. http://aclweb.org/anthology/C16-2015
* Boudin (2018) Florian Boudin. 2018\. Unsupervised Keyphrase Extraction with Multipartite Graphs. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_. Association for Computational Linguistics, New Orleans, Louisiana, 667–672. http://www.aclweb.org/anthology/N18-2105
* Boudin et al. (2016) Florian Boudin, Hugo Mougard, and Damien Cram. 2016\. How Document Pre-processing affects Keyphrase Extraction Performance. In _Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)_. The COLING 2016 Organizing Committee, Osaka, Japan, 121–128. http://aclweb.org/anthology/W16-3917
* Caragea et al. (2014) Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014\. Citation-Enhanced Keyphrase Extraction from Research Papers: A Supervised Approach. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. Association for Computational Linguistics, Doha, Qatar, 1435–1446. http://www.aclweb.org/anthology/D14-1150
* Chen et al. (2018) Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase Generation with Correlation Constraints. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 4057–4066. http://www.aclweb.org/anthology/D18-1439
* Cho et al. (2014) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014\. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_. Association for Computational Linguistics, Doha, Qatar, 1724–1734. http://www.aclweb.org/anthology/D14-1179
* Collins and Beel (2019) Andrew Collins and Jöran Beel. 2019. Document Embeddings vs. Keyphrases vs. Terms for Recommender Systems: A Large-Scale Online Evaluation. In _19th ACM/IEEE Joint Conference on Digital Libraries, JCDL 2019, Champaign, IL, USA, June 2-6, 2019_. 130–133. https://doi.org/10.1109/JCDL.2019.00027
* Evans and Zhai (1996) David A. Evans and Chengxiang Zhai. 1996. Noun Phrase Analysis in Large Unrestricted Text for Information Retrieval. In _Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Santa Cruz, California, USA, 17–24. https://doi.org/10.3115/981863.981866
* Florescu and Caragea (2017) Corina Florescu and Cornelia Caragea. 2017. PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Vancouver, Canada, 1105–1115. http://aclweb.org/anthology/P17-1102
* Gallina et al. (2019) Ygor Gallina, Florian Boudin, and Beatrice Daille. 2019\. KPTimes: A Large-Scale Dataset for Keyphrase Generation on News Documents. In _Proceedings of the 12th International Conference on Natural Language Generation_. Association for Computational Linguistics, Tokyo, Japan, 130–135. https://doi.org/10.18653/v1/W19-8617
* Gardner et al. (2018) Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform. In _Proceedings of Workshop for NLP Open Source Software (NLP-OSS)_. Association for Computational Linguistics, Melbourne, Australia, 1–6. https://doi.org/10.18653/v1/W18-2501
* Goldstein and Carbonell (1998) Jade Goldstein and Jaime Carbonell. 1998. Summarization: (1) Using MMR for Diversity-Based Reranking and (2) Evaluating Summaries. In _Proceedings of the TIPSTER Text Program: Phase III_. Association for Computational Linguistics, Baltimore, Maryland, USA, 181–195. https://doi.org/10.3115/1119089.1119120
* Gollapalli et al. (2015) Sujatha Das Gollapalli, Cornelia Caragea, Xiaoli Li, and C. Lee Giles (Eds.). 2015. _Proceedings of the ACL 2015 Workshop on Novel Computational Approaches to Keyphrase Extraction_. Association for Computational Linguistics, Beijing, China. http://www.aclweb.org/anthology/W15-36
* Gu et al. (2016) Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016\. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 1631–1640. http://www.aclweb.org/anthology/P16-1154
* Gutwin et al. (1999) Carl Gutwin, Gordon Paynter, Ian Witten, Craig Nevill-Manning, and Eibe Frank. 1999\. Improving Browsing in Digital Libraries with Keyphrase Indexes. _Decis. Support Syst._ 27, 1-2 (Nov. 1999), 81–104. https://doi.org/10.1016/S0167-9236(99)00038-X
* Hasan and Ng (2014) Kazi Saidul Hasan and Vincent Ng. 2014. Automatic Keyphrase Extraction: A Survey of the State of the Art. In _Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Baltimore, Maryland, 1262–1273. http://www.aclweb.org/anthology/P14-1119
* Hulth (2003) Anette Hulth. 2003\. Improved Automatic Keyword Extraction Given More Linguistic Knowledge. In _Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing_ , Michael Collins and Mark Steedman (Eds.). 216–223. http://www.aclweb.org/anthology/W03-1028.pdf
* Hulth and Megyesi (2006) Anette Hulth and Beáta B. Megyesi. 2006. A Study on Automatically Extracted Keywords in Text Categorization. In _Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Sydney, Australia, 537–544. https://doi.org/10.3115/1220175.1220243
* Hussey et al. (2012) Richard Hussey, Shirley Williams, Richard Mitchell, and Ian Field. 2012. A comparison of automated keyphrase extraction techniques and of automatic evaluation vs. human evaluation. _International Journal on Advances in Life Sciences_ 4, 3 and 4 (2012), 136–153. http://centaur.reading.ac.uk/32266/
* Kedzie et al. (2018) Chris Kedzie, Kathleen McKeown, and Hal Daume III. 2018\. Content Selection in Deep Learning Models of Summarization. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 1818–1828. http://www.aclweb.org/anthology/D18-1208
* Kim et al. (2010) Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. SemEval-2010 Task 5 : Automatic Keyphrase Extraction from Scientific Articles. In _Proceedings of the 5th International Workshop on Semantic Evaluation_. Association for Computational Linguistics, Uppsala, Sweden, 21–26. http://www.aclweb.org/anthology/S10-1004
* Krapivin et al. (2009) Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese. 2009. _Large dataset for keyphrases extraction_. Technical Report. University of Trento.
* Litvak and Last (2008) Marina Litvak and Mark Last. 2008. Graph-Based Keyword Extraction for Single-Document Summarization. In _Coling 2008: Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization_. Coling 2008 Organizing Committee, Manchester, UK, 17–24. http://www.aclweb.org/anthology/W08-1404
* Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015\. Effective Approaches to Attention-based Neural Machine Translation. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Lisbon, Portugal, 1412–1421. http://aclweb.org/anthology/D15-1166
* Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In _Association for Computational Linguistics (ACL) System Demonstrations_. 55–60. http://www.aclweb.org/anthology/P/P14/P14-5010
* Mao and Lu (2017) Yuqing Mao and Zhiyong Lu. 2017. MeSH Now: automatic MeSH indexing at PubMed scale via learning to rank. _Journal of Biomedical Semantics_ 8, 1 (17 Apr 2017), 15. https://doi.org/10.1186/s13326-017-0123-3
* Marcu (1997) Daniel Marcu. 1997\. The Rhetorical Parsing of Unrestricted Natural Language Texts. In _Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Madrid, Spain, 96–103. https://doi.org/10.3115/976909.979630
* Marujo et al. (2012) Luís Marujo, Anatole Gershman, Jaime Carbonell, Robert Frederking, and João P. Neto. 2012\. Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization. In _Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12)_ (23-25), Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (Eds.). European Language Resources Association (ELRA), Istanbul, Turkey.
* Meng et al. (2017) Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017\. Deep Keyphrase Generation. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, 582–592. https://doi.org/10.18653/v1/P17-1054
* Mihalcea and Tarau (2004) Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Texts. In _Proceedings of EMNLP 2004_ , Dekang Lin and Dekai Wu (Eds.). Association for Computational Linguistics, Barcelona, Spain, 404–411.
* Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017\. Automatic differentiation in pytorch. In _NIPS 2017 Workshop Autodiff_.
* Salton and Buckley (1988) Gerard Salton and Christopher Buckley. 1988. Term-weighting approaches in automatic text retrieval. _Information Processing & Management_ 24, 5 (1988), 513 – 523. https://doi.org/10.1016/0306-4573(88)90021-0
* Schutz (2008) Alexander Thorsten Schutz. 2008\. Keyphrase extraction from single documents in the open domain exploiting linguistic and statistical methods. _Master’s thesis, National University of Ireland_ (2008).
* Sood et al. (2007) Sanjay Sood, Sara Owsley, Kristian J. Hammond, and Larry Birnbaum. 2007. TagAssist: Automatic Tag Suggestion for Blog Posts. In _Proceedings of the First International Conference on Weblogs and Social Media, ICWSM 2007, Boulder, Colorado, USA, March 26-28, 2007_. http://www.icwsm.org/papers/paper10.html
* Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014\. Sequence to Sequence Learning with Neural Networks. In _Advances in Neural Information Processing Systems 27_ , Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.). Curran Associates, Inc., 3104–3112. http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
* Teneva and Cheng (2017) Nedelina Teneva and Weiwei Cheng. 2017. Salience Rank: Efficient Keyphrase Extraction with Topic Modeling. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_. Association for Computational Linguistics, Vancouver, Canada, 530–535. http://aclweb.org/anthology/P17-2084
* Tu et al. (2016) Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage for Neural Machine Translation. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 76–85. https://doi.org/10.18653/v1/P16-1008
* Wan and Xiao (2008) Xiaojun Wan and Jianguo Xiao. 2008. Single Document Keyphrase Extraction Using Neighborhood Knowledge. In _Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2_ _(AAAI’08)_. AAAI Press, 855–860. http://dl.acm.org/citation.cfm?id=1620163.1620205
* Wang et al. (2014) Rui Wang, Wei Liu, and Chris McDonald. 2014. How Preprocessing Affects Unsupervised Keyphrase Extraction. In _Computational Linguistics and Intelligent Text Processing_ , Alexander Gelbukh (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 163–176.
* Wang et al. (2015) Rui Wang, Wei Liu, and Chris McDonald. 2015. Using Word Embeddings to Enhance Keyword Identification for Scientific Publications. In _Databases Theory and Applications_ , Mohamed A. Sharaf, Muhammad Aamir Cheema, and Jianzhong Qi (Eds.). Springer International Publishing, Cham, 257–268.
* Witten et al. (2009) Ian H Witten, David Bainbridge, and David M Nichols. 2009\. _How to build a digital library_. Morgan Kaufmann.
* Witten et al. (1999) Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. KEA: Practical Automatic Keyphrase Extraction. In _Proceedings of the Fourth ACM Conference on Digital Libraries_ _(DL ’99)_. ACM, New York, NY, USA, 254–255. https://doi.org/10.1145/313238.313437
* Ye and Wang (2018) Hai Ye and Lu Wang. 2018\. Semi-Supervised Learning for Neural Keyphrase Generation. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics, Brussels, Belgium, 4142–4153. http://www.aclweb.org/anthology/D18-1447
* Zesch and Gurevych (2009) Torsten Zesch and Iryna Gurevych. 2009. Approximate Matching for Evaluating Keyphrase Extraction. In _Proceedings of the International Conference RANLP-2009_. Association for Computational Linguistics, Borovets, Bulgaria, 484–489. http://www.aclweb.org/anthology/R09-1086
* Zhai (1997) Chengxiang Zhai. 1997\. Fast Statistical Parsing of Noun Phrases for Document Indexing. In _Fifth Conference on Applied Natural Language Processing_. Association for Computational Linguistics, Washington, DC, USA, 312–319. https://doi.org/10.3115/974557.974603
|
2024-09-04T02:54:58.652265 | 2020-03-10T11:56:55 | 2003.04654 | {
"authors": "O.V. Kancheli",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26131",
"submitter": "O. V. Kancheli",
"url": "https://arxiv.org/abs/2003.04654"
} | arxiv-papers | Parton models and frame independence of high-energy cross-sections.
O.V. Kancheli (email: <EMAIL_ADDRESS>)
Institute for Theoretical and Experimental Physics,
117218, Moscow, Russia
###### Abstract
We describe some ambiguities which arise when one calculates cross-sections in parton models at high energies, and the associated limitations on the asymptotics of high-energy amplitudes that follow from the condition of boost-invariance of cross-sections.
It turns out that the resulting constraints are of the same type as those following from the t-channel unitarity conditions, so one can suppose that this similarity has much more general grounds.
## 1 Introduction
There are two main theoretical approaches to the study of the behavior of high-energy amplitudes and cross-sections.
In one approach, we directly calculate the amplitudes by summing the
contributions of the Feynman diagrams of the corresponding field theory or use
some effective theory like reggeon diagrams or various string-like dual
models.
In the other, parton-like approach to high-energy collisions (a few useful reviews are [1]–[7]) one usually considers separately three main stages of the system evolution in the process of particle collision. Firstly, one constructs the quantum state $\Psi(P)$ of a high-energy particle with momentum $P\gg m$ as a superposition
$\Psi(P)~{}=~{}\sum_{n}\int_{\\{k_{i}\\}}~{}f_{n}(P,\\{k_{i}\\})~{}|n,\\{k_{i}\\}>$
(1)
of the n-particle states $|n,\\{k_{i}\\}>$ of some “primary” constituents – partons with 3-momenta $\\{k_{i}\\}$. The “choice” of these partons is not unique: partons can be bare point-like particles, particles with varying virtuality, QCD color dipoles, fast string configurations, distributions of Coulomb-like fields, etc.
The state $\Psi(\vec{P})$ must fulfill the Schroedinger equation
$\hat{H}\,\Psi(\vec{P})=\sqrt{\vec{P}^{2}+m^{2}}\;\Psi(\vec{P})~,$
where the Hamiltonian $\hat{H}$ is a function of the parton fields, so that $\Psi(\vec{P})$ is an eigenfunction of $\hat{H}$ whose eigenvalue defines the particle's physical mass $m$.
After that one can use such a state $\Psi(P_{1})$ to calculate its interaction with some low-energy target, or with another fast particle in the state $\Psi(P_{2})$, in terms of “simple” amplitudes of parton interactions.
when moving away partons transform and combine into physical particles
(hadronization…). But often this stage is not very restrictive, especially
when we calculate various integrated cross-sections. And we will not consider
it in this article.
It is essential that in most parton descriptions the structure of the parton state becomes more and more complicated as the energy grows: for all theories containing vector (like QCD) or tensor (gravity) fields, the mean parton number in the state $\Psi(P)$ and the average transverse size of the region the partons occupy grow with $P$.
When we consider the collision of two fast particles in the parton states $\Psi(P_{1})$ and $\Psi(P_{2})$ at some large $s=(P_{1}+P_{2})^{2}\gg m^{2}$, we can choose for this any longitudinal Lorentz frame. But the resulting values of the cross-sections of various processes must not depend on this choice of frame. This is a nontrivial condition in the parton approach, because in different longitudinal frames (that is, for various $P_{1}$ and $P_{2}$ at the same value of $s$) different parton configurations first meet one another at the moment of collision. Moreover, by choosing a different frame we can also move the dynamics from stage one to stage two and vice versa.
If we made all calculations precisely, with a hermitian Hamiltonian, we could probably be sure that all restrictions coming from Lorentz invariance and the unitarity conditions are satisfied. But if we make approximations, especially ones dictated by phenomenological or pictorial arguments, the unitarity conditions themselves are probably the only general way to check that the results are not contradictory.
Various restrictions from t-channel unitarity are essential for the amplitudes describing high-energy hadron interactions, and they are directly taken into account in reggeon amplitudes [10]. But in parton approaches it is not evident how to take them into account.
In the reggeon field theory and in the dual (string) models the t-unitarity conditions are automatically fulfilled, but at high reggeon (pomeron) density such an approach can become unreliable. The parton approach has no problems with high parton density, but there is no direct way to control the possible restrictions coming from t-unitarity.
One can hope that the longitudinal Lorentz (boost) invariance of all cross-sections calculated in a parton approach is in some sense equivalent to an averaged form of t-unitarity for the multiparticle amplitudes. That is, if we calculate any cross-section using the partonic wave functions $\Psi(P_{a})$ and $\Psi(P_{b})$ of fast colliding hadrons with momenta $P_{a}$, $P_{b}$, then we expect this cross-section to be the same in all longitudinal Lorentz frames – the same as if we calculated it using $\Psi(L(\vartheta)P_{a})$ and $\Psi(L^{-1}(\vartheta)P_{b})$, where $L(\vartheta)$ is a longitudinal boost. It is essential that in a parton picture such boosts $L(\vartheta)$ act on the hadron's Fock state in a very nontrivial way, changing the number of partons, etc.
No precise arguments for such a general proposition (boost invariance of parton cross-sections $\simeq$ t-unitarity) are known, although it is by itself natural that calculations of cross-sections in the parton picture must give a frame-independent answer. This is also confirmed, in particular, if we give a partonic interpretation to reggeon diagrams by t-cutting them at various intermediate rapidities, as when calculating various multiparticle inclusive cross-sections.
In this article (whose material partially overlaps with the author's article [8]) we consider some examples illustrating how the requirement of boost invariance essentially restricts the structure of high-energy collision dynamics. We will see that it restricts the dynamics in the same way as the conditions of t-unitarity do.
## 2 Restrictions on parton states from the boost invariance of high-energy collision cross-sections. Simple examples
In this section we illustrate how the requirement of frame independence (boost invariance, BI) restricts the behavior of high-energy cross-sections calculated in the parton approach.
We suppose that partons are point-like particles with perturbative interactions and consider some examples which show how the BI condition works. We also restrict ourselves to very high energy interactions, where the mean number of partons in the state is large, so that one can first keep only the components of the Fock wave function of the fast particle with the mean number of partons, and only afterwards take into account corrections from the other components. The picture of the interaction is then almost quasiclassical.
We consider the behavior under boost transformations of the inelastic cross-section $\sigma_{in}$, or of the related quantity, the transparency $T=1-\sigma_{in}=|S|^{2}$, which is often more sensitive to the breaking of BI. We choose a frame in which the colliding particles have rapidities $y_{1}=y$ and $y_{2}=Y-y$, where $Y=\ln(s/m^{2})$, and require that the calculated cross-sections do not depend on $y$.
We begin with the simplest parton models of a fast hadron: the rare parton gas state and the black disk state.
### 2.1 Collision of rare-gas-like parton states
Let us consider the collision of two particles which can be represented as partonic clouds in the state of a very rare gas. This is the case usually described by reggeon diagrams, which, by their construction, include the t-unitarity requirements. Let the mean numbers of partons in the colliding hadrons be $n(y)$ and $n(Y-y)$, and let the mean transverse radii of the regions occupied by these partons be $R(y)$ and $R(Y-y)$, respectively. Then the total inelastic cross-section can be expressed as:
$\sigma_{in}(Y)=\sigma_{0}\,n(y)\,n(Y-y)-c_{1}\,\sigma_{0}\,n(y)\,n(Y-y)\Big(\frac{\sigma_{0}\,n(y)}{R^{2}(y)}+\frac{\sigma_{0}\,n(Y-y)}{R^{2}(Y-y)}+\frac{\sigma_{0}\,n(y)\,n(Y-y)}{R^{2}(y)+R^{2}(Y-y)}\Big)+\dots$
(2.1)
where $\sigma_{0}$ is the parton-parton cross-section and $c_{1}\sim 1$. The first term in (2.1) corresponds to the collision of at least one pair of partons. The next terms describe corrections from screening and multiple collisions. (Note that the cross-sections of local interactions of point-like partons decrease with their relative energy as $\sigma_{0}(s)\sim 1/s$; as a result, in (2.1) enter, in fact, only the numbers of low-energy partons $n(y)$, $n(Y-y)$ of the colliding particles in the given coordinate system.)
For a rare parton gas one can, in a first approximation, neglect multiple collisions and screening, that is, keep only the first term in (2.1). Then, from the requirement that $\sigma_{0}\,n(y)\,n(Y-y)$ be independent of $y$, there follows the unique solution
$n(y)=n_{0}\,e^{y\Delta_{0}}$ (3)
with some real constants $n_{0}$, $\Delta_{0}$. The behavior
$\sigma_{in}(Y)=\sigma_{0}\,n_{0}^{2}\,e^{Y\Delta_{0}}$ (4)
following from (3) corresponds, in the elastic amplitude, to a regge pole in the complex angular momentum plane (and not to a cut or some more complicated regge singularity). And precisely this condition follows [9] in the relativistic Regge approach from the 2-particle t-unitarity of the elastic amplitude.
Note that the coefficient in (4) is in fact factorized: for the collision of different particles a+b one must replace $n^{2}\rightarrow n_{a}n_{b}$. In the regge approach this factorization also follows from t-unitarity.
Moreover, it is interesting to consider [6] the behavior of the cross-section $\sigma_{in}$ at a definite impact parameter $B$, normalized so that
$\sigma_{in}(Y)=\int d^{2}B~\sigma_{in}(Y,y,B)~.$ (5)
In this case the analog of the first term in (2) can be represented as
$\sigma_{in}(Y,y,B)=\sigma_{0}\int d^{2}x_{\bot}~\rho(y,|x_{\bot}|)~\rho(Y-y,|B-x_{\bot}|)~,$ (6)
where $\rho(y,x_{\bot})$ is the transverse parton density ($n(y)=\int d^{2}x_{\bot}\rho(y,x_{\bot})$). The frame independence of $\sigma_{in}(Y,y,B)$ then essentially restricts the form of the transverse parton density $\rho(y,x_{\bot})$. The condition of y-independence can be written as
$\frac{\partial}{\partial y}~\sigma_{in}(Y,y,B)=0~.$ (7)
Passing to the variable $k$ conjugate to $x_{\bot}$,
$\rho(y,x_{\bot})=\int d^{2}k\cdot e^{ikx_{\bot}}~\tilde{\rho}(y,k)\,,$
we come from (7) to the equation
$\frac{\partial}{\partial y}\big(\tilde{\rho}(y,k)~\tilde{\rho}(Y-y,k)\big)=0~,$
which has the solution
$\tilde{\rho}(y,k)=f_{1}(k)\cdot e^{yf_{2}(k)}\,,$
and as a result
$\rho(y,x_{\bot})\sim\int d^{2}k~e^{ikx_{\bot}}~f_{1}(k)~e^{yf_{2}(k)}~,$ (8)
where $f_{1}$, $f_{2}$ are arbitrary functions of $k$. For $y\rightarrow\infty$ the integral in (8) can be evaluated by the steepest descent method, so that only the neighborhoods of the zeros of $\partial f_{2}(k)/\partial k$ are essential. From the positivity of the parton density $\rho$ it follows that $f_{2}$ is positive, and the dominant contribution must come from the region $k\sim 0$, since otherwise $\rho(y,x_{\bot})$ would oscillate in $x_{\bot}$. So in the essential region $f_{2}(k)\simeq c_{1}-c_{2}k^{2}$, $c_{2}>0$, and estimating the integral (8) we come to the expression for the density of low-energy partons
$\rho(y,x_{\bot})~\sim~y^{-1}~e^{\big(c_{1}y-x_{\bot}^{2}/4c_{2}r_{0}^{2}y\big)}~,~~~c_{2}>0~.$ (9)
The expression (9) corresponds to the Gaussian form of the parton distribution in $x_{\bot}$ which usually results from the diffusion of partons in the $x_{\bot}$ plane during the parton cascading. The mean radius of the low-energy parton cloud, $R(y)\sim r_{0}\sqrt{y}$, is also fixed here solely by the condition of frame independence. In the elastic amplitude Eq. (9) corresponds to the contribution of a regge pole with the trajectory $\alpha(t)=1+\Delta+\alpha^{\prime}t$, where $\Delta=c_{1},~\alpha^{\prime}=c_{2}$.
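This frame independence can also be checked numerically. The following minimal sketch (all parameter values are illustrative and not taken from the text) evaluates the overlap integral (6) for the Gaussian density (9) on a transverse grid and shows that it depends only on Y, not on the frame choice y:

```python
import numpy as np

# Minimal check (illustrative parameters): for the Gaussian density of eq. (9),
#   rho(y, x) ~ (1/y) * exp(c1*y - |x|^2 / (4*c2*r0^2*y)),
# the overlap integral (6) is independent of the frame choice y.
c1, c2, r0, Y, B = 0.2, 0.25, 1.0, 10.0, 3.0  # assumed values

def rho(y, x, v):
    # transverse parton density at rapidity y and transverse point (x, v)
    return np.exp(c1 * y - (x**2 + v**2) / (4 * c2 * r0**2 * y)) / y

ax = np.linspace(-25.0, 25.0, 1001)           # transverse integration grid
X, V = np.meshgrid(ax, ax)
dA = (ax[1] - ax[0])**2

for y in (2.0, 5.0, 8.0):
    N = np.sum(rho(y, X, V) * rho(Y - y, B - X, V)) * dA
    print(f"y = {y:4.1f}  ->  sigma_in(Y,y,B)/sigma_0 = {N:.6e}")
# All three printed values coincide, as required by the y-independence (7).
```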
If we make the next step, imposing the condition of y-independence on the sum of the first two terms on the right side of (2) and assuming that the correction to (3) is small, we obtain instead of (3) the corrected expression
$n(y)=n_{0}~e^{\Delta_{0}y}-a_{2}~n_{0}^{2}\frac{\sigma_{0}}{R^{2}(y)}~e^{2\Delta_{0}y}+\dots$ (10)
From here it is simple to conclude that
$\sigma_{in}(Y)=n_{0}^{2}~\sigma_{0}e^{\Delta_{0}Y}-n_{0}^{4}\frac{a_{2}\sigma_{0}^{2}}{R^{2}(Y)}~e^{2\Delta_{0}Y}~.$ (11)
The second term in (11) corresponds to the contribution of the two-reggeon cut, whose structure is almost completely fixed here by the boost invariance. The arbitrary coefficient $a_{2}>1$ depends on the weight of the diffractive amplitudes entering the two-reggeon emission vertex. The possible next terms in (10), corresponding to higher regge cuts, can be found in the same way by iteratively applying the boost-invariance condition to the combinations of screening terms in the expression (2) for $\sigma_{in}$.
Thus, for rare parton states we come to restrictions on their structure of the same kind as those that arise from reggeon diagrams and are defined by t-unitarity.
At the end of this section, note that at all currently available energies the dominant high-energy hadron interactions are well described by the regge approach with the soft pomeron exchange and the respective cuts. This directly corresponds to the Gauss-like parton distribution consistent with the parton frame independence.
### 2.2 Collision of black disks
Now let us consider the opposite limiting case of colliding parton clouds, when the mean parton density is very high and partons fill a transverse disk with a radius $R(y)$ depending on the particle's energy $E=me^{y}$. Then the total inelastic cross-section can be determined from purely geometrical considerations: it is defined by the area in impact parameter space corresponding to the overlap of the colliding black disks:
$\sigma_{in}(Y)=\pi\Big(R(y)+R(Y-y)\Big)^{2}~.$ (12)
Differentiating (12) with respect to $y$ gives $R^{\prime}(y)=R^{\prime}(Y-y)$ for all $0<y<Y$, so $R^{\prime}$ must be constant; hence the condition that the right side of Eq. (12) be independent of $y$ has the unique solution
$R(y)=r_{0}\cdot y+r_{1}\,.$ (13)
It is interesting that in this case we immediately come either to asymptotically constant cross-sections (when $r_{0}=0$), or to the Froissart-type behavior of cross-sections
$\sigma_{in}(Y)=\pi\big(r_{0}Y+2r_{1}\big)^{2}=\pi r_{0}^{2}Y^{2}+4\pi r_{0}r_{1}Y+4\pi r_{1}^{2}~.$ (14)
Here, in the Froissart case, the elastic cross-section is diffractive and $\sigma_{el}=\sigma_{in}$. The term $4\pi r_{0}r_{1}Y$ in (14) corresponds to diffraction generation, as is natural in the Froissart case.
### 2.3 Collision of grey parton disks
The real parton disk (even at $Y\gg 1$) cannot be absolutely black, because the parton density at any particle energy is finite. Besides that, local parton density fluctuations can lower the parton density in individual events, and this leads to a growth of the local transparency of such disks. For such parton disks the conditions of BI can lead to rather strong restrictions on the structure of “grey” parton states and their interactions.
Firstly, consider the collisions of grey disks with some constant greyness, i.e. when the mean transverse parton density is stabilized at some fixed value and does not grow with energy, so that the local disk transparency also does not change with energy. (One can expect this type of behavior in (2+1)D QCD, which is soft, if parton saturation takes place there [8].) Then it is easy to see that the condition of boost invariance cannot be fulfilled at all for such models.
In the lab frame of one particle the transparency
$T_{lab}(Y,B)=const(Y)~~~\mathrm{at}~~~Y\rightarrow\infty~,$
because at any $B$ only a finite number ($\sim 1$) of partons must penetrate through the grey parton disk of the other fast particle.
In the center-of-mass system, at the same impact parameter, a large number of partons $N_{12}$ must penetrate. For the grey disk $N_{12}\sim S_{12}(y,Y,B)$, the transverse area of the overlap region of the two disks. For a disk radius growing with $Y$, $S_{12}$ also grows; for the Froissart-type growth we have $S_{12}\sim Y^{2}$ at $B\ll R(Y)$. Then in systems close to the center of mass
$T_{scm}(Y,B)\sim e^{-cN_{12}}\sim\exp{(-cY^{2})}\rightarrow 0~,~~~~~~~~c\sim 1~.$
Therefore, the case of a grey disk with a constant (or slowly ($\sim 1$) varying) parton density should probably be excluded.
In all more or less realistic situations, such as, for example, the one expected in QCD, the parton disk can have a grey parton border even when the inner parts of the disk become almost black. In this case the average parton density can be roughly represented as
$\rho(y,x_{\bot})~\simeq~\rho_{d}(x_{\bot})~\theta(R(y)-x_{\bot})~+~\rho_{0}~\theta(x_{\bot}-R(y))f(x_{\bot})~,$ (15)
where $\rho_{d}(x_{\bot})$ describes the behavior of the parton density in the inner part of the disk, and the grey border has width $\lambda(y)\ll R(y)$. Across this border the parton density varies from a high (almost black) value to a small one. For example, it can have the form
$f(x_{\bot})~\simeq~e^{-(x_{\bot}-R(y))/\lambda(y)}~.$ (16)
For collisions with an impact parameter $B<R(Y)$, when the colliding disks overlap with their almost black parts, we can possibly have boost-invariant behavior of $\sigma_{in}$. But for collisions with $B-R(Y)\sim\lambda(Y)$, when the disks collide with their grey edges, the situation is different.
In the lab frame of one of the particles the transparency $T_{lab}=const\sim 1$, because here only a few ($\sim 1$) partons must penetrate without interaction through the grey edge of the large disk.
In an arbitrary frame, at the same impact parameter, the transparency is
$T(y,Y-y,B)\sim e^{-N_{12}(y,Y-y,B)}\sim e^{-S_{12}(y,Y-y,B)/r_{0}^{2}}~,$
where $N_{12}(y,Y-y,B)$ is the average number of parton interactions during the collision and $S_{12}(y,Y-y,B)$ is the area of the intersection region of the two disks. This region has the form of an elongated figure whose width is $\sim\lambda(y)$ and whose length is $l(y)\sim\sqrt{R(y)\lambda(y)}$ for $y\lesssim Y-y$. So, for such $B$ the intersection area of the two disks is
$S_{12}(y,Y-y,B\simeq R(Y)+\lambda)~\sim~R(y)^{1/2}\,\lambda(y)^{3/2}~.$
In the center-of-mass system this gives, for $Y\gg 1$,
$T_{scm}\sim\exp(-c(R(y)/r_{0})^{1/2})\rightarrow 0$
even for parton disks with $\lambda\sim const(Y)$, although the width of the grey border can also grow together with the disk size. (One can expect [8] that for a realistic parton disk, due to border shape fluctuations, the mean width of the grey zone grows with $Y$ as $\lambda\sim\sqrt{Y}$.) Therefore, for border collisions with such $B$ and $Y$ we have no boost invariance of $T$, and this conclusion in fact hardly depends on the explicit form of the border distribution $f(x_{\bot})$. Probably the only exception is the Gauss-type distribution of the parton density, when the whole disk has the “structure of a border”.
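The geometric estimate for $S_{12}$ quoted above can be verified with the exact circle-overlap formula. In the following illustrative sketch (the radii and overlap depth are arbitrary test values), the lens-shaped intersection area of two grazing disks indeed scales as $R^{1/2}\lambda^{3/2}$:

```python
import numpy as np

# Illustrative check of the estimate S12 ~ R^(1/2) * lambda^(3/2) for grazing
# disk collisions, using the exact intersection area of two circles.
def lens_area(r1, r2, d):
    # exact overlap area of two circles with center separation d
    a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    k = (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2)
    return a1 + a2 - 0.5 * np.sqrt(k)

lam = 1.0                                   # overlap depth, playing the role of lambda
for R in (100.0, 400.0, 1600.0):
    S12 = lens_area(R, R, 2 * R - lam)      # B = R(y) + R(Y-y) - lambda
    print(f"R = {R:6.0f}   S12 = {S12:8.3f}   S12 / R^0.5 = {S12 / np.sqrt(R):.4f}")
# S12 / sqrt(R) stays constant, confirming the R^(1/2) * lambda^(3/2) scaling.
```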
### 2.4 Particle interaction with a heavy nucleus
A slightly different type of restriction on the parton structure follows from the boost invariance if we consider the high-energy collision of a particle p (for example a proton, a pion, or any test color dipole) with a heavy nucleus $(A\gg 1)$.
To see this we compare the estimates of the transparency T in the lab frame of the nucleus and in the lab frame of p. We choose Y not very large, but such that $Y\gg\ln A$, and consider a collision at $B=0$. In fact, in this case we have a collision of p with a long ($\sim A^{1/3}$) tube of nucleons, and we want to calculate the probability of the passage of p through A without interaction.
First, consider the p $\bigotimes$ A collision in the lab frame of p. For such Y, due to the Lorentz contraction of the moving nucleus, all soft partons of the fast nucleus are placed in a thin region of longitudinal size $\sim 1/m$. If parton saturation takes place, the number of soft partons interacting with p should be almost independent of A, because all “additional” soft partons coming from different nucleons in the A-tube are absorbed by one another. Therefore, one can expect that the transparency in the p lab frame is
$T_{p}~\sim~e^{-N_{p}(Y)}~.$ (17)
On the other hand, to calculate T in the lab frame of A at the same B and Y, we must find the probability that the fast particle p penetrates without interaction through the tube of nucleons of length $\sim A^{1/3}$. Here one can expect that
$T_{A}~\sim~e^{-c(Y)A^{1/3}}\,.$ (18)
Because in such a “thought experiment” we can arbitrarily choose Y and A and the distance between the nucleons in the tube, we come to an apparent contradiction with the frame independence. This means that some constraints must be imposed on the parton dynamics. The simplest way is to suppose that there is almost no parton saturation in the A-tube. Or, on the contrary, that some kind of mechanism works which makes the interaction of a fast p particle with the nucleus otherwise dependent on A.
Possibly some indications of the causes of this inconsistency can be found if we consider the regge description of this reaction, where we can calculate $\sigma_{in}(Y,B)=1-T$ for large A and not too large Y.
If we take for a single pomeron exchange in the p $\bigotimes$ A reaction the amplitude
$v(y,b)\sim ig^{2}A^{1/3}\exp{(\Delta y-b^{2}/4\alpha^{\prime}y)}$
and first consider the simple eikonal case, which corresponds to a situation without parton saturation, we obtain for the corresponding S-matrix $S(Y,B)=\exp{(iv(Y,B))}$, and this gives for the transparency
$T(Y,B=0)~=~|S(Y,B=0)|^{2}~\sim~\exp{\Big(-2g^{2}A^{1/3}e^{\Delta Y}\Big)}~.$ (19)
The simplest way to take into account something similar to parton saturation is to include in the single pomeron exchange amplitude the pomeron cascading from the side of the A-vertices, so that from the p side the pomeron line attaches to p, while from the A side the pomeron branchings attach to many nucleons. This corresponds to the new amplitude
$v~\rightarrow~\tilde{v}~=~\frac{v}{1-i\frac{r}{g\Delta}v}~,$ (20)
where $r$ is the 3-pomeron vertex. In this case, for large $A^{1/3}$ and $B=0$, the amplitude $\tilde{v}$ is stabilized at the value $|\tilde{v}|=g\Delta/r$. Therefore the corresponding transparency approaches
$T(Y,B=0)~=~\exp{\big(-2|\tilde{v}|\big)}~=~\exp{\Big(-2g\Delta/r\Big)}~.$ (21)
Comparing the expressions (17) with (21) and (18) with (19), we see their similarity, but this unfortunately does not help to find the right answer, because a simple expression like (20) does not take into account the various pomeron interactions in the $\tilde{v}$-cascade, nor the other pomeron interactions in eikonal multipomeron diagrams. (Note that approximately the same inconsistency appears if we consider the heavy A $\bigotimes$ A interaction and compare the estimates of T in the lab frame and in the CM system.)
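To make the contrast between the two regimes concrete, the short sketch below (with arbitrary illustrative values of g, Δ, r and Y) evaluates the eikonal transparency (19) and the “saturated” transparency following from (20)-(21) as functions of $A^{1/3}$:

```python
import numpy as np

# Illustrative comparison of the eikonal transparency (19) and the saturated
# one following from (20)-(21); g, Delta, r and Y are assumed values.
g, Delta, r, Y = 1.0, 0.2, 0.3, 5.0

print(f"saturated limit exp(-2*g*Delta/r) = {np.exp(-2 * g * Delta / r):.3e}")
for A13 in (1, 2, 4, 8):                      # A^(1/3) of the nucleon tube
    v = g**2 * A13 * np.exp(Delta * Y)        # modulus of the one-pomeron amplitude at B=0
    T_eik = np.exp(-2 * v)                    # eq. (19), falls rapidly with A^(1/3)
    v_sat = v / (1 + (r / (g * Delta)) * v)   # modulus of the cascaded amplitude (20)
    T_sat = np.exp(-2 * v_sat)                # approaches the A-independent limit (21)
    print(f"A^(1/3) = {A13}:  T_eikonal = {T_eik:.3e}   T_saturated = {T_sat:.3e}")
```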
### 2.5 Possible boost-invariant parton density distributions in a grey disk
In fact, in the case of an asymptotically growing cross-section, all parton distributions corresponding to real theories like QCD will probably lead to a grey disk. It is therefore interesting to find sensible examples of parton distributions that correspond to a boost-invariant T.
Let us consider collisions of particles with some parton distribution $\rho(y,x_{\bot})$ and try to find the minimal conditions on the form of $\rho(y,x_{\bot})$ for which the cross-sections are boost-invariant.
With exponential precision the transparency can be expressed as:
$T(Y,y,B)~\sim~\exp\Big(-N(y,Y-y,B)\Big)~,$ (22)
where
$N(y,Y-y,B)=\sigma_{0}\cdot\int d^{2}b\cdot\rho(y,|b|)\cdot\rho(Y-y,|B-b|)$ (23)
is proportional to the mean number of parton scatterings when the two disks penetrate through one another during their collision at the impact parameter B.
Because the expression (23) has the same structure as (6), one can repeat here the calculation given above. Then we find that the expression (23) can be boost-invariant only for a very special Gaussian form of the parton density $\rho$ inside the disk:
$\rho(y,x_{\bot})~\sim~\rho_{0}~\frac{1}{y}~e^{\Delta y-x_{\bot}^{2}/yr_{0}^{2}}~.$ (24)
This corresponds to the distribution arising in the parton cascade when partons only split and do not join. The same answer (Eq. (9)) for $\rho(y,x_{\bot})$ was found for the rare parton systems, but here the density can be arbitrarily high. In the corresponding elastic amplitude it corresponds to a regge pole exchange with the intercept $\Delta$. In fact, the expression (24) for $\Delta>0$ corresponds again to an almost black disk (but without parton saturation!) of radius $r_{0}\sqrt{\Delta}~y$ with a thin grey border, because the parton density changes here very fast from small to large values over distances $\delta x_{\bot}\sim r_{0}/\sqrt{\Delta}$.
In the general case one must take into account that partons in the colliding disks can have different virtualities $u\sim\ln(k^{2}_{\bot}/m^{2})$, where the parton density $\rho(y,b,u)$ now has a nontrivial dependence on $u$. Partons with large $u$ are more strongly localized in transverse coordinates, and their interaction cross-sections $\sigma(u_{1},u_{2})$ usually decrease for large $u_{i}$. The expression for the transparency in the collision of two parton disks has again the form (22), where the mean number of parton interactions during the collision is given by the following generalization of (23):
$N(y,Y-y,B)=\int d^{2}b\int du_{1}du_{2}~\sigma(u_{1},u_{2})\cdot\rho(y,|b|,u_{1})\rho(Y-y,|B-b|,u_{2})~.$ (25)
In this case the restrictions on the form of $\rho(y,b,u)$ coming from the frame independence condition $(\partial/\partial y)N(y,Y-y,B)=0$ are not as strong as for (23)-(24).
If the parton cross-sections that enter (25) can be approximately factorized as
$\sigma(u_{1},u_{2})\sim\ell(u_{1})\cdot\ell(u_{2})~,$
then the condition for the boost invariance of $N$ can be reduced to the simpler equation
$\int du~\ell(u)\rho(y,b,u)~=\rho_{0}\frac{1}{y}~e^{\Delta y-b^{2}/yr_{0}^{2}}\,.$ (26)
In this case the form of $\rho(y,b,u)$ for some interesting models is again almost completely fixed. An example is the superposition of grey saturated disks with different virtualities,
$\rho(y,b,u)\sim\varphi(y,u)~\theta(r_{1}\chi(y)-bu^{a})~,$
such that the mean radii of these disks, $r_{1}\chi(y)/u^{a}$, decrease with growth of u. (In QCD the radii of hard subdisks can grow as $\sim y/\sqrt{u}$, and this corresponds to $a=2$.) Here, from equation (26), one can find the explicit expression for
$\varphi(y,u)~=~\varphi_{0}~\frac{e^{\Delta y}}{\ell(u)~u^{2a}~y}~\exp{\Big(-c_{2}\frac{\chi^{2}(y)}{y~u^{2a}}\Big)}~,$ (27)
where $\varphi_{0},~a,~\Delta,~c_{2}$ and the functions $\ell(u),~\chi(y)$ can be chosen arbitrarily. If we choose $\chi(y)=\chi_{0}y$, so as to have the Froissart-type growth of the disk radius, we obtain from (27) for the disk density
$\varphi(y,u)~\sim~\frac{e^{\Delta y}}{\ell(u)u^{2a}y}~\exp{\Big(-\tilde{c_{2}}\frac{y}{u^{2a}}\Big)}~.$ (28)
For large $u$ it is natural to expect that $\ell(u)\sim e^{-cu}$, and therefore the mean density of hard subdisks will grow with u and y.
### 2.6 Corrections to the mean picture from big fluctuations in the colliding states
To discuss whether the boost invariance can be somehow restored also when the mean parton density $\rho(y,x_{\bot})$ is not of the Gauss form (24), one must take into account all essential parton configurations, including those that are very far from the mean one. In this case, one can hope that in different frames the main contribution to the cross-sections comes from different parton components, so as to compensate the variation of the contribution of the mean states. Especially interesting here can be the rare components of $\Psi(P)$. In the Fock state of a fast particle such rare parton configurations contain a relatively small number of partons, and therefore they can give a large contribution to the transparency and compensate the boost non-invariance of $T$ and other quantities in the mean density states. Such configurations can mainly arise due to large fluctuations in the initial stages of the parton cascade. For example, one can ask for such a parton component $|bare>$ of a fast hadron that does not contain a black disk at all and interacts only weakly (or does not interact at all). We can schematically represent such a state of a fast particle as:
$\Psi(P\rightarrow\infty)~\simeq~f_{d}~|disk>+~f_{b}~|bare>~,~~~f_{d}\gg f_{b}~,$
where $f_{b}$ is the amplitude of the rare component $|bare>$ and $|disk>$ is the superposition of “big” parton components that give the main contributions to the various cross-sections.
The probability for a fast hadron to be in the rare state is $w(y)\sim|f_{b}|^{2}$. In this case the expression for the transparency can be generalized to:
$T(y,Y-y)~\simeq~T_{mean}(y,Y-y)+\tau_{bd}\cdot\big(w(y)+w(Y-y)\big)+\tau_{bb}\cdot w(Y-y)\cdot w(y)~,$ (29)
where the transparencies of the rare component, $\tau_{bd}$ and $\tau_{bb}$, can be finite and need not decrease with growth of $y$.
The term $T_{mean}(y,Y-y)$, coming from the $|disk>\bigotimes|disk>$ interaction, is not boost-invariant: it can be $const(Y\rightarrow\infty)$ in the lab frame for a saturated (grey) disk, and very small in the CM system.
The two last terms in (29), coming from the $|bare>\cdot|disk>$ and $|bare>\cdot|bare>$ components, can dominate and so can make $T$ boost-invariant. But this is possible only if $w(y)$ is approximately constant for high $y$. Various estimates of $w(y)$ lead to a decreasing function of the type
$w(y)\sim\exp(-\gamma\cdot y)$
for the case of a growing total cross-section. This corresponds to choosing, at every rapidity stage, the evolution direction that does not increase the parton number. (Such behavior of $w(y)$ can also be found from the boost-invariance condition applied to the behavior of some hard cross-sections; see Eqs. (31)-(32).) Such behavior of $w$ leads to the expression
$T(y,Y-y)~\sim~\tau_{bd}\cdot\big(e^{-\gamma(Y-y)}+e^{-\gamma y}\big)+\tau_{bb}\cdot e^{-\gamma Y}~,$ (30)
corresponding to the collision of the rare state $|bare>$ with the other particle. Such a contribution to $T$ is $y$-dependent, and therefore frame independence cannot be restored in this way either.
### 2.7 Collision of parton disks in the case of particles moving in the same
direction
When we impose the condition of independence of the cross-sections $\sigma_{in}(Y,y,b)$ on the choice of frame (i.e., on $y$), we can choose the values of $y$ not only in the interval $0<y<Y$, i.e., between the lab and center-of-mass (CM) frames. Let us also consider frames with $y<0$ or $y>Y$, when both parton disks move in the same direction.
This, in principle, can lead to additional constraints on the amplitudes. But in this case, at first glance, paradoxes may also arise when estimating the probability of the interaction. This is especially visible in the case of a growing cross-section.
To illustrate this, let us consider the case of colliding Froissart-type disks, whose radii $R(y)$ and $R(Y-y)$ grow with the particle rapidities as $R(y)=r_{0}y$ and $R(Y-y)=r_{0}(Y-y)$, and estimate the behavior of the inelastic cross-section at a definite impact parameter, $\sigma_{in}(Y,y,B)$.
We choose $B>r_{0}Y$. In this case, when $0<y<Y$, the parton disks pass by one another without interaction, because here $B>R(y)+R(Y-y)$ and therefore $\sigma_{in}=0$. But this holds only if they move towards each other. If at the same $B$ we choose the frame so that the disks move in the same direction, with $y\gg Y\gg 1$, then the disks will overlap when one disk passes through the other, and partons from one disk can interact with partons from the other disk.
But it is essential that in such a disk interaction no new particles can be created. Indeed, if particles were created in this case, their momenta would be small ($\sim m$) in this frame. In the CM system the creation of such a particle would correspond to the creation of a particle with energy $\sim me^{y}$, where $y\gg Y$, and this is forbidden by energy-momentum conservation; so $\sigma_{in}=0$.
On the other hand, the exchange of particles between these disks with approximate momentum conservation (i.e., with the exchange of small transverse momenta) can give a contribution to their elastic scattering, which here comes also from large transverse distances ($B>R(Y)$).
The parton wave functions of these “intersecting disks” can be entangled with one another by such a mechanism, and the conversion of a pure state into a mixed one for each disk can in principle take place. There is probably no contradiction here with the parton picture, since there is no way to distinguish between low-energy partons in the wave function (1) and partons of similar energy from vacuum fluctuations.
The entanglement between the states of the disks in such collisions is proportional to their area. This suggests that these disks have entropy $\sim$ their area ($\sim$ the number of low-energy partons), i.e., $\sim y^{2}$ in this case of Froissart-type growth of cross-sections.
### 2.8 Limitations on the dynamics of hard elastic scattering
In field theory the high-energy hard elastic scattering of point-like particles usually leads to power behavior of the elastic cross-sections,
$d\sigma_{1}^{el}(s,t\simeq-s/2)/dt\sim 1/s^{a}~.$
For the scattering of particles composed of n constituents with approximately equal momenta we have
$d\sigma_{n}^{el}(s,t\sim-s/2)/dt\sim\mu^{-4(n-1)}\big(d\sigma_{1}(s/n^{2},t\sim-s/n^{2})/dt\big)^{n}~.$
But the mean state can contain a growing number of partons, and the direct application of this expression leads to a small contribution. In this case the main contribution to $d\sigma/dt$ can come from the rare parton configurations containing the minimal number of partons (when both particles are in a “bare” state). Then the cross-section in the frame where $s=m^{2}e^{Y}$ and the energies of the colliding particles are $me^{y},~me^{Y-y}$ can be represented as
$d\sigma^{el}(s,t\sim-s/2)/dt\sim\big(d\sigma_{0}(s,t\sim-s/2)/dt\big)^{n_{0}}~w(y)w(Y-y)~,$ (31)
where $w(y)$ is the probability that a particle with energy $me^{y}$ is in the bare state, and $n_{0}$ is the number of “valence” constituents in the bare state ($n_{0}\simeq 2$ for a meson, $3$ for a baryon). It follows from the boost invariance of (31) that
$w(y)\sim e^{-2cy}\,.$ (32)
This condition essentially restricts the asymptotic behavior of hard scattering and, in particular, gives information about the amplitude ($\sim\sqrt{w(y)}$) of the bare component of $\Psi(P)$.
A similar limitation follows from considering the asymptotic cross-sections of two-particle reactions with exchange of quantum numbers (such as $\pi^{-}+p~\rightarrow~\pi^{0}+n$). Here again, the dominant parton configuration contributing to such reactions must contain the minimum number of partons, so again we have the factor $w(y)w(Y-y)$ in the cross-section. Additionally, there is a factor of the type $e^{-2gy}$ connected with the probability that this parton configuration also contains a low-energy parton with the “needed” quantum numbers. Therefore, from the frame independence of the amplitudes of such reactions we also come to the condition (32). And, interpreting this in terms of the exchange of some non-vacuum reggeon, we come to an estimate of its intercept as $\alpha(0)\simeq 1-c-g$.
## 3 Summary
The main aim of this note was to illustrate that the condition of boost invariance essentially restricts the behavior of high-energy cross-sections calculated in parton approaches, and that the form of the resulting constraints is of the same type as those coming from the t-channel unitarity condition. One can therefore suppose that this similarity has much more general grounds.
Such a condition works especially effectively in the case of cross-sections growing with energy, that is, precisely when the t-unitarity conditions for amplitudes are complicated to apply, because there the multiparticle exchanges become important. In this case the resulting restrictions on the asymptotic behavior are rather strong and can, in principle, exclude some popular models.
## References
* [1] V.N. Gribov, arXiv:hep-ph/0006158
* [2] S. Brodsky, H.-C. Pauli, S. Pinsky, arXiv:hep-ph/9705477, Phys. Rept. 301 (1998) 299
* [3] M. Perry, arXiv:hep-ph/9612244
* [4] T. Heinzl, arXiv:hep-th/0008096
* [5] A. Harindranath, arXiv:hep-ph/9612244
* [6] A.B. Kaidalov, ITEP School of Physics, 1983
* [7] Y. Kovchegov, E. Levin, Quantum Chromodynamics at High Energy, Cambridge University Press, 2012
* [8] O.V. Kancheli, arXiv:1609.07657
* [9] V.N. Gribov, I.Ya. Pomeranchuk, Phys. Rev. Lett. 8, 343, 412
* [10] V.N. Gribov, I.Ya. Pomeranchuk, K.A. Ter-Martirosyan, Phys. Rev. 139B (1965) 184; V.N. Gribov, Soviet Phys. JETP 26, 414 (1968)
# The intra-cluster light as a tracer of the total matter density distribution: a view from simulations
Isaac Alonso Asensio,1,2 Claudio Dalla Vecchia,1,2 Yannick M. Bahé,3 David J. Barnes4 and Scott T. Kay5
1Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, E-38205 La Laguna,
Tenerife, Spain
2Departamento de Astrofísica, Universidad de La Laguna, Av. Astrofísico
Francisco Sánchez s/n, E-38206 La Laguna, Tenerife, Spain
3Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The
Netherlands
4Department of Physics, Kavli Institute for Astrophysics and Space Research,
Massachusetts Institute of Technology,
Cambridge, MA 02139, USA
5Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy,
School of Natural Sciences,
The University of Manchester, Manchester M13 9PL, UK. E-mail: <EMAIL_ADDRESS> (IAA)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
By using deep observations of clusters of galaxies, it has been recently found
that the projected stellar mass density closely follows the projected total
(dark and baryonic) mass density within the innermost $\sim 140$ kpc. In this
work, we aim to test these observations using the Cluster-EAGLE simulations,
comparing the projected densities inferred directly from the simulations. We
compare the iso-density contours using the procedure of Montes & Trujillo
(2019), and find that the shape of the stellar mass distribution follows that
of the total matter even more closely than observed, although their radial
profiles differ substantially. The ratio between stellar and total matter
density profiles in circular apertures shows a slope close to $-1$, with a
small dependence on the cluster’s total mass. We propose an indirect method to
calculate the halo mass and mass density profile from the radial profile of
the intra-cluster stellar mass density.
###### keywords:
galaxies: clusters: general – methods: numerical
## 1 Introduction
Ten to more than thirty percent of the stellar light of clusters of galaxies
comes from a diffuse distribution of stars emitting the so-called intra-
cluster light (ICL), the inferred fraction depending on the definition of the
border between the brightest central galaxy and the diffuse stellar component,
the radial extent at which the stellar mass distribution is integrated and the
relaxation state of the clusters (e.g., Krick & Bernstein, 2007; Gonzalez et
al., 2013; Mihos et al., 2017; Jiménez-Teja et al., 2018; Zhang et al., 2019).
This distribution is produced by the stripping of stars from galaxies
undergoing mergers and tidal interactions during their evolution in the
cluster environment (see Mihos, 2015, for a review). Due to its low surface
brightness, the observational study of the stellar population producing the
intra-cluster light has been challenging. An increasing effort towards deep
imaging of clusters of galaxies in recent years, both through individual
cluster imaging, up to $z\simeq 1.5$ (e.g., Mihos et al., 2005; Montes &
Trujillo, 2014; Burke et al., 2015; Morishita et al., 2017; Ko & Jee, 2018;
Jiménez-Teja et al., 2018; Montes & Trujillo, 2018; DeMaio et al., 2018;
DeMaio et al., 2020), and by stacking observations of multiple clusters (e.g.,
Zibetti et al., 2005; Zhang et al., 2019) has allowed new insights into the
ICL.
Figure 1: Projected density of stars (top) and matter (bottom) for three different clusters of the C-EAGLE simulations at $z=0.352$. The secondary density peak in cluster CE-21 (middle panel) can be due to a recent major merger event which has stripped stars from the interacting galaxies, or to numerical artifacts produced by SUBFIND not being able to correctly assign stellar particles to substructures.
One recent, remarkable result achieved with deep imaging is the tight
correlation between the distribution of the stellar surface density, inferred
from its surface brightness, and the surface density of the total mass,
measured by modelling the gravitational lensing signal (Montes & Trujillo,
2019, hereafter MT19). MT19 proposed that the surface density of the stellar
mass not bound to galaxies should settle in the potential well of the cluster
similarly to the dark matter. This could be used to trace the total matter
distribution of clusters within a cluster-centric distance set by the depth of
the observations. They also compared their result with total mass surface
densities inferred from the X-ray emission of the intra-cluster medium, and
concluded that this method is limited by the misalignment of the gaseous
component with respect to the dark matter and stellar mass in non-relaxed
clusters. Their quantitative analysis made use of the Modified Haussdorf
Distance (MHD) (Dubuisson & Jain, 1994) to quantify the deviation between iso-
density contours of stars and total matter. They found that, in general, the
stellar surface density has smaller MHD values than that of the intra-cluster
medium where both are compared with the iso-density contours of total mass.
In this Letter, we test this observational result with state-of-the-art
cosmological, hydrodynamic simulations of the Cluster-EAGLE project (C-EAGLE,
Barnes et al., 2017; Bahé et al., 2017). We give a brief description of the
simulations in the next section and a description of the analysis in section
3. The main results of this work are shown in section 4 and discussed in
section 5, along with some concluding remarks.
## 2 Simulations
We have used the set of 30 zoom-in cluster simulations performed within the
C-EAGLE project. The simulated clusters are uniformly distributed in the mass
range $10^{14}<M_{200}/\mathrm{M}_{\odot}<10^{15.4}$, where $M_{200}$ is the halo mass ($M_{200}$ is the mass enclosed in a sphere of radius $r_{200}$ whose mean density equals 200 times the critical density of the Universe). The simulations were performed with the EAGLE model for galaxy formation and
evolution, with the AGNdT9 calibration (Schaye et al., 2015). They provide a
physical spatial resolution of $\epsilon=0.7~{}\mathrm{kpc}$ (at $z<2.8$) and
baryonic mass resolution of $m_{\mathrm{gas}}\approx 1.81\times
10^{6}~{}\mathrm{M}_{\odot}$. For more information on the EAGLE model and its
comparison with global relations of the observed galaxy population, the reader
is referred to Schaye et al. (2015) and Crain et al. (2015).
For more details on the numerical algorithms describing photo-ionization
equilibrium cooling, star formation, stellar evolution, stellar feedback,
black hole growth and feedback, and the hydrodynamic scheme we refer the
reader to Wiersma et al. (2009a), Schaye & Dalla Vecchia (2007), Wiersma et
al. (2009b), Dalla Vecchia & Schaye (2012), Rosas-Guevara et al. (2015), and
Schaller et al. (2015), respectively.
Figure 2: Fraction of stellar mass contributing to the ICL,
$f_{\mathrm{ICL}}$, as a function of halo mass, $M_{200}$, for all clusters in
the sample. There is no evidence for a correlation with halo mass. The solid
line marks the average value of $f_{\mathrm{ICL}}$, and the dashed lines the
spread around it.
For the results presented here, we have used the particle data, friends-of-
friends and SUBFIND (Dolag et al., 2009) groups at $z=0.352$ to match the
average redshift of the Hubble Frontier-Fields clusters (Lotz et al., 2017).
Furthermore, the same analysis was performed at $z=0$, and we found no
significant difference. Throughout the paper we assume the cosmological
parameters of the C-EAGLE simulations,
$(\Omega_{0},\Omega_{\Lambda},h,n_{\mathrm{s}},\sigma_{8})=(0.307,0.693,0.6777,0.961,0.8288)$
(Planck Collaboration et al., 2014), where $\Omega_{0}$ and $\Omega_{\Lambda}$
are the matter and dark energy fractions, $h$ is the Hubble constant in units
of $100~{}\mathrm{km}\,\mathrm{Mpc}^{-1}\,\mathrm{s}^{-1}$, $n_{\mathrm{s}}$
and $\sigma_{8}$ are the spectral index and the power spectrum normalisation
used to generate the initial conditions.
In the analysis, we have used all particles belonging to the main halo of the
largest friends-of-friends group in each simulation, i.e., we excluded all
particles bound to satellite galaxies and substructures within the same
friends-of-friends group. Maps of projected stellar and total matter density
were produced with a spatial resolution of $5~{}\mathrm{kpc}$, in order to
mimic the spatial resolution employed in the analysis of the observational
data ($3\times 3~{}\mathrm{arcsec}^{2}$ at $z\simeq 0.35$). We have repeated
the analysis with higher ($3.75~{}\mathrm{kpc}$) and lower
($7.5~{}\mathrm{kpc}$) resolution without finding any remarkable difference.
The main advantage with respect to observations is that there is no need to mask the light of satellite galaxies. However, debris from tidal
interactions between galaxies will be included in the projected matter
density. Furthermore, there are biases due to SUBFIND failing to assign
stellar particles to satellites (Bahé et al., in prep).
Figure 3: Isodensity contours of the inner ($R\leq 140~{}\mathrm{kpc}$) and
outer ($R>140~{}\mathrm{kpc}$) regions (top and bottom, respectively) of total
matter (blue dotted lines) and stars (red dashed lines) for three different
clusters. Lighter colours indicate larger distances (lower densities) from the
centre.
Examples of the projected stellar and total mass density are shown in Fig. 1
for three simulated clusters of increasing virial mass. The top row
corresponds to the projected density of stars, while the bottom row shows the
density of total matter (dark and baryonic).
Uncertainties on the amount of ICL mass produced and its radial distribution
may arise from the modelling of the star formation rate and the spatial and
mass resolution of numerical simulations. The EAGLE model matches quite
accurately the observed stellar mass and luminosity functions (Schaye et al.,
2015; Trayford et al., 2015). Moreover, it reproduces the evolution of the
stellar mass function and the observationally inferred density of stars in the
universe up to high redshift ($z=7$) (Furlong et al., 2015). However, while
the reference simulation matches the observed sizes of galaxies over several
decades in stellar mass, the AGNdT9 calibration yields an offset in the
relation towards more compact galaxies (Schaye et al., 2015). This last point
seems to be relevant in the interpretation of the ICL mass fractions described
in the next section, where the inferred values are on the low side of the
distribution of those derived from observations (see references in section 1):
compact galaxies are less prone to stripping. On the other hand, Henden et al. (2019) noted that having galaxies that are too large in their simulations boosts the effect of tidal stripping, increasing the fraction of stellar mass in the ICL, and that uncertainties in galaxy sizes are the major contributors to the uncertainty in determining the fraction of mass in the ICL in simulations.
## 3 Analysis
Before describing the methodology used in the analysis of the simulation data,
we briefly discuss a consistency check for the simulated clusters. We computed
the fraction of stellar mass in the ICL, $f_{\mathrm{ICL}}$, and compared it
with expected observational and theoretical values. For simplicity, we adopted the methodology of Rudick et al. (2011). The mass fraction has been computed as the stellar mass with projected stellar density below some surface brightness threshold, $\mu$, relative to the total stellar mass within $r_{200}$. As in Rudick et al. (2011), we have converted the stellar surface density into surface brightness assuming a constant mass-to-light ratio of $5~\mathrm{M}_{\odot}\,\mathrm{L}^{-1}_{\odot}$, and set $\mu=26.5~\mathrm{mag}\,\mathrm{arcsec}^{-2}$ as the threshold.
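A minimal sketch of this measurement is given below. The threshold value follows from the numbers quoted in the text ($\mu=26.5~\mathrm{mag}\,\mathrm{arcsec}^{-2}$ corresponds to $\Sigma_{*}\approx 1.4\times 10^{6}~\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}$ for $M/L=5$); the input density map in the example is a purely illustrative toy model, not C-EAGLE data:

```python
import numpy as np

# Sketch of the ICL mass-fraction measurement: the ICL is the stellar mass in
# pixels below the density threshold corresponding to mu = 26.5 mag arcsec^-2
# (Sigma_* ~ 1.4e6 Msun kpc^-2 for M/L = 5), divided by the total stellar mass.
SIGMA_THR = 1.4e6  # Msun kpc^-2

def icl_fraction(sigma_star):
    """sigma_star: 2D map of projected stellar mass density (Msun/kpc^2),
    assumed to cover the cluster out to r200 with uniform pixels."""
    return sigma_star[sigma_star < SIGMA_THR].sum() / sigma_star.sum()

# Toy input map with 5 kpc pixels: a steep central profile plus a faint
# diffuse envelope (purely illustrative).
x = np.linspace(-1000.0, 1000.0, 401)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y) + 5.0
sigma = 1e10 * (r / 5.0) ** -3 + 2e5 * np.exp(-r / 500.0)
print(f"f_ICL = {icl_fraction(sigma):.3f}")
```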
We show in figure 2 the computed $f_{\mathrm{ICL}}$ as a function of halo mass. We find that $f_{\mathrm{ICL}}=0.091\pm 0.013$ (solid and dashed lines), with no significant correlation with the total mass of the clusters (the Pearson correlation coefficient is $0.0063$). The result is consistent with that of Rudick et al. (2011). Although the range of halo masses in our sample is rather narrower, similar fractions and the lack of correlation have also been reported by Pillepich et al. (2018), when using a definition of the ICL related to the size of the central galaxy. The result is consistent with previous work applying semi-analytical models to N-body simulations (Rudick et al., 2011; Contini et al., 2014) and with hydrodynamical cosmological simulations (Pillepich et al., 2018; Henden et al., 2019). Finally, observations using similar thresholds have reported similar mass fractions as well (Krick & Bernstein, 2007; Montes & Trujillo, 2014).
We have followed a methodology similar to MT19 to extract iso-density contours. We computed circularly averaged radial profiles of the density of the stellar and total mass. For this, we take the position of the minimum of the potential energy as the centre of the cluster (McAlpine et al., 2016). The projected densities for drawing the contours (computed with the contour function of matplotlib) were selected by interpolating the profiles at radii of 50, 75, 100, 125, $140~\mathrm{kpc}$ (the distances used by MT19) for the inner part, and of 170, 220, 300, 460, 620, 780, 940, $1100~\mathrm{kpc}$ for the outer regions, only up to $r_{200}$. At large distances from the centre of the clusters ($r>140~\mathrm{kpc}$), we down-sample the images by merging $4\times 4$ pixels, thus degrading the spatial resolution to $20~\mathrm{kpc}$, to smooth the otherwise very noisy contours. The contours of the projected densities are shown in Fig. 3, for the same three clusters as depicted in Fig. 1. The projected total mass density contours are drawn with blue dotted lines, and the projected stellar density contours with red dashed lines, where a darker colour indicates a smaller radius. The top row is a close-up view of the contours near the centre of the clusters, out to $140~\mathrm{kpc}$, whereas the contours at larger distances are shown in the bottom row.
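A sketch of this contour-selection step is shown below. Only the use of matplotlib's contour function is taken from the text; the helper function, pixel scale and toy density map are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: interpolate the circularly averaged profile at a chosen radius to
# get the density level, then extract the iso-density contour at that level
# and keep the one with the most vertices.
def contour_at_radius(sigma_map, pix_kpc, radius_kpc):
    n = sigma_map.shape[0]
    x = (np.arange(n) - n // 2) * pix_kpc
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    # circularly averaged radial profile of the map
    bins = np.arange(0.0, x.max(), pix_kpc)
    prof = np.array([sigma_map[(r >= b) & (r < b + pix_kpc)].mean() for b in bins])
    level = np.interp(radius_kpc, bins + pix_kpc / 2, prof)
    cs = plt.contour(X, Y, sigma_map, levels=[level])
    segs = cs.allsegs[0]            # all closed contours at this density level
    return max(segs, key=len)       # the contour with the most vertices

# Example with a toy elliptical density map and 5 kpc pixels
x = np.linspace(-500.0, 500.0, 201)
X, Y = np.meshgrid(x, x)
sigma = 1e8 / (1.0 + (X / 80.0) ** 2 + (Y / 60.0) ** 2)
contour = contour_at_radius(sigma, 5.0, 140.0)
print(contour.shape)                # (n_points, 2) array of x, y in kpc
```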
We measured projected radial distances from the centre of the cluster instead
of elliptical distances to the centre of the brightest central galaxy, as
usually done in observations. This simplification is not crucial to derive the
iso-density contours, as it only changes the values of density at which the
contours will be drawn. In practice, this means that the distances we use are
systematically different from those of MT19, the difference depending on the eccentricity of the brightest central galaxy, on the presence of more than one central galaxy (which we excluded from the analysis), or both. As this is only
an exploratory analysis we ignore these differences.
As in MT19, to compare the shape of the contours, we estimated the Modified Hausdorff distance (MHD) defined by Dubuisson & Jain (1994):
$d_{\mathrm{MH}}(X,Y)=\max\left(d(X,Y),d(Y,X)\right),$ (1)
where
$d(X,Y)=\frac{1}{N_{X}}\sum_{\textbf{x}\in X}\min_{\textbf{y}\in Y}\|\textbf{x}-\textbf{y}\|.$ (2)
The two samples, $X\equiv\\{\textbf{x}_{1},\textbf{x}_{2},\dots,\textbf{x}_{N_{x}}\\}$ and $Y\equiv\\{\textbf{y}_{1},\textbf{y}_{2},\dots,\textbf{y}_{N_{y}}\\}$, contain the points defining the two contours, and $\|\cdot\|$ is the Euclidean norm. As we may have several closed contours for the same density value, we select for each distance the contour composed of the largest number of segments. The selected contours are shown in Fig. 3.
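For concreteness, a minimal implementation of eqs. (1)-(2) is sketched below; the two noisy test contours are illustrative:

```python
import numpy as np

# Minimal implementation of the Modified Hausdorff Distance of
# Dubuisson & Jain (1994); X and Y are (N, 2) arrays of contour points.
def mhd(X, Y):
    # pairwise Euclidean distances between the two point sets
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d_xy = D.min(axis=1).mean()   # d(X, Y), eq. (2)
    d_yx = D.min(axis=0).mean()   # d(Y, X)
    return max(d_xy, d_yx)        # eq. (1)

# Example: two noisy circular contours of radius 100 kpc
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
c1 = 100.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
rng = np.random.default_rng(0)
c2 = c1 + rng.normal(scale=5.0, size=c1.shape)
print(f"d_MH = {mhd(c1, c2):.2f} kpc")
```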
Figure 4: Left panel. Comparison of the MHD of MT19 (in green, single
measurements with error bars) and that from the C-EAGLE simulations (in blue,
solid line). The shadows indicate the 1-$\sigma$ region for each method. A
small scatter in the radial distance of the MT19 data has been added for
clarity. Right panel. Histogram of $\zeta$ computed from all the contours
taken inside the virial radius of each C-EAGLE cluster. The vertical (green)
solid line represents the mean value of $\zeta$ obtained by MT19, embedded in
its 1-$\sigma$ region. The dotted, vertical line indicates their lowest value.
When measuring the MHD close to the virial radius of the clusters, we would expect an increase of its value, as the outskirts of clusters are not dynamically relaxed and fewer stellar particles populate them, producing noisier contours. In order to compare the MHD across different distances, we
define the relative MHD as
$\zeta=\frac{d_{\mathrm{MH}}(r)}{r}\,,$ (3)
where $r$ is the distance at which the iso-density contours have been
computed. This way, we are measuring deviations as a fraction of the distance.
We find that this definition removes almost entirely the correlation with
distance.
## 4 Results
In the left panel of Fig. 4, we show with the blue, solid line the mean value of $d_{\mathrm{MH}}$; the shaded area depicts the 1-$\sigma$ confidence interval. We overplotted the MHDs calculated by MT19, as well as their 1-$\sigma$ area, in green. For the sake of clarity, observational points for individual clusters are slightly displaced along the x-axis. From that panel, we can highlight that:
1. the $d_{\mathrm{MH}}$ from both simulations and observations are of the same order of magnitude;
2. they show the same trends with radius;
3. simulations have a $\sim 50\%$ lower $d_{\mathrm{MH}}$ than observations, with smaller scatter.
As $d_{\mathrm{MH}}$ increases monotonically with the distance at which it is
computed, we introduced the relative MHD, $\zeta$, to obtain a distance-free
similarity measurement. We show in Fig. 4 (right panel) the distribution of
$\zeta$ for all contours and clusters, in blue, and the $\zeta$ extracted from
MT19’s data, in green. Most of the values of $\zeta$ are lower than those
observed: 96 percent of the relative MHDs are below the mean observed value.
The shape of the distribution is remarkably close to a Gaussian distribution
in logarithmic space, with mean $\langle\zeta\rangle=0.107$ and dispersion
$\sigma_{\zeta}=0.080$, indicating that $\zeta$ is a solid, scale-free
estimate of the similarity of contours at any cluster-centric distance.
Figure 5: Left panel. Stellar (solid lines) and total matter (dashed lines) surface density profiles from the particles of the main halo of the cluster. We consider only the ICL mass (see text), i.e., only the particles not bound to any substructure. The dashed line is the threshold used for computing the ICL mass fraction (i.e., $\mu=26.5~\mathrm{mag}\,\mathrm{arcsec}^{-2}$, or $\Sigma_{*}\approx 1.4\times 10^{6}~\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}$). Right panel. The ratio between the stellar and total matter density profiles for all the clusters. The red, dashed line is the best-fit power law given in equation 4.
We would like to highlight two relevant issues with the definition of $d_{\mathrm{MH}}$ that can bias the observational values towards higher values. First, the $d_{\mathrm{MH}}$ is defined based on points and not continuous segments. This obviously simplifies the computation, but it has to be taken into account when dealing with coarse datasets, as two similar shapes can have a non-negligible $d_{\mathrm{MH}}$. Second, each point’s contribution is
defined positive and with respect to the other set of points. This provides a
distance that increases monotonically with noise (Dubuisson & Jain, 1994),
thus special care must be taken when dealing with data with low signal-to-
noise or large uncertainties. Both these points could be driving the observed
$d_{\mathrm{MH}}$ towards higher values, as masking galaxies introduces non-
continuous contours and the spatial resolution of the lensing models is
limited.
In addition to the study of the similarity between the total matter and
stellar mass distribution, we have also compared the density profiles of the
stellar component. In Fig. 5 (left panel) we show circularly averaged density
profiles of the stellar particles. They follow a power-law behaviour up to
$\sim 500~{}\mathrm{kpc}$ for the lightest halos, and $\sim 1~{}\mathrm{Mpc}$
for the more massive ones. At such distances, the interactions between
substructures are weaker, and fewer particles get ejected to the intra-cluster
medium, thus they can no longer successfully trace the potential well. In the
right panel of Fig. 5 we show the ratio between the stellar and total matter
density profiles. This ratio is close to a power law with scatter of
$0.1~{}\mathrm{dex}$ and a slope of about $-1$. We have performed a fit to all
the profiles at once, with and without normalising the radial distance using
$r_{200}$, yielding the relations:
$\log_{10}\Sigma_{\mathrm{tot}}=\log_{10}\Sigma_{*}+(1.115\pm 0.005)\log_{10}r-(0.25\pm 0.01)\,,$ (4)
$\log_{10}\Sigma_{\mathrm{tot}}=\log_{10}\Sigma_{*}+(1.085\pm 0.004)\log_{10}(r/r_{200})+(3.144\pm 0.005)\,.$ (5)
The residuals of both fits have a similar scatter: $0.147$ and $0.127$ dex for
equations 4 and 5, respectively. We recall that the AGNdT9 feedback
calibration, used in the C-EAGLE simulations, yields more compact galaxies
than the reference model for stellar masses
$M_{\star}>10^{10}~{}\mathrm{M}_{\odot}$. The less efficient tidal stripping
may therefore deposit more stellar mass closer to the centre of the cluster,
resulting in a steeper density profile. However, this bias may be of secondary
importance, at least within the central $100~{}\mathrm{kpc}$ (Bahé et al., in
prep.).
We propose a new, indirect way of measuring a cluster’s mass knowing its
stellar density profile in the innermost region. First, via deep imaging such as
that performed by MT19, the stellar density profile can be obtained and
extrapolated up to $r_{200}$ assuming a power law. Then, using equation 4 or
5, the total mass density profile can be computed. This profile can be
integrated to obtain an estimation of the cluster’s total mass.
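A minimal sketch of this procedure is given below. The power-law amplitude and slope of the stellar profile and the value of $r_{200}$ are hypothetical placeholders; only the conversion of equation (4) (with $r$ in kpc) is taken from the text:

```python
import numpy as np

# Sketch of the proposed mass estimate: extrapolate a power-law fit of the
# inner stellar surface-density profile to r200, convert it to a total matter
# profile with equation (4), and integrate in circular annuli.
# Assumed units: r in kpc, Sigma in Msun/kpc^2; A, slope and r200 are
# hypothetical values, not fits from the paper.
A, slope, r200 = 5e10, -2.2, 2000.0

def sigma_star(r):
    return A * r**slope                     # power-law fit to the inner ICL profile

def sigma_tot(r):
    # equation (4): log10 Sigma_tot = log10 Sigma_* + 1.115 log10 r - 0.25
    return sigma_star(r) * r**1.115 * 10**-0.25

r = np.logspace(1, np.log10(r200), 2000)    # from 10 kpc out to r200
M_tot = np.trapz(2 * np.pi * r * sigma_tot(r), r)
print(f"Projected total mass within r200: {M_tot:.2e} Msun")
```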
This procedure would be similar to that proposed by Pillepich et al. (2018). In that case, however, only the power-law slope of the 3D stellar mass density profile was used to infer the total halo mass; in our case we use more information (the 2D stellar density profile and equation 4 or 5), expecting less scatter in the mass estimate.
## 5 Discussion & Conclusions
We have studied the similarity of the projected stellar and total matter
distributions in the halos of massive galaxy clusters using the C-EAGLE set of
30 zoom-in simulations of clusters of galaxies. In the analysis, we considered
as constituents of the diffuse distribution of stellar mass only particles in
the friends-of-friends group that were not assigned to any substructure by the
SUBFIND algorithm.
We can summarise our results as follows:
1. we confirm the finding of MT19: the projected distribution of stars closely follows the projected distribution of the total mass, although their radial profiles differ substantially;
2. the ICL, approximated as those stars in the region where $\mu>26.5~\mathrm{mag}\,\mathrm{arcsec}^{-2}$ ($\Sigma_{*}$ below $\approx 1.4\times 10^{6}~\mathrm{M}_{\odot}\,\mathrm{kpc}^{-2}$), accounts for $\sim 10$ percent of the stellar content of the cluster within $r_{200}$; this fraction does not show any correlation with the mass of the cluster;
3. the ratio between the surface density profiles of the stellar to the total matter follows a simple power law up to the virial radius, equations 4 and 5; as the slope and amplitude of the stellar surface density profile can be extracted from observations, we propose a method to estimate the total mass surface density profile, and thus the mass of the halo;
4. the similarity between the stellar and total matter distributions in the cluster halo is even higher in the simulations than that observed by MT19 (Fig. 4); this indicates that stars closely trace the underlying gravitational potential;
5. in order to show any self-similarity, we have introduced the relative measure $\zeta=d_{\mathrm{MH}}/r$, whose distribution resembles a log-normal when using all the cluster and contour pairs; the parameter $\zeta$ could be used to study the relaxation state of a cluster; the maximum of this distribution is located at $\zeta\sim 0.1$, thus the typical $d_{\mathrm{MH}}$ is about $10\%$ of the distance at which it is computed.
The study of the spatial distribution of the ICL can be used to infer, in high
detail, the distribution of the underlying dark matter in clusters of
galaxies. Moreover, the average density profile of total matter can be
extracted, and extrapolated up to the virial radius, only by measuring the
slope of the stellar mass density profile and its normalisation close to the
centre of the cluster. This is complementary to the study of Pillepich et al.
(2018), where only the total halo mass was given as a function of the slope of
the 3D stellar density profile, with larger uncertainty.
## Acknowledgements
We are very grateful to Ignacio Trujillo and Mireia Montes for supporting this
work with useful ideas and discussions. CDV acknowledges the support of the
Spanish Ministry of Science, Innovation and Universities (MCIU) through grants
RYC-2015-18078 and PGC2018-094975-B-C22. YMB acknowledges funding from the EU
Horizon 2020 research and innovation programme under Marie Skłodowska-Curie
grant agreement 747645 (ClusterGal) and the Netherlands Organisation for
Scientific Research (NWO) through VENI grant 639.041.751. This work used the
DiRAC@Durham facility managed by the Institute for Computational Cosmology on
behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was
funded by BEIS capital funding via STFC capital grants ST/K00042X/1,
ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC
operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
## References
* Bahé et al. (2017) Bahé Y. M., et al., 2017, MNRAS, 470, 4186
* Barnes et al. (2017) Barnes D. J., et al., 2017, MNRAS, 471, 1088
* Burke et al. (2015) Burke C., Hilton M., Collins C., 2015, MNRAS, 449, 2353
* Contini et al. (2014) Contini E., De Lucia G., Villalobos Á., Borgani S., 2014, MNRAS, 437, 3787
* Crain et al. (2015) Crain R. A., et al., 2015, MNRAS, 450, 1937
* Dalla Vecchia & Schaye (2012) Dalla Vecchia C., Schaye J., 2012, MNRAS, 426, 140
* DeMaio et al. (2018) DeMaio T., Gonzalez A. H., Zabludoff A., Zaritsky D., Connor T., Donahue M., Mulchaey J. S., 2018, MNRAS, 474, 3009
* DeMaio et al. (2020) DeMaio T., et al., 2020, MNRAS, 491, 3751
* Dolag et al. (2009) Dolag K., Borgani S., Murante G., Springel V., 2009, MNRAS, 399, 497
* Dubuisson & Jain (1994) Dubuisson M.-P., Jain A., 1994, in Proceedings of 12th International Conference on Pattern Recognition. IEEE Comput. Soc. Press, pp 566–568, doi:10.1109/ICPR.1994.576361, http://ieeexplore.ieee.org/document/576361/
* Furlong et al. (2015) Furlong M., et al., 2015, MNRAS, 450, 4486
* Gonzalez et al. (2013) Gonzalez A. H., Sivanandam S., Zabludoff A. I., Zaritsky D., 2013, ApJ, 778, 14
* Henden et al. (2019) Henden N. A., Puchwein E., Sijacki D., 2019, arXiv e-prints, p. arXiv:1911.12367
* Jiménez-Teja et al. (2018) Jiménez-Teja Y., et al., 2018, ApJ, 857, 79
* Ko & Jee (2018) Ko J., Jee M. J., 2018, ApJ, 862, 95
* Krick & Bernstein (2007) Krick J. E., Bernstein R. A., 2007, AJ, 134, 466
* Lotz et al. (2017) Lotz J. M., et al., 2017, ApJ, 837, 97
* McAlpine et al. (2016) McAlpine S., et al., 2016, Astronomy and Computing, 15, 72
* Mihos (2015) Mihos J. C., 2015, in Proceedings of the International Astronomical Union. pp 27–34 (arXiv:1312.5380), doi:10.1017/S1743921315006857
* Mihos et al. (2005) Mihos J. C., Harding P., Feldmeier J., Morrison H., 2005, ApJ, 631, L41
* Mihos et al. (2017) Mihos J. C., Harding P., Feldmeier J. J., Rudick C., Janowiecki S., Morrison H., Slater C., Watkins A., 2017, ApJ, 834, 16
* Montes & Trujillo (2014) Montes M., Trujillo I., 2014, ApJ, 794
* Montes & Trujillo (2018) Montes M., Trujillo I., 2018, MNRAS, 474, 917
* Montes & Trujillo (2019) Montes M., Trujillo I., 2019, MNRAS, 482, 2838
* Morishita et al. (2017) Morishita T., Abramson L. E., Treu T., Schmidt K. B., Vulcani B., Wang X., 2017, ApJ, 846, 139
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 475, 648
* Planck Collaboration et al. (2014) Planck Collaboration et al., 2014, A&A, 571, A16
* Rosas-Guevara et al. (2015) Rosas-Guevara Y. M., et al., 2015, MNRAS, 454, 1038
* Rudick et al. (2011) Rudick C. S., Mihos J. C., McBride C. K., 2011, ApJ, 732
* Schaller et al. (2015) Schaller M., Dalla Vecchia C., Schaye J., Bower R. G., Theuns T., Crain R. A., Furlong M., McCarthy I. G., 2015, MNRAS, 454, 2277
* Schaye & Dalla Vecchia (2007) Schaye J., Dalla Vecchia C., 2007, MNRAS, 383, 1210
* Schaye et al. (2015) Schaye J., et al., 2015, MNRAS, 446, 521
* Trayford et al. (2015) Trayford J. W., et al., 2015, MNRAS, 452, 2879
* Wiersma et al. (2009a) Wiersma R. P. C., Schaye J., Smith B. D., 2009a, MNRAS, 393, 99
* Wiersma et al. (2009b) Wiersma R. P. C., Schaye J., Theuns T., Dalla Vecchia C., Tornatore L., 2009b, MNRAS, 399, 574
* Zhang et al. (2019) Zhang Y., et al., 2019, ApJ, 874, 165
* Zibetti et al. (2005) Zibetti S., White S. D. M., Schneider D. P., Brinkmann J., 2005, MNRAS, 358, 949
|
2024-09-04T02:54:58.674905 | 2020-03-08T06:14:13 | 2003.04720 | {
"authors": "Rohit Pandey",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26133",
"submitter": "Rohit Pandey",
"url": "https://arxiv.org/abs/2003.04720"
} | arxiv-papers | # The mean and variance in coupons required to complete a collection
Rohit Pandey
###### Abstract
This paper is about the Coupon collector’s problem. There are some coupons, or
baseball cards, or other plastic knick-knacks that are put into bags of chips
or under soda bottles, etc. A collector starts collecting these trinkets and
wants to form a complete collection of all possible ones. Every time they buy
the product however, they don’t know which coupon they will “collect” until
they open the product. How many coupons do they need to collect before they
complete the collection? In this paper, we explore the mean and variance of
this random variable, $N$, using various methods. Some of them work only for
the special case where the coupons have equal probabilities of being
collected, while others generalize to the case where the coupons are collected
with unequal probabilities (which is closer to a real-world scenario).
## Problems and expressions
### Problems
There are $n$ coupons in a collection. A collector has the ability to purchase
a coupon, but can’t choose which coupon he purchases. Instead, the coupon is
revealed to be coupon $i$ with probability $p_{i}$; in the simplest case,
$p_{i}=\frac{1}{n}$ for all $i$. Let $N$ be the number of coupons he’ll need
to collect before he has at least one coupon of each type. Now, we want to
solve the following problems:
P1
The expected value of $N$ when the coupons have equal probabilities of being
collected.
P2
The expected value of $N$ when the coupons have unequal probabilities of being
collected.
P3
The variance of $N$ when the coupons have equal probabilities of being
collected.
P4
The variance of $N$ when the coupons have unequal probabilities of being
collected.
P5
The density function of $N$ (meaning the entire distribution) when the coupons
have equal probabilities.
P6
The density function of $N$ (meaning the entire distribution) when the coupons
have unequal probabilities.
This paper will go over various solutions, some more powerful (can answer more
of the above questions) than others. It’s also clear that if we can solve the
even numbered problems (2,4,6) we can simply substitute
$p_{i}=\frac{1}{n}\;\;\forall i$ and solve the corresponding odd numbered
problems (1,3,5) respectively.
### Expressions
In this section, we provide closed-form expressions for problems P1 through P4
and devote the rest of the paper to their derivations.
###### Theorem 1 (Expression for P1).
The expected number of coupons a collector will need to complete the
collection when each of the $n$ coupons is collected with probability
$\frac{1}{n}$ is:
$E(N)=n\sum\limits_{m=1}^{n}\frac{1}{m}$
###### Theorem 2 (Expression for P3).
The variance in the number of coupons a collector will need to complete the
collection when each of the $n$ coupons is collected with probability
$\frac{1}{n}$ is:
$V(N)=n^{2}\sum\limits_{i=1}^{n}\frac{1}{i^{2}}-n\sum\limits_{k=1}^{n}\frac{1}{k}$
###### Theorem 3 (Expression for P2).
The expected number of coupons a collector will need to complete the
collection when the probability of collecting coupon $i$ is $p_{i}$
($\sum\limits_{i=1}^{n}p_{i}=1$) is:
$E(N)=\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}$
###### Theorem 4 (Expression for P4).
The variance in the number of coupons a collector will need to complete the
collection when the probability of collecting coupon $i$ is $p_{i}$
($\sum\limits_{i=1}^{n}p_{i}=1$) is:
$V(N)=2\left(\sum\limits_{j}\frac{1}{p_{j}^{2}}-\sum\limits_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}\right)\\
-\left(\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}\right)^{2}\\
-\left(\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}\right)$
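Before deriving these expressions, they are easy to check empirically in the
equal-probability case. The following Monte Carlo sketch (ours, not from any
reference) compares simulated estimates against the closed forms of theorems 1
and 2:

```python
import random

def collect_all(n):
    """Draw uniform coupons until all n types are seen; return the number of draws."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, trials = 10, 100_000
samples = [collect_all(n) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)

H = sum(1 / k for k in range(1, n + 1))        # harmonic number H_n
H2 = sum(1 / k ** 2 for k in range(1, n + 1))  # second-order harmonic number
print(f"E(N): simulated {mean:.2f}, theorem 1 gives {n * H:.2f}")
print(f"V(N): simulated {var:.2f}, theorem 2 gives {n ** 2 * H2 - n * H:.2f}")
```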
## 1 A sum of geometric random variables
### 1.1 Proof 1 of theorem 1
Consider a state where the collector has already collected $m$ distinct
coupons. How many more coupons does he need to collect to get to $m+1$? Let
this be represented by the random variable, $N_{m}$. Then, if the total number
of coupons needed is $N$, we have:
$N=\sum\limits_{m=0}^{n-1}N_{m}$
Every coupon collected from here is like a coin toss where with probability
$\frac{m}{n}$, the collector hits a coupon he already has and makes no
progress. With probability $\frac{n-m}{n}$, he collects a new coupon. So, this
becomes a geometric random variable with $p=\frac{n-m}{n}$. We know that a
geometric random variable has a mean $\frac{1}{p}$ and variance
$\frac{1-p}{p^{2}}$. Hence,
$E(N_{m})=\frac{n}{n-m}$
Taking expectations and summing over $m$ we have:
$E(N)=\sum\limits_{m=0}^{n-1}E(N_{m})=\sum\limits_{m=0}^{n-1}\frac{n}{n-m}=n\sum\limits_{m=0}^{n-1}\frac{1}{n-m}$
Substituting $k=n-m$ we get:
$E(N)=n\sum\limits_{k=1}^{n}\frac{1}{k}$
### 1.2 Proof 1 of theorem 2
Since the random variables $N_{m}$ are independent, the variance of their sum
is equal to the sum of their variances. So, proceeding as in section 1.1, the
variance $V(N)$ can be calculated:
$V(N)=n^{2}\sum\limits_{i=1}^{n}\frac{1}{i^{2}}-n\sum\limits_{k=1}^{n}\frac{1}{k}$
## 2 Maximum of minimums identity
With this approach, we can prove theorems 1 and 3.
### 2.1 Proof 1 of theorem 3
Let $N_{j}$ be the number of coupons to be collected before we see the first
coupon of type $j$ and $N$ the number of coupons until all are collected. We
have:
$N=\max_{1\leq j\leq n}N_{j}$
In conjunction with the maximum of minimums identity we get:
$N=\sum_{j}N_{j}-\sum_{1\leq j<k\leq n}\min(N_{j},N_{k})+\sum_{1\leq j<k<i\leq n}\min(N_{j},N_{k},N_{i})-\dots$ (1)
and the fact that the minimum of the $N_{j}$ over any subset $J$ of coupons is
a geometric random variable with parameter $p=\sum_{j\in J}p_{j}$ leads to the
result of theorem 3, and from there we can substitute
$p_{j}=\frac{1}{n}\;\forall j$ to get the result of theorem 1:
$E(N)=n\sum\limits_{k=1}^{n}\frac{1}{k}$
Note that it’s not easy to get the variance $V(N)$ with this approach because
the terms in equation 1 are not independent.
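As a sanity check on theorem 3, the inclusion-exclusion sum can also be
evaluated directly by enumerating the nonempty subsets of coupons. A small
sketch of ours (exponential in $n$, so only for small collections):

```python
from itertools import combinations

def expected_coupons(p):
    """E(N) via inclusion-exclusion over all nonempty subsets of coupon probabilities."""
    total = 0.0
    for r in range(1, len(p) + 1):
        for subset in combinations(p, r):
            total += (-1) ** (r - 1) / sum(subset)
    return total

print(expected_coupons([0.5, 0.3, 0.2]))  # unequal probabilities
# Equal probabilities agree with theorem 1's n*H_n:
print(expected_coupons([0.25] * 4), 4 * sum(1 / k for k in range(1, 5)))
```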
## 3 A recurrence
With this approach, we can prove theorems 1 and 2.
Consider a state where the collector has $m$ coupons in his collection. Let
$T_{m}$ be the number of coupons needed to complete the collection. If the
total coupons he needs to collect to complete the collection is $N$, we then
have:
$N=T_{0}$
Now, we could observe that (the $N_{m}$ are the variables defined in section
1):
$N_{m}=T_{m}-T_{m+1}$
and summing over all $m$ (and noting that $T_{n}=0$) leads us to:
$T_{0}=\sum_{m}N_{m}$
and this leads to the approach in section 1 which makes the problem much
easier to solve. Alternately, we can continue working with the $T_{m}$’s and
construct a recurrence. Consider what happens when the collector has $m$
coupons and he collects one more. With probability $\frac{m}{n}$, he fails to
add a new coupon and is back to where he started, making no progress. Let
$I(\frac{m}{n})$ be a Bernoulli random variable with $p=\frac{m}{n}$. We then
have the expression:
$T_{m}=1+I\left(\frac{m}{n}\right)T_{m}^{\prime}+\left(1-I\left(\frac{m}{n}\right)\right)T_{m+1}$
(2)
where $T_{m}^{\prime}$ is an independent copy of $T_{m}$ (i.e., i.i.d. with it).
### 3.1 Proof 2 of theorem 1
Taking expectations on both sides,
$E(T_{m})=1+\frac{m}{n}E(T_{m})+\frac{n-m}{n}E(T_{m+1})$
$E(T_{m})\left(1-\frac{m}{n}\right)=1+\left(1-\frac{m}{n}\right)E(T_{m+1})$
$E(T_{m})-E(T_{m+1})=\frac{n}{n-m}$
As noted before, the L.H.S. is simply $E(N_{m})$ as defined in section 1. In
general we have,
$\sum\limits_{m=k}^{n-1}E(T_{m})-\sum\limits_{m=k}^{n-1}E(T_{m+1})=\sum\limits_{m=k}^{n-1}\frac{n}{n-m}$
Noting that $T_{n}=0$ we have,
$E(T_{k})=\sum\limits_{m=k}^{n-1}\frac{n}{n-m}$
And letting $m=n-k$:
$E(T_{n-m})=n\sum\limits_{k=1}^{m}\frac{1}{k}$
We’re interested in $T_{0}$, so let’s substitute $m=n$ in the equation above.
$E(T_{0})=n\sum\limits_{k=1}^{n}\frac{1}{k}$
### 3.2 Proof 2 of theorem 2
Now, let’s try and find the variance, $V(N)=V(T_{0})$. Let’s square both sides
of equation (2). To make the algebra easier, let’s re-arrange and note that
$I(\frac{m}{n})(1-I(\frac{m}{n}))=I(\frac{m}{n})-I(\frac{m}{n})^{2}=0$.
$(T_{m}-1)^{2}=I\left(\frac{m}{n}\right)^{2}T_{m}^{\prime 2}+\left(1+I\left(\frac{m}{n}\right)^{2}-2I\left(\frac{m}{n}\right)\right)T_{m+1}^{2}$
Now, note the following property of Bernoulli random variables:
$I(\frac{m}{n})^{2}=I(\frac{m}{n})$. This means:
$T_{m}^{2}-2T_{m}+1=I\left(\frac{m}{n}\right)T_{m}^{\prime
2}+(1-I\left(\frac{m}{n}\right))T_{m+1}^{2}$
We have to be careful here to note which random variables are i.i.d. and which
are identical.
Taking expectation and doing some algebra gives us,
$\left(1-\frac{m}{n}\right)E(T_{m}^{2})=2E(T_{m})+\left(1-\frac{m}{n}\right)E(T_{m+1}^{2})-1$
$E(T_{m}^{2})-E(T_{m+1}^{2})=2E(T_{m})\frac{n}{n-m}-\frac{n}{n-m}$
$\sum\limits_{m=0}^{n-1}E(T_{m}^{2})-\sum\limits_{m=0}^{n-1}E(T_{m+1}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$
$E(T_{0}^{2})-E(T_{n}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$
But $T_{n}=0$, and from section 3.1,
$E(T_{m})=n\sum\limits_{k=1}^{n-m}\frac{1}{k}$. So we get:
$E(T_{0}^{2})=\sum\limits_{m=0}^{n-1}2E(T_{m})\frac{n}{n-m}-\sum\limits_{m=0}^{n-1}\frac{n}{n-m}$
$E(T_{0}^{2})=2n^{2}\sum\limits_{m=0}^{n-1}\frac{1}{n-m}\sum\limits_{k=1}^{n-m}\frac{1}{k}-n\sum\limits_{m=0}^{n-1}\frac{1}{n-m}$
Now, change variables $j=n-m$:
$E(T_{0}^{2})=2n^{2}\sum\limits_{j=1}^{n}\frac{1}{j}\sum\limits_{k=1}^{j}\frac{1}{k}-n\sum\limits_{j=1}^{n}\frac{1}{j}$
$E(T_{0}^{2})=2n^{2}\sum\limits_{1\leq k\leq j\leq n}\frac{1}{jk}-E(T_{0})$
This can be used in conjunction with the result of theorem 1 to get the
variance.
$V(T_{0})=2n^{2}\sum\limits_{1\leq k\leq j\leq
n}\frac{1}{jk}-E(T_{0})-E(T_{0})^{2}$
Substituting the result of theorem 1,
$V(T_{0})=2n^{2}\sum\limits_{1\leq k\leq j\leq
n}\frac{1}{jk}-n\sum\limits_{i=1}^{n}\frac{1}{i}-\left(n\sum\limits_{i=1}^{n}\frac{1}{i}\right)^{2}$
(3)
Comparing equation 3 above with the result of theorem 2 we get the easily
verifiable identity:
$2\sum_{1\leq j\leq k\leq
n}\frac{1}{jk}=\sum\limits_{i=1}^{n}\frac{1}{i^{2}}+\left(\sum\limits_{i=1}^{n}\frac{1}{i}\right)^{2}$
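This identity is easy to confirm numerically; a quick one-off sketch of ours:

```python
n = 40
lhs = 2 * sum(1 / (j * k) for k in range(1, n + 1) for j in range(1, k + 1))
rhs = sum(1 / i ** 2 for i in range(1, n + 1)) + sum(1 / i for i in range(1, n + 1)) ** 2
assert abs(lhs - rhs) < 1e-9  # holds up to floating-point error
```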
## 4 Using a Poisson process to make dependence disappear
This approach uses the Poisson process to magically concoct independent random
variables. It is the most powerful of all the approaches, since it is the only
one that allows us to solve for both the mean and the variance of the coupon
collector’s problem in the general case of coupons having unequal
probabilities (and for higher moments as well). It is hence able to solve
problems P1 through P4.
In example 5.17 of [1], the Coupon collector’s problem is tackled for the
general case where the probability of drawing coupon $j$ is given by $p_{j}$
and of course, $\sum\limits_{j}p_{j}=1$.
Now, he imagines that the collector collects the coupons in accordance with a
Poisson process with rate $\lambda=1$. Furthermore, every coupon that arrives
is of type $j$ with probability $p_{j}$.
Now, he defines $X_{j}$ as the first time a coupon of type $j$ is observed,
noting that coupons of type $j$ then arrive according to a Poisson process
with rate $p_{j}$.
We’re interested in the time it takes to collect all coupons, $X$ (eventually,
we’re interested in the number of coupons to be collected, $N$). So we get:
$X=\max_{1\leq j\leq n}X_{j}$
Note that if we denote $N_{j}$ as the number of coupons to be collected before
the first coupon of type $j$ is seen, we also have for the number needed to
collect all coupons, $N$:
$N=\max_{1\leq j\leq n}N_{j}$
This equation is less useful since the $N_{j}$ are not independent. It can
still be used to get the mean (see section 2), but trying to get the variance
with this approach is considerably more challenging due to this lack of
independence of the underlying random variables (they are positively
correlated).
But the incredible fact that the $X_{j}$ are independent allows us to get:
$F_{X}(t)=P(X<t)=P(X_{j}<t\;\forall\;j)=\prod\limits_{j=1}^{n}(1-e^{-p_{j}t})$
(4)
### 4.1 Proof 2 of theorem 3
Now, Ross uses the expression: $E(X)=\int\limits_{0}^{\infty}S_{X}(t)dt$,
where $S_{X}(t)$ is the survival function to get:
$E(X)=\int\limits_{0}^{\infty}\left(1-\prod\limits_{j=1}^{n}(1-e^{-p_{j}t})\right)dt$
$=\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}$
and this proves the result of theorem 3.
### 4.2 Proof 4 of theorem 1
In the special case of all coupons having equal probabilities of being
collected we have: $p_{j}=\frac{1}{n}\forall j$
Substituting in the equation above we get:
$E(X)=n\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k}$ (5)
Let’s solve a general version of the binomial sum in equation 5.
###### Proposition 5.
We have the following binomial sum:
$\sum_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{r}}=\sum_{1\leq i_{1}\leq i_{2}\leq\dots\leq i_{r}\leq n}\frac{1}{i_{1}i_{2}\dots i_{r}}$
###### Proof.
Using the Binomial theorem:
$\frac{1-(1-t)^{n}}{t}=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose k}}{t^{k-1}}$
Integrate both sides from $0$ to $x$.
$\int\limits_{0}^{x}\frac{1-(1-t)^{n}}{t}dt=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose k}}\frac{x^{k}}{k}$
For the LHS, let $1-t=u$
$\int\limits_{1}^{1-x}\frac{1-(u)^{n}}{1-u}(-du)=\sum\limits_{k=1}^{n}(-1)^{k-1}{{n\choose
k}}\frac{x^{k}}{k}$
$\frac{\sum\limits_{k=1}^{n}\frac{1-(1-x)^{k}}{k}}{x}=\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose
k}}{k}x^{k-1}$
Integrating both sides from $0$ to $1$, we get:
$\sum\limits_{k=1}^{n}\frac{1}{k}\int\limits_{0}^{1}\frac{1-(1-x)^{k}}{x}dx=\sum\frac{{n\choose
k}}{k^{2}}(-1)^{k-1}$
Substituting $1-x=t$ in the integral and expanding the geometric series we
get:
$\sum\limits_{k=1}^{n}\frac{1}{k}\sum\limits_{j=1}^{k}\frac{1}{j}=\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{2}}=\sum\limits_{1\leq j\leq k\leq n}\frac{1}{jk}$
This can very easily be extended to $k^{r}$ in the denominator:
$\sum_{k=1}^{n}(-1)^{k-1}\frac{{n\choose k}}{k^{r}}=\sum_{1\leq i_{1}\leq i_{2}\leq\dots\leq i_{r}\leq n}\frac{1}{i_{1}i_{2}\dots i_{r}}$ (6)
∎
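Proposition 5 is easy to spot-check numerically; in the sketch below (ours),
`combinations_with_replacement` enumerates exactly the non-decreasing index
tuples $i_{1}\leq\dots\leq i_{r}$:

```python
from math import comb, prod
from itertools import combinations_with_replacement

def binomial_side(n, r):
    return sum((-1) ** (k - 1) * comb(n, k) / k ** r for k in range(1, n + 1))

def harmonic_side(n, r):
    # sum over 1 <= i_1 <= ... <= i_r <= n of 1/(i_1 * ... * i_r)
    return sum(1 / prod(idx)
               for idx in combinations_with_replacement(range(1, n + 1), r))

for n in range(1, 8):
    for r in range(1, 4):
        assert abs(binomial_side(n, r) - harmonic_side(n, r)) < 1e-9
```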
Substituting $r=1$ in equation 6 and equation 5 we have,
$E(X)=n\sum\limits_{k=1}^{n}\frac{1}{k}$
Further, Ross shows that $E(N)=E(X)$ using the law of total expectation.
First, he notes,
$E(X|N=k)=kE(T_{i})$
where the $T_{i}$ are the inter-arrival times for coupon arrivals. Since these
are assumed to be exponential with rate 1,
$E(X|N)=N$
Taking expectations on both sides and using the law of total expectation we
get:
$E(X)=E(N)$
### 4.3 Proof 1 of theorem 4
This approach can easily be extended to find $V(N)$, the variance (not covered
by Ross). We can use the following expression to get $E(X^{2})$:
$E(X^{2})=\int\limits_{0}^{\infty}2tP(X>t)dt=\int\limits_{0}^{\infty}2t\left(1-\prod\limits_{j=1}^{n}(1-e^{-p_{j}t})\right)dt$
Using the fact that $\int\limits_{0}^{\infty}te^{-pt}\,dt=\frac{1}{p^{2}}$ and
the same algebra as for $E(X)$ we get:
$\frac{E(X^{2})}{2}=\sum\frac{1}{p_{j}^{2}}-\sum_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}$
(7)
Equation 7 has given us $E(X^{2})$ but remember that we’re interested in
finding $E(N^{2})$ and from there, $V(N)$. So, we need to relate the variances
of the two random variables. Using the law of total variance we get:
$V(X)=E(V(X|N))+V(E(X|N))$
Since $E(X|N)=N$, this gives:
$V(X)=E(V(X|N))+V(N)$
Now,
$V(X|N)=NV(T_{i})$
And since $T_{i}\sim Exp(1)$, we have $V(T_{i})=1$, meaning $V(X|N)=N$.
Substituting,
$V(X)=E(N)+V(N)$
So,
$V(N)=E(X^{2})-E(N)-E(N)^{2}$ (8)
Substituting equation 7 and the result of theorem 3 into equation 8 we get:
$V(N)=2\left(\sum\limits_{j}\frac{1}{p_{j}^{2}}-\sum\limits_{i<j}\frac{1}{(p_{i}+p_{j})^{2}}+\dots+(-1)^{n-1}\frac{1}{(p_{1}+\dots+p_{n})^{2}}\right)\\
-\left(\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}\right)^{2}\\
-\left(\sum\limits_{j}\frac{1}{p_{j}}-\sum\limits_{i<j}\frac{1}{p_{i}+p_{j}}+\dots+(-1)^{n-1}\frac{1}{p_{1}+\dots+p_{n}}\right)$
(9)
The leading factor of $2$ appears because equation 7 gives $E(X^{2})/2$, so
$E(X^{2})$ is twice the first bracket.
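Equation 9 can be evaluated directly for small collections by accumulating
both inclusion-exclusion sums in one pass over the nonempty subsets; a sketch
of ours:

```python
from itertools import combinations

def mean_and_variance(p):
    """Return (E(N), V(N)) per theorem 3 and equation 9."""
    s1 = s2 = 0.0  # s1 accumulates E(N); s2 accumulates E(X^2)/2 (equation 7)
    for r in range(1, len(p) + 1):
        for subset in combinations(p, r):
            sign = (-1) ** (r - 1)
            s1 += sign / sum(subset)
            s2 += sign / sum(subset) ** 2
    return s1, 2 * s2 - s1 ** 2 - s1  # equation 8: V(N) = E(X^2) - E(N)^2 - E(N)

print(mean_and_variance([0.5, 0.5]))  # two fair coupons: expect (3.0, 2.0)
```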
### 4.4 Proof 3 of theorem 2
Now, let’s consider the special case where all coupons have an equal
probability of being selected. In other words,
$p_{j}=\frac{1}{n}\;\forall\;j$.
We get:
$\frac{E(X^{2})}{2}=n^{2}\left(\sum\limits_{k=1}^{n}(-1)^{k-1}\frac{{n\choose
k}}{k^{2}}\right)$ (10)
The binomial summation in equation 10 is the $r=2$ case of Proposition 5.
Using equations 6 and 10 we get:
$E(X^{2})=2n^{2}\left(\sum_{j=1}^{n}\sum_{k=1}^{j}\frac{1}{jk}\right)$ (11)
Using equations 11 and 8, we get the same result we got from the recurrence in
section 3, equation 3.
## Acknowledgements
I’d like to thank Mathematics Stack Exchange user Simon for encouraging me to
convert the Q&A page on this topic into a paper.
## References
* [1] Ross, S. (2010). Introduction to Probability Models, 10th ed. Elsevier.
|
2024-09-04T02:54:58.689254 | 2020-03-07T13:23:01 | 2003.04730 | {
"authors": "Rapha\\\"el Berthon, Bastien Maubert, Aniello Murano, Sasha Rubin, Moshe\n Vardi",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26134",
"submitter": "Bastien Maubert",
"url": "https://arxiv.org/abs/2003.04730"
} | arxiv-papers | # Strategy Logic with Imperfect Information
Raphaël Berthon, École Normale Supérieure de Rennes, Computer Science and
Telecommunication, Rennes, France, <EMAIL_ADDRESS>; Bastien Maubert (ORCID
0000-0002-9081-2920), Università degli Studi di Napoli “Federico II”, DIETI,
Naples, Italy, <EMAIL_ADDRESS>; Aniello Murano, Università degli Studi di
Napoli “Federico II”, DIETI, Naples, Italy, <EMAIL_ADDRESS>; Sasha Rubin,
Università degli Studi di Napoli “Federico II”, DIETI, Naples, Italy,
<EMAIL_ADDRESS>; and Moshe Y. Vardi, Rice University, Houston, Texas, USA,
<EMAIL_ADDRESS>
(September 2018)
###### Abstract.
We introduce an extension of Strategy Logic for the imperfect-information
setting, called $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, and study
its model-checking problem. As this logic naturally captures multi-player
games with imperfect information, this problem is undecidable; but we
introduce a syntactical class of “hierarchical instances” for which,
intuitively, as one goes down the syntactic tree of the formula, strategy
quantifications are concerned with finer observations of the model, and we
prove that model-checking $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
restricted to hierarchical instances is decidable. This result, because it
allows for complex patterns of existential and universal quantification on
strategies, greatly generalises the decidability of distributed synthesis for
systems with hierarchical information. It allows us to easily derive new
decidability results concerning strategic problems under imperfect information
such as the existence of Nash equilibria, or rational synthesis.
To establish this result we go through an intermediary, “low-level” logic much
more adapted to automata techniques. $\textnormal{{QCTL}}^{*}$ is an extension
of $\textnormal{{CTL}}^{*}$ with second-order quantification over atomic
propositions that has been used to study strategic logics with perfect
information. We extend it to the imperfect information setting by
parameterising second-order quantifiers with observations. The simple syntax
of the resulting logic, $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$, allows us to provide a conceptually neat reduction of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ that separates
concerns, allowing one to forget about strategies and players and focus solely
on second-order quantification. While the model-checking problem of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is, in general,
undecidable, we identify a syntactic fragment of hierarchical formulas and
prove, using an automata-theoretic approach, that it is decidable.
We apply our result to solve complex strategic problems in the imperfect-
information setting. We first show that the existence of Nash equilibria for
deterministic strategies is decidable in games with hierarchical information.
We also introduce distributed rational synthesis, a generalisation of rational
synthesis to the imperfect-information setting. Because it can easily be
expressed in our logic, our main result provides a solution to this problem in
the case of hierarchical information.
strategic reasoning, imperfect information, perfect recall, distributed
synthesis, hierarchical information, Nash equilibria, rational synthesis
## 1\. Introduction
Temporal logics such as LTL (Pnueli, 1977) or $\textnormal{{CTL}}^{*}$
(Emerson and Halpern, 1986) are extremely successful logics that have been
studied in great detail and extended in many directions along the past
decades, notably in relation with the development of the model-checking
approach to program verification (Clarke et al., 1999). When considering
systems with multiple components such as multi-agent systems or distributed
programs, popular extensions of temporal logics are the family of so-called
_logics for strategic reasoning_ , or _strategic logics_ , which introduce
operators that can express the existence of strategies for components to
ensure that the system’s executions satisfy certain temporal properties.
A foundational logic in this family is Alternating-time Temporal Logic (ATL)
(Alur et al., 2002). It extends $\textnormal{{CTL}}^{*}$ with a coalition
operator $\langle A\rangle\varphi$, where $A$ is a subset of components/agents
of the system, which reads as “coalition $A$ has a strategy to enforce
property $\varphi$ no matter what the other components/agents do”. This logic
is thus quite expressive, as it allows for instance to express the existence
of winning strategies in games played on graphs. However it is not well suited
to reason about other important solution concepts in game theory, such as Nash
equilibria. To address this problem Strategy Logic (SL) was introduced
(Chatterjee et al., 2010a; Mogavero et al., 2014). In SL strategies are
treated as first-order objects, thanks to strategy variables $x$ that can be
quantified upon and bound to players: $\langle\\!\langle x\rangle\\!\rangle$
reads as “there exists a strategy $x$”, and $(a,x)$ reads as “strategy $x$ is
assigned to player $a$”. This leads to a very expressive logic that can
express many solution concepts from game-theory such as best response,
existence of Nash equilibria or subgame-perfect equilibria.
Imperfect information. An essential property of realistic multi-player games
is that players often have a limited view of the system. Such imperfect
information, or partial observation, is usually captured by equipping the
models with equivalence relations $o$ (called _observations_) over the state
space, that specify indistinguishable states. Strategies are then required to
be _uniform_ , i.e., they cannot assign different moves to indistinguishable
situations. Imperfect information is known to make games computationally
harder to solve. For two-player reachability games, Reif showed in (Reif,
1984) that deciding the existence of winning strategies is Exptime -complete
for imperfect information, while it is in Ptime for perfect information. This
result has later been generalised to omega-regular objectives (Berwanger et
al., 2010; Doyen and Raskin, 2011), and adapted to the setting of program
synthesis from temporal specifications (Pnueli and Rosner, 1989; Kupferman and
Vardi, 1999). In the case of multiple players/components/agents, which
interests us here, the situation is even worse: the existence of distributed
winning strategies is undecidable already for two players with incomparable
observation trying to enforce some reachability objective in the presence of
an adversarial third player (Peterson and Reif, 1979), and a similar result
was also proved in the framework of distributed synthesis (Pnueli and Rosner,
1990). Since then, the formal-methods community has spent much effort finding
restrictions and variations that ensure decidability (Kupferman and Vardi,
2001; Pnueli and Rosner, 1990; Gastin et al., 2009; Peterson et al., 2002;
Finkbeiner and Schewe, 2005; Pinchinat and Riedweg, 2005; Schewe and
Finkbeiner, 2007; Berwanger et al., 2018). The common thread in these
approaches is hierarchical information: players can be totally ordered
according to how well they observe the game. Another line of works establishes
that decidability can be retained by forbidding private communication, i.e.,
by considering variants around the idea that all new information should be
public (van der Meyden and Vardi, 1998; van der Meyden and Wilke, 2005;
Ramanujam and Simon, 2010; Belardinelli et al., 2017b, a; Bouyer, 2018).
Strategy Logic with imperfect information. We propose an extension of Strategy
Logic to the imperfect-information setting, which we call
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. The first step is to choose
how to introduce imperfect information in the logic. In the formal-methods
literature it is typical to associate observations to players. In
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, instead, we associate
observations to strategies: the strategy quantifier $\langle\\!\langle
x\rangle\\!\rangle{}$ from SL is now parameterised by observation $o$, written
$\langle\\!\langle x\rangle\\!\rangle^{o}$. This novelty allows one to
express, in the logic, that a player’s observation changes over time, to
capture for instance the loss of a sensor resulting in a diminished
observation power. We also add to our logic
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ the outcome quantifier ${\bf
A}$ from Branching-time Strategy Logic (BSL) (Knight and Maubert, 2019), which
quantifies on outcomes of strategies currently used by the agents, and the
unbinding operator $(a,\operatorname{?})$, which frees an agent from her
current strategy. This does not increase the expressivity of the logic but
presents advantages that we discuss in Section 2.2. For instance it allows us
to naturally consider nondeterministic strategies (Strategy Logic only
considers deterministic ones), which in turn allows us to capture module
checking, the extension of model checking to open systems (Kupferman et al.,
2001; Jamroga and Murano, 2014, 2015).
The logic $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is very powerful:
it is an extension of SL (which considers perfect information), and of the
imperfect-information strategic logics
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ (Bulling and Jamroga,
2014) and $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$
(Laroussinie et al., 2015). As already mentioned,
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express the distributed
synthesis problem (Pnueli and Rosner, 1990). This problem asks whether there
are strategies for components $a_{1},\dots,a_{n}$ of a distributed system to
enforce some property given as an LTL formula $\psi$ against all behaviours of
the environment. This can be expressed by the
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula
$\Phi_{\textsc{Synth}}:=\langle\\!\langle
x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle
x_{n}\rangle\\!\rangle^{o_{n}}(a_{1},x_{1})\dots(a_{n},x_{n}){\bf A}\psi$,
where $o_{i}$ represents the local view of component $a_{i}$. Also,
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express more complicated
specifications by alternating quantifiers, binding the same strategy to
different agents and rebinding (these are inherited from SL), as well as
changing observations. For instance, it can express the existence of Nash
equilibria.
Main result. Of course, the high expressivity of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ comes at a cost from a
computational complexity point of view. Its satisfiability problem is
undecidable (this is already true of SL), and so is its model-checking problem
(this is already true of $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize
i,R}}$ even for the single formula $\langle\\{a,b\\}\rangle{\bf F}p$ (Dima and
Tiplea, 2011), which means that agents $a$ and $b$ have a strategy profile to
reach a situation where $p$ holds). We mentioned that the two main settings in
which decidability is retrieved for distributed synthesis are hierarchical
information and public actions. We extend the first approach to the setting of
strategic logics by introducing a syntactic class of “hierarchical instances”
of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, i.e., formula/model
pairs, and proving that the model-checking problem on this class of instances
is decidable. Intuitively, an instance of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is hierarchical if, as one
goes down the syntactic tree of the formula, the observations annotating
strategy quantifications can only become finer. Although the class of
hierarchical instances refers not only to the syntax of the logic but also to
the model, the class is syntactical in the sense that it depends only on the
structure of the formula and the observations in the model. Moreover, it is
straightforward to check (in linear time) whether an instance is hierarchical
or not.
Applications. Because the syntax of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ allows for arbitrary
alternations of quantifiers in the formulas, our decidability result for
hierarchical instances allows one to decide strategic problems more involved
than module checking and distributed synthesis. For instance, we show in
Section 7 how one can apply our result to establish that the existence of Nash
equilibria is decidable in games with imperfect information, in the case of
hierarchical observations and deterministic strategies. This problem is
relevant as Nash equilibria do not always exist in games with imperfect
information (Filiot et al., 2018). We then consider the problem of rational
synthesis (Fisman et al., 2010; Kupferman et al., 2016; Condurache et al.,
2016; Filiot et al., 2018), both in its cooperative and non-cooperative
variants. We introduce the generalisations of these problems to the case of
imperfect information, and call them cooperative and non-cooperative _rational
distributed synthesis_. We then apply again our main result to establish that
they are decidable in hierarchical systems for deterministic strategies. For
the non-cooperative variant, we need the additional assumption that the
environment is at least as informed as the system. This is the case for
example when one ignores the actual observation power of the environment, and
considers that it plays with perfect information. Doing so yields systems that
are robust to any observation power the environment may have. As Reif puts it,
this amounts to synthesising strategies that are winning even if the opponent
“cheats” and uses information it is not supposed to have access to (Reif,
1984).
Approach. In order to solve the model-checking problem for
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ we introduce an intermediate
logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, an extension to
the imperfect-information setting of $\textnormal{{QCTL}}^{*}$ (Laroussinie
and Markey, 2014), itself an extension of $\textnormal{{CTL}}^{*}$ by second-
order quantifiers over atoms. This is a low-level logic that does not mention
strategies and into which one can effectively compile instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. States of the models of the
logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ have internal
structure, much like the multi-player game structures from (Peterson et al.,
2001) and distributed systems (Halpern and Vardi, 1989). Model-checking
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is also undecidable
(indeed, we show how to reduce from the MSO-theory of the binary tree extended
with the equal-length predicate, known to be undecidable (Läuchli and Savioz,
1987)). We introduce the syntactical class
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ of
hierarchical formulas as those in which innermost quantifiers observe more
than outermost quantifiers, and prove that model-checking is decidable using
an extension of the automata-theoretic approach for branching-time logics. We
provide a reduction from model checking
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to model checking
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ that preserves being
hierarchical, thus establishing our main contribution, i.e., that model
checking the hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is decidable.
Complexity. To establish the precise complexity of the problems we solve, we
introduce a new measure on formulas called _simulation depth_. This measure
resembles the notion of alternation depth (see, e.g., (Mogavero et al.,
2014)), which counts alternations between existential and universal strategy
(or second-order) quantifications. But instead of merely counting alternations
between such operators, simulation depth reflects the underlying automata
operations required to treat formulas, while remaining a purely syntactical
notion. We prove that the model-checking problem for the hierarchical fragment
of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ are both $(k+1)$-Exptime
-complete for formulas of simulation depth at most $k$. Already for the
perfect-information fragment, this result is more precise than what was
previously known. Indeed, precise upper bounds based on alternation depth were
known for syntactic fragments of SL but not for the full logic (Mogavero et
al., 2014).
Related work. The literature on imperfect information in formal methods and
artificial intelligence is very vast. Imperfect information has been
considered in two-player games (Reif, 1984; Doyen and Raskin, 2011; Berwanger
et al., 2010), module checking (Kupferman et al., 2001; Jamroga and Murano,
2015), distributed synthesis of reactive systems (Pnueli and Rosner, 1990;
Kupferman and Vardi, 2001; Finkbeiner and Schewe, 2005) and strategies in
multiplayer games (Peterson and Reif, 1979; Peterson et al., 2002; Berwanger
et al., 2018), Nash equilibria (Ramanujam and Simon, 2010; Bouyer et al.,
2017; Bouyer, 2018), rational synthesis (Filiot et al., 2018; Gutierrez et
al., 2018), doomsday equilibria (Chatterjee et al., 2017), admissible
strategies (Brenguier et al., 2017), quantitative objectives (Degorre et al.,
2010; Pérez, 2017), and more, some of which we detail below.
Limited alternation of strategy quantification was studied in (Chatterjee and
Doyen, 2014a), in which several decidability results are proved for two and
three alternations of existential and universal quantifiers. Except for one
where the first player has perfect information, all the problems solved in
this work are hierarchical instances, and are thus particular cases of our
main result.
Quantified $\mu$-Calculus with partial observation is studied in (Pinchinat
and Riedweg, 2005), where the model-checking problem is solved by considering
a syntactic constraint based on hierarchical information, as we do for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. However they consider
asynchronous perfect recall, and the automata techniques they use to deal with
imperfect information cannot be used in the synchronous perfect-recall setting
that we consider in this work. Similarly the narrowing operation on tree
automata (see Section 4.1), which is crucial in our model-checking procedure,
considers synchronous perfect recall and does not seem easy to adapt to the
asynchronous setting.
A number of works have considered strategic logics with imperfect information.
Various semantics for ATL with imperfect information have been studied in,
e.g., (Jamroga and Bulling, 2011; Jamroga and van der Hoek, 2004). The model-
checking problem for these logics, which is undecidable for agents with
perfect recall (Dima and Tiplea, 2011), has been studied for agents with
bounded memory, for which decidability is recovered (Schobbens, 2004; Lomuscio
and Raimondi, 2006). An epistemic strategic logic with original operators
different from those of ATL and SL is proposed in (Huang and Van Der Meyden,
2014). It considers imperfect information strategies, but only for agents
without memory. Concerning perfect recall, which interest us in this work,
decidability results have also been obtained for ATL (Guelev et al., 2011) and
ATL with strategy context (Laroussinie et al., 2015) when agents have the same
information.
In (Knight and Maubert, 2019), a branching-time variant of SL is extended with
epistemic operators and agents with perfect recall. Strategies are not
required to be uniform in the semantics, but this requirement can be expressed
in the language. However no decidability result is provided. Another variant
of SL extended with epistemic operators and imperfect-information, perfect-
recall strategies is presented in (Belardinelli, 2015), but model checking is
not studied. The latter logic is extended in (Belardinelli et al., 2017a), in
which its model-checking problem is solved on the class of systems where all
agents’ actions are public, which is an assumption orthogonal to hierarchical
information.
The work closest to ours is (Finkbeiner and Schewe, 2010) which introduces a
logic CL in which one can encode many distributed synthesis problems. In this
logic, hierarchical information is a necessary consequence of the syntax and
semantics, and as a result its model-checking problem is decidable. However,
CL is close in spirit to our $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$, and its semantics is less intuitive than that of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Furthermore, by means of a
natural translation we derive that CL is strictly included in the hierarchical
instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ (Section 6.2).
In particular, hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express non-observable
goals, while CL cannot. When considering players that choose their own goals
it may be natural to assume that they can observe the facts that define
whether their objectives are satisfied or not. But when synthesising programs
for instance, it may be enough that their behaviours enforce the desired
properties, without them having the knowledge that it is enforced. Such non-
observable winning conditions have been studied in, e.g., (Chatterjee and
Doyen, 2010; Degorre et al., 2010; Berwanger et al., 2018).
Outline. In Section 2 we define $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ and hierarchical instances, and present some examples. In Section 3 we
define $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and its
hierarchical fragment $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$. The proof that model checking
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$ is
decidable, including the required automata preliminaries, is in Section 4. The
hierarchy-preserving translation of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ into
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is in Section 5. In
Section 6 we compare $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with
related logics, and in Section 7 we apply our main result to obtain
decidability results for various strategic problems under imperfect
information. Finally we conclude and discuss future work in Section 8.
## 2\. SL with imperfect information
In this section we introduce $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$, an extension of SL to the imperfect-information setting with
synchronous perfect-recall. Our logic presents several original features
compared to SL, which we discuss in detail in Section 2.3: we introduce an
_outcome quantifier_ akin to the path quantifier in branching-time temporal
logics, we allow for nondeterministic strategies and unbinding agents from
their strategies, and we annotate strategy quantifiers with observation
symbols which denote the information available to strategies. We first fix
some basic notations.
### 2.1. Notations
Let $\Sigma$ be an alphabet. A _finite_ (resp. _infinite_) _word_ over
$\Sigma$ is an element of $\Sigma^{*}$ (resp. $\Sigma^{\omega}$). Words are
written $w=w_{0}w_{1}w_{2}\ldots$, i.e., indexing begins with $0$. The
_length_ of a finite word $w=w_{0}w_{1}\ldots w_{n}$ is $|w|:=n+1$, and
$\mbox{last}(w):=w_{n}$ is its last letter. Given a finite (resp. infinite)
word $w$ and $0\leq i<|w|$ (resp. $i\in\mathbb{N}$), we let $w_{i}$ be the
letter at position $i$ in $w$, $w_{\leq i}$ is the prefix of $w$ that ends at
position $i$ and $w_{\geq i}$ is the suffix of $w$ that starts at position
$i$. We write $w\preccurlyeq w^{\prime}$ if $w$ is a prefix of $w^{\prime}$,
and $\textit{pref}\,(w)$ is the set of finite prefixes of word $w$. Finally,
the domain of a mapping $f$ is written $\textit{dom}(f)$, its codomain
$\textit{codom}(f)$, and for $n\in\mathbb{N}$ we let
$[n]:=\\{i\in\mathbb{N}:1\leq i\leq n\\}$.
### 2.2. Syntax
For the rest of the paper, for convenience we fix a number of parameters for
our logics and models: AP is a finite non-empty set of _atomic propositions_ ,
Ag is a finite non-empty set of _agents_ or _players_ , and Var is a finite
non-empty set of _variables_. The main novelty of our logic is that we specify
which information is available to a strategy, by annotating strategy
quantifiers $\langle\\!\langle x\rangle\\!\rangle$ with _observation symbols_
$o$ from a finite set Obs, that we also fix for the rest of the paper. When we
consider model-checking problems, these data are implicitly part of the input.
###### Definition 2.1 ($\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
Syntax).
The syntax of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ is defined by
the following grammar:
$\varphi:=\;p\mid\neg\varphi\mid\varphi\vee\varphi\mid\langle\\!\langle x\rangle\\!\rangle^{o}\varphi\mid(a,x)\varphi\mid(a,\operatorname{?})\varphi\mid{\bf E}\psi$
$\psi:=\;\varphi\mid\neg\psi\mid\psi\vee\psi\mid{\bf X}\psi\mid\psi{\bf U}\psi$
where $p\in\textnormal{AP}$, $x\in\textnormal{Var}$, $o\in\textnormal{Obs}$
and $a\in\textnormal{Ag}$.
Formulas of type $\varphi$ are called _state formulas_ , those of type $\psi$
are called _path formulas_ , and $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ consists of all the state formulas defined by the grammar.
Boolean operators and temporal operators, ${\bf X}$ (read “next”) and ${\bf
U}$ (read “until”), have the usual meaning. The _strategy quantifier_
$\langle\\!\langle x\rangle\\!\rangle^{o}$ is a first-order-like
quantification on strategies: $\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi$ reads as “there exists a strategy $x$ that
takes decisions based on observation $o$ such that $\varphi$ holds”, where $x$
is a strategy variable. The _binding operator_ $(a,x)$ assigns a strategy to
an agent, and $(a,x)\varphi$ reads as “when agent $a$ plays strategy $x$,
$\varphi$ holds”. The _unbinding operator_ $(a,\operatorname{?})$ instead
releases agent $a$ from her current strategy, if she has one, and
$(a,\operatorname{?})\varphi$ reads as “when agent $a$ is not assigned any
strategy, $\varphi$ holds”. Finally, the _outcome quantifier_ ${\bf E}$
quantifies on outcomes of strategies currently in use: ${\bf E}\psi$ reads as
“$\psi$ holds in some outcome of the strategies currently used by the
players”.
We use abbreviations $\top:=p\vee\neg p$, $\perp:=\neg\top$,
$\varphi\to\varphi^{\prime}:=\neg\varphi\vee\varphi^{\prime}$,
$\varphi\leftrightarrow\varphi^{\prime}:=(\varphi\to\varphi^{\prime})\wedge(\varphi^{\prime}\to\varphi)$
for boolean connectives, ${\bf F}\varphi:=\top{\bf U}\varphi$ (read
“eventually $\varphi$”), ${\bf G}\varphi:=\neg{\bf F}\neg\varphi$ (read
“globally $\varphi$”) for temporal operators,
$[\\![x]\\!]^{o}\varphi:=\neg\langle\\!\langle
x\rangle\\!\rangle^{o}\neg\varphi$ (read “for all strategies $x$ based on
observation $o$, $\varphi$ holds”) and ${\bf A}\psi:=\neg{\bf E}\neg\psi$
(read “all outcomes of the current strategies satisfy $\psi$”).
For every formula $\varphi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$,
we let $\textit{free}\,(\varphi)$ be the set of variables that appear free in
$\varphi$, i.e., that appear out of the scope of a strategy quantifier. A
formula $\varphi$ is a _sentence_ if $\textit{free}\,(\varphi)$ is empty.
Finally, we let the _size_ $|\varphi|$ of a formula $\varphi$ be the number of
symbols in $\varphi$.
### 2.3. Discussion on the syntax
We discuss the syntactic differences between our logic and usual Strategy
Logic.
Outcome quantifier. This quantifier was introduced in Branching-time Strategy
Logic (BSL) (Knight and Maubert, 2019), which corresponds to the perfect-
information fragment of the logic we define here. It removes a quirk of
previous definitions, in which temporal operators could only be evaluated in
contexts where all agents were assigned a strategy. The outcome quantifier,
instead, allows for evaluation of temporal properties on partial assignments.
As a result, the notions of free agents and agent-complete assignments from
previous definitions of Strategy Logic are no longer needed (see, e.g.,
(Mogavero et al., 2014)). In addition, the outcome quantifier highlights the
inherent branching-time nature of Strategy Logic: indeed, in SL, branching-
time properties can be expressed by resorting to artificial strategy
quantifications for all agents. It will also make the correspondence with
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ tighter, which will
allow us to establish the precise complexity of the problem we solve, while
the exact complexity of model checking classic SL with perfect information is
still not known. Finally, since the usual definition of SL requires that the
current strategies define a unique outcome on which linear-time temporal
operators are evaluated, only deterministic strategies were considered. The
introduction of the outcome quantifier allows us to consider nondeterministic
strategies.
Unbinding. With the possibility to evaluate temporal operators even when some
agents are not bound to any strategy, it becomes interesting to include the
unbinding operator $(a,\operatorname{?})$, introduced in (Laroussinie and
Markey, 2015) for ATL with strategy context and also present in BSL. Note that
the outcome quantifier and unbinding operator do not increase the expressivity
of SL, at the level of sentences (Knight and Maubert, 2019).
Observations. In games with imperfect information and ATL-like logics with
imperfect information, a strategy is always bound to some player, and thus it
is clear with regards to what observations it should be defined. In SL on the
other hand, strategy quantification and binding are separate. This adds
expressive power with regards to ATL by allowing, for instance, to assign the
same strategy to two different players, but it also entails that when a
quantification is made on a strategy, one does not know with regards to which
observation this strategy should be defined. We know of three ways to solve
this. One is the approach followed here, which consists in associating with
strategy quantifiers an observation power. The second solution is to abandon
the separation between quantification and binding and to use instead
quantifiers of the form $\exists_{a}$, meaning “there exists a strategy for
player $a$”, like in (Chatterjee et al., 2010b; Belardinelli, 2014): with this
operator, the strategy is immediately bound to player $a$, which indicates
with regards to which observation the strategy should be compatible. The third
one, adopted in (Belardinelli et al., 2017a), consists in requiring that a
strategy be uniform for all agents to whom it will be bound in the formula. We
chose to adopt the first solution for its simplicity and expressiveness.
Indeed the second solution limits expressiveness by disallowing, for instance,
binding the same strategy to different agents. The third solution leads to a
logic that is more expressive than the second one, but less than the first
one. Indeed, the logic that we study here can capture the logic from
(Belardinelli et al., 2017a) (assuming that models contain observations
corresponding to unions of individual observations), and in addition
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ can express changes of
agents’ observation power.
### 2.4. Semantics
The models of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ are classic
concurrent game structures extended by an interpretation for observation
symbols in Obs.
###### Definition 2.2 ($\textrm{CGS}_{\textnormal{ii}}$ ).
A _concurrent game structure with imperfect information_ (or
$\textrm{CGS}_{\textnormal{ii}}$ for short) is a tuple
$\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$ where
* •
Ac is a finite non-empty set of _actions_ ,
* •
$V$ is a finite non-empty set of _positions_ ,
* •
$E:V\times\textnormal{Ac}^{\textnormal{Ag}}\to V$ is a _transition function_ ,
* •
$\ell:V\to 2^{\textnormal{AP}}$ is a _labelling function_ ,
* •
$v_{\iota}\in V$ is an _initial position_ , and
* •
$\mathcal{O}:\textnormal{Obs}\to 2^{V\times V}$ is an _observation
interpretation_.
For $o\in\textnormal{Obs}$, $\mathcal{O}(o)$ is an equivalence relation on
positions, that we may write $\sim_{o}$. It represents what a strategy with
observation $o$ can see: $\mathcal{O}(o)$-equivalent positions are
indistinguishable to such a strategy. Also, $\ell(v)$ is the set of atomic
propositions that hold in position $v$.
We define the size $|\mathcal{G}|$ of a $\textrm{CGS}_{\textnormal{ii}}$
$\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$ as the size of
an explicit encoding of the transition function:
$|\mathcal{G}|:=|V|\times|\textnormal{Ac}|^{|\textnormal{Ag}|}\times\lceil\log(|V|)\rceil$.
We may write $v\in\mathcal{G}$ for $v\in V$.
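To make Definition 2.2 concrete, the following Python sketch (ours; all names
are illustrative, not from the paper) encodes a
$\textrm{CGS}_{\textnormal{ii}}$ with each observation relation stored as a
set of equivalence classes:

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Set, Tuple

Agent = str
Action = str
Position = int
ObsSym = str
JointAction = Tuple[Tuple[Agent, Action], ...]  # canonicalised (agent, action) pairs

@dataclass
class CGSii:
    actions: Set[Action]                                 # Ac
    positions: Set[Position]                             # V
    edges: Dict[Tuple[Position, JointAction], Position]  # transition function E
    label: Dict[Position, Set[str]]                      # labelling V -> 2^AP
    initial: Position                                    # v_iota
    obs: Dict[ObsSym, Set[FrozenSet[Position]]]          # O(o) as equivalence classes

    def step(self, v: Position, joint: Dict[Agent, Action]) -> Position:
        """Apply E to position v and a joint action (one action per agent)."""
        return self.edges[(v, tuple(sorted(joint.items())))]

    def indist(self, o: ObsSym, v: Position, w: Position) -> bool:
        """v ~_o w iff some equivalence class of O(o) contains both positions."""
        return any(v in cls and w in cls for cls in self.obs[o])
```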
We now introduce a number of notions involved in the semantics of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Consider a
$\textrm{CGS}_{\textnormal{ii}}$
$\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$.
Joint actions. In a position $v\in V$, each player $a$ chooses an action
$c_{a}\in\textnormal{Ac}$, and the game proceeds to position $E(v,\bm{c})$,
where $\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ stands for the _joint
action_ $(c_{a})_{a\in\textnormal{Ag}}$. Given a joint action
$\bm{c}=(c_{a})_{a\in\textnormal{Ag}}$ and $a\in\textnormal{Ag}$, we let
$\bm{c}_{a}$ denote $c_{a}$.
Plays. A _finite_ (resp. _infinite_) _play_ is a finite (resp. infinite) word
$\rho=v_{0}\ldots v_{n}$ (resp. $\pi=v_{0}v_{1}\ldots$) such that
$v_{0}=v_{\iota}$ and for every $i$ such that $0\leq i<|\rho|-1$ (resp. $i\geq
0$), there exists a joint action $\bm{c}$ such that $E(v_{i},\bm{c})=v_{i+1}$.
Strategies. A (nondeterministic) _strategy_ is a function $\sigma:V^{+}\to
2^{\textnormal{Ac}}\setminus\emptyset$ that maps each finite play to a
nonempty finite set of actions that the player may play. A strategy $\sigma$
is _deterministic_ if for all $\rho$, $\sigma(\rho)$ is a singleton. We let
Str denote the set of all strategies.
Assignments. An _assignment_ is a partial function
$\chi:\textnormal{Ag}\cup\textnormal{Var}\rightharpoonup\mbox{\emph{Str}}$,
assigning to each player and variable in its domain a strategy. For an
assignment $\chi$, a player $a$ and a strategy $\sigma$,
$\chi[a\mapsto\sigma]$ is the assignment of domain
$\textit{dom}(\chi)\cup\\{a\\}$ that maps $a$ to $\sigma$ and is equal to
$\chi$ on the rest of its domain, and $\chi[x\mapsto\sigma]$ is defined
similarly, where $x$ is a variable; also, $\chi[a\mapsto\operatorname{?}]$ is
the restriction of $\chi$ to domain $\textit{dom}(\chi)\setminus\\{a\\}$. In
addition, given a formula
$\varphi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, an assignment is
_variable-complete for $\varphi$_ if its domain contains all free variables of
$\varphi$.
Outcomes. For an assignment $\chi$ and a finite play $\rho$, we let
$\textnormal{Out}(\chi,\rho)$ be the set of infinite plays that start with
$\rho$ and are then extended by letting players follow the strategies assigned
by $\chi$. Formally, $\textnormal{Out}(\chi,\rho)$ is the set of plays of the
form $\rho\cdot v_{1}v_{2}\ldots$ such that for all $i\geq 0$, there exists
$\bm{c}$ such that for all $a\in\textit{dom}(\chi)\cap\textnormal{Ag}$,
$\bm{c}_{a}\in\chi(a)(\rho\cdot v_{1}\ldots v_{i})$ and
$v_{i+1}=E(v_{i},\bm{c})$, with $v_{0}=\mbox{last}(\rho)$.
Synchronous perfect recall. In this work we consider players with _synchronous
perfect recall_ , meaning that each player remembers the whole history of a
play, a classic assumption in games with imperfect information and logics of
knowledge and time. Each observation relation is thus extended to finite plays
as follows: $\rho\sim_{o}\rho^{\prime}$ if $|\rho|=|\rho^{\prime}|$ and
$\rho_{i}\sim_{o}\rho^{\prime}_{i}$ for every $i\in\\{0,\ldots,|\rho|-1\\}$.
Imperfect-information strategies. For $o\in\textnormal{Obs}$, a strategy
$\sigma$ is an _$o$ -strategy_ if $\sigma(\rho)=\sigma(\rho^{\prime})$
whenever $\rho\sim_{o}\rho^{\prime}$. The latter constraint captures the
essence of imperfect information, which is that players can base their
strategic choices only on the information available to them. For
$o\in\textnormal{Obs}$ we let $\mbox{\emph{Str}}_{o}$ be the set of all
$o$-strategies.
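Continuing the sketch above, synchronous perfect recall and uniformity reduce
to short checks. Below (again ours, purely illustrative), a candidate strategy
given as a finite table from plays to action sets is tested for being an
$o$-strategy:

```python
from itertools import combinations

def plays_equiv(g, o, rho1, rho2):
    """Synchronous perfect recall: plays are ~_o-equivalent iff they have the
    same length and are positionwise ~_o-equivalent."""
    return (len(rho1) == len(rho2)
            and all(g.indist(o, v, w) for v, w in zip(rho1, rho2)))

def is_o_strategy(g, o, sigma):
    """sigma: dict mapping play tuples to nonempty sets of actions.
    Uniformity: sigma must agree on every pair of ~_o-equivalent plays."""
    return all(sigma[r1] == sigma[r2]
               for r1, r2 in combinations(sigma, 2)
               if plays_equiv(g, o, r1, r2))
```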
###### Definition 2.3 ($\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
semantics).
The semantics of a state formula is defined on a
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$, an assignment $\chi$ that is
variable-complete for $\varphi$, and a finite play $\rho$. For a path formula
$\psi$, the finite play is replaced with an infinite play $\pi$ and an index
$i\in\mathbb{N}$. The definition by mutual induction is as follows:
$\begin{array}[]{lcl}\mathcal{G},\chi,\rho\models p&\text{ if
}&p\in\ell(\mbox{last}(\rho))\\\\[1.0pt]
\mathcal{G},\chi,\rho\models\neg\varphi&\text{ if
}&\mathcal{G},\chi,\rho\not\models\varphi\\\\[1.0pt]
\mathcal{G},\chi,\rho\models\varphi\vee\varphi^{\prime}&\text{ if
}&\mathcal{G},\chi,\rho\models\varphi\;\text{ or
}\;\mathcal{G},\chi,\rho\models\varphi^{\prime}\\\\[1.0pt]
\mathcal{G},\chi,\rho\models\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi&\text{ if
}&\exists\,\sigma\in\mbox{\emph{Str}}_{o}\;\text{ s.t.
}\;\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi\\\\[1.0pt]
\mathcal{G},\chi,\rho\models(a,x)\varphi&\text{ if
}&\mathcal{G},\chi[a\mapsto\chi(x)],\rho\models\varphi\\\\[1.0pt]
\mathcal{G},\chi,\rho\models(a,\operatorname{?})\varphi&\text{ if
}&\mathcal{G},\chi[a\mapsto\operatorname{?}],\rho\models\varphi\\\\[1.0pt]
\mathcal{G},\chi,\rho\models{\bf E}\psi&\text{ if }&\text{there exists
}\pi\in\textnormal{Out}(\chi,\rho)\text{ such that
}\mathcal{G},\chi,\pi,|\rho|-1\models\psi\\\\[5.0pt]
\mathcal{G},\chi,\pi,i\models\varphi&\text{ if }&\mathcal{G},\chi,\pi_{\leq
i}\models\varphi\\\\[1.0pt] \mathcal{G},\chi,\pi,i\models\neg\psi&\text{ if
}&\mathcal{G},\chi,\pi,i\not\models\psi\\\\[1.0pt]
\mathcal{G},\chi,\pi,i\models\psi\vee\psi^{\prime}&\text{ if
}&\mathcal{G},\chi,\pi,i\models\psi\;\text{ or
}\;\mathcal{G},\chi,\pi,i\models\psi^{\prime}\\\\[1.0pt]
\mathcal{G},\chi,\pi,i\models{\bf X}\psi&\text{ if
}&\mathcal{G},\chi,\pi,i+1\models\psi\\\\[1.0pt]
\mathcal{G},\chi,\pi,i\models\psi{\bf U}\psi^{\prime}&\text{ if
}&\exists\,j\geq i\mbox{ s.t. }\mathcal{G},\chi,\pi,j\models\psi^{\prime}\\\\
&&\text{ and }\forall\,k\text{ s.t. }i\leq
k<j,\;\mathcal{G},\chi,\pi,k\models\psi\end{array}$
###### Remark 1.
Observe that because of the semantics of the outcome quantifier, and unlike
usual definitions of SL, the meaning of an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence depends on the
assignment in which it is evaluated. For instance the
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula ${\bf A}{\bf F}p$ is
clearly a sentence, but whether $\mathcal{G},\chi,\rho\models{\bf A}{\bf F}p$
holds or not depends on which agents are bound to a strategy in $\chi$ and
what these strategies are. However, as usual, a sentence does not require an
assignment to be evaluated, and for an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence $\varphi$ we let
$\mathcal{G},\rho\models\varphi$ if $\mathcal{G},\emptyset,\rho\models\varphi$
for the empty assignment $\emptyset$, and we write $\mathcal{G}\models\varphi$
if $\mathcal{G},v_{\iota}\models\varphi$.
SL is the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
obtained by interpreting all observation symbols as the identity relation
(which models perfect information), restricting to deterministic strategies,
and considering only assignments in which each agent has a strategy (in this
case the outcome of an assignment consists of a single play; one can thus get
rid of the outcome quantifier and evaluate temporal operators in the unique
outcome of the current assignment, as usually done in SL). Also,
$\textnormal{{CTL}}^{*}$ is the fragment of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ which uses no binding,
unbinding or strategy quantification.
### 2.5. Discussion on the semantics
We now discuss some aspects of the semantics.
Evaluation on finite plays. Unlike previous definitions of Strategy Logic, we
evaluate formulas on finite plays (instead of positions), where the finite
play represents the whole history starting from the initial position of the
$\textrm{CGS}_{\textnormal{ii}}$ in which the formula is evaluated. There are
several reasons to do so. First, it allows us to define the semantics more
simply without having to resort to the notion of assignment translations.
Second, it makes it easier to see the correctness of the reduction to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, that we present in
Section 5. In SL, a strategy only has access to the history of the game
starting from the point where the strategy quantifier from which it arises has
been evaluated. In contrast, in $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ strategies have access to the whole history, starting from the initial
position. However this does not affect the semantics, in the sense that the
perfect-information fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ with deterministic strategies corresponds to SL. Indeed, when agents
have perfect information, having access to the past or not does not affect the
existence of strategies to enforce temporal properties that only concern the
future.
Players not remembering their actions. Our definition of synchronous perfect
recall only considers the sequence of positions in finite plays, and forgets
about actions taken by players. In particular, it is possible in this
definition that a player cannot distinguish between two finite plays in which
she plays different actions. This definition is standard in games with
imperfect information (van der Meyden and Wilke, 2005; Berwanger et al., 2010;
Doyen and Raskin, 2011; Berwanger et al., 2018), since remembering one’s
actions or not is indifferent for the existence of distributed winning
strategies or Nash equilibria. However it makes a difference for some more
involved solution concepts that are expressible in strategic logics such as
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. For instance it is observed
in (Bouyer, 2017, Appendix A) that some games admit subgame-perfect equilibria
only if agents remember their own past actions. Nonetheless we consider the
setting where agents do not remember their actions, as it is the most general.
Indeed, as noted in (Chatterjee and Doyen, 2014b, Remark 2.1, p.8), one can
simulate agents that remember their own actions by storing in positions of the
game the information of the last joint move played (this may create
$|\textnormal{Ac}|^{|\textnormal{Ag}|}$ copies of each position, but the
branching degree is unchanged). One can then adapt indistinguishability
relations to take actions into account. For instance, for an observation
symbol $o$ and an agent $a$, one could consider a new observation symbol
$o_{a}$ that would be interpreted in the enriched game structure as the
refinement of $\sim_{o}$ that considers two positions indistinguishable if
they are indistinguishable for $\sim_{o}$ and contain the same last action for
agent $a$. Binding agent $a$ only to strategies that use observation of the
form $o_{a}$ for some $o$ captures the fact that agent $a$ remembers her
actions.
Agents changing observation. In $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ observations are not bound to agents but to strategies. And because
agents can change their strategy thanks to the binding operator, it follows
that they can change observation, or more precisely they can successively play
with strategies that have different observations. For instance consider a
controller that observes a system through a set of $n$ sensors
$S=\\{s_{1},\ldots,s_{n}\\}$ as in, e.g., (Bittner et al., 2012). Let $o_{i}$
be the observation power provided by the set of sensors
$S\setminus\\{s_{i}\\}$ (one can think of a system where states are tuples of
local states, each sensor observing one component). Also let $o$ be the
observation power provided by the full set $S$ of sensors, and let atom
$\text{fault}_{i}$ represent the fact that a fault occurs on sensor $s_{i}$.
The formula
$\varphi:=\langle\\!\langle x\rangle\\!\rangle^{o}(a,x){\bf A}{\bf
G}\left(\text{safe}\wedge\bigwedge_{i=1}^{n}\big{(}\text{fault}_{i}\to\langle\\!\langle
x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf G}\text{\,safe}_{i}\big{)}\right)$
expresses that the controller $a$ has a strategy (which uses all sensors in
$S$) to maintain the system safe, and if a sensor is lost, it can respond by
switching to a strategy using the remaining sensors to maintain some
alternative, possibly weaker, security requirement $\text{safe}_{i}$.
### 2.6. Model checking and hierarchical instances
We now introduce the main decision problem of this paper, which is the model-
checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. An
_$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ -instance_ is a model
together with a formula, i.e., it is a pair $(\mathcal{G},\Phi)$ where
$\mathcal{G}$ is a $\textrm{CGS}_{\textnormal{ii}}$ and
$\Phi\in\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
###### Definition 2.4 (Model checking
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$).
The _model-checking problem_ for $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ is the decision problem that, given an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance
$(\mathcal{G},\Phi)$, returns ‘Yes’ if $\mathcal{G}\models\Phi$, and ‘No’
otherwise.
It is well known that deciding the existence of winning strategies in multi-
player games with imperfect information is undecidable for reachability
objectives (Peterson et al., 2001). Since this problem is easily reduced to
the model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$, we get the following result.
###### Theorem 2.5.
The model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ is undecidable.
Hierarchical instances. We now isolate a sub-problem obtained by restricting
attention to _hierarchical instances_. Intuitively, an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance
$(\mathcal{G},\Phi)$ is hierarchical if, as one goes down a path in the
syntactic tree of $\Phi$, the observations tied to quantifications become
finer.
###### Definition 2.6 (Hierarchical instances).
An $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance
$(\mathcal{G},\Phi)$ is _hierarchical_ if for every subformula
$\varphi_{1}=\langle\\!\langle y\rangle\\!\rangle^{o_{1}}\varphi^{\prime}_{1}$
of $\Phi$ and subformula $\varphi_{2}=\langle\\!\langle
x\rangle\\!\rangle^{o_{2}}\varphi^{\prime}_{2}$ of $\varphi^{\prime}_{1}$, it
holds that $\mathcal{O}(o_{2})\subseteq\mathcal{O}(o_{1})$.
If $\mathcal{O}(o_{2})\subseteq\mathcal{O}(o_{1})$ we say that $o_{2}$ is
_finer_ than $o_{1}$ in $\mathcal{G}$, and that $o_{1}$ is _coarser_ than
$o_{2}$ in $\mathcal{G}$. Intuitively, this means that a player with
observation $o_{2}$ observes game $\mathcal{G}$ at least as well as, i.e.,
knows at least as much as, a player with observation $o_{1}$.
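Checking whether an instance is hierarchical is a simple traversal of the syntactic tree, as in the following Python sketch (the tuple encoding of formulas and the name `is_hierarchical` are ours; `O` maps observation symbols to their relations, represented as sets of pairs of positions).

```python
def is_hierarchical(phi, O):
    """phi is a formula AST given as nested tuples, where a strategy
    quantifier is ('quant', o, subformula) and any other node is
    (tag, child, ...).  O maps each observation symbol to its relation,
    given as a set of pairs of positions.  The instance is hierarchical
    iff on every branch, inner quantifiers use finer (i.e. smaller)
    relations than all outer ones."""
    def walk(node, outer):          # outer: observation symbols seen above
        tag, *children = node
        if tag == 'quant':
            o, sub = children
            if any(not O[o] <= O[o1] for o1 in outer):
                return False
            return walk(sub, outer + [o])
        return all(walk(c, outer) for c in children if isinstance(c, tuple))
    return walk(phi, [])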
###### Remark 2.
If one uses the trick described in Section 2.5 to model agents that remember
their own actions, then for an agent $a$ to know at least as much as another
agent $b$ it needs to be the case that, in particular, agent $a$ observes all
actions played by agent $b$.
###### Example 2.7 (Fault-tolerant diagnosibility).
Consider the following formula from Section 2.5:
$\varphi:=\langle\\!\langle x\rangle\\!\rangle^{o}(a,x){\bf A}{\bf
G}\left(\text{safe}\wedge\bigwedge_{i=1}^{n}\big{(}\text{fault}_{i}\to\langle\\!\langle
x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf G}\text{\,safe}_{i}\big{)}\right)$
As already discussed, it expresses that the controller can react to the loss
of a sensor to keep ensuring some property of the system. Clearly, the
controller’s observation $o_{i}$ after the loss of sensor $i$ is coarser than
its original observation $o$, and thus formula $\varphi$ in such a system does
not form a hierarchical instance.
We now give an example of scenario where hierarchical instances occur
naturally.
###### Example 2.8 (Security levels).
Consider a system with different “security levels”, where higher levels have
access to more data (i.e., can observe more). Assume that the
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ is such that
$\mathcal{O}(o_{n})\subseteq\mathcal{O}(o_{n-1})\subseteq\ldots\subseteq\mathcal{O}(o_{1})$:
in other words, level $n$ has the highest security clearance, while level $1$
has the lowest. Consider that agent $a$ wants to reach some objective marked
by atom “goal”, that she starts with the lowest observation clearance $o_{1}$,
and that atomic formula “$\text{promote}_{i}$” means that the agent is granted
access to level $i$ (observe that whenever we have $\text{promote}_{i}$, we
should also have $\text{promote}_{j}$ for all $j<i$). For every $i$ we let
$\varphi_{i}(\varphi^{\prime}):=\text{goal}\vee(\text{promote}_{i}\wedge\langle\\!\langle
x\rangle\\!\rangle^{o_{i}}(a,x){\bf A}{\bf F}\varphi^{\prime})$
Now the formula
$\varphi:=\varphi_{1}(\varphi_{2}(\ldots\varphi_{n-1}(\varphi_{n}(\text{goal}))\ldots))$
means that agent $a$ can enforce her goal, possibly by first getting access to
higher security levels and using this additional observation power to reach
the goal. Because the strategy quantifications that are deeper in the formula
have access to more information, this formula forms a hierarchical instance in
$\mathcal{G}$.
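For illustration, the nesting of $\varphi$ can be built mechanically. In the hypothetical tuple encoding of the previous sketch this reads as follows, and `is_hierarchical(security_formula(n), O)` returns `True` whenever $\mathcal{O}(o_{n})\subseteq\ldots\subseteq\mathcal{O}(o_{1})$.

```python
def phi_level(i, inner):
    # phi_i(inner) = goal  OR  (promote_i AND <<x>>^{o_i} (a,x) AF inner)
    return ('or', ('atom', 'goal'),
                  ('and', ('atom', f'promote_{i}'),
                          ('quant', f'o_{i}',
                                    ('bind', 'a', 'x', ('AF', inner)))))

def security_formula(n):
    """phi = phi_1(phi_2(... phi_n(goal) ...)), built inside out."""
    phi = phi_level(n, ('atom', 'goal'))
    for i in range(n - 1, 0, -1):
        phi = phi_level(i, phi)
    return phi
```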
Here is the main contribution of this work:
###### Theorem 2.9.
The model-checking problem for $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ restricted to the class of hierarchical instances is decidable.
We prove this result in Section 5 by reducing it to the model-checking problem
for the hierarchical fragment of a logic called $\textnormal{{QCTL}}^{*}$ with
imperfect information, which we now introduce and study in order to use it as
an intermediate, “low-level” logic between tree automata and
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. We then discuss some
applications of this theorem in Section 7.
## 3\. $\textnormal{{QCTL}}^{*}$ with imperfect information
In this section we introduce an imperfect-information extension of
$\textnormal{{QCTL}}^{*}$ (Sistla, 1983; Kupferman, 1999; Kupferman et al.,
2000a; French, 2001; Laroussinie and Markey, 2014), which is an extension of
$\textnormal{{CTL}}^{*}$ with second-order quantification on atomic
propositions. In order to introduce imperfect information, instead of
considering equivalence relations between states as in concurrent game
structures, we will enrich Kripke structures by giving internal structure to
their states, i.e., we see states as $n$-tuples of local states. This way of
modelling imperfect information is inspired from Reif’s multi-player game
structures (Peterson et al., 2001) and distributed systems (Halpern and Vardi,
1989), and we find it very suitable to application of automata techniques, as
discussed in Section 3.3.
The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is
similar to that of $\textnormal{{QCTL}}^{*}$, except that we annotate second-
order quantifiers by subsets $\textnormal{{o}}\subseteq[n]$. The idea is that
quantifiers annotated by o can only “observe” the local states indexed by
$i\in\textnormal{{o}}$. We define the tree-semantics of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$: this means that we
interpret formulas on trees that are the unfoldings of Kripke structures (this
will capture the fact that players in
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ have synchronous perfect
recall). We then define the syntactic class of _hierarchical formulas_ and
prove, using an automata-theoretic approach, that model checking this class of
formulas is decidable.
For the rest of the section we fix some natural number $n\in\mathbb{N}$ which
parameterises the logic $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$, and which is the number of components in states of the models.
### 3.1. $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ Syntax
The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is very
similar to that of $\textnormal{{QCTL}}^{*}$: the only difference is that we
annotate quantifiers by a set of indices that defines the “observation” of
that quantifier.
Concrete observations. A set $\textnormal{{o}}\subseteq[n]$ is called a
_concrete observation_ (to distinguish it from observations $o$ in the
definitions of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$).
###### Definition 3.1 ($\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
Syntax).
The syntax of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is
defined by the following grammar:
$\varphi:=p\mid\neg\varphi\mid\varphi\vee\varphi\mid{\bf
E}\psi\mid\exists^{\textnormal{{o}}}p.\,\varphi\qquad\qquad\psi:=\varphi\mid\neg\psi\mid\psi\vee\psi\mid{\bf
X}\psi\mid\psi{\bf U}\psi$
where $p\in\textnormal{AP}$ and $\textnormal{{o}}\subseteq[n]$.
Formulas of type $\varphi$ are called _state formulas_ , those of type $\psi$
are called _path formulas_ , and
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ consists of all the
state formulas defined by the grammar. We use the standard abbreviation ${\bf
A}\psi:=\neg{\bf E}\neg\psi$. We also use $\exists p.\,\varphi$ as a shorthand
for $\exists^{[n]}p.\,\varphi$, and we let $\forall p.\,\varphi:=\neg\exists
p.\,\neg\varphi$.
Given a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula
$\varphi$, we define the set of _quantified propositions_
${\textnormal{AP}_{\exists}}(\varphi)\subseteq\textnormal{AP}$ as the set of
atomic propositions $p$ such that $\varphi$ has a subformula of the form
$\exists^{\textnormal{{o}}}p.\,\varphi$. We also define the set of _free
propositions_ $\textnormal{AP}_{f}(\varphi)\subseteq\textnormal{AP}$ as the
set of atomic propositions that have an occurrence which is not under the
scope of any quantifier of the form $\exists^{\textnormal{{o}}}p$. Observe
that ${\textnormal{AP}_{\exists}}(\varphi)\cap\textnormal{AP}_{f}(\varphi)$
may not be empty, i.e., a proposition may appear both free and quantified in
(different places of) a formula.
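Both sets are computed by one pass over the syntactic tree, keeping track of which propositions are currently bound, as in this sketch (the AST encoding is again ours, with ('exists', o, p, sub) for $\exists^{\textnormal{{o}}}p.\,$ and ('atom', p) for atoms).

```python
def quantified_and_free(phi):
    """Return (AP_exists(phi), AP_f(phi)); the two sets may intersect,
    since a proposition can occur both free and quantified."""
    ap_ex, ap_free = set(), set()
    def walk(node, bound):
        tag, *rest = node
        if tag == 'atom':
            if rest[0] not in bound:     # occurrence outside any binder on it
                ap_free.add(rest[0])
        elif tag == 'exists':
            o, p, sub = rest
            ap_ex.add(p)
            walk(sub, bound | {p})
        else:
            for child in rest:
                if isinstance(child, tuple):
                    walk(child, bound)
    walk(phi, frozenset())
    return ap_ex, ap_free
```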
### 3.2. $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ semantics
Several semantics have been considered for $\textnormal{{QCTL}}^{*}$, the two
most studied being the _structure semantics_ and the _tree semantics_ (see
(Laroussinie and Markey, 2014) for more details). For the semantics of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ we adapt the tree
semantics, and we explain the reasons for doing so in Section 3.3.
As already mentioned, for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ we consider structures whose states are tuples of local states. We now
define these structures and related notions.
###### Definition 3.2 (Compound Kripke structures).
A _compound Kripke structure_ , or CKS , over AP is a tuple
$\mathcal{S}=(S,R,\ell,s_{\iota})$ where
* •
$S\subseteq\prod_{i\in[n]}L_{i}$ is a set of _states_ , with
$\\{L_{i}\\}_{i\in[n]}$ a family of $n$ disjoint finite sets of _local states_
,
* •
$R\subseteq S\times S$ is a left-total (i.e., for all $s\in S$, there exists
$s^{\prime}$ such that $(s,s^{\prime})\in R$) _transition relation_ ,
* •
$\ell:S\to 2^{\textnormal{AP}}$ is a _labelling function_ and
* •
$s_{\iota}\in S$ is an _initial state_.
A _path_ in $\mathcal{S}$ is an infinite sequence of states
$\lambda=s_{0}s_{1}\ldots$ such that for all $i\in\mathbb{N}$,
$(s_{i},s_{i+1})\in R$. A _finite path_ is a finite non-empty prefix of a
path. We may write $s\in\mathcal{S}$ for $s\in S$, and we define the _size_
$|\mathcal{S}|$ of a CKS $\mathcal{S}=(S,R,s_{\iota},\ell)$ as its number of
states: $|\mathcal{S}|:=|S|$.
Since we will interpret $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ on unfoldings of CKS , we now define infinite trees.
Trees. In many works, trees are defined as prefix-closed sets of words with
the empty word $\epsilon$ as root. Here trees represent unfoldings of Kripke
structures, and we find it more convenient to see a node $u$ as a sequence of
states and the root as the initial state. Let $X$ be a finite set of
_directions_ (typically a set of states). An _$X$ -tree_ $\tau$ is a nonempty
set of words $\tau\subseteq X^{+}$ such that:
* •
there exists $r\in X$, called the _root_ of $\tau$, such that each $u\in\tau$
starts with $r$ ($r\preccurlyeq u$);
* •
if $u\cdot x\in\tau$ and $u\cdot x\neq r$, then $u\in\tau$,
* •
if $u\in\tau$ then there exists $x\in X$ such that $u\cdot x\in\tau$.
The elements of a tree $\tau$ are called _nodes_. If $u\cdot x\in\tau$, we say
that $u\cdot x$ is a _child_ of $u$. The _depth_ of a node $u$ is $|u|$. An
$X$-tree $\tau$ is _complete_ if for every $u\in\tau$ and $x\in X$, $u\cdot
x\in\tau$. A _path_ in $\tau$ is an infinite sequence of nodes
$\lambda=u_{0}u_{1}\ldots$ such that for all $i\in\mathbb{N}$, $u_{i+1}$ is a
child of $u_{i}$, and $Paths(u)$ is the set of paths that start in node $u$.
Labellings. An _AP -labelled $X$-tree_, or _$(\textnormal{AP},X)$ -tree_ for
short, is a pair $t=(\tau,\ell)$, where $\tau$ is an $X$-tree called the
_domain_ of $t$ and $\ell:\tau\rightarrow 2^{\textnormal{AP}}$ is a
_labelling_ , which maps each node to the set of propositions that hold there.
For $p\in\textnormal{AP}$, a _$p$ -labelling_ for a tree is a mapping
$\ell_{p}:\tau\to\\{0,1\\}$ that indicates in which nodes $p$ holds, and for a
labelled tree $t=(\tau,\ell)$, the $p$-labelling of $t$ is the $p$-labelling
$u\mapsto 1$ if $p\in\ell(u)$, 0 otherwise. The composition of a labelled tree
$t=(\tau,\ell)$ with a $p$-labelling $\ell_{p}$ for $\tau$ is defined as
$t\otimes\ell_{p}:=(\tau,\ell^{\prime})$, where
$\ell^{\prime}(u)=\ell(u)\cup\\{p\\}$ if $\ell_{p}(u)=1$, and
$\ell(u)\setminus\\{p\\}$ otherwise. A $p$-labelling for a labelled tree
$t=(\tau,\ell)$ is a $p$-labelling for its domain $\tau$. A _pointed labelled
tree_ is a pair $(t,u)$ where $u$ is a node of $t$.
If $u=w\cdot x$, the _subtree_ $t_{u}$ of $t=(\tau,\ell)$ is defined as
$t_{u}:=(\tau_{u},\ell_{u})$ with $\tau_{u}=\\{x\cdot w^{\prime}\mid w\cdot
x\cdot w^{\prime}\in\tau\\}$, and $\ell_{u}(x\cdot w^{\prime})=\ell(w\cdot
x\cdot w^{\prime})$. A labelled tree is _regular_ if it has finitely many
distinct subtrees.
In the tree semantics of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ that we consider here, formulas are evaluated on tree unfoldings of CKS
, which we now define.
Tree unfoldings. Let $\mathcal{S}=(S,R,\ell,s_{\iota})$ be a compound Kripke
structure over AP. The _tree-unfolding of $\mathcal{S}$_ is the
$(\textnormal{AP},S)$-tree $t_{\mathcal{S}}:=(\tau,\ell^{\prime})$, where
$\tau$ is the set of all finite paths that start in $s_{\iota}$, and for every
$u\in\tau$, $\ell^{\prime}(u):=\ell(\mbox{last}(u))$.
Note that a labelled tree is regular if and only if it is the unfolding of
some finite Kripke structure.
Narrowing. Let $X$ and $Y$ be two finite sets, and let $(x,y)\in X\times Y$.
The _$X$ -narrowing_ of $(x,y)$ is ${(x,y)\\!\downarrow_{X}}:=x$. This
definition extends naturally to words and trees over $X\times Y$ (point-wise).
Given a family of (disjoint) sets of local states $\\{L_{i}\\}_{i\in[n]}$ and
a subset $I\subseteq[n]$, we let $L_{I}:=\prod_{i\in I}L_{i}$ if
$I\neq\emptyset$ and $L_{\emptyset}:=\\{\mathbf{0}\\}$, where $\mathbf{0}$ is
a special symbol. For $I,J\subseteq[n]$ and $z\in L_{I}$, we also define
${z\\!\downarrow_{J}}:=z\\!\downarrow_{L_{I\cap J}}$, where $z$ is seen as a
pair $z=(x,y)\in L_{I\cap J}\times L_{I\setminus J}$, i.e., we apply the above
definition with $X=L_{I\cap J}$ and $Y=L_{I\setminus J}$. This is well defined
because having taken sets $L_{i}$ to be disjoint, the ordering of local states
in $z$ is indifferent. We also extend this definition to words and trees. In
particular, for every $L_{I}$-tree $\tau$, $\tau\\!\downarrow_{\emptyset}$ is
the only $L_{\emptyset}$-tree, $\mathbf{0}^{\omega}$.
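In code, narrowing is just a componentwise projection. The sketch below (our encoding: a state is a sorted tuple of (index, local state) pairs, a tree a set of node-words) also makes visible that distinct nodes may collapse to the same narrowed node.

```python
ZERO = (('*', '0'),)     # stands for the special state 0 spanning L_empty

def narrow_state(z, J):
    """z |-> z restricted to J: keep only the components indexed in J."""
    out = tuple((i, v) for (i, v) in z if i in J)
    return out if out else ZERO

def narrow_tree(tau, J):
    """Pointwise narrowing of a tree given as a set of node-words (tuples
    of states); the set comprehension merges nodes with the same image."""
    return {tuple(narrow_state(z, J) for z in u) for u in tau}
```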
Quantification and uniformity. In
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$,
$\exists^{\textnormal{{o}}}p.\,\varphi$ holds in a tree $t$ if there is some
o-uniform $p$-labelling of $t$ such that $t$ with this $p$-labelling satisfies
$\varphi$. Intuitively, a $p$-labelling of a tree is o-uniform if every two
nodes that are indistinguishable for observation o agree on $p$.
###### Definition 3.3 (o-indistinguishability and o-uniformity in $p$).
Fix $\textnormal{{o}}\subseteq[n]$ and $I\subseteq[n]$.
* •
Two tuples $x,x^{\prime}\in L_{I}$ are _o -indistinguishable_, written
$x\approx_{\textnormal{{o}}}x^{\prime}$, if
$x\\!\downarrow_{\textnormal{{o}}}=x^{\prime}\\!\downarrow_{\textnormal{{o}}}$.
* •
Two words $u=u_{0}\ldots u_{i}$ and $u^{\prime}=u^{\prime}_{0}\ldots
u^{\prime}_{j}$ over alphabet $L_{I}$ are _o -indistinguishable_, written
$u\approx_{\textnormal{{o}}}u^{\prime}$, if $i=j$ and for all
$k\in\\{0,\ldots,i\\}$ we have
$u_{k}\approx_{\textnormal{{o}}}u^{\prime}_{k}$.
* •
A $p$-labelling for a tree $\tau$ is _o -uniform_ if for all
$u,u^{\prime}\in\tau$, $u\approx_{\textnormal{{o}}}u^{\prime}$ implies
$\ell_{p}(u)=\ell_{p}(u^{\prime})$.
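Definition 3.3 translates directly into executable checks, as in this sketch (names are ours; states are sorted tuples of (index, local state) pairs, as in the narrowing sketch above).

```python
def indist(u1, u2, o):
    """u ~_o u': same length and componentwise agreement on indices in o."""
    proj = lambda z: tuple((i, v) for (i, v) in z if i in o)
    return (len(u1) == len(u2)
            and all(proj(a) == proj(b) for a, b in zip(u1, u2)))

def is_o_uniform(ell_p, tau, o):
    """ell_p : dict node -> 0/1.  The labelling is o-uniform iff any two
    o-indistinguishable nodes of tau carry the same p-label."""
    nodes = list(tau)
    return all(ell_p[u] == ell_p[v]
               for k, u in enumerate(nodes) for v in nodes[k + 1:]
               if indist(u, v, o))
```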
###### Definition 3.4 ($\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
semantics).
We define by induction the satisfaction relation $\models$ of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. Let $t=(\tau,\ell)$
be an AP-labelled $L_{I}$-tree, $u$ a node and $\lambda$ a path in $\tau$:
$\displaystyle t,u\models$ $\displaystyle\,p$ if $\displaystyle\quad
p\in\ell(u)$ $\displaystyle t,u\models$ $\displaystyle\,\neg\varphi$ if
$\displaystyle\quad t,u\not\models\varphi$ $\displaystyle t,u\models$
$\displaystyle\,\varphi\vee\varphi^{\prime}$ if $\displaystyle\quad
t,u\models\varphi\mbox{ or }t,u\models\varphi^{\prime}$ $\displaystyle
t,u\models$ $\displaystyle\,{\bf E}\psi$ if
$\displaystyle\quad\exists\,\lambda\in Paths(u)\mbox{ s.t.
}t,\lambda\models\psi$ $\displaystyle t,u\models$
$\displaystyle\,\exists^{\textnormal{{o}}}p.\,\varphi$ if
$\displaystyle\quad\exists\,\ell_{p}\mbox{ an $\textnormal{{o}}$-uniform
$p$-labelling for $t$ such that }t\otimes\ell_{p},u\models\varphi$
$\displaystyle t,\lambda\models$ $\displaystyle\,\varphi$ if
$\displaystyle\quad t,\lambda_{0}\models\varphi$ $\displaystyle
t,\lambda\models$ $\displaystyle\,\neg\psi$ if $\displaystyle\quad
t,\lambda\not\models\psi$ $\displaystyle t,\lambda\models$
$\displaystyle\,\psi\vee\psi^{\prime}\quad$ if $\displaystyle\quad
t,\lambda\models\psi\mbox{ or }t,\lambda\models\psi^{\prime}$ $\displaystyle
t,\lambda\models$ $\displaystyle\,{\bf X}\psi$ if $\displaystyle\quad
t,\lambda_{\geq 1}\models\psi$ $\displaystyle t,\lambda\models$
$\displaystyle\,\psi{\bf U}\psi^{\prime}$ if $\displaystyle\quad\exists\,i\geq
0\mbox{ s.t. }t,\lambda_{\geq i}\models\psi^{\prime}\text{ and }\forall
j\text{ s.t. }0\leq j<i,\;t,\lambda_{\geq j}\models\psi$
We write $t\models\varphi$ for $t,r\models\varphi$, where $r$ is the root of
$t$. Given a CKS $\mathcal{S}$ and a
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$, we
also write $\mathcal{S}\models\varphi$ if
$\mathcal{S},s_{\iota}\models\varphi$.
###### Example 3.5.
Consider the following CTL formula:
$\mathbf{border}(p):={\bf A}{\bf F}p\wedge{\bf A}{\bf G}(p\rightarrow{\bf
A}{\bf X}{\bf A}{\bf G}\neg p).$
This formula holds in a labelled tree if and only if each path contains
exactly one node labelled with $p$. Now, consider the following
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula:
$\mathbf{level}(p):=\exists^{\emptyset}p.\,\mathbf{border}(p).$
For a blind quantifier, two nodes of a tree are indistinguishable if and only
if they have the same depth. Therefore, this formula holds on a tree iff the $p$’s
label all and only the nodes at some fixed depth. This formula can thus be
used to capture the equal level predicate on trees. Actually, just as
$\textnormal{{QCTL}}^{*}$ captures MSO, one can prove that
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ with tree semantics
subsumes MSO with equal level (Elgot and Rabin, 1966; Läuchli and Savioz,
1987; Thomas, 1992). In Theorem 3.7 we make use of a similar observation to
prove that model-checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ is undecidable.
### 3.3. Discussion on the definition of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
We now motivate in detail some aspects of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$.
Modelling of imperfect information. We model imperfect information by means of
local states (rather than equivalence relations) because this greatly
facilitates the use of automata techniques. More precisely, in our decision
procedure of Section 4 we use an operation on tree automata called _narrowing_
, which was introduced in (Kupferman and Vardi, 1999) to deal with imperfect-
information in the context of distributed synthesis for temporal
specifications. Given an automaton $\mathcal{A}$ that works on $X\times
Y$-trees, where $X$ and $Y$ are two finite sets, and assuming that we want to
model an operation performed on trees while observing only the $X$ component
of each node, this narrowing operation allows one to build from $\mathcal{A}$
an automaton $\mathcal{A}^{\prime}$ that works on $X$-trees, such that
$\mathcal{A}^{\prime}$ accepts an $X$-tree if and only if $\mathcal{A}$
accepts its widening to $X\times Y$ (intuitively, this widening is the
$X\times Y$-tree in which each node is labelled as its projection on the
original $X$-tree; see Section 4 for details).
With our definition of compound Kripke structures, their unfoldings are trees
over the Cartesian product $L_{[n]}$. To model a quantification
$\exists^{\textnormal{{o}}}p$ with observation $\textnormal{{o}}\subseteq[n]$,
we can thus use the narrowing operation to forget about components $L_{i}$,
for $i\in[n]\setminus\textnormal{{o}}$. We then use the classic projection of
nondeterministic tree automata to perform existential quantification on atomic
proposition $p$. Since the choice of the $p$-labelling is made directly on
$L_{\textnormal{{o}}}$-trees, it is necessarily o-uniform.
Choice of the tree semantics. The two most studied semantics for
$\textnormal{{QCTL}}^{*}$ are the _structure semantics_ , in which formulas
are evaluated directly on Kripke structures, and the _tree semantics_ , in
which Kripke structures are first unfolded into infinite trees. Tree semantics
thus allows quantifiers to choose the value of a quantified atomic proposition
in each _finite path_ of the model, while in structure semantics the choice is
only made in each state. When $\textnormal{{QCTL}}^{*}$ is used to express
existence of strategies, existential quantification on atomic propositions
labels the structure with strategic choices; in this kind of application,
structure semantics reflects so-called _positional_ or _memoryless_
strategies, while tree semantics captures _perfect-recall_ or _memoryful_
strategies. Since in this work we are interested in perfect-recall strategies,
we only consider the tree semantics.
### 3.4. Model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$
We now define the model-checking problem studied in the rest of this section.
###### Definition 3.6 (Model checking
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$).
The _model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$_ is the following
decision problem: given an instance $(\mathcal{S},\Phi)$ where $\mathcal{S}$
is a CKS, and $\Phi$ is a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ formula, return ‘Yes’ if $\mathcal{S}\models\Phi$ and ‘No’ otherwise.
We now prove that the model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable. This
comes as no surprise since, as we will show in Section 5,
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ can express the
existence of distributed winning strategies in imperfect-information games.
However we propose a proof that shows the connection between
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ and MSO with equal-
level predicate (Elgot and Rabin, 1966; Läuchli and Savioz, 1987; Thomas,
1992). This proof also has the benefit of showing that
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable already
for formulas that involve only propositional quantifiers that observe either
everything or nothing.
###### Theorem 3.7.
The model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is undecidable.
###### Proof.
Let $\textnormal{{MSO}}_{\textnormal{eq}}$ denote the extension of the logic
MSO (without unary predicates) by a binary predicate symbol eq.
$\textnormal{{MSO}}_{\textnormal{eq}}$ is interpreted on the full binary tree,
and the semantics of $\text{eq}(x,y)$ is that $x$ and $y$ have the same depth
in the tree. We show how to effectively translate
$\textnormal{{MSO}}_{\textnormal{eq}}$ into
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, and our result
follows since the $\textnormal{{MSO}}_{\textnormal{eq}}$-theory of the binary
tree is undecidable (Läuchli and Savioz, 1987). The translation from
$\textnormal{{MSO}}_{\textnormal{eq}}$ to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is obtained by
extending that from MSO to QCTL (Laroussinie and Markey, 2014), using the
formula $\mathbf{level}(\cdot)$ from Example 3.5 to help capture the equal-
length predicate.
We define a translation $\widehat{\quad}$ from
$\textnormal{{MSO}}_{\textnormal{eq}}$ to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ such that for every
tree $t$ with root $r$, nodes $u_{1},\ldots,u_{i}\in t$ and sets of nodes
$U_{1},\ldots,U_{j}\subseteq t$, and every
$\textnormal{{MSO}}_{\textnormal{eq}}$ formula
${\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})}$, we have that
(1)
$t,r,u_{1},\ldots,u_{i},U_{1},\ldots,U_{j}\models\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})\text{\quad
if and only if \quad}\widehat{t},r\models\widehat{\varphi}$
where $\widehat{t}$ is obtained from $t$ by defining the labelling for fresh
atomic propositions $p_{x_{k}}$ and $p_{X_{k}}$, with $k\in[i]$, as follows:
$p_{x_{k}}\in\widehat{\ell}(u)$ if $u=u_{k}$ and
$p_{X_{k}}\in\widehat{\ell}(u)$ if $u\in U_{k}$.
The translation of MSO to $\textnormal{{QCTL}}^{*}$ from (Laroussinie and
Markey, 2014) can be extended to one from
$\textnormal{{MSO}}_{\textnormal{eq}}$ to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ by adding rules for
the equal level predicate. Indeed, for
$\varphi(x,x_{1},\ldots,x_{i},X_{1},\ldots,X_{j})\in\textnormal{{MSO}}_{\textnormal{eq}}$,
we inductively define the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ formula $\widehat{\varphi}$ as follows, where $k\in[i]$:
$\begin{array}[]{rclcrcl}\widehat{x=x_{k}}&:=&p_{x_{k}}&&\widehat{x_{k}=x_{l}}&:=&{\bf
E}{\bf F}(p_{x_{k}}\wedge p_{x_{l}})\\\\[5.0pt] \widehat{x\in
X_{k}}&:=&p_{X_{k}}&&\widehat{x_{k}\in X_{l}}&:=&{\bf E}{\bf
F}(p_{x_{k}}\wedge p_{X_{l}})\\\\[5.0pt]
\widehat{\neg\varphi^{\prime}}&:=&\neg\widehat{\varphi^{\prime}}&&\widehat{\varphi_{1}\vee\varphi_{2}}&:=&\widehat{\varphi_{1}}\vee\widehat{\varphi_{2}}\\\\[5.0pt]
\widehat{\exists
x_{k}.\varphi^{\prime}}&:=&\multicolumn{5}{l}{\exists
p_{x_{k}}.\,\big{(}\mathrm{uniq}(p_{x_{k}})\wedge\widehat{\varphi^{\prime}}\big{)}}\\\\[5.0pt]
\widehat{\exists
X_{k}.\varphi^{\prime}}&:=&\multicolumn{5}{l}{\exists
p_{X_{k}}.\,\widehat{\varphi^{\prime}}}\\\\[5.0pt]
\widehat{S(x,x_{k})}&:=&{\bf E}{\bf
X}p_{x_{k}}&&\widehat{S(x_{k},x)}&:=&\perp\\\\[5.0pt]
\widehat{S(x_{k},x_{l})}&:=&\multicolumn{5}{l}{{\bf E}{\bf F}(p_{x_{k}}\wedge{\bf
E}{\bf X}p_{x_{l}})}\end{array}$
where $\mathrm{uniq}(p):={\bf E}{\bf F}p\wedge\forall q.\;\left({\bf E}{\bf
F}(p\wedge q)\rightarrow{\bf A}{\bf G}(p\rightarrow q)\right)$ holds in a tree
iff it has exactly one node labelled with $p$. To understand the $x=x_{k}$ and
$x\in X_{k}$ cases, consider that $x$ will be interpreted as the root. For the
$S(x_{k},x)$ case, observe that $x$ has no incoming edge since it is
interpreted as the root. Second-order quantification $\exists X_{k}$ is
translated into quantification on atomic proposition $p_{X_{k}}$, and first-
order quantification $\exists x_{k}$ is treated similarly, with the additional
constraint that quantification is limited to $p_{x_{k}}$-labellings that set
$p_{x_{k}}$ to true in one and only one node of the tree.
The rules for eq are as follows:
$\displaystyle\widehat{\text{eq}(x,x_{k})}$ $\displaystyle:=p_{x_{k}}$
$\displaystyle\widehat{\text{eq}(x_{k},x_{l})}$
$\displaystyle:=\exists^{\emptyset}p.\,\mathbf{border}(p)\wedge{\bf A}{\bf
G}(p_{x_{k}}\rightarrow p\wedge p_{x_{l}}\rightarrow p)$
To understand the first case, observe that since $x$ is interpreted as the
root, $x_{k}$ is on the same level as $x$ if and only if it is also assigned
the root. For the second case, recall from Example 3.5 that the
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula
$\exists^{\emptyset}p.\,\mathbf{border}(p)$ places one unique horizontal line
of $p$’s in the tree, and thus requiring that $x_{k}$ and $x_{l}$ be both on
this line ensures that they are on the same level. The correctness of the
translation follows from (1), which is proven by induction.
Now take an instance $(t,\varphi(x))$ of the model-checking problem for
$\textnormal{{MSO}}_{\textnormal{eq}}$ on the full binary tree $t$. Let
$\mathcal{S}$ be a CKS with two states $s_{0}$ and $s_{1}$ (local states are
irrelevant here), whose transition relation is the complete relation, and with
empty labelling function. Clearly, $t_{\mathcal{S}}=t$, and applying (1) we
get:
$t,s_{0}\models\varphi(x)\text{\quad
iff\quad}\widehat{t},s_{0}\models\widehat{\varphi}.$
Observe that in the previous line, because there are no free variables besides
$x$, which stands for the root, we have that $\widehat{t}=t=t_{\mathcal{S}}$,
hence we have indeed produced an instance of the model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. ∎
## 4\. A decidable fragment of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$: hierarchy on
observations
The main result of this section is the identification of an important
decidable fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$.
###### Definition 4.1 (Hierarchical formulas).
A $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\varphi$ is
_hierarchical_ if for all subformulas
$\varphi_{1}=\exists^{\textnormal{{o}}_{1}}p_{1}.\,\varphi^{\prime}_{1}$ and
$\varphi_{2}=\exists^{\textnormal{{o}}_{2}}p_{2}.\,\varphi^{\prime}_{2}$ of
$\varphi$ where $\varphi_{2}$ is a subformula of $\varphi^{\prime}_{1}$, we
have $\textnormal{{o}}_{1}\subseteq\textnormal{{o}}_{2}$.
In other words, a formula is hierarchical if innermore propositional
quantifiers observe at least as much as outermore ones.
###### Example 4.2.
Formula $\exists^{\\{1,2\\}}p.\,\exists^{\\{1,2,4\\}}q.\,{\bf A}{\bf G}(p\vee
q)$ is hierarchical because $\\{1,2\\}\subseteq\\{1,2,4\\}$. On the other
hand, formula $\exists^{\\{1,2\\}}p.\,\big{(}\exists^{\\{1,2,4\\}}q.\,{\bf
A}{\bf G}(p\vee q)\wedge\exists^{\\{3\\}}q^{\prime}.\,{\bf E}{\bf F}(p\wedge
q^{\prime})\big{)}$ is not, because $\\{1,2\\}\not\subseteq\\{3\\}$. Note that
neither is it the case that $\\{3\\}\subseteq\\{1,2\\}$: the observation power
of quantifiers $\exists^{\\{1,2\\}}p.\,$ and $\exists^{\\{3\\}}q^{\prime}.\,$
are incomparable. Finally, formula
$\forall^{\\{1,2,3\\}}p.\,\exists^{\\{1,2\\}}q.\,{\bf A}{\bf G}(p\vee q)$ is
not hierarchical even though $\\{1,2\\}\subseteq\\{1,2,3\\}$, as the
quantifier that observes best is _higher_ in the syntactic tree.
We let $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ be the set of hierarchical
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas.
###### Theorem 4.3.
Model checking $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ is non-elementary decidable.
Since our decision procedure for the hierarchical fragment of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ is based on an
automata-theoretic approach, we recall some definitions and results for
alternating tree automata.
### 4.1. Alternating parity tree automata
We recall alternating parity tree automata. Because their semantics is defined
via acceptance games, we start with basic definitions for two-player turn-
based parity games, or simply parity games.
Parity games. A _parity game_ is a structure $\mathcal{G}=(V,E,v_{\iota},C)$,
where $V=V_{E}\uplus V_{A}$ is a set of _positions_ partitioned between
positions of Eve ($V_{E}$) and those of Adam ($V_{A}$), $E\subseteq V\times V$
is a set of _moves_ , $v_{\iota}$ is an initial position and
$C:V\to\mathbb{N}$ is a colouring function of finite codomain. In positions
$V_{E}$, Eve chooses the next position, while Adam chooses in positions
$V_{A}$. A play is an infinite sequence of positions $v_{0}v_{1}v_{2}\ldots$
such that $v_{0}=v_{\iota}$ and for all $i\geq 0$, $(v_{i},v_{i+1})\in E$
(written $v_{i}\to v_{i+1}$). We assume that for every $v\in V$ there exists
$v^{\prime}\in V$ such that $v\to v^{\prime}$. A strategy for Eve is a partial
function $V^{*}\rightharpoonup V$ that maps each finite prefix of a play
ending in a position $v\in V_{E}$ to a next position $v^{\prime}$ such that
$v\to v^{\prime}$. A play $v_{0}v_{1}v_{2}\ldots$ _follows_ a strategy
$\sigma$ of Eve if for every $i\geq 0$ such that $v_{i}\in V_{E}$,
$v_{i+1}=\sigma(v_{0}\ldots v_{i})$. A strategy $\sigma$ is winning if every
play that follows it satisfies the parity condition, i.e., the least colour
seen infinitely often along the play is even.
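The parity condition only concerns colours seen infinitely often, so on an ultimately periodic play it reduces to inspecting the cycle; this small Python check (ours, purely illustrative) makes that explicit.

```python
def eve_wins_lasso(colours, prefix, cycle):
    """Parity condition on the lasso play prefix . cycle^omega: the colours
    seen infinitely often are exactly those on the cycle, so Eve wins iff
    the least colour occurring on the cycle is even."""
    assert cycle, "the periodic part must be nonempty"
    return min(colours[v] for v in cycle) % 2 == 0

# e.g. eve_wins_lasso({'v0': 1, 'v1': 2}, ['v0'], ['v1', 'v0']) is False:
# the least colour seen infinitely often is 1, which is odd.
```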
Parity tree automata. Because it is sufficient for our needs and simplifies
definitions, we assume that all input trees are complete trees. For a set $Z$,
$\mathbb{B}^{+}(Z)$ is the set of formulas built from the elements of $Z$ as
atomic propositions using the connectives $\vee$ and $\wedge$, and with
$\top,\perp\in\mathbb{B}^{+}(Z)$. An _alternating tree automaton (ATA ) on
$(\textnormal{AP},X)$-trees_ is a structure
$\mathcal{A}=(Q,\delta,q_{{\iota}},C)$ where $Q$ is a finite set of states,
$q_{{\iota}}\in Q$ is an initial state, $\delta:Q\times
2^{\textnormal{AP}}\rightarrow\mathbb{B}^{+}(X\times Q)$ is a transition
function, and $C:Q\to\mathbb{N}$ is a colouring function. To ease reading we
shall write atoms in $\mathbb{B}^{+}(X\times Q)$ between brackets, such as
$[x,q]$. A _nondeterministic tree automaton (NTA ) on
$(\textnormal{AP},X)$-trees_ is an ATA $\mathcal{A}=(Q,\delta,q_{{\iota}},C)$
such that for every $q\in Q$ and $a\in 2^{\textnormal{AP}}$, $\delta(q,a)$ is
written in disjunctive normal form and for every direction $x\in X$ each
disjunct contains exactly one element of $\\{x\\}\times Q$. An NTA is
_deterministic_ if for each $q\in Q$ and $a\in 2^{\textnormal{AP}}$,
$\delta(q,a)$ consists of a single disjunct.
Acceptance of a pointed labelled tree $(t,u_{\iota})$, where $t=(\tau,\ell)$,
by an ATA $\mathcal{A}=(Q,\delta,q_{\iota},C)$ is defined via the parity game
$\mathcal{G}(\mathcal{A},t,u_{\iota})=(V,E,v_{\iota},C^{\prime})$ where
$V=\tau\times Q\times\mathbb{B}^{+}(X\times Q)$, position $(u,q,\alpha)$
belongs to Eve if $\alpha$ is of the form $\alpha_{1}\vee\alpha_{2}$ or
$[x,q^{\prime}]$, and to Adam otherwise,
$v_{{\iota}}=(u_{\iota},q_{\iota},\delta(q_{\iota},u_{\iota}))$, and
$C^{\prime}(u,q,\alpha)=C(q)$. Moves in $\mathcal{G}(\mathcal{A},t,u_{\iota})$
are defined by the following rules:
$\begin{array}[]{ll}(u,q,\alpha_{1}\;\dagger\;\alpha_{2})\rightarrow(u,q,\alpha_{i})&\mbox{where
}\dagger\in\\{\vee,\wedge\\}\mbox{ and }i\in\\{1,2\\},\\\\
\multicolumn{2}{l}{(u,q,[x,q^{\prime}])\rightarrow(u\cdot
x,q^{\prime},\delta(q^{\prime},\ell(u\cdot x)))}\end{array}$
Positions of the form $(u,q,\top)$ and $(u,q,\perp)$ are sinks, winning for
Eve and Adam respectively.
A pointed labelled tree $(t,u)$ is _accepted_ by $\mathcal{A}$ if Eve has a
winning strategy in $\mathcal{G}(\mathcal{A},t,u)$, and the _language_ of
$\mathcal{A}$ is the set of pointed labelled trees accepted by $\mathcal{A}$,
written $\mathcal{L}(\mathcal{A})$. We write $t\in\mathcal{L}(\mathcal{A})$ if
$(t,r)\in\mathcal{L}(\mathcal{A})$, where $r$ is the root of $t$. Finally, the
_size_ $|\mathcal{A}|$ of an ATA $\mathcal{A}$ is its number of states plus
the sum of the sizes of all formulas appearing in the transition function.
Word automata. When the set of directions $X$ is a singleton, directions can
be forgotten and infinite trees can be identified with infinite words. We thus
call _parity word automaton_ a parity tree automaton on
$(\textnormal{AP},X)$-trees where $X$ is a singleton. In the case of a
nondeterministic parity word automaton, transitions can be represented as
usual as a mapping $\Delta:Q\times 2^{\textnormal{AP}}\to 2^{Q}$ which, in a
state $q\in Q$, reading the label $a\in 2^{\textnormal{AP}}$ of the current
position in the word, indicates a set of states $\Delta(q,a)$ from which Eve
can choose the state to send to the next position of the word.
We recall four classic operations on tree automata.
Complementation. Given an ATA $\mathcal{A}=(Q,\delta,q_{{\iota}},C)$, we
define its _dual_
$\overline{\mathcal{A}}=(Q,\overline{\delta},q_{{\iota}},\overline{C})$ where,
for each $q\in Q$ and $a\in 2^{\textnormal{AP}}$, $\overline{\delta}(q,a)$ is
the dual of $\delta(q,a)$, i.e., conjunctions become disjunctions and vice
versa, and $\overline{C}(q):=C(q)+1$.
###### Theorem 4.4 (Complementation (Muller and Schupp, 1995)).
For every labelled tree $t$ and node $u$ in $t$,
$(t,u)\in\mathcal{L}(\overline{\mathcal{A}})\mbox{ if, and only if,
}(t,u)\notin\mathcal{L}(\mathcal{A}).$
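The dualisation is purely syntactic, as the following sketch shows (our encoding: transition formulas are nested tuples over 'and'/'or', the constants 'true'/'false', and opaque atoms $[x,q]$; the shift of colours by one complements the parity condition).

```python
def dual_formula(alpha):
    """Dualise a positive Boolean formula: swap 'and'/'or' and the two
    constants; atoms [x, q] are left unchanged."""
    if alpha == 'true':
        return 'false'
    if alpha == 'false':
        return 'true'
    if isinstance(alpha, tuple) and alpha[0] in ('and', 'or'):
        swapped = 'or' if alpha[0] == 'and' else 'and'
        return (swapped,) + tuple(dual_formula(b) for b in alpha[1:])
    return alpha                     # an atom [x, q]

def dual_ata(Q, delta, q0, C):
    """Dual ATA: dualise every transition formula and shift every colour."""
    delta_bar = {key: dual_formula(f) for key, f in delta.items()}
    C_bar = {q: C[q] + 1 for q in Q}
    return Q, delta_bar, q0, C_bar
```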
Projection. The second construction is a projection operation, used by Rabin
to deal with second-order monadic quantification:
###### Theorem 4.5 (Projection (Rabin, 1969)).
Given an NTA $\mathcal{N}$ on $(\textnormal{AP},X)$-trees and an atomic
proposition $p\in\textnormal{AP}$, one can build in linear time an NTA
$\mathcal{N}\\!\Downarrow_{p}$ on $(\textnormal{AP}\setminus\\{p\\},X)$-trees
such that
$(t,u)\in\mathcal{L}(\mathcal{N}\\!\Downarrow_{p})\mbox{\;\;\;iff\;\;\;}\mbox{
there exists a $p$-labelling $\ell_{p}$ for $t$ s.t.
}(t\otimes\ell_{p},u)\in\mathcal{L}(\mathcal{N}).$
Intuitively, ${\mathcal{N}\\!\Downarrow_{p}}$ is automaton $\mathcal{N}$ with
the only difference that when it reads the label of a node, it can choose to
run as if $p$ was either true or false: if $\delta$ is the transition function
of $\mathcal{N}$, that of ${\mathcal{N}\\!\Downarrow_{p}}$ is
$\delta^{\prime}(q,a)=\delta(q,a\cup\\{p\\})\vee\delta(q,a\setminus\\{p\\})$,
for any state $q$ and label $a\in 2^{\textnormal{AP}}$. Another way of seeing
it is that $\mathcal{N}\\!\Downarrow_{p}$ guesses a $p$-labelling for the
input tree, and simulates $\mathcal{N}$ on this modified input.
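In code, the projected transition function is one line. In the hypothetical representation below, `delta` is a function from (state, label) to transition formulas and labels are frozensets of propositions.

```python
def project_transitions(delta, p):
    """Transition function of the projected automaton N with p projected
    out: on reading label a, run as if p were true or as if it were false,
    i.e. delta'(q, a) = delta(q, a + {p}) OR delta(q, a - {p})."""
    def delta_proj(q, a):            # a: frozenset of atomic propositions
        return ('or', delta(q, a | {p}), delta(q, a - {p}))
    return delta_proj
```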
Simulation. To prevent $\mathcal{N}\\!\Downarrow_{p}$ from guessing different
labels for a same node in different executions, it is crucial that
$\mathcal{N}$ be nondeterministic, which is the reason why we need the
following result:
###### Theorem 4.6 (Simulation (Muller and Schupp, 1995)).
Given an ATA $\mathcal{A}$, one can build in exponential time an NTA
$\mathcal{N}$ such that $\mathcal{L}(\mathcal{N})=\mathcal{L}(\mathcal{A})$.
The last construction was introduced by Kupferman and Vardi to deal with
imperfect information aspects in distributed synthesis. To describe it we need
to define a widening operation on trees which expands the directions in a
tree.
Tree widening. We generalise the widening operation defined in (Kupferman and
Vardi, 1999). In the following definitions we fix a CKS
$\mathcal{S}=(S,R,s_{\iota},\ell)$, and for $I\subseteq[n]$ we let
$S_{I}:=\\{s\\!\downarrow_{I}\,\mid s\in S\\}\subseteq L_{I}$ (recall that
$L_{I}=\prod_{i\in I}L_{i}$). Let $J\subseteq I\subseteq[n]$. For every
$S_{J}$-tree $\tau$ rooted in $s_{J}$ and $s_{I}\in S_{I}$ such that
$s_{I}\\!\downarrow_{J}=s_{J}$, we define the _$I$ -widening_ of $\tau$ as the
$S_{I}$-tree
$\tau\\!\uparrow^{I}_{s_{I}}:=\\{u\in s_{I}\cdot S_{I}^{*}\mid
u\\!\downarrow_{J}\in\tau\\}.$
For an $(\textnormal{AP},S_{J})$-tree $t=(\tau,\ell)$ rooted in $s_{J}$ and
$s_{I}\in S_{I}$ such that $s_{I}\\!\downarrow_{J}=s_{J}$, we let
$t\\!\uparrow^{I}_{s_{I}}:=(\tau\\!\uparrow^{I}_{s_{I}},\ell^{\prime}),\mbox{
where }\ell^{\prime}(u):=\ell(u\\!\downarrow_{J}).$
When clear from the context we may omit the subscript $s_{I}$. It is the case
in particular when referring to _pointed_ widenings of trees:
$(t\\!\uparrow^{I},u)$ stands for $(t\\!\uparrow^{I}_{u_{0}},u)$.
Narrowing. We now state a result from (Kupferman and Vardi, 1999) in our
slightly more general setting (the proof can be adapted straightforwardly).
The rough idea of this narrowing operation on ATA is that, if one just
observes $S_{J}$, uniform $p$-labellings on $S_{I}$-trees can be obtained by
choosing the labellings directly on $S_{J}$-trees, and then lifting them to
$S_{I}$.
###### Theorem 4.7 (Narrowing (Kupferman and Vardi, 1999)).
Given an ATA $\mathcal{A}$ on $S_{I}$-trees one can build in linear time an
ATA ${\mathcal{A}\\!\downarrow_{J}}$ on $S_{J}$-trees such that for every
pointed $(\textnormal{AP},S_{J})$-tree $(t,u)$ and every $u^{\prime}\in
S_{I}^{+}$ such that $u^{\prime}\\!\downarrow_{J}=u$,
$(t,u)\in\mathcal{L}(\mathcal{A}\\!\downarrow_{J})\mbox{ iff
}(t\\!\uparrow^{I},u^{\prime})\in\mathcal{L}(\mathcal{A}).$
### 4.2. Translating $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ to ATA
In order to prove Theorem 4.3 we need some more notations and a technical
lemma that contains the automata construction.
###### Definition 4.8.
For every $\varphi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$,
we let
$I_{\varphi}:=\bigcap_{\textnormal{{o}}\in\textnormal{Obs}(\varphi)}\textnormal{{o}}\subseteq[n],$
where $\textnormal{Obs}(\varphi)$ is the set of concrete observations that
occur in $\varphi$, with the intersection over the empty set defined as $[n]$.
For a CKS $\mathcal{S}$ with state set $S\subseteq\prod_{i\in[n]}L_{i}$ we
also let $S_{\varphi}:=\\{s\\!\downarrow_{I_{\varphi}}\mid s\in S\\}$.
Elements of $S_{\varphi}$ will be the possible directions used by the
automaton we build for $\varphi$. In other words, the automaton for $\varphi$
will work on $S_{\varphi}$-trees. The intuition is that the observations in
$\varphi$ determine which components of the model’s states can be observed by
the automaton.
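Computing $I_{\varphi}$ is again a one-pass traversal. In the tuple encoding used in our earlier sketches, with ('exists', o, p, sub) carrying a concrete observation o as a frozenset of indices, this reads:

```python
def I_of(phi, n):
    """I_phi: intersection of all concrete observations occurring in phi,
    with the empty intersection defined as [n] = {1, ..., n}."""
    result = frozenset(range(1, n + 1))
    def walk(node):
        nonlocal result
        tag, *rest = node
        if tag == 'exists':          # ('exists', o, p, sub)
            result &= rest[0]
            walk(rest[2])
        else:
            for child in rest:
                if isinstance(child, tuple):
                    walk(child)
    walk(phi)
    return result
```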
Our construction, that transforms a
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$
formula $\varphi$ and a CKS $\mathcal{S}$ into an ATA, builds upon the classic
construction from (Kupferman et al., 2000b), which builds ATA for
$\textnormal{{CTL}}^{*}$ formulas. In addition, we use projection of automata
to treat second-order quantification, and to deal with imperfect information
we resort to automata narrowing.
Moreover, we use tree automata in an original way that allows us to deal with
non-observable atomic propositions, which in turn makes it possible to
consider non-observable winning conditions in our decidable fragment of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. The classical approach to
model checking via tree automata is to build an automaton that accepts all
tree models of the input formula, and check whether it accepts the unfolding
of the model (Kupferman et al., 2000b). We instead encode the model in the
automata, using the input tree only to guess labellings for quantified
propositions.
Encoding the model in the automaton. Quantification on atomic propositions is
classically performed by means of automata projection (see Theorem 4.5). But
in order to obtain a labelling that is uniform with regards to the observation
of the quantifier, we need to make use of the narrowing operation (see Theorem
4.7). Intuitively, to check that a formula
$\exists^{\textnormal{{o}}}p.\,\varphi$ holds in a tree $t$, we would like to
work on its narrowing $t^{\prime}:=t\\!\downarrow_{\textnormal{{o}}}$, guess a
labelling for $p$ on this tree thanks to automata projection, thus obtaining a
tree $t^{\prime}_{p}$, take its widening
$t_{p}^{\prime\prime}:=t^{\prime}_{p}\\!\uparrow^{[n]}$, obtaining a tree with
an o-uniform labelling for $p$, and then check that $\varphi$ holds on
$t_{p}^{\prime\prime}$. The problem is that unless $t=(\tau,\ell)$ is
o-uniform in every atomic proposition in AP, there is no way to define the
labelling of $\tau\\!\downarrow_{\textnormal{{o}}}$ without losing
information. This implies that, unless we restrict to models where all atomic
propositions are observable for all observations o, we cannot pass the model
as input to our automata, which will work on narrowings of trees.
Therefore, to model check a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ formula $\varphi$ on a CKS $\mathcal{S}$, each state of the automaton
that we build for $\varphi$ will contain a state of $\mathcal{S}$. The
automaton can thus guess paths in $\mathcal{S}$, and evaluate free occurrences
of atomic propositions in $\mathcal{S}$ without reading the input tree. The
input tree no longer represents the model, but we use it to carry labellings
for quantified atomic propositions in ${\textnormal{AP}_{\exists}}(\varphi)$:
we provide the automaton with an input tree whose labelling is initially
empty, and the automaton, through successive narrowing and projection
operations, decorates it with uniform labellings for quantified atomic
propositions.
We remark that this technique allows one to go beyond Coordination Logic
(Finkbeiner and Schewe, 2010): by distinguishing between quantified atomic
propositions (that need to be uniform and are carried by the input tree) and
free atomic propositions (that state facts about the model and are coded in
the automaton), we manage to remove the restriction present in CL that
requires all facts about the model to be known to every strategy (see
Proposition 6.3 in Section 6.2). To do this we assume without loss of
generality that propositions that are quantified in $\varphi$ do not appear
free in $\varphi$, i.e.,
${\textnormal{AP}_{\exists}}(\varphi)\cap\textnormal{AP}_{f}(\varphi)=\emptyset$.
Finally, given a formula $\varphi$, a CKS $\mathcal{S}$ and a state
$s\in\mathcal{S}$, the truth value of $\varphi$ in $(\mathcal{S},s)$ does not
depend on the labelling of $\mathcal{S}$ for atoms in
${\textnormal{AP}_{\exists}}(\varphi)$, which can thus be forgotten. Thus,
from now on we will assume that an instance $(\mathcal{S},\Phi)$ of the model-
checking problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
is such that
${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$ and
$\mathcal{S}$ is a CKS over $\textnormal{AP}_{f}(\Phi)$.
Merging the decorated input tree and the model. To state the correctness of
our construction, we will need to merge the labels for quantified
propositions, carried by the input tree, with those for free propositions,
carried by CKS $\mathcal{S}$. Because, through successive widenings, the input
tree (represented by $t$ in the definition below) will necessarily be a
complete tree, its domain will always contain the domain of the unfolding of
$\mathcal{S}$ (represented by $t^{\prime}$ below), hence the following
definition.
###### Definition 4.9 (Merge).
Let $t=(\tau,\ell)$ be a complete $(\textnormal{AP},X)$-tree and
$t^{\prime}=(\tau^{\prime},\ell^{\prime})$ an
$(\textnormal{AP}\,^{\prime},X)$-tree with same root as $t$, where
$\textnormal{AP}\cap\textnormal{AP}\,^{\prime}=\emptyset$. We define the
_merge_ of $t$ and $t^{\prime}$ as the
$(\textnormal{AP}\cup\textnormal{AP}\,^{\prime},X)$-tree
$t\merge
t^{\prime}:=(\tau\cap\tau^{\prime}=\tau^{\prime},\ell^{\prime\prime}),$
where $\ell^{\prime\prime}(u)=\ell(u)\cup\ell^{\prime}(u)$.
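Operationally the merge is a pointwise label union over the smaller domain, as in this final sketch (trees are represented, by our own convention as before, as dicts from nodes to label sets).

```python
def merge(t, t_prime):
    """Merge of a complete (AP, X)-tree t with an (AP', X)-tree t' sharing
    its root: the domain is dom(t'), and labels are unioned pointwise."""
    assert set(t_prime) <= set(t), "t is complete, so dom(t') is in dom(t)"
    return {u: t[u] | t_prime[u] for u in t_prime}
```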
We now describe our automata construction. Let $(\mathcal{S},\Phi)$ be an
instance of the model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$,
where $\mathcal{S}=(S,R,\ell_{\mathcal{S}},s_{\iota})$.
###### Lemma 4.10 (Translation).
For every subformula $\varphi$ of $\Phi$ and state $s$ of $\mathcal{S}$, one
can build an ATA $\mathcal{A}_{s}^{\varphi}$ on
$({\textnormal{AP}_{\exists}}(\Phi),S_{\varphi})$-trees such that for every
$({\textnormal{AP}_{\exists}}(\Phi),S_{\varphi})$-tree $t$ rooted in
$s_{\iota}\\!\downarrow_{I_{\varphi}}$, every $u\in t_{\mathcal{S}}$ ending in
$s$, it holds that
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi.$
###### Proof.
Let ${\textnormal{AP}_{\exists}}={\textnormal{AP}_{\exists}}(\Phi)$ and
$\textnormal{AP}_{f}=\textnormal{AP}_{f}(\Phi)$, and recall that $\mathcal{S}$
is labelled over $\textnormal{AP}_{f}$. For each state $s\in S$ and each
subformula $\varphi$ of $\Phi$ (note that all subformulas of $\Phi$ are also
hierarchical), we define by induction on $\varphi$ the ATA
$\mathcal{A}_{s}^{\varphi}$ on
$({\textnormal{AP}_{\exists}},S_{\varphi})$-trees.
$\bm{\varphi=p:}$ First, by Definition 4.8, $S_{\varphi}=S_{[n]}=S$. We let
$\mathcal{A}_{s}^{p}$ be the ATA over $S$-trees with one unique state
$q_{\iota}$, with transition function defined as follows:
$\delta(q_{\iota},a)=\begin{cases}\top&\mbox{if
}\begin{array}[]{c}p\in\textnormal{AP}_{f}\mbox{ and
}p\in\ell_{\mathcal{S}}(s)\\\ \mbox{ or }\\\
p\in{\textnormal{AP}_{\exists}}\mbox{ and }p\in a\end{array}\\\ \perp&\mbox{if
}\begin{array}[]{c}p\in\textnormal{AP}_{f}\mbox{ and
}p\notin\ell_{\mathcal{S}}(s)\\\ \mbox{ or }\\\
p\in{\textnormal{AP}_{\exists}}\mbox{ and }p\notin a\end{array}\end{cases}$
$\bm{\varphi=\neg\varphi^{\prime}:}$ We let
$\mathcal{A}_{s}^{\varphi}:=\overline{\mathcal{A}_{s}^{\varphi^{\prime}}}$.
$\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ Because
$I_{\varphi}=I_{\varphi_{1}}\cap I_{\varphi_{2}}$, and each
$\mathcal{A}_{s}^{\varphi_{i}}$ for $i\in\\{1,2\\}$ works on
$S_{\varphi_{i}}$-trees, we first narrow them so that they work on
$S_{\varphi}$-trees: for $i\in\\{1,2\\}$, we let
$\mathcal{A}_{i}:={\mathcal{A}_{s}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}}=(Q^{i},\delta^{i},q_{\iota}^{i},C^{i})$.
Letting $q_{\iota}$ be a fresh initial state we define
$\mathcal{A}_{s}^{\varphi}:=(\\{q_{\iota}\\}\cup Q^{1}\cup
Q^{2},\delta,q_{\iota},C)$, where $\delta$ and $C$ agree with $\delta^{i}$ and
$C^{i}$, respectively, on states from $Q^{i}$, and
$\delta(q_{\iota},a)=\delta^{1}(q_{\iota}^{1},a)\vee\delta^{2}(q_{\iota}^{2},a)$.
The colour of $q_{\iota}$ does not matter.
$\bm{\varphi={\bf E}\psi:}$ Let
$\max(\psi)=\\{\varphi_{1},\ldots,\varphi_{k}\\}$ be the set of maximal state
subformulas of $\psi$. In a first step we see these maximal state subformulas
as atomic propositions, we see $\psi$ as an LTL formula over $\max(\psi)$, and
we build a nondeterministic parity word automaton
$\mathcal{W}^{\psi}=(Q^{\psi},\Delta^{\psi},q^{\psi}_{\iota},C^{\psi})$ over
alphabet $2^{\max(\psi)}$ that accepts exactly the models of $\psi$ (and uses
two colours) (Vardi and Wolper, 1994). We define the ATA $\mathcal{A}$ that,
given as input a $(\max(\psi),S_{\varphi})$-tree $t$, nondeterministically
guesses a path $\lambda$ in $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$, or
equivalently a path in $\mathcal{S}$ starting from $s$, and simulates
$\mathcal{W}^{\psi}$ on it, assuming that the labels it reads while following
$\lambda\\!\downarrow_{I_{\varphi}}$ in its input $t$ correctly represent the
truth value of formulas in $\max(\psi)$ along $\lambda$. Recall that
$\mathcal{S}=(S,R,s_{\iota},\ell_{\mathcal{S}})$; we define
$\mathcal{A}:=(Q,\delta,q_{{\iota}},C)$, where
* •
$Q=Q^{\psi}\times S$,
* •
$q_{{\iota}}=(q^{\psi}_{{\iota}},s)$,
* •
for each $(q^{\psi},s^{\prime})\in Q$,
$C(q^{\psi},s^{\prime})=C^{\psi}(q^{\psi})$, and
* •
for each $(q^{\psi},s^{\prime})\in Q$ and $a\in 2^{\max(\psi)}$,
$\delta((q^{\psi},s^{\prime}),a)=\bigvee_{q^{\prime}\in\Delta^{\psi}(q^{\psi},a)}\bigvee_{s^{\prime\prime}\in
R(s^{\prime})}[s^{\prime\prime}\\!\downarrow_{I_{\varphi}},\left(q^{\prime},s^{\prime\prime}\right)].$
The intuition is that $\mathcal{A}$ reads the current label in
$2^{\max(\psi)}$, chooses nondeterministically a transition in
$\mathcal{W}^{\psi}$, chooses a next state $s^{\prime\prime}$ in $S$ and
proceeds in the corresponding direction
$s^{\prime\prime}\\!\downarrow_{I_{\varphi}}\in S_{\varphi}$.
Now from $\mathcal{A}$ we build the automaton $\mathcal{A}_{s}^{\varphi}$ over
$S_{\varphi}$-trees labelled with “real” atomic propositions in
${\textnormal{AP}_{\exists}}$. Intuitively, in each node it visits,
$\mathcal{A}_{s}^{\varphi}$ guesses what should be its labelling over
$\max(\psi)$, it simulates $\mathcal{A}$ accordingly, and checks that the
guess it made is correct. If, after having guessed a finite path $u\in
t_{\mathcal{S}}$ ending in state $s^{\prime}$, $\mathcal{A}_{s}^{\varphi}$
guesses that $\varphi_{i}$ holds, it checks this guess by starting a copy of
automaton $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$ from node
$v=u\\!\downarrow_{I_{\varphi}}$ in its input $t$.
Formally, for each $s^{\prime}\in\mathcal{S}$ and each
$\varphi_{i}\in\max(\psi)$ we first build
$\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$, which works on
$S_{\varphi_{i}}$-trees. Observe that
$I_{\varphi}=\cap_{i=1}^{k}I_{\varphi_{i}}$, so that we need to narrow down these automata. (In the conference version of this work (Berthon et al., 2017) we made a mistake here: we wrote that $I_{\varphi}=I_{\varphi_{i}}$, which is not the case in general. As a consequence we do need to narrow down the automata, unlike what was written in the conference version.) We let
$\mathcal{A}^{i}_{s^{\prime}}:=\mathcal{A}_{s^{\prime}}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}=(Q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}},q^{i}_{s^{\prime}},C^{i}_{s^{\prime}})$.
We also let
$\overline{\mathcal{A}^{i}_{s^{\prime}}}=(\overline{Q^{i}_{s^{\prime}}},\overline{\delta^{i}_{s^{\prime}}},\overline{q^{i}_{s^{\prime}}},\overline{C^{i}_{s^{\prime}}})$
be the dualisation of $\mathcal{A}^{i}_{s^{\prime}}$, and we assume without
loss of generality all the state sets are pairwise disjoint. We define the ATA
$\mathcal{A}_{s}^{\varphi}=(Q\cup\bigcup_{i,s^{\prime}}Q^{i}_{s^{\prime}}\cup\overline{Q^{i}_{s^{\prime}}},\delta^{\prime},q_{{\iota}},C^{\prime}),$
where the colours of states are left as they were in their original automaton,
and $\delta^{\prime}$ is defined as follows. For states in
$Q^{i}_{s^{\prime}}$ (resp. $\overline{Q^{i}_{s^{\prime}}}$),
$\delta^{\prime}$ agrees with $\delta^{i}_{s^{\prime}}$ (resp.
$\overline{\delta^{i}_{s^{\prime}}}$), and for $(q^{\psi},s^{\prime})\in Q$
and $a\in 2^{{\textnormal{AP}_{\exists}}}$ we let
$\delta^{\prime}((q^{\psi},s^{\prime}),a)$ be the disjunction over
${a^{\prime}\in 2^{\max(\psi)}}$ of
(2)
$\displaystyle\Bigg{(}\delta\left((q^{\psi},s^{\prime}),a^{\prime}\right)\wedge\bigwedge_{\varphi_{i}\in
a^{\prime}}\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)\;\wedge\bigwedge_{\varphi_{i}\notin
a^{\prime}}\overline{\delta^{i}_{s^{\prime}}}(\overline{q^{i}_{s^{\prime}}},a)\Bigg{)}.$
Note that in general it is not possible to define a $\max(\psi)$-labelling of
$t$ that faithfully represents the truth values of formulas in $\max(\psi)$
for all nodes in $t_{\mathcal{S}}$, because a node in $t$ may correspond to
different nodes in $t_{\mathcal{S}}$ that have same projection on
$S_{\varphi}$ but satisfy different formulas of $\max(\psi)$. However this is
not a problem because different copies of $\mathcal{A}_{s}^{\varphi}$ that
visit the same node can guess different labellings, depending on the actual
state of $\mathcal{S}$ (which is part of the state of
$\mathcal{A}_{s}^{\varphi}$).
$\bm{\varphi=\exists}^{\bm{\textnormal{{o}}}}\bm{p.\,\varphi^{\prime}:}$ We
build automaton $\mathcal{A}_{s}^{\varphi^{\prime}}$ that works on
$S_{\varphi^{\prime}}$-trees; because $\varphi$ is hierarchical, we have that
$\textnormal{{o}}\subseteq I_{\varphi^{\prime}}$ and we can narrow down
$\mathcal{A}_{s}^{\varphi^{\prime}}$ to work on $S_{\textnormal{{o}}}$-trees
and obtain
$\mathcal{A}_{1}:={\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{\textnormal{{o}}}}$.
By Theorem 4.6 we can nondeterminise it to get $\mathcal{A}_{2}$, which by
Theorem 4.5 we can project with respect to $p$, finally obtaining
$\mathcal{A}_{s}^{\varphi}:=\mathcal{A}_{2}\\!\Downarrow_{p}$.
Correctness. We now prove by induction on $\varphi$ that the construction is
correct. In each case, we let $t=(\tau,\ell)$ be a complete
$({\textnormal{AP}_{\exists}},S_{\varphi})$-tree rooted in
$s_{\iota}\\!\downarrow_{I_{\varphi}}$.
$\bm{\varphi=p:}$ First, note that $I_{p}=[n]$, so that $t$ is rooted in
$s_{\iota}\\!\downarrow_{I_{\varphi}}=s_{\iota}$, and
$u\\!\downarrow_{I_{\varphi}}=u$. Also recall that $u$ ends in $s$. Let us
consider first the case where $p\in\textnormal{AP}_{f}$. By definition of
$\mathcal{A}_{s}^{p}$, we have that $(t,u)\in\mathcal{L}(\mathcal{A}_{s}^{p})$
if and only if $p\in\ell_{\mathcal{S}}(s)$. We also have
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models p$ if and only if
$p\in\ell^{\prime}(u)$, where $\ell^{\prime}$ is the labelling of tree
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$. By definition of unfolding and
merge, we have that $\ell^{\prime}(u)=\ell_{\mathcal{S}}(s)$, which concludes
this direction. Now if $p\in{\textnormal{AP}_{\exists}}$: by definition of
$\mathcal{A}_{s}^{p}$, we have $(t,u)\in\mathcal{L}(\mathcal{A}_{s}^{p})$ if
and only if $p\in\ell(u)$; also, by definition of the merge and unfolding, we
have that $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models p$ if and only
if $p\in\ell(u)$, and we are done.
$\bm{\varphi=\neg\varphi^{\prime}:}$ Correctness follows from the induction
hypothesis and Theorem 4.4.
$\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ We have
$\mathcal{A}_{i}=\mathcal{A}_{s}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}$, so
by Theorem 4.7 we have
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{i})$ if and only
if
$(t\\!\uparrow^{I_{\varphi_{i}}},u\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi_{i}})$,
which by induction hypothesis holds if and only if
$(t\\!\uparrow^{I_{\varphi_{i}}})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi_{i}$,
i.e., $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi_{i}$. We
conclude by observing that
$\mathcal{L}(\mathcal{A}_{s}^{\varphi})=\mathcal{L}(\mathcal{A}_{1})\cup\mathcal{L}(\mathcal{A}_{2})$.
$\bm{\varphi={\bf E}\psi:}$ Suppose that
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models{\bf E}\psi$. There exists
an infinite path $\lambda$ in $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$
starting at $u$ such that
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda\models\psi$. Again, let
$\max(\psi)$ be the set of maximal state subformulas of $\psi$, and let $w$
be the infinite word over $2^{\max(\psi)}$ that agrees with $\lambda$ on the
state formulas in $\max(\psi)$, i.e., for each node $\lambda_{k}$ of $\lambda$
and formula $\varphi_{i}\in\max(\psi)$, it holds that $\varphi_{i}\in w_{k}$
if and only if
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$. To
show that
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$ we
show that Eve can win the acceptance game
$\mathcal{G}(\mathcal{A}_{s}^{\varphi},t,u\\!\downarrow_{I_{\varphi}})$. In
this game, Eve can guess the path $\lambda$ while the automaton follows
$\lambda\\!\downarrow_{I_{\varphi}}$ in its input $t$, and she can also guess
the corresponding word $w$ on $2^{\max(\psi)}$. By construction of
$\mathcal{W}^{\psi}$, Eve has a winning strategy $\sigma_{\psi}$ in the
acceptance game of $\mathcal{W}^{\psi}$ on $w$. From $\lambda$, $w$ and
$\sigma_{\psi}$ we can easily define a strategy for Eve in
$\mathcal{G}(\mathcal{A}_{s}^{\varphi},t,u\\!\downarrow_{I_{\varphi}})$ on all
positions that can be reached while Adam does not choose to challenge her on a
guess she made for the truth value of some maximal state subformula, and on
such plays this strategy is winning because $\sigma_{\psi}$ is winning.
Now if Adam challenges her on one of these guesses: Let $\lambda_{k}\in
t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ be a node along $\lambda$, let
$s^{\prime}$ be its last direction and let
$\lambda_{k}^{\prime}=\lambda_{k}\\!\downarrow_{I_{\varphi}}\in t$. Assume
that in node $\lambda^{\prime}_{k}$ of the input tree, in a state
$(q^{\psi},s^{\prime})\in Q$, Adam challenges Eve on some
$\varphi_{i}\in\max(\psi)$ that she assumes to be true in
$\lambda^{\prime}_{k}$, i.e., such that $\varphi_{i}\in w_{k}$. Formally, in
the evaluation game this means that Adam chooses the conjunct
$\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)$ in transition formula 2, where
$a=\ell(\lambda^{\prime}_{k})$, thus moving to position
$(\lambda^{\prime}_{k},(q^{\psi},s^{\prime}),\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$.
We want to show that Eve wins from this position. To do so we first show that
$(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$.
First, since
$\mathcal{A}^{i}_{s^{\prime}}=\mathcal{A}_{s^{\prime}}^{\varphi_{i}}\\!\downarrow_{I_{\varphi}}$,
by Theorem 4.7,
$(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$ if and
only if
$(t\\!\uparrow^{I_{\varphi_{i}}},\lambda_{k}\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s^{\prime}}^{\varphi_{i}})$.
Next, by applying the induction hypothesis we get that
$(t\\!\uparrow^{I_{\varphi_{i}}},\lambda_{k}\\!\downarrow_{I_{\varphi_{i}}})\in\mathcal{L}(\mathcal{A}_{s^{\prime}}^{\varphi_{i}})$
if and only if
$t\\!\uparrow^{I_{\varphi_{i}}}\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$,
i.e.,
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda_{k}\models\varphi_{i}$. The
latter holds because $\varphi_{i}\in w_{k}$, and by assumption $w_{k}$ agrees
with $\lambda_{k}$ on $\varphi_{i}$. Thus
$(t,\lambda^{\prime}_{k})\in\mathcal{L}(\mathcal{A}^{i}_{s^{\prime}})$.
This means that Eve has a winning strategy from the initial position
$(\lambda^{\prime}_{k},q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$
of the acceptance game of $\mathcal{A}^{i}_{s^{\prime}}$ on
$(t,\lambda^{\prime}_{k})$. Since
$(\lambda^{\prime}_{k},q^{i}_{s^{\prime}},\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$
and
$(\lambda^{\prime}_{k},(q^{\psi},s^{\prime}),\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a))$
contain the same node $\lambda^{\prime}_{k}$ and transition formula
$\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)$, the subgames that start in
these positions are isomorphic and a winning strategy in one of these
positions induces a winning strategy in the other, and therefore Eve wins
Adam’s challenge (recall that positional strategies are sufficient in parity
games (Zielonka, 1998)). With a similar argument, we get that also when Adam
challenges Eve on some $\varphi_{i}\in\max(\psi)$ assumed not to be true in
node $\lambda_{k}$, Eve wins the challenge. Finally, Eve wins the acceptance
game of $\mathcal{A}_{s}^{\varphi}$ on $(t,u\\!\downarrow_{I_{\varphi}})$, and
thus
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$.
For the other direction, assume that
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})$,
i.e., Eve wins the evaluation game of $\mathcal{A}_{s}^{\varphi}$ on
$(t,u\\!\downarrow_{I_{\varphi}})$. A winning strategy for Eve describes a
path $\lambda$ in $t_{\mathcal{S}}$ from $s$, which is also a path in
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ from $u$. This winning strategy
also defines an infinite word $w$ over $2^{\max(\psi)}$ such that $w$ agrees
with $\lambda$ on the formulas in $\max(\psi)$, and it also describes a
winning strategy for Eve in the acceptance game of $\mathcal{W}^{\psi}$ on
$w$. Hence $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},\lambda\models\psi$, and
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi$.
$\bm{\varphi=\exists}^{\bm{\textnormal{{o}}}}\bm{p.\,\varphi^{\prime}:}$
First, by definition we have $I_{\varphi}=\textnormal{{o}}\cap
I_{\varphi^{\prime}}$. Because $\varphi$ is hierarchical,
$\textnormal{{o}}\subseteq\textnormal{{o}}^{\prime}$ for every
$\textnormal{{o}}^{\prime}$ that occurs in $\varphi^{\prime}$, and thus
$\textnormal{{o}}\subseteq I_{\varphi^{\prime}}$. It follows that
$I_{\varphi}=\textnormal{{o}}$. Next, by Theorem 4.5 we have that
(3)
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}\exists\,\ell_{p}\mbox{ a $p$-labelling for $t$ such that }(t\otimes\ell_{p},u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{2}).$
By Theorem 4.6, $\mathcal{L}(\mathcal{A}_{2})=\mathcal{L}(\mathcal{A}_{1})$,
and since
$\mathcal{A}_{1}=\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{\textnormal{{o}}}=\mathcal{A}_{s}^{\varphi^{\prime}}\\!\downarrow_{I_{\varphi}}$
we get by Theorem 4.7 that
(4)
$(t\otimes\ell_{p},u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{2})\mbox{\;\;\;iff\;\;\;}((t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}},u\\!\downarrow_{I_{\varphi^{\prime}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi^{\prime}}).$
By induction hypothesis,
(5)
$((t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}},u\\!\downarrow_{I_{\varphi^{\prime}}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi^{\prime}})\mbox{\;\;\;iff\;\;\;}(t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}}\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}.$
Now, by points (3), (4) and (5) and the fact that
$(t\otimes\ell_{p})\\!\uparrow^{I_{\varphi^{\prime}}}\\!\uparrow^{[n]}=(t\otimes\ell_{p})\\!\uparrow^{[n]}$,
we get that
(6)
$(t,u\\!\downarrow_{I_{\varphi}})\in\mathcal{L}(\mathcal{A}_{s}^{\varphi})\mbox{\;\;\;iff\;\;\;}\exists\,\ell_{p}\mbox{
a $p$-labelling for $t$ such that
}(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}.$
We now prove the following equation which, together with point (6), concludes
the proof:
(7) $\begin{array}[]{c}\exists\,\ell_{p}\mbox{ a $p$-labelling for $t$ such
that
}(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}\\\
\mbox{\;\;\;iff\;\;\;}\\\
t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}\end{array}$
Assume that there exists a $p$-labelling $\ell_{p}$ for $t$ such that
$(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}$.
Let $\ell_{p}^{\prime}$ be the $p$-labelling of
$(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$. By definition of
the merge, $\ell_{p}^{\prime}$ is equal to the $p$-labelling of
$(t\otimes\ell_{p})\\!\uparrow^{[n]}$, which by definition of the widening is
$I_{\varphi}$-uniform, i.e., it is o-uniform. In addition, it is clear that
$(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime}$,
which concludes this direction.
For the other direction, assume that
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\exists^{\textnormal{{o}}}p.\,\varphi^{\prime}$:
there exists a o-uniform $p$-labelling $\ell_{p}^{\prime}$ for
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$ such that
$(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime},u\models\varphi^{\prime}$.
We define a $p$-labelling $\ell_{p}$ for $t$ such that
$(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\varphi^{\prime}$.
First, let us write
$t^{\prime}=t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(\tau^{\prime},\ell^{\prime})$.
For each node $u$ of $t$, let
$\ell_{p}(u)=\begin{cases}\ell_{p}^{\prime}(u^{\prime})&\mbox{if there exists
}u^{\prime}\in\tau^{\prime}\mbox{ such that
}u^{\prime}\\!\downarrow_{\textnormal{{o}}}=u,\\\
0&\mbox{otherwise.}\end{cases}$
This is well defined because $\ell_{p}^{\prime}$ is o-uniform in $p$, so that
if two nodes $u^{\prime},v^{\prime}$ project on $u$, we have
$u^{\prime}\approx_{\textnormal{{o}}}v^{\prime}$ and thus
$\ell_{p}^{\prime}(u^{\prime})=\ell_{p}^{\prime}(v^{\prime})$. In case there
is no $u^{\prime}\in\tau^{\prime}$ such that
$u^{\prime}\\!\downarrow_{I_{\varphi}}=u$, the value of $\ell_{p}(u)$ has no
impact on $(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}$.
Finally,
$(t\otimes\ell_{p})\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=(t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}})\otimes\ell_{p}^{\prime}$,
hence the result.
∎
### 4.3. Proof of Theorem 4.3
We now prove Theorem 4.3. Let $\mathcal{S}$ be a CKS with initial state
$s_{\iota}$, and let $\Phi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$. By Lemma 4.10 one can build an ATA
$\mathcal{A}_{s_{\iota}}^{\Phi}$ such that for every labelled
$S_{\Phi}$-tree $t$ rooted in $s_{\iota}\\!\downarrow_{I_{\Phi}}$, and
every node $u\in t_{\mathcal{S}}$,
$(t,u\\!\downarrow_{I_{\Phi}})\in\mathcal{L}(\mathcal{A}_{s_{\iota}}^{\Phi})$
if, and only if, $t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}},u\models\Phi$.
Let $\tau$ be the full $S_{\Phi}$-tree rooted in
$s_{\iota}\\!\downarrow_{I_{\Phi}}$, and let $t=(\tau,\ell_{\emptyset})$,
where $\ell_{\emptyset}$ is the empty labelling. Clearly,
$t\\!\uparrow^{[n]}\merge\;t_{\mathcal{S}}=t_{\mathcal{S}}$, and because $t$
is rooted in $s_{\iota}\\!\downarrow_{I_{\Phi}}$, we get that
$t\in\mathcal{L}(\mathcal{A}_{s_{\iota}}^{\Phi})$ if, and only if,
$t_{\mathcal{S}}\models\Phi$, i.e., $\mathcal{S}\models\Phi$. It remains to
check whether tree $t$, which is regular, is accepted by
$\mathcal{A}_{s_{\iota}}^{\Phi}$. This can be done by solving a parity game
built from the product of $\mathcal{A}_{s_{\iota}}^{\Phi}$ with a finite
Kripke structure representing $t$ (Löding, 2011).
### 4.4. Complexity
To state a precise upper bound on the complexity of our procedure, we first
introduce a syntactic notion of _simulation depth_ for formulas of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. While alternation depth
(see, e.g., (Mogavero et al., 2014)) simply counts the number of alternations
between existential and universal strategy quantifications, simulation depth
reflects automata operations required to treat a formula, and counts the
maximum number of nested simulations of alternating tree automata that need to
be performed when applying our automata construction. However, like
alternation depth, it is a purely syntactic notion. Formally we define a
function $\mbox{sd}:\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}\to\mathbb{N}\times\\{\mbox{nd},\mbox{alt}\\}$ which returns, for each
formula $\varphi$, a pair $\mbox{sd}(\varphi)=(k,x)$ where $k$ is the
simulation depth of $\varphi$, and $x\in\\{\mbox{nd},\mbox{alt}\\}$ indicates
whether the automaton $\mathcal{A}_{s}^{\varphi}$ built from $\varphi$ and a
state $s$ of a CKS $\mathcal{S}$ is nondeterministic (nd) or alternating
(alt). If $\mbox{sd}(\varphi)=(k,x)$ we shall denote $k$ by
$\mbox{sd}_{k}(\varphi)$ and $x$ by $\mbox{sd}_{x}(\varphi)$. The inductive
definition for state formulas is as follows:
$\begin{array}[]{l}\mbox{sd}(p):=(0,\mbox{nd})\\\\[5.0pt]
\mbox{sd}(\neg\varphi):=(\mbox{sd}_{k}(\varphi),\mbox{alt})\\\\[5.0pt]
\mbox{sd}(\varphi_{1}\vee\varphi_{2}):=\left(\max_{i\in\\{1,2\\}}\mbox{sd}_{k}(\varphi_{i}),x\right),\\\
\hfill\mbox{where }x=\begin{cases}\mbox{nd}&\mbox{if
}\mbox{sd}_{x}(\varphi_{1})=\mbox{sd}_{x}(\varphi_{2})=\mbox{nd}\\\
\mbox{alt}&\mbox{otherwise}\end{cases}\\\\[15.0pt] \mbox{sd}({\bf
E}\psi):=\begin{cases}(0,\mbox{nd})&\mbox{if }\psi\in\textnormal{LTL}\\\
(\max_{\varphi\in\max(\psi)}\mbox{sd}_{k}(\varphi),\mbox{alt})&\mbox{otherwise}\end{cases}\\\\[15.0pt]
\mbox{sd}(\exists^{\textnormal{{o}}}p.\,\varphi):=(k,\mbox{nd}),\\\
\hfill\quad\quad\quad\quad\quad\mbox{where
}k=\begin{cases}\mbox{sd}_{k}(\varphi)&\mbox{if
}\mbox{sd}_{x}(\varphi)=\mbox{nd}\mbox{ and
}\textnormal{{o}}=I_{\varphi}\quad\mbox{(recall the definition of $I_{\varphi}$)}\\\ \mbox{sd}_{k}(\varphi)+1&\mbox{otherwise}\end{cases}\end{array}$
We explain each case. For an atomic proposition $p$, the automaton
$\mathcal{A}_{s}^{p}$ is clearly nondeterministic and no simulation is
involved in its construction. For a formula $\neg\varphi$, the automaton
$\mathcal{A}_{s}^{\neg\varphi}$ is obtained by dualising
$\mathcal{A}_{s}^{\varphi}$, an operation that in general does not return a
nondeterministic automaton but an alternating one; also this dualisation does
not involve any simulation, hence the definition of the first component. Now
for the disjunction, the first component should be clear; for the second one,
observe that by construction of
$\mathcal{A}_{s}^{\varphi_{1}\vee\varphi_{2}}$, if both
$\mathcal{A}_{s}^{\varphi_{1}}$ and $\mathcal{A}_{s}^{\varphi_{2}}$ are
nondeterministic, then so is $\mathcal{A}_{s}^{\varphi_{1}\vee\varphi_{2}}$;
otherwise, it is alternating. For the path quantifier, by construction
$\mathcal{A}_{s}^{{\bf E}\psi}$ is alternating in the general case as it
starts copies of automata for each maximal state subformula in $\psi$; for the
first component, we recall that $\max(\psi)$ denotes the set of these maximal
state subformulas and we observe that no additional simulation is performed to
build $\mathcal{A}_{s}^{{\bf E}\psi}$ besides those needed to construct the
automata for the maximal state subformulas. If $\psi$ is an LTL formula, then
one can build the nondeterministic word automaton $\mathcal{W}^{\psi}$
directly working on “real” atomic propositions in
${\textnormal{AP}_{\exists}}\cup\textnormal{AP}_{f}$. The automaton
$\mathcal{A}$ can then be built working directly on
${\textnormal{AP}_{\exists}}$, with $\mathcal{W}^{\psi}$ reading valuations
for ${\textnormal{AP}_{\exists}}$ in the input tree and those for atoms in
$\textnormal{AP}_{f}$ in the current state of $\mathcal{S}$. Because we do not
need to guess valuations of maximal state subformulas and launch additional
automata to check that these guesses are correct, we obtain a nondeterministic
automaton. Finally, for a formula of the form
$\exists^{\textnormal{{o}}}p.\,\varphi$, to build automaton
$\mathcal{A}_{s}^{\exists^{\textnormal{{o}}}p.\,\varphi}$ we first build
$\mathcal{A}_{s}^{\varphi}$, which we then narrow down to work on
$L_{\textnormal{{o}}}$-trees. Since the narrowing operation introduces
alternation, we need to nondeterminise the resulting automaton before
projecting it with respect to $p$. Now observe that if
$I_{\varphi}=\textnormal{{o}}$ we do not need to perform this narrowing, and
thus if $\mathcal{A}_{s}^{\varphi}$ is a nondeterministic automaton we can
directly perform the projection. This justifies the definition of the first
component; for the second one, observe that the projection of a
nondeterministic automaton is also nondeterministic.
###### Example 4.11.
Assume that $n=3$, i.e., states of CKS have three components (recall that
$[3]=\\{1,2,3\\}$). Let us consider formula
$\varphi=\forall^{\\{1,3\\}}p.\,\forall^{[3]}q.\,\exists^{[3]}r.\,{\bf E}{\bf
G}(p\wedge q\vee r)$. We describe how its simulation depth is computed. First,
let us rewrite
$\varphi=\neg\exists^{\\{1,3\\}}p.\,\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf
E}{\bf G}(p\wedge q\vee r)$.
Since ${\bf G}(p\wedge q\vee r)$ is an LTL formula, $\mbox{sd}({\bf E}{\bf
G}(p\wedge q\vee r))=(0,\mbox{nd})$. Next, because $I_{{\bf E}{\bf G}(p\wedge
q\vee r)}=[3]$, it follows that $\mbox{sd}(\exists^{[3]}r.\,{\bf E}{\bf
G}(p\wedge q\vee r))=(0,\mbox{nd})$, and $\mbox{sd}(\neg\exists^{[3]}r.\,{\bf
E}{\bf G}(p\wedge q\vee r))=(0,\mbox{alt})$. Next we have that
$\mbox{sd}(\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee
r))=(1,\mbox{nd})$. This reflects the fact that the automaton obtained for
formula $\neg\exists^{[3]}r.\,{\bf E}{\bf G}(p\wedge q\vee r)$, which is
alternating because of complementation, needs to be simulated before
projecting it over $q$. Then, because $\\{1,3\\}\neq[3]$, it holds that
$\mbox{sd}(\exists^{\\{1,3\\}}p.\,\exists^{[3]}q.\,\neg\exists^{[3]}r.\,{\bf
E}{\bf G}(p\wedge q\vee r))=(2,\mbox{nd})$: to project over $p$ we first need
to narrow down the previous automaton to make it see only components 1 and 3,
and because the narrowing operation introduces alternation, the resulting
automaton needs to be simulated before projecting it. Finally, we get that
$\mbox{sd}(\varphi)=(2,\mbox{alt})$.
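Since sd is a purely syntactic recursion, it is easy to mechanise. The following Python sketch, with a hypothetical tuple-based AST encoding of formulas (the index sets $I_{\varphi}$ are computed alongside), reproduces the computation of Example 4.11; the encoding and function names are illustrative assumptions, not part of the formal development.

```python
from functools import reduce

N = frozenset({1, 2, 3})  # the full index set [n], here n = 3

def idx(phi):
    """Index set I_phi (hypothetical AST: ("ap", p), ("not", f), ("or", f, g),
    ("E", [maximal state subformulas], is_ltl), ("exists", obs, p, f))."""
    tag = phi[0]
    if tag == "ap":     return N
    if tag == "not":    return idx(phi[1])
    if tag == "or":     return idx(phi[1]) & idx(phi[2])
    if tag == "E":      return reduce(frozenset.intersection, map(idx, phi[1]), N)
    if tag == "exists": return frozenset(phi[1]) & idx(phi[3])

def sd(phi):
    """Simulation depth: a pair (k, "nd" | "alt"), following the definition above."""
    tag = phi[0]
    if tag == "ap":  return (0, "nd")
    if tag == "not": return (sd(phi[1])[0], "alt")
    if tag == "or":
        (k1, x1), (k2, x2) = sd(phi[1]), sd(phi[2])
        return (max(k1, k2), "nd" if x1 == x2 == "nd" else "alt")
    if tag == "E":
        if phi[2]:  # psi is an LTL formula
            return (0, "nd")
        return (max(sd(f)[0] for f in phi[1]), "alt")
    if tag == "exists":
        k, x = sd(phi[3])
        nd_ok = (x == "nd" and frozenset(phi[1]) == idx(phi[3]))
        return (k, "nd") if nd_ok else (k + 1, "nd")

# Example 4.11: not exists^{1,3} p. exists^[3] q. not exists^[3] r. E G(p&q|r)
eg = ("E", [("ap", "p"), ("ap", "q"), ("ap", "r")], True)  # G(p&q|r) is LTL
phi = ("not", ("exists", {1, 3}, "p",
       ("exists", {1, 2, 3}, "q",
        ("not", ("exists", {1, 2, 3}, "r", eg)))))
assert sd(phi) == (2, "alt")
```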
We now introduce two additional depth measures on
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas, which help
us establish more precise upper bounds on the sizes of the automata we build.
For every $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula
$\varphi$, we let ${\bf E}\mathrm{d}(\varphi)$ be the maximum number of nested
path quantifiers ${\bf E}$ in $\varphi$, and $\exists\mathrm{d}(\varphi)$ is
the maximum number of nested second-order quantifiers $\exists$ in $\varphi$.
We also inductively define the function $\mathrm{exp}\big{(}k\mid n\big{)}$,
for $k,n\in\mathbb{N}$, as follows: $\mathrm{exp}\big{(}0\mid n\big{)}:=n$ and
$\mathrm{exp}\big{(}k+1\mid n\big{)}:=2^{\mathrm{exp}\big{(}k\mid n\big{)}}$.
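For concreteness, $\mathrm{exp}\big{(}k\mid n\big{)}$ is simply a tower of $k$ exponentials topped by $n$; a direct recursive Python sketch:

```python
def exp_tower(k: int, n: int) -> int:
    """exp(k | n): exp(0 | n) = n and exp(k+1 | n) = 2 ** exp(k | n)."""
    return n if k == 0 else 2 ** exp_tower(k - 1, n)

assert (exp_tower(0, 5), exp_tower(1, 5), exp_tower(2, 5)) == (5, 32, 2 ** 32)
```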
###### Proposition 4.12.
Let $\Phi$ be a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ formula, $\mathcal{S}$ a CKS and $s\in\mathcal{S}$ a
state.
* •
If $\mbox{sd}_{k}(\Phi)=0$, $\mathcal{A}_{s}^{\Phi}$ has at most
$f_{\mathcal{S}}^{\Phi}$ states and 2 colours, and
* •
if $\mbox{sd}_{k}(\Phi)\geq 1$, $\mathcal{A}_{s}^{\Phi}$ has at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\Phi)\mid f_{\mathcal{S}}^{\Phi}\log
f_{\mathcal{S}}^{\Phi}\big{)}$ states and its number of colours is at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\Phi)-1\mid f_{\mathcal{S}}^{\Phi}\log
f_{\mathcal{S}}^{\Phi}\big{)}$,
where
$f_{\mathcal{S}}^{\Phi}=m_{1}^{\exists\mathrm{d}(\Phi)}|\Phi||\mathcal{S}|^{{\bf
E}\mathrm{d}(\Phi)}2^{m_{2}|\Phi|{\bf E}\mathrm{d}(\Phi)}$, with
$m_{1},m_{2}\in\mathbb{N}$ constants.
Also, if $\mathcal{A}_{s}^{\Phi}$ has state set $Q$ then for each $q\in Q$
and $a\in 2^{{\textnormal{AP}_{\exists}}(\Phi)}$ we have
$|\delta(q,a)|\leq|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}$, where
$H=1+{\bf E}\mathrm{d}(\Phi)$.
Constants $m_{1}$ and $m_{2}$ are derived from constants in the complexity of,
respectively, the simulation procedure, and the procedure that builds a
nondeterministic word automaton for an LTL formula. For more detail, see the
proof of Proposition 4.12 in Appendix A.
From this we get the following complexity result.
###### Proposition 4.13.
The model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$
formulas of simulation depth at most $k$ is $(k+1)$-Exptime-complete.
###### Proof.
We start with the upper bounds. For an instance $(\Phi,\mathcal{S})$, our
decision procedure in Section 4.3 first builds automaton
$\mathcal{A}_{s_{\iota}}^{\Phi}$, and concludes by testing whether the full
$S_{\Phi}$-tree with empty labelling $t$ is accepted by
$\mathcal{A}_{s_{\iota}}^{\Phi}$. This can be done in time
$O((|\mathcal{A}_{s_{\iota}}^{\Phi}|\cdot|t|)^{l})$, where $|t|$ is the size
of a smallest Kripke structure representing the regular tree $t$,
$|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is the sum of the number of states and
sizes of formulas in the transition function of
$\mathcal{A}_{s_{\iota}}^{\Phi}$, and $l$ the number of colours it uses
(Löding, 2011). Clearly $t$ can be represented by a Kripke structure of size
$|S_{\Phi}|$, so that $|t|\leq|S_{\Phi}|\leq|\mathcal{S}|$.
By Proposition 4.12, each formula in the transition function of
$\mathcal{A}_{s_{\iota}}^{\Phi}$ is of size at most
$|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}$, where $Q$ is the set of states
in $\mathcal{A}_{s_{\iota}}^{\Phi}$ and $H=1+{\bf E}\mathrm{d}(\Phi)$. There
are at most $|Q|2^{|{\textnormal{AP}_{\exists}}(\Phi)|}$ such formulas (in fact the final automaton $\mathcal{A}_{s_{\iota}}^{\Phi}$ does not read anything in its input, hence the alphabet could be considered to be a singleton, leaving at most $|Q|$ different formulas in the transition function) and $|{\textnormal{AP}_{\exists}}(\Phi)|\leq|\Phi|$, so
that
$|\mathcal{A}_{s_{\iota}}^{\Phi}|\leq|Q|+|Q|2^{|{\textnormal{AP}_{\exists}}(\Phi)|}|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\Phi|}\leq
2|\mathcal{S}||Q|^{|\mathcal{S}|+1}2^{(H+1)|\Phi|}$. Also $H+1\leq|\Phi|$, so
we finally have $|\mathcal{A}_{s_{\iota}}^{\Phi}|\leq
2|\mathcal{S}||Q|^{|\mathcal{S}|+1}2^{|\Phi|^{2}}$.
If $k=0$, by Proposition 4.12 $\mathcal{A}_{s_{\iota}}^{\Phi}$ has at most
$f_{\mathcal{S}}^{\Phi}$ states and 2 colours, and $f_{\mathcal{S}}^{\Phi}$ is
polynomial in $|\mathcal{S}|$ but exponential in $|\Phi|$. Therefore
$|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is exponential in $|\Phi|$ and in
$|\mathcal{S}|$, and so is the complexity of checking that $t$ is accepted by
$\mathcal{A}_{s_{\iota}}^{\Phi}$.
If $k\geq 1$, by Proposition 4.12, $|Q|$ is $k$-exponential in
$f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}$, and
$f_{\mathcal{S}}^{\Phi}\log f_{\mathcal{S}}^{\Phi}$ itself is polynomial in
$|\mathcal{S}|$ but exponential in $|\Phi|$. As a result,
$|\mathcal{A}_{s_{\iota}}^{\Phi}|$ is $(k+1)$-exponential in $|\Phi|$ and
$k$-exponential in $|\mathcal{S}|$. Finally, still by Proposition 4.12, the
number of colours $l$ is $(k-1)$-exponential in $f_{\mathcal{S}}^{\Phi}\log
f_{\mathcal{S}}^{\Phi}$, hence $k$-exponential in $|\Phi|$. Checking that $t$
is accepted by $\mathcal{A}_{s_{\iota}}^{\Phi}$ can thus be done in time
$(k+1)$-exponential in $|\Phi|$ and $k$-exponential in $|\mathcal{S}|$, which establishes the upper bounds.
For the lower bounds, consider the fragment
$\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ of $\textnormal{{QCTL}}^{*}$
(with perfect information), which consists of formulas in prenex normal form,
i.e., with all second-order quantifications at the beginning, with at most $k$
alternations between existential and universal quantifiers, counting the first
quantifier as one alternation (see (Laroussinie and Markey, 2014, p.8) for a
formal definition). Clearly, $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ is
a fragment of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ (with
$n=1$), and formulas of $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ have
simulation depth at most $k$. It is proved in (Laroussinie and Markey, 2014)
that model checking $\textnormal{{EQ}}^{k}\textnormal{{CTL}}^{*}$ is
$(k+1)$-Exptime-hard. ∎
###### Remark 3.
One may wonder why we do not get our lower bounds from the distributed
synthesis problem in systems with hierarchical information. The reason is that
this problem is $k$-Exptime-complete for LTL or $\textnormal{{CTL}}^{*}$ specifications (Pnueli and Rosner, 1990; Kupferman and Vardi, 2001) and can be expressed with formulas of simulation depth $k$, and thus would only provide $k$-Exptime lower bounds for simulation depth $k$, while our problem is $(k+1)$-Exptime-complete. This may seem surprising, but we point out that
thanks to alternation of existential and universal quantifiers,
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formulas with
simulation depth $k$ can express more complex problems than classic
distributed synthesis, such as existence of Nash equilibria (see Section 7.1).
Improved upper bound. We now refine the previous result by observing that some
subformulas can be model-checked independently in a bottom-up labelling
algorithm which uses the above model-checking procedure as a subroutine. The height of the exponential of the overall procedure for a formula $\Phi$ is thus determined by the maximal simulation depth of the successive independent subformulas $\varphi$ treated by the labelling algorithm, instead of the
simulation depth of the full formula $\Phi$. To make this precise we define
the _simulation number_ of a sentence, akin to the alternation number
introduced in (Mogavero et al., 2014).
Let $\Phi\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, and assume
without loss of generality that
${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$. A
state subformula $\varphi$ of $\Phi$ is a _subsentence_ if no atom quantified
in $\Phi$ appears free in $\varphi$, i.e., $\varphi$ is a subsentence of
$\Phi$ if
${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\varphi)=\emptyset$. (Observe that since we always assume that ${\textnormal{AP}_{\exists}}(\Phi)\cap\textnormal{AP}_{f}(\Phi)=\emptyset$, $\Phi$ is a subsentence of itself.) The _simulation number_ $\mbox{sn}(\Phi)$
of a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula $\Phi$ is
the maximal simulation depth of $\Phi$’s subsentences, where the simulation
depth is computed by considering strict subsentences as atoms.
Note that because temporal operators of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ can only talk about the future, the truth value of a subsentence in a node $u$ of an unfolding $t_{\mathcal{S}}$ only depends on the current state $\mbox{last}(u)$. The bottom-up labelling algorithm for an instance $(\Phi,\mathcal{S})$ thus consists in iteratively model checking innermost subsentences of $\Phi$ in all states of $\mathcal{S}$, marking the states where they hold with fresh atomic propositions with which the corresponding subsentences are replaced in $\Phi$.
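A sketch of this labelling loop in Python, where the three subroutines (extracting innermost subsentences, deciding a sentence at a state via the procedure of Proposition 4.13, and substitution) are passed in as assumed callables; this illustrates the control flow only, not the automata construction itself.

```python
def bottom_up_model_check(S, Phi, innermost_subsentences, holds_at, substitute):
    """Iteratively evaluate innermost subsentences of Phi at every state of the
    CKS S, label the states where they hold with fresh atoms, and replace the
    subsentences by those atoms, until Phi itself can be decided directly."""
    fresh = 0
    while True:
        subs = innermost_subsentences(Phi)      # strict innermost subsentences
        if not subs:                            # no strict subsentence left:
            return holds_at(S, S.initial, Phi)  # one last direct check
        for phi in subs:
            p = f"_sub{fresh}"                  # fresh atomic proposition
            fresh += 1
            for s in S.states:                  # truth of phi at a node u only
                if holds_at(S, s, phi):         # depends on last(u) = s
                    S.labels[s].add(p)
            Phi = substitute(Phi, phi, p)       # phi becomes an atom in Phi
```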
###### Proposition 4.14.
The model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$
formulas of simulation number at most $k$ is $(k+1)$-Exptime-complete.
## 5\. Model-checking hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
In this section we establish that the model-checking problem for
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ restricted to the class of
hierarchical instances is decidable (Theorem 2.9).
### 5.1. Reduction to $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
We build upon the proof in (Laroussinie and Markey, 2015) that establishes the
decidability of the model-checking problem for
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ by reduction to the
model-checking problem for $\textnormal{{QCTL}}^{*}$. The main difference is
that we reduce to the model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ instead, using
quantifiers on atomic propositions parameterised with observations that
reflect the ones used in the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
model-checking instance.
Let $(\mathcal{G},\Phi)$ be a hierarchical instance of the
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ model-checking problem, and
assume without loss of generality that each strategy variable is quantified at
most once in $\Phi$. We define an equivalent instance of the model-checking
problem for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$.
Constructing the CKS $\mathcal{S}_{\mathcal{G}}$. We define
$\mathcal{S}_{\mathcal{G}}$ so that (indistinguishable) nodes in its tree-
unfolding correspond to (indistinguishable) finite plays in $\mathcal{G}$. The
CKS will make use of atomic propositions $\textnormal{AP}_{v}:=\\{p_{v}\mid
v\in V\\}$ (that we assume to be disjoint from AP). The idea is that $p_{v}$
allows the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula
$(\Phi)_{s}^{\,\emptyset}$ to refer to the current position $v$ in
$\mathcal{G}$. Later we will see that $(\Phi)_{s}^{\,\emptyset}$ will also
make use of atomic propositions $\textnormal{AP}_{c}:=\\{p_{c}^{x}\mid
c\in\textnormal{Ac}\mbox{ and }x\in\textnormal{Var}\\}$ that we assume, again,
are disjoint from $\textnormal{AP}\cup\textnormal{AP}_{v}$. This allows the
formula to use $p_{c}^{x}$ to refer to the actions $c$ advised by strategies
$x$.
Suppose $\textnormal{Obs}=\\{o_{1},\ldots,o_{n}\\}$, and let
$\mathcal{G}=(\textnormal{Ac},V,E,\ell,v_{\iota},\mathcal{O})$. For $i\in[n]$,
define the local states $L_{i}:=\\{[v]_{o_{i}}\mid v\in V\\}$ where $[v]_{o}$
is the equivalence class of $v$ for relation $\sim_{o}$. Since we need to know
the actual position of the $\textrm{CGS}_{\textnormal{ii}}$ to define the
dynamics, we also let $L_{n+1}:=V$.
Define the CKS $\mathcal{S}_{\mathcal{G}}:=(S,R,s_{{\iota}},\ell^{\prime})$
where
* •
$S:=\\{s_{v}\mid v\in V\\}$,
* •
$R:=\\{(s_{v},s_{v^{\prime}})\mid\exists\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}\mbox{
s.t. }E(v,\bm{c})=v^{\prime}\\}\subseteq S^{2}$,
* •
$s_{{\iota}}:=s_{v_{{\iota}}}$,
* •
$\ell^{\prime}(s_{v}):=\ell(v)\cup\\{p_{v}\\}\subseteq\textnormal{AP}\cup\textnormal{AP}_{v}$,
and $s_{v}:=([v]_{o_{1}},\ldots,[v]_{o_{n}},v)\in\prod_{i\in[n+1]}L_{i}$.
For every finite play $\rho=v_{0}\ldots v_{k}$, define the node
$u_{\rho}:=s_{v_{0}}\ldots s_{v_{k}}$ in $t_{\mathcal{S}_{\mathcal{G}}}$
(which exists, by definition of $\mathcal{S}_{\mathcal{G}}$ and of tree
unfoldings). Note that the mapping $\rho\mapsto u_{\rho}$ defines a bijection
between the set of finite plays and the set of nodes in
$t_{\mathcal{S}_{\mathcal{G}}}$.
Constructing the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ formulas $(\varphi)_{s}^{\,f}$. We now describe how to
transform an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula
$\varphi$ and a partial function
$f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$ into a
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula
$(\varphi)_{s}^{\,f}$ (that will also depend on $\mathcal{G}$). Suppose that
$\textnormal{Ac}=\\{c_{1},\ldots,c_{l}\\}$, and define $(\varphi)_{s}^{\,f}$
and $(\psi)_{p}^{\,f}$ by mutual induction on state and path formulas. The
base cases are as follows: $(p)_{s}^{\,f}:=p$ and
$(\varphi)_{p}^{\,f}:=(\varphi)_{s}^{\,f}$. Boolean and temporal operators are
simply obtained by distributing the translation:
$(\neg\varphi)_{s}^{\,f}:=\neg(\varphi)_{s}^{\,f}$,
$(\neg\psi)_{p}^{\,f}:=\neg(\psi)_{p}^{\,f}$,
$(\varphi_{1}\vee\varphi_{2})_{s}^{\,f}:=(\varphi_{1})_{s}^{\,f}\vee(\varphi_{2})_{s}^{\,f}$,
$(\psi_{1}\vee\psi_{2})_{p}^{\,f}:=(\psi_{1})_{p}^{\,f}\vee(\psi_{2})_{p}^{\,f}$,
$({\bf X}\psi)_{p}^{\,f}:={\bf X}(\psi)_{p}^{\,f}$ and $(\psi_{1}{\bf
U}\psi_{2})_{p}^{\,f}:=(\psi_{1})_{p}^{\,f}{\bf U}(\psi_{2})_{p}^{\,f}$.
We continue with the case of the strategy quantifier:
$\begin{array}[]{lrl}&(\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi)_{s}^{\,f}&:=\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi)_{s}^{\,f}\\\\[5.0pt]
\mbox{where}&\varphi_{\text{str}}(x)&:={\bf A}{\bf
G}\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}\\\\[5.0pt]
\mbox{and}&\widetilde{o_{i}}&:=\\{j\mid\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})\\}.\end{array}$
The intuition is that for each possible action $c\in\textnormal{Ac}$, an
existential quantification on the atomic proposition $p_{c}^{x}$ “chooses” for
each node $u_{\rho}$ of the tree $t_{\mathcal{S}_{\mathcal{G}}}$ whether
strategy $x$ allows action $c$ in $\rho$ or not, and it does so uniformly with
regards to observation $\widetilde{o}$. $\varphi_{\text{str}}(x)$ checks that
at least one action is allowed in each node, and thus that atomic propositions
$p_{c}^{x}$ indeed define a strategy.
We define $\widetilde{o_{i}}$ as
$\\{j\mid\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})\\}$ instead of
$\\{i\\}$ in order to obtain a hierarchical instance. Note that including all
coarser observations does not increase the information accessible to the
quantifier: indeed, two nodes are $\\{i\\}$-indistinguishable if and only if
they are $\widetilde{o_{i}}$-indistinguishable.
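Before turning to the remaining cases, note that computing the sets $\widetilde{o_{i}}$ only requires comparing indistinguishability relations for inclusion. A small Python sketch (relations encoded as sets of pairs of positions, and 0-based indices instead of the paper's 1-based ones — a hypothetical encoding):

```python
def o_tilde(i, O, obs):
    """o_tilde_i = { j | O(o_i) is a subset of O(o_j) }: all observations at
    least as coarse as o_i, which keeps the resulting instance hierarchical."""
    return {j for j in range(len(obs)) if O[obs[i]] <= O[obs[j]]}

# Two observations over positions {u, v}: o1 distinguishes them, o2 does not.
O = {"o1": frozenset({("u", "u"), ("v", "v")}),
     "o2": frozenset({("u", "u"), ("v", "v"), ("u", "v"), ("v", "u")})}
obs = ["o1", "o2"]
assert o_tilde(0, O, obs) == {0, 1}   # o1 and o2 are both coarser-or-equal
assert o_tilde(1, O, obs) == {1}      # only o2 is coarser-or-equal than o2
```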
Here are the remaining cases:
$\begin{array}[]{lrl}&((a,x)\varphi)_{s}^{\,f}&:=(\varphi)_{s}^{\,f[a\mapsto
x]}\quad\quad\text{for
}x\in\textnormal{Var}\cup\\{\operatorname{?}\\}\\\\[5.0pt] \mbox{and}&({\bf
E}\psi)_{s}^{\,f}&:={\bf
E}\,(\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f})\\\\[5.0pt]
\mbox{where}&\psi_{\text{out}}^{\,f}&:={\bf G}\bigvee_{v\in
V}\left(p_{v}\wedge\bigvee_{\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}}\bigwedge_{a\in\textit{dom}(f)}p_{\bm{c}_{a}}^{f(a)}\wedge{\bf
X}\,p_{E(v,\bm{c})}\right).\end{array}$
$\psi_{\text{out}}^{\,f}$ checks that each player $a$ in the domain of $f$
follows the strategy coded by the $p_{c}^{f(a)}$.
###### Remark 4.
If we consider the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ that only allows for deterministic strategies, the translation can be
adapted by simply replacing formula $\varphi_{\text{str}}(x)$ above with its
deterministic variant
$\varphi_{\text{str}}^{\text{det}}(x):={\bf A}{\bf
G}\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq
c}\neg p_{c^{\prime}}^{x}),$
which ensures that _exactly one_ action is chosen for strategy $x$ in each
finite play, and thus that atomic propositions $p_{c}^{x}$ characterise a
deterministic strategy.
To prove correctness of the translation, given a strategy $\sigma$ and a
strategy variable $x$ we let $\ell_{\sigma}^{x}:=\\{\ell_{p_{c}^{x}}\mid
c\in\textnormal{Ac}\\}$ be the family of $p_{c}^{x}$-labellings for tree
$t_{\mathcal{S}_{\mathcal{G}}}$ defined as follows: for each finite play
$\rho$ in $\mathcal{G}$ and $c\in\textnormal{Ac}$, we let
$\ell_{p_{c}^{x}}(u_{\rho}):=1$ if $c\in\sigma(\rho)$, 0 otherwise. For a
labelled tree $t$ with same domain as $t_{\mathcal{S}_{\mathcal{G}}}$ we write
$t\otimes\ell_{\sigma}^{x}$ for
$t\otimes\ell_{p_{c_{1}}^{x}}\otimes\ldots\otimes\ell_{p_{c_{l}}^{x}}$.
Given an infinite play $\pi$ and a point $i\in\mathbb{N}$, we also let
$\lambda_{\pi,i}$ be the infinite path in $t_{\mathcal{S}_{\mathcal{G}}}$ that
starts in node $u_{\pi_{\leq i}}$ and is defined as
$\lambda_{\pi,i}:=u_{\pi_{\leq i}}u_{\pi_{\leq i+1}}u_{\pi_{\leq i+2}}\ldots$
Finally, for an assignment $\chi$ and a partial function
$f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$, we say that $f$ is
_compatible_ with $\chi$ if
$\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$ and for all
$a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$.
###### Proposition 5.1.
For every state subformula $\varphi$ and path subformula $\psi$ of $\Phi$,
finite play $\rho$, infinite play $\pi$, point $i\in\mathbb{N}$, for every
assignment $\chi$ variable-complete for $\varphi$ (resp. $\psi$) and partial
function $f:\textnormal{Ag}\rightharpoonup\textnormal{Var}$ compatible with
$\chi$, assuming also that no $x_{i}$ in
$\textit{dom}(\chi)\cap\textnormal{Var}=\\{x_{1},\ldots,x_{k}\\}$ is
quantified in $\varphi$ or $\psi$, we have
$\displaystyle\mathcal{G},\chi,{\rho}\models\varphi$ if and only if
$\displaystyle
t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\varphi)_{s}^{\,f}$
$\displaystyle\mathcal{G},\chi,{\pi},i\models\psi$ if and only if
$\displaystyle
t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,i}\models(\psi)_{p}^{\,f}$
In addition, $\mathcal{S}_{\mathcal{G}}$ is of size linear in $|\mathcal{G}|$,
and $(\varphi)_{s}^{\,f}$ and $(\psi)_{p}^{\,f}$ are of size linear in
$|\mathcal{G}|^{2}+|\varphi|$.
###### Proof.
The proof is by induction on $\varphi$. We detail the cases for binding,
strategy quantification and outcome quantification, the others follow simply
by definition of $\mathcal{S}_{\mathcal{G}}$ for atomic propositions and
induction hypothesis for remaining cases.
For $\varphi=(a,x)\varphi^{\prime}$, we have
$\mathcal{G},\chi,{\rho}\models(a,x)\varphi^{\prime}$ if and only if
$\mathcal{G},\chi[a\mapsto\chi(x)],{\rho}\models\varphi^{\prime}$. The result
follows by using the induction hypothesis with assignment $\chi[a\mapsto\chi(x)]$ and function $f[a\mapsto x]$. This is possible because $f[a\mapsto x]$ is compatible with $\chi[a\mapsto\chi(x)]$: indeed $\textit{dom}(\chi[a\mapsto\chi(x)])\cap\textnormal{Ag}$ is equal to
$\textit{dom}(\chi)\cap\textnormal{Ag}\cup\\{a\\}$ which, by assumption, is
equal to $\textit{dom}(f)\cup\\{a\\}=\textit{dom}(f[a\mapsto x])$. Also by
assumption, for all $a^{\prime}\in\textit{dom}(f)$,
$\chi(a^{\prime})=\chi(f(a^{\prime}))$, and by definition
$\chi[a\mapsto\chi(x)](a)=\chi(x)=\chi(f[a\mapsto x](a))$.
For $\varphi=\langle\\!\langle x\rangle\\!\rangle^{o}\varphi^{\prime}$, assume
first that $\mathcal{G},\chi,{\rho}\models\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi^{\prime}$. There exists an $o$-uniform strategy
$\sigma$ such that
$\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi^{\prime}.$
Since $f$ is compatible with $\chi$, it is also compatible with assignment
$\chi^{\prime}=\chi[x\mapsto\sigma]$. By assumption, no variable in
$\\{x_{1},\ldots,x_{k}\\}$ is quantified in $\varphi$, so that $x\neq x_{i}$
for all $i$, and thus $\chi^{\prime}(x_{i})=\chi(x_{i})$ for all $i$; and
because no strategy variable is quantified twice in a same formula, $x$ is not
quantified in $\varphi^{\prime}$, so that no variable in
$\\{x_{1},\ldots,x_{k},x\\}$ is quantified in $\varphi^{\prime}$. By induction
hypothesis
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}}\otimes\ell_{\chi^{\prime}(x)}^{x},u_{\rho}\models(\varphi^{\prime})_{s}^{\,f}.$
Because $\sigma$ is $o$-uniform, each
$\ell_{p_{c}^{x}}\in\ell_{\sigma}^{x}=\ell_{\chi^{\prime}(x)}^{x}$ is
$\widetilde{o}$-uniform, and it follows that
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}},u_{\rho}\models\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$
Finally, since $\chi^{\prime}(x_{i})=\chi(x_{i})$ for all $i$, we conclude
that
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi^{\prime})_{s}^{\,f}.$
For the other direction, assume that
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models(\varphi)_{s}^{\,f},$
and recall that
$(\varphi)_{s}^{\,f}=\exists^{\widetilde{o}}p_{c_{1}}^{x}\ldots\exists^{\widetilde{o}}p_{c_{l}}^{x}.\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}$.
Write
$t=t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}}$.
There exist $\widetilde{o}$-uniform $\ell_{p_{c}^{x}}$-labellings such that
$t\otimes\ell_{p_{c_{1}}^{x}}\otimes\ldots\otimes\ell_{p_{c_{l}}^{x}},u_{\rho}\models\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$
By $\varphi_{\text{str}}(x)$, these labellings code for a strategy $\sigma$,
and because they are $\widetilde{o}$-uniform, $\sigma$ is $o$-uniform. Let
$\chi^{\prime}=\chi[x\mapsto\sigma]$. For all $1\leq i\leq k$, by assumption
$x\neq x_{i}$, and thus $\chi^{\prime}(x_{i})=\chi(x_{i})$. The above can thus
be rewritten
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi^{\prime}(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi^{\prime}(x_{k})}^{x_{k}}\otimes\ell_{\chi^{\prime}(x)}^{x},u_{\rho}\models\varphi_{\text{str}}(x)\wedge(\varphi^{\prime})_{s}^{\,f}.$
By induction hypothesis we have
$\mathcal{G},\chi[x\mapsto\sigma],\rho\models\varphi^{\prime}$, hence
$\mathcal{G},\chi,\rho\models\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi^{\prime}$.
For $\varphi={\bf E}\psi$, assume first that
$\mathcal{G},\chi,{\rho}\models{\bf E}\psi$. There exists a play
$\pi\in\textnormal{Out}(\chi,\rho)$ such that
$\mathcal{G},\chi,\pi,|\rho|-1\models\psi$. By induction hypothesis,
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,|\rho|-1}\models(\psi)_{p}^{\,f}$.
Since $\pi$ is an outcome of $\chi$, each agent
$a\in\textit{dom}(\chi)\cap\textnormal{Ag}$ follows strategy $\chi(a)$ in
$\pi$. Because $\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$ and for
all $a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$, each agent
$a\in\textit{dom}(f)$ follows the strategy $\chi(f(a))$, which is coded by
atoms $p_{c}^{f(a)}$ in the translation of $\Phi$. Therefore
$\lambda_{\pi,|\rho|-1}$ also satisfies $\psi_{\text{out}}^{\,f}$, hence
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},\lambda_{\pi,|\rho|-1}\models\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f}$,
and we are done.
For the other direction, assume that
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}},u_{\rho}\models{\bf
E}(\psi_{\text{out}}^{\,f}\wedge(\psi)_{p}^{\,f})$. There exists a path
$\lambda$ in
$t_{\mathcal{S}_{\mathcal{G}}}\otimes\ell_{\chi(x_{1})}^{x_{1}}\otimes\ldots\otimes\ell_{\chi(x_{k})}^{x_{k}}$
starting in node $u_{\rho}$ that satisfies both $\psi_{\text{out}}^{\,f}$ and
$(\psi)_{p}^{\,f}$. By construction of $\mathcal{S}_{\mathcal{G}}$ there
exists an infinite play $\pi$ such that $\pi_{\leq|\rho|-1}=\rho$ and
$\lambda=\lambda_{\pi,|\rho|-1}$. By induction hypothesis,
$\mathcal{G},\chi,\pi,|\rho|-1\models\psi$. Because $\lambda_{\pi,|\rho|-1}$
satisfies $\psi_{\text{out}}^{\,f}$,
$\textit{dom}(\chi)\cap\textnormal{Ag}=\textit{dom}(f)$, and for all
$a\in\textit{dom}(f)$, $\chi(a)=\chi(f(a))$, it is also the case that
$\pi\in\textnormal{Out}(\chi,\rho)$, hence $\mathcal{G},\chi,\rho\models{\bf
E}\psi$.
The sizes of $\mathcal{S}_{\mathcal{G}}$, $(\varphi)_{s}^{\,f}$ and
$(\psi)_{p}^{\,f}$ are easily verified. ∎
Applying Proposition 5.1 to the sentence $\Phi$, $\rho=v_{\iota}$, any
assignment $\chi$, and the empty function $\emptyset$, we get:
$\mathcal{G}\models\Phi\quad\mbox{if and only if}\quad
t_{\mathcal{S}_{\mathcal{G}}}\models(\Phi)_{s}^{\,\emptyset}.$
Preserving hierarchy. To complete the proof of Theorem 2.9 it remains to check
that $(\Phi)_{s}^{\,\emptyset}$ is a hierarchical
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ formula, which is the
case because $\Phi$ is hierarchical in $\mathcal{G}$ and for every two
observations $o_{i}$ and $o_{j}$ in Obs such that
$\mathcal{O}(o_{i})\subseteq\mathcal{O}(o_{j})$, by definition of
$\widetilde{o_{k}}$ we have that
$\widetilde{o_{i}}\subseteq\widetilde{o_{j}}$.
### 5.2. Complexity
We now establish the complexity of model checking hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. As we did for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, we first define the
simulation depth of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ state
formulas. In the following inductive definition, $\mathcal{O}_{\varphi}$
denotes the intersection of all indistinguishability relations used in
$\varphi$: $\mathcal{O}_{\varphi}:=\cap_{o\in\varphi}\mathcal{O}(o)$, with the
empty intersection being defined as the identity relation (perfect
information). Also, for a path formula $\psi$, $\max(\psi)$ is the set of
maximal state subformulas in $\psi$.
$\begin{array}[]{l}\mbox{sd}(p):=(0,\mbox{nd})\\\\[7.0pt]
\mbox{sd}(\neg\varphi):=(\mbox{sd}_{k}(\varphi),\mbox{alt})\\\\[7.0pt]
\mbox{sd}(\varphi_{1}\vee\varphi_{2}):=\left(\max_{i\in\\{1,2\\}}\mbox{sd}_{k}(\varphi_{i}),x\right),\\\
\hfill\mbox{where }x=\begin{cases}\mbox{nd}&\mbox{if }\mbox{sd}_{x}(\varphi_{1})=\mbox{sd}_{x}(\varphi_{2})=\mbox{nd}\\\ \mbox{alt}&\mbox{otherwise}\end{cases}\\\\[15.0pt]
\mbox{sd}(\langle\\!\langle x\rangle\\!\rangle^{o}\varphi):=(k,\mbox{nd}),\\\
\hfill\mbox{where }k=\begin{cases}\mbox{sd}_{k}(\varphi)&\mbox{if }\mbox{sd}_{x}(\varphi)=\mbox{nd}\mbox{ and }\mathcal{O}(o)=\mathcal{O}_{\varphi}\\\ \mbox{sd}_{k}(\varphi)+1&\mbox{otherwise}\end{cases}\\\\[15.0pt]
\mbox{sd}((a,x)\varphi):=\mbox{sd}(\varphi)\\\\[7.0pt]
\mbox{sd}({\bf E}\psi):=\begin{cases}(0,\mbox{nd})&\mbox{if }\psi\in\textnormal{LTL}\\\ (\max_{\varphi\in\max(\psi)}\mbox{sd}_{k}(\varphi),\mbox{alt})&\mbox{otherwise}\end{cases}\end{array}$
###### Proposition 5.2.
The model-checking problem for hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ of simulation depth at most
$k$ is $(k+1)$-Exptime-complete.
###### Proof.
The upper bounds essentially follow from the fact that the translated formulas in our reduction have the same simulation depth as the original ones. This is not quite true as stated, however: in the case where $\mbox{sd}_{x}(\varphi)=\mbox{nd}$ and $\mathcal{O}(o)=\mathcal{O}_{\varphi}$
we have $\mbox{sd}(\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi)=(\mbox{sd}_{k}(\varphi),\mbox{nd})$, while
$\mbox{sd}((\langle\\!\langle
x\rangle\\!\rangle^{o}\varphi)_{s}^{\,f})=(\mbox{sd}_{k}((\varphi)_{s}^{\,f})+1,\mbox{nd})$:
indeed, while it is the case that $\mathcal{O}(o)=\mathcal{O}_{\varphi}$
implies that $\widetilde{o}=I_{(\varphi)_{s}^{\,f}}$, the translation
introduces a conjunction with $\varphi_{\text{str}}(x)$, and even when
$\mbox{sd}_{x}((\varphi)_{s}^{\,f})=\mbox{nd}$, we have
$\mbox{sd}_{x}(\varphi_{\text{str}}(x)\wedge(\varphi)_{s}^{\,f})=\mbox{alt}$.
According to Proposition 4.13, this should thus induce an additional
exponential to check the translated formula. However, this can be avoided by
noticing that the fixed formula $\varphi_{\text{str}}(x)={\bf A}{\bf
G}\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}$ can be checked by a simple
_deterministic_ tree automaton with two states $q_{\text{check}}$ and
$q_{\text{rej}}$: the automaton starts in state $q_{\text{check}}$, which is
accepting (it has parity zero); when it visits a node $u$ in state
$q_{\text{check}}$, if $\ell(u)$ satisfies
$\bigvee_{c\in\textnormal{Ac}}p_{c}^{x}$, then the automaton sends state
$q_{\text{check}}$ to all children of $u$, otherwise it sends the state
$q_{\text{rej}}$ to all children. State $q_{\text{rej}}$ is rejecting (it has
parity one) and is a sink: it sends itself to all children, independently of the label of the visited node. If we restrict
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to deterministic strategies,
the same observation can be made: the automaton that checks formula
$\varphi_{\text{str}}^{\text{det}}(x)={\bf A}{\bf
G}\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq
c}\neg p_{c^{\prime}}^{x})$ is the same as the one described above, except
that it checks whether
$\bigvee_{c\in\textnormal{Ac}}(p_{c}^{x}\wedge\bigwedge_{c^{\prime}\neq c}\neg
p_{c^{\prime}}^{x})$ is satisfied by the label of the current node.
Given two tree automata $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$, one
deterministic and one nondeterministic, one can easily build a
nondeterministic automaton $\mathcal{A}_{1}\cap\mathcal{A}_{2}$ of size
$|\mathcal{A}_{1}|\times|\mathcal{A}_{2}|$ that accepts the intersection of
their languages, so that in this case the conjunction does not introduce
alternation, and thus we do not need an additional simulation before
projecting to guess the strategy. We could refine the notion of simulation depth to reflect this, but we find that it would become very cumbersome for little added benefit, so we confine this observation to this proof.
The lower bounds are inherited from
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ thanks to the
polynomial reduction presented in Section 6.2.2, which preserves simulation
depth. ∎
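For illustration, the two-state deterministic check of $\varphi_{\text{str}}(x)$ described in the proof above amounts to verifying, node by node, that at least one action is allowed. A Python sketch over a finite prefix of a labelled tree (dict-encoded, with hypothetical atom names `p_{c}_{x}`; an infinite regular tree would instead be checked on its finite Kripke representation):

```python
def check_phi_str(tree, actions, x):
    """Visit every node of a finite tree prefix in state q_check; move to the
    rejecting sink q_rej (return False) as soon as some node's label allows
    no action for strategy x, i.e. satisfies no atom p_c^x."""
    for u, label in tree.items():
        if not any(f"p_{c}_{x}" in label for c in actions):
            return False
    return True

tree = {(): {"p_a_x"}, (0,): {"p_b_x"}, (1,): {"p_a_x", "p_b_x"}}
assert check_phi_str(tree, ["a", "b"], "x")
assert not check_phi_str({(): set()}, ["a", "b"], "x")
```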
We point out that all instances of the model-checking problem for the perfect-
information fragment are hierarchical, and thus this result provides improved
upper bounds for SL, which was only known to be in $k$-Exptime for formulas of
length at most $k$ (Mogavero et al., 2014). Also the lower bounds for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ are inherited directly
from the perfect-information fragment $\textnormal{{QCTL}}^{*}$, which reduces
to the perfect-information fragment of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ following the construction
from Section 6.2.2. Therefore the lower bounds hold already for the perfect-
information fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
Note however that this does not provide lower bounds for the usual, linear-
time variant of Strategy Logic, where path quantifiers in
$\textnormal{{QCTL}}^{*}$ formulas must be simulated with strategy
quantifications which increase the simulation depth of the resulting Strategy
Logic formulas. The exact complexity of the linear-time variant is not known,
even in the perfect-information case.
Simulation number. The intuition behind the alternation number as considered
in (Mogavero et al., 2014) is to refine the classic alternation depth between
existential and universal quantifiers by observing that subsentences of a
sentence $\Phi$ to model-check can be treated independently thanks to a
bottom-up labelling algorithm: innermost sentences are evaluated in all states
of the model and replaced in $\Phi$ by atomic propositions that label the
states where they hold. The alternation number of $\Phi$ is the maximum
alternation depth of the successive subsentences that are treated by this
bottom-up procedure, and it determines the complexity of the overall model-
checking procedure.
However, as discussed in Remark 1, the semantics of the outcome quantifier
makes sentences sensitive to the assignment in which they are evaluated. As a
result, to define the notion of alternation number in our setting, we
introduce a notion of _independent subsentence_. Intuitively, a subsentence
$\varphi$ of a sentence $\Phi$ is _independent_ if it redefines or unbinds the
strategies of all players who are bound to a strategy when $\varphi$ is
reached in the evaluation of $\Phi$. More precisely, we say that an agent $a$
is _bound_ in a syntactic subformula $\varphi$ of $\Phi$ if the path that
leads to $\varphi$ in $\Phi$’s syntactic tree contains a binding operator
$(a,x)$ for $a$ which is not followed by an unbinding $(a,\operatorname{?})$
for her. A subsentence $\varphi$ of $\Phi$ is _independent_ if all agents that
are bound in $\varphi$ are either rebound by an operator $(a,x)$ or unbound by
an operator $(a,\operatorname{?})$ before any outcome quantifier is met in
$\varphi$. In an independent subsentence $\varphi$, the semantics of the
outcome quantifier does not depend on strategies that are quantified outside
$\varphi$, and in fact a subsentence $\varphi$ of $\Phi$ is independent if and
only if the formula that corresponds to $\varphi$ in
$(\Phi)_{s}^{\,\emptyset}$ is a subsentence of $(\Phi)_{s}^{\,\emptyset}$.
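As an illustration, here is a minimal sketch of the independence check; the AST encoding (tagged tuples `("bind", a, sub)`, `("unbind", a, sub)`, `("outcome", sub)`) is hypothetical.

```python
# phi is independent iff, on every syntactic path, every agent bound on entry
# is rebound or unbound before an outcome quantifier is met.
def independent(phi, bound_agents):
    def walk(node, pending):
        tag = node[0]
        if tag in ("bind", "unbind"):   # (a, x) sub  or  (a, ?) sub
            _, agent, sub = node
            return walk(sub, pending - {agent})
        if tag == "outcome":            # outcome quantifier reached
            return len(pending) == 0
        # quantifiers, boolean and temporal operators: check every child path
        return all(walk(child, pending)
                   for child in node[1:] if isinstance(child, tuple))
    return walk(phi, frozenset(bound_agents))
```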
Similarly to what we did for $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
ii}}$ we now define the _simulation number_ $\mbox{sn}(\Phi)$ of an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ sentence $\Phi$ as the
maximum of the simulation depths for independent subsentences, where strict
independent subsentences are counted as atoms.
###### Lemma 5.3.
For every hierarchical instance $(\mathcal{G},\Phi)$ of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$,
$\mbox{sn}(\Phi)=\mbox{sn}((\Phi)_{s}^{\,\emptyset})$.
The next result then follows from Proposition 5.1, Lemma 5.3 and Proposition
4.14.
###### Proposition 5.4.
The model-checking problem for hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ of simulation number at most
$k$ is $(k+1)$-Exptime-complete.
We now compare the latter result with the complexity of model checking SL[NG],
the nested goal fragment of Strategy Logic with perfect information (we refer
the interested reader to (Mogavero et al., 2014) for a definition of this
fragment). It is established in (Chatterjee et al., 2010b; Mogavero et al.,
2014) that this problem is in $(k+1)$-Exptime for formulas of _alternation
number_ $k$. We remark that the simulation number of an SL[NG] formula
translated in our branching-time version of SL (this is done by adding outcome
quantifiers between bindings and temporal operators) is equal to its
alternation number plus one, and thus Proposition 5.4 gives a $(k+2)$-Exptime
upper bound for SL[NG] formulas of alternation number $k$. In (Chatterjee et
al., 2010b; Mogavero et al., 2014) the extra exponential is avoided by
resorting to universal and nondeterministic tree automata, depending on
whether the innermost strategy quantification is existential or universal, to
deal with temporal formulas. Thus, the innermost strategy quantification can
be dealt with without incurring an exponential blowup.
The same thing cannot be done for $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$, for two reasons. The first one is that in general the innermost
strategy quantification may have imperfect information and thus require a
narrowing of the automaton; this operation introduces alternation, which needs
to be removed at the cost of one exponential before dealing with strategy
quantification. The second reason is that even when the innermost strategy has
perfect information, the outcome quantifier that we introduce in Strategy
Logic allows the expression of $\textnormal{{CTL}}^{*}$ formulas which cannot
be dealt with by nondeterministic and universal automata as is done in
(Chatterjee et al., 2010b; Mogavero et al., 2014).
## 6\. Comparison with related logics
In this section we first show that $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ subsumes SL and the main imperfect-information extensions of ATL. Then
we show that model checking Coordination Logic (CL) reduces to model checking
hierarchical instances of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
where the truth of all atomic propositions in the model is known by all agents
(or more precisely, all observations in the concurrent game structures are
fine enough to observe the truth value of all atomic propositions).
### 6.1. Comparison with ATL
The main difference between SL and ATL-like strategic logics is that in the
latter a strategy is always bound to some player, while in the former bindings
and quantifications are separated. This separation adds expressive power,
e.g., one can bind the same strategy to different players. Extending ATL with
imperfect-information is done by giving each player an indistinguishability
relation that its strategies must respect (Bulling and Jamroga, 2014). In
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ instead each strategy $x$ is
assigned an indistinguishability relation $o$ when it is quantified.
Associating observations to strategies rather than players allows us to obtain
a logic $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ that is a clean
generalisation of (perfect-information) SL, and subsumes imperfect-information
extensions of $\textnormal{{ATL}}^{*}$ that associate observations to players.
Concerning SL, it is rather easy to see that every sentence in SL has an
equivalent in the fragment of $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$ with deterministic strategies where all observation symbols are
interpreted as perfect information. We now prove that
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ also subsumes
$\textnormal{{ATL}}^{*}$ with imperfect information.
###### Proposition 6.1.
For every $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ formula
$\varphi$ there is an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
formula $\varphi^{\prime}$ such that for every
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ there is a
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}^{\prime}$ such that
$\mathcal{G}\models\varphi$ if, and only if,
$\mathcal{G}^{\prime}\models\varphi^{\prime}$.
(See (Bulling and Jamroga, 2014) for the definition of
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$, where subscript i
refers to “imperfect information” and subscript R to “perfect recall”; we
consider the so-called _objective semantics_ for
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$.)
We recall that an $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$
formula $\langle A\rangle\psi$ reads as “there are strategies for players in
$A$ such that $\psi$ holds whatever players in $\textnormal{Ag}\setminus A$
do”. Formula $\varphi^{\prime}$ is built from $\varphi$ by replacing each
subformula of the form $\langle A\rangle\psi$, where
$A=\\{a_{1},\ldots,a_{k}\\}\subset\textnormal{Ag}$ is a coalition of players
and $\textnormal{Ag}\setminus A=\\{a_{k+1},\ldots,a_{n}\\}$ with formula
$\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle
x_{k}\rangle\\!\rangle^{o_{k}}(a_{1},x_{1})\ldots(a_{k},x_{k})(a_{k+1},\operatorname{?})\ldots(a_{n},\operatorname{?}){\bf
A}\,\psi^{\prime}$, where $\psi^{\prime}$ is the translation of $\psi$. Then
$\mathcal{G}^{\prime}$ is obtained from $\mathcal{G}$ by interpreting each
$o_{i}$ as the equivalence relation for player $i$ in $\mathcal{G}$, and
interpreting $o_{p}$ as the identity relation.
Third, $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ also subsumes the
imperfect-information extension of $\textnormal{{ATL}}^{*}$ with strategy
context (see (Laroussinie et al., 2015) for the definition of
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ with partial
observation, which we refer to as
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$).
###### Proposition 6.2.
For every $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ formula
$\varphi$ there is an $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
formula $\varphi^{\prime}$ such that for every
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ there is a
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}^{\prime}$ such that
$\mathcal{G}\models\varphi$ if, and only if,
$\mathcal{G}^{\prime}\models\varphi^{\prime}$.
The only difference between $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize
sc,i}}$ and $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$ is the
following: in $\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize i,R}}$, when a
subformula of the form $\langle A\rangle\psi$ is met, we quantify
existentially on strategies for players in $A$ and quantify universally on
possible outcomes obtained by letting other players behave however they want.
Therefore, if any player in $\textnormal{Ag}\setminus A$ had previously been
assigned a strategy, it is forgotten. In
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc,i}}$ on the other hand,
these strategies are stored in a _strategy context_ , which is a _partial_
assignment $\chi$, defined for the subset of players currently bound to a
strategy. A strategy context allows one to quantify universally only on
strategies of players who are not in $A$ and who are not already bound to a
strategy. It is then easy to adapt the translation presented for Proposition
6.1: it suffices not to unbind agents outside the coalition from their
strategies. $\mathcal{G}^{\prime}$ is defined as for Proposition 6.1.
### 6.2. Comparison with Coordination Logic
There is a natural and simple translation of instances of the model-checking
problem of CL (Finkbeiner and Schewe, 2010) into the hierarchical instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$. Moreover, the image of this
translation consists of instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with a very restricted form:
atoms mentioned in the $\textnormal{{SL}}_{\textnormal{\scriptsize
ii}}$-formula are observable by all observations of the
$\textrm{CGS}_{\textnormal{ii}}$, i.e., for all $o\in\textnormal{Obs}$ and
$p\in\textnormal{AP}$, $v\sim_{o}v^{\prime}$ implies that $p\in\ell(v)$ iff
$p\in\ell(v^{\prime})$.
###### Proposition 6.3.
There is an effective translation that, given a CL-instance
$(\mathcal{S},\varphi)$ produces a hierarchical
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instance
$(\mathcal{G},\Phi)$ such that
1. (1)
$\mathcal{S}\models\varphi$ if, and only if, $\mathcal{G}\models\Phi$,
2. (2)
For all atoms $p\in\textnormal{AP}$ and observations $o\in\textnormal{Obs}$,
$v\sim_{o}v^{\prime}$ implies that $p\in\ell(v)$ iff $p\in\ell(v^{\prime})$.
To do this, one first translates CL into (hierarchical)
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, the latter is defined
in the next section. This step is a simple reflection of the semantics of CL
in that of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$. Then one
translates $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ into
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ by a simple adaptation of
the translation of $\textnormal{{QCTL}}^{*}$ into
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ (Laroussinie and
Markey, 2015).
We briefly recall the syntax and semantics of CL, and refer to (Finkbeiner and
Schewe, 2010) for further details.
Notation for trees. Note that our definition of trees (see Section 3.2)
differs slightly from the one in (Finkbeiner and Schewe, 2010), where the root
is the empty word. In this section we adopt the latter convention, to stay
closer to the notation of (Finkbeiner and Schewe, 2010). Thus, $(Y,X)$-trees
in CL are of the form
$(\tau,l)$ where $\tau\subseteq X^{*}$ and $l:\tau\to 2^{Y}$.
For two disjoint sets $X$ and $Y$, we identify $2^{X}\times 2^{Y}$ with
$2^{X\cup Y}$. Let $X$ and $Y$ be two sets with $Z=X\cup Y$, and let $M$ and
$N$ be two disjoint sets. Given an ${M}$-labelled $2^{Z}$-tree
$t=(\tau,\ell_{M})$ and an ${N}$-labelled $2^{Z}$-tree
$t^{\prime}=(\tau^{\prime},\ell_{N})$ with same domain $\tau=\tau^{\prime}$,
we define $t\uplus t^{\prime}:=(\tau,\ell^{\prime})$, where for every
$u\in\tau$, $\ell^{\prime}(u)=\ell_{M}(u)\cup\ell_{N}(u)$. Now, given a
complete ${M}$-labelled $2^{X}$-tree $t=((2^{X})^{*},\ell_{M})$ and a complete
${N}$-labelled $2^{Y}$-tree $t^{\prime}=((2^{Y})^{*},\ell_{N})$, we define
$t\oplus t^{\prime}:=t\\!\uparrow^{2^{Z\setminus
X}}\uplus\,t^{\prime}\\!\uparrow^{2^{Z\setminus Y}}$.
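A minimal sketch of the label-merging operation $\uplus$, with trees encoded as (domain, labelling) pairs and labels as Python sets; $t\oplus t^{\prime}$ additionally widens both trees to a common domain first, using the widening operation $\uparrow$ whose definition is assumed from earlier in the paper.

```python
# t ⊎ t': union the labellings of two trees over the same domain.
def merge(t, t_prime):
    (tau, ell_M), (tau_p, ell_N) = t, t_prime
    assert tau == tau_p, "⊎ requires identical domains"
    return (tau, {u: ell_M[u] | ell_N[u] for u in tau})
```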
CL Syntax. Let $\mathcal{C}$ be a set of _coordination variables_ , and let
$\mathcal{S}$ be a set of _strategy variables_ disjoint from $\mathcal{C}$.
The syntax of CL is given by the following grammar:
$\varphi::=x\mid\neg\varphi\mid\varphi\vee\varphi\mid{\bf
X}\varphi\mid\varphi{\bf U}\varphi\mid\Finv C\exists s.\,\varphi$
where $x\in\mathcal{C}\cup\mathcal{S}$, $C\subseteq\mathcal{C}$ and
$s\in\mathcal{S}$, and with the restriction that each coordination variable
appears in at most one _subtree quantifier_ $\Finv C\exists s.\,$, and
similarly for strategy variables.
The notion of free and bound (coordination or strategy) variables is as usual.
The set of free coordination variables in $\varphi$ is noted
$\mathcal{F}_{\varphi}$. A bound coordination variable $c$ is _visible_ to a
strategy variable $s$ if $s$ is in the scope of the quantifier that introduces
$c$, and $\textit{Scope}_{\varphi}(s)$ is the union of the set of bound
coordination variables visible to $s$ and the set of free coordination
variables (note that this union is disjoint). We will see, in the semantics,
that the meaning of a bound strategy variable $s$ is a strategy
$f_{s}:(2^{\textit{Scope}_{\varphi}(s)})^{*}\to 2^{\\{s\\}}$. Free strategy
variables are called _atomic propositions_ , and we denote the set of atomic
propositions in $\varphi$ by $\textnormal{AP}_{\varphi}$.
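The scope computation can be sketched as follows; the AST encoding (`("quant", C, s, sub)` for a subtree quantifier $\Finv C\exists s.$, other operators as tagged tuples) is hypothetical.

```python
# Map each bound strategy variable s to Scope_phi(s): the free coordination
# variables plus the bound coordination variables visible to s.
def scopes(phi, free_coord):
    result = {}
    def walk(node, visible):
        if node[0] == "quant":            # ("quant", C, s, sub)
            _, C, s, sub = node
            visible = visible | set(C)    # C becomes visible from here on
            result[s] = visible           # includes s's own quantifier's C
            walk(sub, visible)
        else:                             # boolean/temporal operators
            for child in node[1:]:
                if isinstance(child, tuple):
                    walk(child, visible)
    walk(phi, frozenset(free_coord))
    return result
```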
CL Semantics. A CL formula $\varphi$ is evaluated on a complete
$\textnormal{AP}_{\varphi}$-labelled $2^{\mathcal{F}_{\varphi}}$-tree $t$. An
$(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree $t=(\tau,\ell)$
satisfies a CL formula $\varphi$ if for every path $\lambda$ that starts in
the root we have $t,\lambda,0\models\varphi$, where the satisfaction of a
formula at position $i\geq 0$ of a path $\lambda$ is defined inductively as
follows:
$t,\lambda,i\models p$ iff $p\in\ell(\lambda_{i})$
$t,\lambda,i\models\neg\varphi^{\prime}$ iff $t,\lambda,i\not\models\varphi^{\prime}$
$t,\lambda,i\models\varphi_{1}\vee\varphi_{2}$ iff $t,\lambda,i\models\varphi_{1}$ or $t,\lambda,i\models\varphi_{2}$
$t,\lambda,i\models{\bf X}\varphi^{\prime}$ iff $t,\lambda,i+1\models\varphi^{\prime}$
$t,\lambda,i\models\varphi_{1}{\bf U}\varphi_{2}$ iff $\exists\,j\geq i$ s.t. $t,\lambda,j\models\varphi_{2}$ and $\forall k$ s.t. $i\leq k<j$, $t,\lambda,k\models\varphi_{1}$
$t,\lambda,i\models\Finv C\exists s.\,\varphi^{\prime}$ iff $\exists\,f:(2^{\textit{Scope}_{\varphi}(s)})^{*}\to 2^{\\{s\\}}$ s.t. $t_{\lambda_{i}}\oplus((2^{\textit{Scope}_{\varphi}(s)})^{*},f)\models\varphi^{\prime}$,
where $t_{\lambda_{i}}$ is the subtree of $t$ rooted in $\lambda_{i}$.
First, observe that in the last inductive case, $t_{\lambda_{i}}$ being a
$2^{\mathcal{F}_{\varphi}}$-tree,
$t_{\lambda_{i}}\oplus((2^{\textit{Scope}_{\varphi}(s)})^{*},f)$ is a
$2^{\mathcal{F}_{\varphi}\cup\textit{Scope}_{\varphi}(s)}$-tree. By
definition, $\textit{Scope}_{\varphi}(s)=\mathcal{F}_{\varphi}\cup
C=\mathcal{F}_{\varphi^{\prime}}$. It follows that
$\mathcal{F}_{\varphi}\cup\textit{Scope}_{\varphi}(s)=\textit{Scope}_{\varphi}(s)=\mathcal{F}_{\varphi^{\prime}}$,
hence $\varphi^{\prime}$ is indeed evaluated on a
$\mathcal{F}_{\varphi^{\prime}}$-tree.
###### Remark 5.
Note that all strategies observe the value of all atomic propositions.
Formally, for every CL-formula $\varphi$ of the form $\varphi=\Finv
C_{1}\exists s_{1}.\,\ldots,\Finv C_{i}\exists s_{i}.\,\varphi^{\prime}$
evaluated on a $2^{\mathcal{F}_{\varphi}}$-tree $t=(\tau,\ell)$,
$\varphi^{\prime}$ is evaluated on a $2^{\mathcal{F}_{\varphi}\cup
C_{1}\cup\ldots\cup C_{i}}$-tree $t^{\prime}=(\tau^{\prime},\ell^{\prime})$
such that for every $p\in\textnormal{AP}_{\varphi}$, for every pair of nodes
$u,u^{\prime}\in t^{\prime}$ such that
$u\\!\downarrow_{2^{\mathcal{F}_{\varphi}}}=u^{\prime}\\!\downarrow_{2^{\mathcal{F}_{\varphi}}}$,
it holds that $p\in\ell^{\prime}(u)$ iff $p\in\ell^{\prime}(u^{\prime})$.
Thus, in CL one cannot directly capture strategic problems where atomic
propositions are not observable to all players.
The input to the model-checking problem for CL consists of a CL formula
$\varphi$ and a finite representation of a
$(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree $t$. The standard
assumption is that $t$ is a regular tree, i.e., the unfolding of a
finite structure. Precisely, a _finite representation_ of a
$(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree
$t=(\tau,\ell^{\prime})$ is a structure $\mathcal{S}=(S,R,\ell,s_{\iota})$
such that
* •
$S=2^{\mathcal{F}_{\varphi}}$,
* •
$R=S\times S$,
* •
$\ell:S\to 2^{\textnormal{AP}_{\varphi}}$,
* •
$s_{\iota}\in S$,
and $t=t_{\mathcal{S}}$ is the unfolding of $\mathcal{S}$.
Thus, an _instance_ of the model-checking problem for CL is a pair
$(\mathcal{S},\Phi)$ where $\mathcal{S}=(S,R,\ell,s_{{\iota}})$ is a finite
representation of an
$(\textnormal{AP}_{\varphi},2^{\mathcal{F}_{\varphi}})$-tree and $\Phi$ is a
CL formula (over variables $\mathcal{S}\cup\mathcal{C}$). The _model-checking
problem for CL_ is the following decision problem: given an instance
$(\mathcal{S},\Phi)$, return ‘Yes’ if $t_{\mathcal{S}}\models\Phi$ and ‘No’
otherwise.
We now describe a natural translation of CL-instances to
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances. This translation:
1. (1)
reduces the model-checking problem of CL to that of the hierarchical fragment
of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
2. (2)
shows that CL only produces instances in which all atoms are uniform with
regard to all observations, i.e., instances $(\mathcal{G},\Phi)$ such that for
every $p\in\textnormal{AP}$ and $o\in\textnormal{Obs}$, $v\sim_{o}v^{\prime}$
implies $p\in\ell(v)\leftrightarrow p\in\ell(v^{\prime})$.
We will present the translation in two steps: first from CL-instances into
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$-instances, and then from
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-instances to
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances such that
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$-instances translate to hierarchical
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances.
#### 6.2.1. Translating CL to
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize i,$\tiny{\subseteq}$}}$
Let $(\mathcal{S},\Phi)$ be an instance of the model-checking problem for CL,
where $\mathcal{S}=(S,R,\ell,s_{{\iota}})$. We will construct a
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$-instance
$(\widetilde{\mathcal{S}},\widetilde{\varphi})$ such that
$\mathcal{S}\models\Phi$ iff $\widetilde{\mathcal{S}}\models\widetilde{\Phi}$.
Let $\widetilde{\textnormal{AP}}$ be the set of all strategy variables
occurring in $\Phi$, let $\mathcal{C}(\Phi)$ be the set of coordination
variables that appear in $\Phi$, and assume, w.l.o.g., that
$\mathcal{C}(\Phi)=[n]$ for some $n\in\mathbb{N}$. Let
$\textit{hidden}(\Phi):=\mathcal{C}(\Phi)\setminus\mathcal{F}_{\varphi}$.
First, we define the CKS $\widetilde{\mathcal{S}}$ over
$\widetilde{\textnormal{AP}}$: the idea is to add in the structure
$\mathcal{S}$ the local states corresponding to coordination variables that
are not seen by all the strategies.
Formally,
$\widetilde{\mathcal{S}}:=(\widetilde{S},\widetilde{R},\widetilde{s_{{\iota}}},\widetilde{\ell})$
where
* •
$\widetilde{S}=\prod_{c\in\mathcal{C}(\Phi)}L_{c}$ where
$L_{c}=\\{c_{0},c_{1}\\}$,
* •
$\widetilde{R}=\widetilde{S}\times\widetilde{S}$,
* •
for every $s\in\widetilde{S}$,
$\widetilde{\ell}(s)=\ell(s\\!\downarrow_{\mathcal{F}_{\varphi}})$, and
* •
$\widetilde{s_{{\iota}}}\in\widetilde{S}$ is any state $s$ such that
$s\\!\downarrow_{\mathcal{F}_{\varphi}}=s_{{\iota}}$
Second, we define concrete observations corresponding to strategy variables in
$\Phi$. As explained in (Finkbeiner and Schewe, 2010), and as reflected in the
semantics of CL, the intuition is that a strategy variable $s$ in formula
$\Phi$ observes coordination variables $\textit{Scope}_{\varphi}(s)$.
Therefore, we simply define, for each strategy variable $s$ in $\Phi$, the
concrete observation $\textnormal{{o}}_{s}:=\textit{Scope}_{\varphi}(s)$.
Finally, we define the $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$
formula $\widetilde{\Phi}$. This is done by induction on $\Phi$ as follows
(recall that we take for atomic propositions in
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ the set of all
strategy variables in $\Phi$):
$\widetilde{x}:=x$
$\widetilde{\neg\varphi}:=\neg\widetilde{\varphi}$
$\widetilde{\varphi_{1}\vee\varphi_{2}}:=\widetilde{\varphi_{1}}\vee\widetilde{\varphi_{2}}$
$\widetilde{{\bf X}\varphi}:={\bf X}\,\widetilde{\varphi}$
$\widetilde{\varphi_{1}{\bf U}\varphi_{2}}:=\widetilde{\varphi_{1}}\,{\bf U}\,\widetilde{\varphi_{2}}$
$\widetilde{\Finv C\exists s.\,\varphi}:=\exists^{\textnormal{{o}}_{s}}s.\,{\bf A}\widetilde{\varphi}$
In the last case, note that
$C\subseteq\textnormal{{o}}_{s}=\textit{Scope}_{\varphi}(s)$.
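The inductive translation can be sketched directly; both AST encodings are hypothetical, and `scope_of` maps each strategy variable $s$ to $\textnormal{{o}}_{s}=\textit{Scope}_{\varphi}(s)$.

```python
def translate(phi, scope_of):
    tag = phi[0]
    if tag == "var":                       # x stays an atomic proposition
        return phi
    if tag in ("not", "X"):
        return (tag, translate(phi[1], scope_of))
    if tag in ("or", "U"):
        return (tag, translate(phi[1], scope_of),
                     translate(phi[2], scope_of))
    if tag == "quant":                     # subtree quantifier over C and s
        _, C, s, sub = phi                 # note: C is a subset of scope_of[s]
        return ("exists", s, scope_of[s],  # becomes  ∃^{o_s} s. A ~phi'
                ("A", translate(sub, scope_of)))
    raise ValueError(f"unknown node: {tag}")
```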
Note that $\widetilde{\Phi}$ is a hierarchical
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-formula. Also, one can
easily check that the following holds:
###### Lemma 6.4.
$t_{\mathcal{S}}\models\Phi\quad\mbox{iff}\quad
t_{\widetilde{\mathcal{S}}}\models{\bf A}\widetilde{\Phi}$.
Importantly, we notice that ${\bf
A}\widetilde{\Phi}\in\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$, and that:
###### Lemma 6.5.
For every $x\in\textnormal{AP}_{\varphi}$ and every $s$ quantified in $\Phi$,
$t_{\widetilde{\mathcal{S}}}$ is $\textnormal{{o}}_{s}$-uniform in $x$.
#### 6.2.2. Translation from
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ to
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$
We now present a translation of
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$-instances to
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-instances. It is a simple
adaptation of the reduction from the model-checking problem for
$\textnormal{{QCTL}}^{*}$ to the model-checking problem for
$\textnormal{{ATL}}^{*}_{\textnormal{\scriptsize sc}}$ presented in
(Laroussinie and Markey, 2015).
Let $(\mathcal{S},\Phi)$ be an instance of the model-checking problem for
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, where
$\mathcal{S}=(S,R,\ell,s_{{\iota}})$ and $S\subseteq\prod_{i\in[n]}L_{i}$. We
assume, without loss of generality, that every atomic proposition is
quantified at most once, and that if it appears quantified it does not appear
free. Also, let ${\textnormal{AP}_{\exists}}(\Phi)=\\{p_{1},\ldots,p_{k}\\}$
be the set of atomic propositions quantified in $\Phi$, and for $i\in[k]$, let
$\textnormal{{o}}_{i}$ be the concrete observation associated to the
quantifier on $p_{i}$.
We build the $\textrm{CGS}_{\textnormal{ii}}$
$\mathcal{G}^{\mathcal{S}}:=(\textnormal{Ac},V,E,\ell^{\prime},v_{\iota},\mathcal{O})$
over agents $\textnormal{Ag}:=\\{a_{0},a_{1},\ldots,a_{k}\\}$, observations
$\textnormal{Obs}:=\\{o_{0},o_{1},\ldots,o_{k}\\}$ and atomic propositions
$\textnormal{AP}:={\textnormal{AP}_{\exists}}(\Phi)\cup\\{p_{S}\\}$, where
$p_{S}$ is a fresh atomic proposition. Intuitively, agent $a_{0}$ is in charge
of choosing transitions in $\mathcal{S}$, while agent $a_{i}$ for $i\geq 1$ is
in charge of choosing the valuation for
$p_{i}\in{\textnormal{AP}_{\exists}}(\Phi)$.
To this aim, we let
$V:=\\{v_{s}\mid s\in S\\}\cup\\{v_{s,i}\mid s\in S\mbox{ and }i\in[k]\\}\cup\\{v_{p_{i}}\mid 0\leq i\leq k\\}\cup\\{v_{\perp}\\}$
and
$\textnormal{Ac}:=\\{c^{s}\mid s\in S\\}\cup\\{c^{i}\mid 0\leq i\leq k\\}.$
In positions of the form $v_{s}$ with $s\in S$, transitions are determined by
the action of agent $a_{0}$. First, she can choose to simulate a transition in
$\mathcal{S}$: for every joint action
$\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ such that
$\bm{c}_{0}=c^{s^{\prime}}$,
$E(v_{s},\bm{c}):=\begin{cases}v_{s^{\prime}}&\text{if }R(s,s^{\prime})\\ v_{\perp}&\text{otherwise}.\end{cases}$
She can also choose to move to a position in which agent $a_{i}$ will choose
the valuation for $p_{i}$ in the current node: for every joint action
$\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$ such that $\bm{c}_{0}=c^{i}$,
$E(v_{s},\bm{c}):=\begin{cases}v_{s,i}&\text{if }i\neq 0\\ v_{\perp}&\text{otherwise}.\end{cases}$
Next, in a position of the form $v_{s,i}$, agent $a_{i}$ determines the
transition, which codes the labelling of $p_{i}$ in the current node: choosing
$c^{i}$ means that $p_{i}$ holds in the current node, choosing any other
action codes that $p_{i}$ does not hold. Formally, for every joint action
$\bm{c}\in\textnormal{Ac}^{\textnormal{Ag}}$,
$E(v_{s,i},\bm{c}):=\begin{cases}v_{p_{i}}&\text{if }\bm{c}_{i}=c^{i}\\ v_{\perp}&\text{otherwise}.\end{cases}$
Positions of the form $v_{p_{i}}$ and $v_{\perp}$ are sink positions.
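The transition function just described can be sketched as follows; the tagged-tuple encodings of positions and actions are hypothetical.

```python
# E for the game G^S: R is the transition relation of S; joint_action is a
# tuple of actions, one per agent, with ("sim", s') coding c^{s'} and
# ("idx", i) coding c^i.
def E(position, joint_action, R):
    tag = position[0]
    if tag == "state":                      # position v_s
        _, s = position
        kind, val = joint_action[0]         # action of agent a_0
        if kind == "sim":                   # simulate a transition of S
            return ("state", val) if (s, val) in R else ("bot",)
        # kind == "idx": hand over to agent a_val in position v_{s,val}
        return ("choose", s, val) if val != 0 else ("bot",)
    if tag == "choose":                     # position v_{s,i}: a_i labels p_i
        _, s, i = position
        return ("prop", i) if joint_action[i] == ("idx", i) else ("bot",)
    return position                         # v_{p_i} and v_bot are sinks
```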
The labelling function $\ell^{\prime}$ is defined as follows:
$\ell^{\prime}(v):=\begin{cases}\ell(s)\cup\\{p_{S}\\}&\mbox{if }v=v_{s}\mbox{ for some }s\in S\\ \emptyset&\mbox{if }v\in\\{v_{s,i}\mid s\in S,i\in[k]\\}\cup\\{v_{p_{0}},v_{\perp}\\}\\ \\{p_{i}\\}&\mbox{if }v=v_{p_{i}}\text{ with }i\in[k]\end{cases}$
Finally we let $v_{{\iota}}:=v_{s_{{\iota}}}$ and we define the observation
interpretation as follows:
$\mathcal{O}(o_{0}):=\\{(v,v)\mid v\in V\\},$
meaning that agent $a_{0}$ has perfect information, and for $i\in[k]$,
$\mathcal{O}(o_{i})$ is the smallest reflexive relation such that
$\mathcal{O}(o_{i})\supseteq\bigcup_{s,s^{\prime}\in
S}\\{(v_{s},v_{s^{\prime}}),(v_{s,i},v_{s^{\prime},i})\mid
s\approx_{\textnormal{{o}}_{i}}s^{\prime}\\}.$
We explain the latter definition. First, observe that for every finite play
$\rho$ in $\mathcal{G}^{\mathcal{S}}$ that stays in $V_{S}=\\{v_{s}\mid s\in
S\\}$, writing $\rho=v_{s_{0}}\ldots v_{s_{n}}$, one can associate a finite
path $\lambda_{\rho}=s_{0}\ldots s_{n}$ in $\mathcal{S}$. This mapping
actually defines a bijection between the set of finite paths in $\mathcal{S}$
that start in $s_{{\iota}}$ and the set of finite plays in
$\mathcal{G}^{\mathcal{S}}$ that remain in $V_{S}$.
Now, according to the definition of the transition function, a strategy
$\sigma_{i}$ for agent $i$ with $i\in[k]$ is only relevant on finite plays of
the form $\rho=\rho^{\prime}\cdot v_{s,i}$, where $\rho^{\prime}\in
V_{S}^{*}$, and $\sigma_{i}(\rho)$ is meant to determine whether $p_{i}$ holds
in $\lambda_{\rho^{\prime}}$. If $\sigma_{i}$ is $o_{i}$-uniform, by
definition of $\mathcal{O}(o_{i})$, it determines an
$\textnormal{{o}}_{i}$-uniform labelling for $p_{i}$ in $t_{\mathcal{S}}$.
Reciprocally, an $\textnormal{{o}}_{i}$-uniform labelling for $p_{i}$ in
$t_{\mathcal{S}}$ induces an $\mathcal{O}(o_{i})$-strategy for agent $a_{i}$.
It remains to transform $\Phi$ into an
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$-formula.
We define the $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula
$\widetilde{\Phi}$ by induction on $\Phi$ as follows:
$\widetilde{p}:={\bf E}{\bf X}{\bf X}p$ if $p=p_{i}$ for some $i\in[k]$, and $\widetilde{p}:=p$ otherwise
$\widetilde{\neg\varphi}:=\neg\widetilde{\varphi}$
$\widetilde{\varphi_{1}\vee\varphi_{2}}:=\widetilde{\varphi_{1}}\vee\widetilde{\varphi_{2}}$
$\widetilde{{\bf E}\psi}:={\bf E}({\bf G}p_{S}\wedge\widetilde{\psi})$
$\widetilde{\exists^{\textnormal{{o}}_{i}}p_{i}.\,\varphi}:=\langle\\!\langle x_{i}\rangle\\!\rangle^{o_{i}}(a_{i},x_{i})\widetilde{\varphi}$
The cases for path formulas are obtained by distributing over the operators.
Observe that agent $a_{0}$ is never bound to a strategy. In the case for atomic
propositions, the existential quantification on outcomes thus lets agent $a_{0}$
choose to move to a position where agent $a_{i}$ fixes the value for $p_{i}$
according to her strategy, fixed by the strategy quantifier in the translation
of formulas of the form $\exists^{\textnormal{{o}}_{i}}p_{i}.\,\varphi$. In
the translation of formulas of the form ${\bf E}\psi$, the existential
quantification on outcomes lets agent $a_{0}$ choose a path in the original CKS
$\mathcal{S}$.
We have the following:
###### Lemma 6.6.
$\mathcal{S}\models\Phi\quad\text{if and only
if}\quad\mathcal{G}^{\mathcal{S}}\models\widetilde{\Phi}$.
We observe that if $\Phi$ is hierarchical, then
$(\widetilde{\Phi},\mathcal{G}^{\mathcal{S}})$ is a hierarchical instance,
and:
###### Lemma 6.7.
For every $p\in\textnormal{AP}_{f}(\Phi)$ and for every $i\in[k]$, if
$t_{\mathcal{S}}$ is $\textnormal{{o}}_{i}$-uniform in $p$ then
$v\sim_{o_{i}}v^{\prime}$ implies that $p\in\ell(v)$ iff
$p\in\ell(v^{\prime})$.
Combining Lemma 6.4 with Lemma 6.6 we get a reduction from the model-checking
problem for CL to that for the hierarchical fragment of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, and Lemma 6.5 together with
Lemma 6.7 show that in the models produced by this reduction, all atomic
propositions are observable to all players. This implies that in CL one cannot
reason about strategic problems with unobservable objectives. As a result it
does not fully capture classic distributed synthesis (Pnueli and Rosner, 1990;
Kupferman and Vardi, 2001), where the specification can talk about all
variables, hidden and visible. It also shows that CL does not capture in a
natural way ATL with imperfect information as defined in (Alur et al., 2002,
Section 7.1), where imperfect information of agents is modelled by defining
which atomic propositions they can observe. This, as well as unobservable
objectives, can be naturally modelled in
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
## 7\. Applications
In this section we apply Theorem 2.9 to decide the existence of Nash
Equilibria in hierarchical games of imperfect information. We then use a
similar approach to obtain decidability results for the rational synthesis
problem. In this section, for a tuple of agents $\bm{a}=(a_{i})_{i\in[m]}$ and
tuple of strategy variables $\bm{x}=(x_{i})_{i\in[m]}$, we let
$(\bm{a},\bm{x})$ be a macro for $(a_{1},x_{1})\ldots(a_{m},x_{m})$, and
similarly for the unbinding operator $(\bm{a},\operatorname{?})$ which stands
for $(a_{1},\operatorname{?})\ldots(a_{m},\operatorname{?})$.
### 7.1. Existence of Nash Equilibria in games with hierarchical observations
A Nash equilibrium in a game is a tuple of strategies such that no player has
an incentive to deviate. Let $\textnormal{Ag}=\\{a_{i}:i\in[n]\\}$. Assuming
that agent $a_{i}$ has observation $o_{i}$ and LTL goal $\psi_{i}$, the
following $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formula expresses
the existence of a Nash equilibrium:
$\Phi_{\textsc{NE}}:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}(\bm{a},\bm{x})\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{i}}(a_{i},y_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$
where $\bm{a}=(a_{i})_{i\in[n]}$ and $\bm{x}=(x_{i})_{i\in[n]}$.
Nash equilibria do not always exist when one restricts attention to pure
strategies, as we do in this work. This is the case already in finite games,
and by extension also in the infinite concurrent games played on graphs that
we consider. This motivates the study of the Nash equilibria existence problem
in such games. In the perfect information case, the problem has been solved
for $\omega$-regular objectives, as well as more complex semi-quantitative
objectives (Bouyer et al., 2015). When moving to imperfect information, for
two players the problem is decidable for LTL objectives (Gutierrez et al.,
2018) and parity objectives (Filiot et al., 2018). However, as for distributed
synthesis, existence of Nash equilibria becomes undecidable for more than two
players. This result is proved in (Bouyer, 2018) for constrained Nash
equilibria (when one specifies for each player whether her objective is
satisfied or not), and in (Gutierrez et al., 2018) for unconstrained
equilibria. In both cases the proof proceeds by reduction from the distributed
synthesis problem (Peterson et al., 2001; Pnueli and Rosner, 1990).
The only known decidable cases for more than two players assume that all
players receive the same information. In (Bouyer, 2018) the problem is solved
on games where players observe the evolution of the game via _public signals_
and objectives are given by visible parity conditions or mean-payoff
functions. In (Belardinelli et al., 2017a), an epistemic extension of strategy
logic is used to solve the existence of Nash equilibria on games with
_broadcast actions_ for objectives given as formulas from epistemic temporal
logic. A stronger notion of Nash equilibria, called _locally consistent
equilibria_ , is studied in (Ramanujam and Simon, 2010). In a locally
consistent equilibrium, each player’s strategy has to be a best response not
only to other players’ strategies in the equilibrium, but also to all
strategies that are indistinguishable from those in the equilibrium. It is
proved in (Ramanujam and Simon, 2010) that the existence of such equilibria is
decidable on a model of games close in spirit to those with public signals
studied in (Bouyer, 2018).
Here we show that the existence of Nash equilibria is decidable for $n$
players when observations are hierarchical and objectives are given as LTL
formulas. Note that this result is orthogonal to those described above, which
all allow in one way or another some non-hierarchical information: in (Bouyer,
2018) players know their own actions in addition to the public signals, in
(Ramanujam and Simon, 2010) they know their private local state, and in
(Belardinelli et al., 2017a) they can have incomparable initial knowledge of
the situation.
###### Definition 7.1.
A $\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ presents _hierarchical
observation_ (Berwanger et al., 2018) if the “finer-than” relation is a total
ordering, i.e., if for all $o,o^{\prime}\in\textnormal{Obs}$, either
$\mathcal{O}(o)\subseteq\mathcal{O}(o^{\prime})$ or
$\mathcal{O}(o^{\prime})\subseteq\mathcal{O}(o)$.
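Definition 7.1 amounts to a simple chain condition on the interpreted relations; a minimal sketch, with the relations $\mathcal{O}(o)$ given as Python sets of position pairs, is the following.

```python
def presents_hierarchical_observation(relations):
    """True iff the sets O(o) form a chain under inclusion."""
    return all(r1 <= r2 or r2 <= r1
               for i, r1 in enumerate(relations)
               for r2 in relations[i + 1:])
```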
Let $\mathcal{G}$ be a $\textrm{CGS}_{\textnormal{ii}}$ with hierarchical
observation; since all agents have symmetric roles in the problem considered,
we can assume without loss of generality that
$\mathcal{O}(o_{n})\subseteq\ldots\subseteq\mathcal{O}(o_{1})$.
Because of the nested strategy quantifiers $\langle\\!\langle
y_{i}\rangle\\!\rangle^{o_{i}}$, the instance
$(\mathcal{G},\Phi_{\textsc{NE}})$ is _not_ hierarchical even if $\mathcal{G}$
presents hierarchical observation (unless
$\mathcal{O}(o_{i})=\mathcal{O}(o_{j})$ for all $i,j\in[n]$). However,
considering the special observation symbol $o_{p}$ that is always interpreted
as the identity relation (and thus represents perfect observation), and
letting
$\Phi^{\prime}:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\dots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}(\bm{a},\bm{x})\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\Big{)}\to{\bf E}\psi_{i}\Big{]},$
we have that $\Phi^{\prime}$ forms a hierarchical instance with any
$\textrm{CGS}_{\textnormal{ii}}$ that presents hierarchical observation.
Besides, we can prove that for deterministic strategies, $\Phi^{\prime}$ is
equivalent to $\Phi_{\textsc{NE}}$:
###### Lemma 7.2.
If we consider deterministic strategies, then
$\Phi_{\textsc{NE}}\equiv\Phi^{\prime}$.
###### Proof.
Concerning the universal versus existential quantification on outcomes, it is
enough to observe that assigning a deterministic strategy to each agent
determines a unique outcome. Next, to change each inner $o_{i}$ for $o_{p}$,
we exploit the fact that in a one-player game of partial observation (such a
game occurs when all but one player have fixed their strategies), the player
has a strategy enforcing some goal iff she has a uniform strategy enforcing
that goal.
To see this, it is enough to establish that for every
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ and position $v$,
$\mathcal{G},\chi,v\models\langle\\!\langle
y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf
E}\psi_{i}\leftrightarrow\langle\\!\langle
y_{i}\rangle\\!\rangle^{o_{i}}(a_{i},y_{i})\,{\bf E}\psi_{i},$
for every $i\in[n]$ and every assignment $\chi$ such that $\chi(a_{j})$ is
defined for all $j$.
To this end, fix $i$ and $\chi$. The right-to-left implication is immediate
(since $o_{p}$ is finer than $o_{i}$). For the converse, let $\sigma$ be an
$o_{p}$-strategy (i.e., a perfect-information strategy) such that
$\mathcal{G},\chi^{\prime},v\models{\bf E}\psi_{i}$, where
$\chi^{\prime}=\chi[y_{i}\mapsto\sigma,a_{i}\mapsto\sigma]$. Because we
consider deterministic strategies and $\chi^{\prime}$ assigns a strategy to
each agent, it defines a unique outcome $\pi$ from $v$, i.e.,
$\textnormal{Out}(\chi^{\prime},v)=\\{\pi\\}$. We construct an
$o_{i}$-strategy $\sigma^{\prime}$ such that if $a_{i}$ uses it instead of
$\sigma$, we obtain the same outcome $\pi$, i.e.,
$\textnormal{Out}(\chi^{\prime\prime},v)=\\{\pi\\}$, where
$\chi^{\prime\prime}=\chi[y_{i}\mapsto\sigma^{\prime},a_{i}\mapsto\sigma^{\prime}]$.
This can be done as follows: if $\rho\sim_{o_{i}}\pi_{\leq|\rho|-1}$ then
define $\sigma^{\prime}(\rho):=\sigma(\pi_{\leq|\rho|-1})$, and otherwise let
$\sigma^{\prime}(\rho):=c$ for some fixed action $c\in\textnormal{Ac}$. It is
easy to see that $\sigma^{\prime}$ is an $o_{i}$-strategy and that
$\chi^{\prime\prime}$ produces the same outcome as $\chi^{\prime}$ from $v$. ∎
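The strategy transfer used in the last step of this proof can be sketched as follows; histories are encoded as tuples of positions, and `indist_oi` is an assumed predicate for $\sim_{o_{i}}$.

```python
def make_sigma_prime(sigma, pi, indist_oi, default_action):
    """Turn the perfect-information strategy sigma into an o_i-strategy
    that reproduces the unique outcome pi."""
    def sigma_prime(rho):
        prefix = pi[:len(rho)]         # the prefix pi_{<= |rho| - 1}
        if indist_oi(rho, prefix):     # rho ~_{o_i} pi's prefix
            return sigma(prefix)       # copy sigma along the outcome
        return default_action          # fixed action c elsewhere
    return sigma_prime
```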
###### Corollary 7.3.
If we consider deterministic strategies, then the existence of Nash Equilibria
in games with hierarchical observation and $k$ different observations is in
$(k+1)$-Exptime.
###### Proof.
Deciding the existence of a Nash Equilibrium in a
$\textrm{CGS}_{\textnormal{ii}}$ $\mathcal{G}$ amounts to model-checking
formula $\Phi_{\textsc{NE}}$ in $\mathcal{G}$, which by Lemma 7.2 is
equivalent to model-checking $\Phi^{\prime}$ in $\mathcal{G}$ if we restrict
to deterministic strategies. Because $\Phi^{\prime}$ forms hierarchical
instances with games that present hierarchical observation, by Theorem 2.9 we
can model check it on such games. Now because each $\psi_{i}$ is an LTL
formula, we have that
$\mbox{sd}\left(\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\right)=(0,\mbox{nd})$ and
$\mbox{sd}\left(\bigwedge_{i\in[n]}\Big{[}\Big{(}\langle\\!\langle y_{i}\rangle\\!\rangle^{o_{p}}(a_{i},y_{i})\,{\bf E}\psi_{i}\Big{)}\to{\bf E}\psi_{i}\Big{]}\right)=(0,\mbox{alt}),$
and finally we obtain that $\mbox{sd}(\Phi^{\prime})=(k,\mbox{nd})$, where $k$
is the number of different observations in $\mathcal{G}$, i.e.,
$k=|\\{\mathcal{O}(o_{1}),\ldots,\mathcal{O}(o_{n})\\}|$. By Proposition 5.2,
we can model check $\Phi^{\prime}$ on $\mathcal{G}$ in time
$(k+1)$-exponential, which concludes. ∎
We now show that, using the same trick, our main result can be applied to
solve a more general problem called _rational synthesis_.
### 7.2. Rational distributed synthesis in games with hierarchical
observations
In classic synthesis, the environment is considered monolithic and “hostile”,
in the sense that the system to be synthesised should be able to deal with all
possible behaviours of the environment, even the most undesirable ones. This
is a very strong requirement that cannot always be met. When the environment
can be considered rational, and its objective is known, it is reasonable to
relax this requirement by asking that the system to synthesise behave well
against the _rational_ behaviours of the environment. This problem is known as
the _rational synthesis_ problem (Fisman et al., 2010; Kupferman et al., 2016;
Condurache et al., 2016; Filiot et al., 2018). In the setting considered in
the works above-mentioned, the system is seen as an agent $a$ and the
environment is composed of several components, say $\\{e_{1},\ldots,e_{m}\\}$,
that are assumed to be rational and follow individual objectives. While
(Condurache et al., 2016) and (Filiot et al., 2018) consider various types of
objectives such as reachability, safety or parity, here we consider LTL
objectives as is done in (Fisman et al., 2010; Kupferman et al., 2016): the
specification for the system is an LTL formula $\psi_{g}$, and the objective
of each component $e_{i}$ of the environment is an LTL formula $\psi_{i}$.
However note that the decidability results we establish would also hold for
arbitrary $\omega$-regular objectives.
#### 7.2.1. Rational synthesis: state of the art
Two variants of the rational synthesis problem have been considered: the
_cooperative_ one, in which it is possible to tell the environment how to
behave, as long as the suggested behaviour for each component forms an
equilibrium, and the _non-cooperative_ one, in which the components of the
environment may have any behaviour that forms an equilibrium. The existence of
a solution to these problems can be expressed by the formulas
$\Phi_{\text{c-RS}}$ and $\Phi_{\text{nc-RS}}$, respectively, defined as
follows:
$\Phi_{\text{c-RS}}:=\langle\\!\langle x\rangle\\!\rangle^{o_{p}}\langle\\!\langle y_{1}\rangle\\!\rangle^{o_{p}}\ldots\langle\\!\langle y_{m}\rangle\\!\rangle^{o_{p}}(a,x)(\bm{e},\bm{y})\,\varphi_{\gamma}\wedge{\bf A}\psi_{g}$
$\Phi_{\text{nc-RS}}:=\langle\\!\langle x\rangle\\!\rangle^{o_{p}}[\\![y_{1}]\\!]^{o_{p}}\ldots[\\![y_{m}]\\!]^{o_{p}}(a,x)(\bm{e},\bm{y})\,\varphi_{\gamma}\to{\bf A}\psi_{g}$
where $\bm{e}=(e_{i})_{i\in[m]}$, $\bm{y}=(y_{i})_{i\in[m]}$, and
$\varphi_{\gamma}$ expresses that $\bm{y}$ forms an equilibrium for the
environment. Also, as in the previous section, $o_{p}$ represents the perfect-
information observation. Three different kinds of equilibria are considered in
(Kupferman et al., 2016): profiles of dominant strategies, Nash equilibria,
and subgame-perfect equilibria. Here we only consider Nash equilibria, because
subgames of games with imperfect information should start in situations where
all players have perfect information of the state, which we do not know how to
express in $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$; and for dominant
strategies, the natural formula to express them does not give rise to non-
trivial decidable cases in the imperfect-information setting that we introduce
later. The rational synthesis problem for Nash equilibria is obtained by
replacing $\varphi_{\gamma}$ in the above formula with:
$\varphi_{\text{NE}}:=\bigwedge_{i\in[m]}\Big{[}\Big{(}\langle\\!\langle y^{\prime}_{i}\rangle\\!\rangle^{o_{p}}(e_{i},y^{\prime}_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$
It is proved in (Kupferman et al., 2016) that these problems are decidable for
perfect information. Concerning imperfect information, because the existence
of Nash equilibria is undecidable for three players, the problem is
undecidable when the environment consists of at least three components (Filiot
et al., 2018). Three decidable cases are known: when the environment consists
of a single component (Filiot et al., 2018), when actions of all components
are public (Belardinelli et al., 2017a), and when only the system has
imperfect information while the (finitely many) components of the environment
are perfectly informed (Filiot et al., 2018).
We now extend the latter result by defining a generalisation of the rational
synthesis problem that we call _rational distributed synthesis_ , and solving
it in the case of hierarchical information. The case where the environment is
perfectly informed and the system consists of a single component, solved in
(Filiot et al., 2018), is a particular case of our Corollary 7.5 below (we
only consider LTL objectives, but our automata construction can be adapted to
handle all $\omega$-regular objectives). However, the other decidability result
established in (Filiot et al., 2018) does not assume hierarchical information,
and thus cannot be derived from the results we now present.
#### 7.2.2. Rational distributed synthesis
While under perfect information distributed synthesis amounts to synthesis for
a single meta-component that tells each component what to do, in the context
of imperfect information it makes sense to consider that the system to be
synthesised is composed of various components $\\{a_{1},\ldots,a_{n}\\}$ with
different observation power, say $o_{i}$ for component $a_{i}$. We also let
$o^{e}_{i}$ be the observation of the environment’s component $e_{i}$, for
$i\in[m]$.
We consider the imperfect-information variants of cooperative and non-
cooperative rational synthesis defined by the following
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ formulas:
$\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}\langle\\!\langle y_{1}\rangle\\!\rangle^{o^{e}_{1}}\ldots\langle\\!\langle y_{m}\rangle\\!\rangle^{o^{e}_{m}}(\bm{a},\bm{x})(\bm{e},\bm{y})\,\varphi_{\gamma}\wedge{\bf A}\psi_{g}$
$\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}:=\langle\\!\langle x_{1}\rangle\\!\rangle^{o_{1}}\ldots\langle\\!\langle x_{n}\rangle\\!\rangle^{o_{n}}[\\![y_{1}]\\!]^{o^{e}_{1}}\ldots[\\![y_{m}]\\!]^{o^{e}_{m}}(\bm{a},\bm{x})(\bm{e},\bm{y})\,\varphi_{\gamma}\to{\bf A}\psi_{g}$
The formula for Nash equilibrium is adapted as follows:
$\varphi^{\textnormal{\scriptsize ii}}_{\text{NE}}:=\bigwedge_{i\in[m]}\Big{[}\Big{(}\langle\\!\langle y^{\prime}_{i}\rangle\\!\rangle^{o^{e}_{i}}(e_{i},y^{\prime}_{i})\,{\bf A}\psi_{i}\Big{)}\to{\bf A}\psi_{i}\Big{]}$
The only difference with the perfect-information case is that we use the
observation of the different components of the environment instead of the
perfect-information observation.
We call the problems expressed by formulas $\Phi^{\textnormal{\scriptsize
ii}}_{\text{c-RS}}$ and $\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$
_cooperative rational distributed synthesis_ and _non-cooperative rational
distributed synthesis_ , respectively. As in the previous section on the
existence of Nash equilibria, one can see that even if there is a total
hierarchy on all observations, these formulas do not yield hierarchical
instances unless all observations are the same. However, the trick applied in
the proof of Corollary 7.3 also applies here, both for Nash equilibria and
subgame-perfect equilibria, i.e., we can replace each $o^{e}_{i}$ with $o_{p}$
in $\varphi^{\textnormal{\scriptsize ii}}_{\text{NE}}$ without affecting the
semantics of formulas $\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ and
$\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$. As a result, when there
is a hierarchy on observations
$o_{1},\ldots,o_{n},o^{e}_{1},\ldots,o^{e}_{m}$, the cooperative rational
distributed synthesis is decidable.
###### Corollary 7.4.
If we consider deterministic strategies and hierarchical observations, then
cooperative rational distributed synthesis is decidable.
For the non-cooperative variant, one cannot swap the universal quantifications
on strategies for the environment with the existential quantifications for the
system in order to obtain hierarchical instances, as the resulting formula
would then capture a different problem. As a consequence, in addition to a
hierarchy on observations $o_{1},\ldots,o_{n},o^{e}_{1},\ldots,o^{e}_{m}$, we
need to have that the components of the environment observe better than the
components of the system or, in other words, that the least informed component
of the environment observes better than the best informed component of the
system. When it is the case, we say that the environment is _more informed_
than the system.
###### Corollary 7.5.
Non-cooperative rational distributed synthesis is decidable for deterministic
strategies and hierarchical observations where the environment is more
informed than the system.
This result applies for instance when there is hierarchical information
amongst the components of the system, and the environment has perfect
information. Note that when the system consists of a single component, this
corresponds to the second decidability result in (Filiot et al., 2018). As we
mentioned in the introduction, considering that the opponent has perfect
information is something classic in two-player games with imperfect
information, as doing so ensures that the strategy one synthesises is winning
no matter how much the opponent observes. In Reif’s words, this amounts to
considering the possibility that the opponent may “cheat” and use information
that it normally does not have access to (Reif, 1984). The non-cooperative
rational synthesis problem is not precisely a two-player game, but it
resembles one in the sense that the system as a whole (composed of its various
components $a_{1},\ldots,a_{n}$) should win against any “rational” behaviour
of the environment as a whole. In this view, considering that the components
of the environment have perfect information thus yields a distributed system
that is robust to possible leaks of hidden information to the environment.
###### Remark 6.
When all components of the environment have perfect information,
$\Phi^{\textnormal{\scriptsize ii}}_{\text{c-RS}}$ and
$\Phi^{\textnormal{\scriptsize ii}}_{\text{nc-RS}}$ already form hierarchical
instances with games where there is hierarchical observation amongst the
system’s components, and one does not need to resort to the trick used in the
proof of Corollary 7.3. A consequence is that in that case, Corollaries 7.4
and 7.5 also hold for nondeterministic strategies.
## 8\. Conclusion
We introduced $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, a logic for
reasoning about strategic behaviour in multi-player games with imperfect
information. The syntax specifies the observations with which strategies have
to work, and thus allows one to reason about strategic problems in settings
where agents can change observation power, for instance by being eventually
granted access to previously hidden information. Moreover our logic contains
an outcome quantifier and an unbinding operator which simplify the semantics,
make it easier to express branching-time properties, allow us to naturally
consider nondeterministic strategies, and make the correspondence with
$\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$ tighter, enabling us
to derive precise complexity results for the model-checking of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$.
We isolated the class of hierarchical formula/model pairs $(\Phi,\mathcal{G})$
and proved that for such instances one can decide whether
$\mathcal{G}\models\Phi$. The proof reduces (hierarchical) instances of
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ to (hierarchical) formulas
of $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize ii}}$, a low-level logic
that we introduced, and that serves as a natural bridge between
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ and automata constructions.
We also studied in detail the complexity of the model-checking problems solved
in this work. To do so we introduced a new measure on formulas called
_simulation depth_. This measure, though being a purely syntactic notion,
reflects the complexity of automata constructions required to treat a given
formula.
Since one can alternate quantifiers in
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, our decidability result
goes beyond synthesis and can be used to easily obtain the decidability of
many strategic problems. In this work we applied it to the problem of
existence of Nash equilibria in games with hierarchical observation, and to
the imperfect-information generalisations of rational synthesis that we called
(cooperative and non-cooperative) _rational distributed synthesis_. Our result
has also been used to prove that the existence of admissible strategies in
games with hierarchical information is decidable (Brenguier et al., 2017).
An interesting direction for future work would be to try and adapt the notion
of hierarchical instances to allow for situations in which hierarchies can
change along a play, as done in (Berwanger et al., 2018). We would also like
to consider alternatives to the synchronous perfect recall setting considered
here, such as the classic asynchronous perfect recall setting (Fagin et al.,
1995; Puchala, 2010), or the more recent notion of causal knowledge (Genest et
al., 2015). Finally, it is often interesting in the presence of imperfect
information to introduce epistemic operators to reason explicitly about what
agents know. We already generalised the main result of this work to an
extension of $\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$ with such
operators (Maubert and Murano, 2018); we would like to see if this can be used
to reason about subgame-perfect equilibria in games with imperfect
information, which do not seem to be easy to characterise in
$\textnormal{{SL}}_{\textnormal{\scriptsize ii}}$, as mentioned in Section
7.2.1. Indeed, in games with imperfect information, the notion of subgame
specifies that the initial situation should be known to all players (Selten,
1965), a property that epistemic logics are meant to be able to express.
###### Acknowledgements.
We thank anonymous reviewers for their valuable comments on a previous version
of this work. This project has received funding from the European Union’s
Horizon 2020 research and innovation programme
(https://ec.europa.eu/programmes/horizon2020/en) under the Marie Sklodowska-
Curie grant agreement No 709188.
## References
* Alur et al. (2002) Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman. 2002. Alternating-time temporal logic. _J. ACM_ 49, 5 (2002), 672–713. https://doi.org/10.1145/585265.585270
* Belardinelli (2014) Francesco Belardinelli. 2014. Reasoning about Knowledge and Strategies: Epistemic Strategy Logic. In _SR’14_. 27–33. https://doi.org/10.4204/EPTCS.146.4
* Belardinelli (2015) Francesco Belardinelli. 2015. A Logic of Knowledge and Strategies with Imperfect Information. In _LAMAS’15_. 1–15.
* Belardinelli et al. (2017a) Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2017a. Verification of Broadcasting Multi-Agent Systems against an Epistemic Strategy Logic. In _IJCAI’17_. 91–97. https://doi.org/10.24963/ijcai.2017/14
* Belardinelli et al. (2017b) Francesco Belardinelli, Alessio Lomuscio, Aniello Murano, and Sasha Rubin. 2017b. Verification of Multi-agent Systems with Imperfect Information and Public Actions. In _AAMAS’17_. 1268–1276.
* Berthon et al. (2017) Raphael Berthon, Bastien Maubert, Aniello Murano, Sasha Rubin, and Moshe Y. Vardi. 2017. Strategy Logic with imperfect information. In _LICS’17_. IEEE, 1–12. https://doi.org/10.1109/LICS.2017.8005136
* Berwanger et al. (2010) Dietmar Berwanger, Krishnendu Chatterjee, Martin De Wulf, Laurent Doyen, and Thomas A Henzinger. 2010\. Strategy construction for parity games with imperfect information. _Information and computation_ 208, 10 (2010), 1206–1220.
* Berwanger et al. (2018) Dietmar Berwanger, Anup Basil Mathew, and Marie van den Bogaard. 2018. Hierarchical information and the synthesis of distributed strategies. _Acta Inf._ 55, 8 (2018), 669–701. https://doi.org/10.1007/s00236-017-0306-5
* Bittner et al. (2012) Benjamin Bittner, Marco Bozzano, Alessandro Cimatti, and Xavier Olive. 2012. Symbolic Synthesis of Observability Requirements for Diagnosability. In _AAAI’12_.
* Bouyer (2017) Patricia Bouyer. 2017\. Games on graphs with a public signal monitoring. _arXiv preprint arXiv:1710.07163_ (2017).
* Bouyer (2018) Patricia Bouyer. 2018\. Games on Graphs with a Public Signal Monitoring. In _FOSSACS’18_. Springer, 530–547. https://doi.org/10.1007/978-3-319-89366-2_29
* Bouyer et al. (2015) Patricia Bouyer, Romain Brenguier, Nicolas Markey, and Michael Ummels. 2015. Pure Nash Equilibria in Concurrent Deterministic Games. _Logical Methods in Computer Science_ 11, 2 (2015). https://doi.org/10.2168/LMCS-11(2:9)2015
* Bouyer et al. (2017) Patricia Bouyer, Nicolas Markey, and Steen Vester. 2017\. Nash equilibria in symmetric graph games with partial observation. _Information and Computation_ 254 (2017), 238–258.
* Brenguier et al. (2017) Romain Brenguier, Arno Pauly, Jean-François Raskin, and Ocan Sankur. 2017. Admissibility in Games with Imperfect Information. In _CONCUR’17_ , Vol. 85.
* Bulling and Jamroga (2014) Nils Bulling and Wojciech Jamroga. 2014. Comparing variants of strategic ability: how uncertainty and memory influence general properties of games. _AAMAS’14_ 28, 3 (2014), 474–518.
* Chatterjee and Doyen (2010) Krishnendu Chatterjee and Laurent Doyen. 2010. The complexity of partial-observation parity games. In _International Conference on Logic for Programming Artificial Intelligence and Reasoning_. Springer, 1–14.
* Chatterjee and Doyen (2014a) Krishnendu Chatterjee and Laurent Doyen. 2014a. Games with a Weak Adversary. In _ICALP’14_. 110–121. https://doi.org/10.1007/978-3-662-43951-7_10
* Chatterjee and Doyen (2014b) Krishnendu Chatterjee and Laurent Doyen. 2014b. Partial-observation stochastic games: How to win when belief fails. _ACM Transactions on Computational Logic (TOCL)_ 15, 2 (2014), 16\. https://doi.org/10.1145/2579821
* Chatterjee et al. (2017) Krishnendu Chatterjee, Laurent Doyen, Emmanuel Filiot, and Jean-François Raskin. 2017\. Doomsday equilibria for omega-regular games. _Inf. Comput._ 254 (2017), 296–315. https://doi.org/10.1016/j.ic.2016.10.012
* Chatterjee et al. (2010a) Krishnendu Chatterjee, Thomas A Henzinger, and Nir Piterman. 2010a. Strategy logic. _Information and Computation_ 208 (2010).
* Chatterjee et al. (2010b) Krishnendu Chatterjee, Thomas A. Henzinger, and Nir Piterman. 2010b. Strategy Logic. _Inf. Comput._ 208, 6 (2010), 677–693. https://doi.org/10.1016/j.ic.2009.07.004
* Clarke et al. (1999) Edmund M Clarke, Orna Grumberg, and Doron Peled. 1999\. _Model checking_. MIT press.
* Condurache et al. (2016) Rodica Condurache, Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. 2016\. The Complexity of Rational Synthesis. In _ICALP’16_. 121:1–121:15. https://doi.org/10.4230/LIPIcs.ICALP.2016.121
* Degorre et al. (2010) Aldric Degorre, Laurent Doyen, Raffaella Gentilini, Jean-François Raskin, and Szymon Toruńczyk. 2010. Energy and mean-payoff games with imperfect information. In _CSL’10_. Springer, 260–274.
* Dima and Tiplea (2011) Catalin Dima and Ferucio Laurentiu Tiplea. 2011. Model-checking ATL under Imperfect Information and Perfect Recall Semantics is Undecidable. _CoRR_ (2011). arXiv:1102.4225
* Doyen and Raskin (2011) Laurent Doyen and Jean-François Raskin. 2011\. Games with imperfect information: Theory and algorithms. _Lectures in Game Theory for Computer Scientists_ (2011), 185–212.
* Elgot and Rabin (1966) Calvin C. Elgot and Michael O. Rabin. 1966. Decidability and Undecidability of Extensions of Second (First) Order Theory of (Generalized) Successor. _JSL_ 31, 2 (1966), 169–181. https://doi.org/10.2307/2269808
* Emerson and Halpern (1986) E Allen Emerson and Joseph Y Halpern. 1986. ”Sometimes” and ”not never” revisited: on branching versus linear time temporal logic. _Journal of the ACM (JACM)_ 33, 1 (1986), 151–178.
* Fagin et al. (1995) Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. 1995. _Reasoning about knowledge_. Vol. 4. MIT press Cambridge.
* Filiot et al. (2018) Emmanuel Filiot, Raffaella Gentilini, and Jean-François Raskin. 2018\. Rational Synthesis Under Imperfect Information. In _LICS’18_. ACM, 422–431.
* Finkbeiner and Schewe (2005) Bernd Finkbeiner and Sven Schewe. 2005. Uniform Distributed Synthesis. In _LICS’05_. 321–330. https://doi.org/10.1109/LICS.2005.53
* Finkbeiner and Schewe (2010) Bernd Finkbeiner and Sven Schewe. 2010. Coordination Logic. In _CSL’10_. 305–319. https://doi.org/10.1007/978-3-642-15205-4_25
* Fisman et al. (2010) Dana Fisman, Orna Kupferman, and Yoad Lustig. 2010\. Rational synthesis. In _TACAS’10_. Springer, 190–204. https://doi.org/10.1007/978-3-642-12002-2_16
* French (2001) Tim French. 2001\. Decidability of quantifed propositional branching time logics. In _AJCAI’01_. 165–176. https://doi.org/10.1007/3-540-45656-2_15
* Gastin et al. (2009) Paul Gastin, Nathalie Sznajder, and Marc Zeitoun. 2009\. Distributed synthesis for well-connected architectures. _FMSD_ 34, 3 (2009), 215–237. https://doi.org/10.1007/s10703-008-0064-7
* Genest et al. (2015) Blaise Genest, Doron Peled, and Sven Schewe. 2015\. Knowledge= observation+ memory+ computation. In _International Conference on Foundations of Software Science and Computation Structures_. Springer, 215–229.
* Guelev et al. (2011) Dimitar P. Guelev, Catalin Dima, and Constantin Enea. 2011\. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. _Journal of Applied Non-Classical Logics_ 21, 1 (2011), 93–131. https://doi.org/10.3166/jancl.21.93-131
* Gutierrez et al. (2018) Julian Gutierrez, Giuseppe Perelli, and Michael Wooldridge. 2018\. Imperfect information in Reactive Modules games. _Inf. Comput._ 261, Part (2018), 650–675. https://doi.org/10.1016/j.ic.2018.02.023
* Halpern and Vardi (1989) Joseph Y. Halpern and Moshe Y. Vardi. 1989. The complexity of reasoning about knowledge and time. I. Lower bounds. _JCSS_ 38, 1 (1989), 195–237.
* Huang and Van Der Meyden (2014) Xiaowei Huang and Ron Van Der Meyden. 2014. A Temporal Logic of Strategic Knowledge. In _KR’14_.
* Jamroga and Bulling (2011) W. Jamroga and N. Bulling. 2011. Comparing variants of strategic ability. In _IJCAI’11_. AAAI Press, 252–257. https://doi.org/10.1023/A:1026171312755
* Jamroga and Murano (2014) Wojciech Jamroga and Aniello Murano. 2014. On module checking and strategies. In _AAMAS’14_. International Foundation for Autonomous Agents and Multiagent Systems, 701–708.
* Jamroga and Murano (2015) Wojciech Jamroga and Aniello Murano. 2015. Module checking of strategic ability. In _AAMAS’15_. International Foundation for Autonomous Agents and Multiagent Systems, 227–235.
* Jamroga and van der Hoek (2004) Wojciech Jamroga and Wiebe van der Hoek. 2004. Agents that Know How to Play. _Fundam. Inform._ 63, 2-3 (2004), 185–219.
* Knight and Maubert (2019) Sophia Knight and Bastien Maubert. 2019. Dealing with imperfect information in Strategy Logic. arXiv:1908.02488. Presented at SR’15.
* Kupferman (1999) Orna Kupferman. 1999\. Augmenting branching temporal logics with existential quantification over atomic propositions. _JLC_ 9, 2 (1999), 135–147. https://doi.org/10.1093/logcom/9.2.135
* Kupferman et al. (2000a) Orna Kupferman, Parthasarathy Madhusudan, Pazhamaneri Subramaniam Thiagarajan, and Moshe Y. Vardi. 2000a. Open Systems in Reactive Environments: Control and Synthesis. In _CONCUR’00_. 92–107.
* Kupferman et al. (2016) Orna Kupferman, Giuseppe Perelli, and Moshe Y. Vardi. 2016\. Synthesis with rational environments. _Ann. Math. Artif. Intell._ 78, 1 (2016), 3–20. https://doi.org/10.1007/s10472-016-9508-8
* Kupferman and Vardi (1999) Orna Kupferman and Moshe Y. Vardi. 1999. Church’s problem revisited. _BSL_ (1999), 245–263.
* Kupferman and Vardi (2001) Orna Kupferman and Moshe Y. Vardi. 2001. Synthesizing distributed systems. In _LICS’01_. 389–398. https://doi.org/10.1109/LICS.2001.932514
* Kupferman et al. (2000b) Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. 2000b. An automata-theoretic approach to branching-time model checking. _JACM_ 47, 2 (2000), 312–360. https://doi.org/10.1145/333979.333987
* Kupferman et al. (2001) Orna Kupferman, Moshe Y Vardi, and Pierre Wolper. 2001\. Module checking. _Information and Computation_ 164, 2 (2001), 322–344.
* Laroussinie and Markey (2014) François Laroussinie and Nicolas Markey. 2014. Quantified CTL: Expressiveness and Complexity. _LMCS_ 10, 4 (2014). https://doi.org/10.2168/LMCS-10(4:17)2014
* Laroussinie and Markey (2015) François Laroussinie and Nicolas Markey. 2015. Augmenting ATL with strategy contexts. _Inf. Comput._ 245 (2015), 98–123. https://doi.org/10.1016/j.ic.2014.12.020
* Laroussinie et al. (2015) François Laroussinie, Nicolas Markey, and Arnaud Sangnier. 2015\. ATLsc with partial observation. In _GandALF’15_. 43–57. https://doi.org/10.4204/EPTCS.193.4
* Läuchli and Savioz (1987) Hans Läuchli and Christian Savioz. 1987. Monadic second order definable relations on the binary tree. _JSL_ 52, 01 (1987), 219–226. https://doi.org/10.2307/2273878
* Löding (2011) Christof Löding. 2011\. Automata on Infinite Trees. In _preliminary version for the handbook Automata: from Mathematics to Applications_ , Jean-Eric Pin (Ed.).
* Lomuscio and Raimondi (2006) Alessio Lomuscio and Franco Raimondi. 2006. MCMAS : A Model Checker for Multi-agent Systems. In _TACAS’06_ _(LNCS 4314)_. 450–454.
* Maubert and Murano (2018) Bastien Maubert and Aniello Murano. 2018. Reasoning about knowledge and strategies under hierarchical information. In _KR’18_.
* Mogavero et al. (2014) Fabio Mogavero, Aniello Murano, Giuseppe Perelli, and Moshe Y. Vardi. 2014. Reasoning About Strategies: On the Model-Checking Problem. _ACM Trans. Comput. Log._ 15, 4 (2014), 34:1–34:47. https://doi.org/10.1145/2631917
* Muller and Schupp (1995) David E. Muller and Paul E. Schupp. 1995. Simulating Alternating Tree Automata by Nondeterministic Automata: New Results and New Proofs of the Theorems of Rabin, McNaughton and Safra. _TCS_ 141, 1&2 (1995), 69–107. https://doi.org/10.1016/0304-3975(94)00214-4
* Pérez (2017) Guillermo A Pérez. 2017\. The fixed initial credit problem for partial-observation energy games is Ack-complete. _Inform. Process. Lett._ 118 (2017), 91–99.
* Peterson et al. (2001) Gary Peterson, John Reif, and Salman Azhar. 2001. Lower bounds for multiplayer noncooperative games of incomplete information. _CAMWA_ 41, 7 (2001), 957–992. https://doi.org/10.1016/S0898-1221(00)00333-3
* Peterson et al. (2002) Gary Peterson, John Reif, and Salman Azhar. 2002. Decision algorithms for multiplayer noncooperative games of incomplete information. _CAMWA_ 43, 1 (2002), 179–206. https://doi.org/10.1016/S0898-1221(01)00282-6
* Peterson and Reif (1979) Gary L. Peterson and John H. Reif. 1979. Multiple-Person Alternation. In _SFCS’79_. 348–363. https://doi.org/10.1109/SFCS.1979.25
* Pinchinat and Riedweg (2005) Sophie Pinchinat and Stéphane Riedweg. 2005. A decidable class of problems for control under partial observation. _IPL_ 95, 4 (2005), 454–460. https://doi.org/10.1016/j.ipl.2005.04.011
* Pnueli (1977) Amir Pnueli. 1977\. The Temporal Logic of Programs. In _FOCS_. 46–57.
* Pnueli and Rosner (1989) Amir Pnueli and Roni Rosner. 1989. On the synthesis of a reactive module. In _POPL_. 179–190.
* Pnueli and Rosner (1990) Amir Pnueli and Roni Rosner. 1990. Distributed reactive systems are hard to synthesize. In _FOCS’90_. 746–757. https://doi.org/10.1109/FSCS.1990.89597
* Puchala (2010) Bernd Puchala. 2010\. Asynchronous Omega-Regular Games with Partial Information. In _MFCS_. 592–603.
* Rabin (1969) Michael O Rabin. 1969\. Decidability of second-order theories and automata on infinite trees. _TAMS_ 141 (1969), 1–35. https://doi.org/10.1090/S0002-9947-1969-0246760-1
* Ramanujam and Simon (2010) Ramaswamy Ramanujam and Sunil Simon. 2010. A communication based model for games of imperfect information. In _International Conference on Concurrency Theory_. Springer, 509–523.
* Reif (1984) John H Reif. 1984\. The complexity of two-player games of incomplete information. _Journal of computer and system sciences_ 29, 2 (1984), 274–301. https://doi.org/10.1016/0022-0000(84)90034-5
* Schewe and Finkbeiner (2007) Sven Schewe and Bernd Finkbeiner. 2007. Distributed Synthesis for Alternating-Time Logics. In _ATVA’07_. 268–283. https://doi.org/10.1007/978-3-540-75596-8_20
* Schobbens (2004) Pierre-Yves Schobbens. 2004\. Alternating-time logic with imperfect recall. _Electr. Notes Theor. Comput. Sci._ 85, 2 (2004), 82–93. https://doi.org/10.1016/S1571-0661(05)82604-0
* Selten (1965) Reinhard Selten. 1965\. Spieltheoretische behandlung eines oligopolmodells mit nachfrageträgheit: Teil i: Bestimmung des dynamischen preisgleichgewichts. _Zeitschrift für die gesamte Staatswissenschaft/Journal of Institutional and Theoretical Economics_ H. 2 (1965), 301–324.
* Sistla (1983) A Prasad Sistla. 1983\. _Theoretical Issues in the Design and Certification of Distributed Systems._ Ph.D. Dissertation. Harvard University, Cambridge, MA, USA.
* Thomas (1992) Wolfgang Thomas. 1992\. Infinite Trees and Automaton-Definable Relations over omega-Words. _TCS_ 103, 1 (1992), 143–159. https://doi.org/10.1016/0304-3975(92)90090-3
* van der Meyden and Vardi (1998) Ron van der Meyden and Moshe Y. Vardi. 1998. Synthesis from knowledge-based specifications. In _CONCUR’98_. Springer, 34–49.
* van der Meyden and Wilke (2005) Ron van der Meyden and Thomas Wilke. 2005. Synthesis of Distributed Systems from Knowledge-Based Specifications. In _CONCUR’05_. 562–576.
* Vardi and Wolper (1994) Moshe Y. Vardi and Pierre Wolper. 1994. Reasoning about infinite computations. _IC_ 115, 1 (1994), 1–37.
* Zielonka (1998) Wieslaw Zielonka. 1998\. Infinite Games on Finitely Coloured Graphs with Applications to Automata on Infinite Trees. _TCS_ 200, 1-2 (1998), 135–183.
## Appendix A Proof of Proposition 4.12
First, for every LTL formula $\psi$ one can build a parity word automaton
$\mathcal{W}^{\psi}$ with two colours and $2^{O(|\psi|)}$ states (Vardi and
Wolper, 1994). Let $K_{\psi}\in\mathbb{N}$ be such that the number of states
of $\mathcal{W}^{\psi}$ is bounded by $2^{K_{\psi}|\psi|}$.
We also state a more precise version of Theorem 4.6: for every ATA
$\mathcal{A}$ with $n$ states and $l$ colours, one can build an NTA
$\mathcal{N}$ with at most $2^{O(nl\log(nl))}$ states and $O(nl)$ colours such
that $\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{N})$ (Muller and Schupp,
1995; Löding, 2011). We let $K_{1},K_{2}\in\mathbb{N}$ be such that the number
of states of $\mathcal{N}$ is bounded by $2^{K_{1}nl\log(nl)}$ and the number
of colours by $K_{2}nl$.
Proposition 4.12 follows directly from the following.
###### Proposition A.1.
Let $\Phi$ be a $\textnormal{{QCTL}}^{*}_{\textnormal{\scriptsize
i,$\tiny{\subseteq}$}}$ formula, $\mathcal{S}$ a CKS, and let
${\textnormal{AP}_{\exists}}={\textnormal{AP}_{\exists}}(\Phi)$. For every
subformula $\varphi$ of $\Phi$ and state $s\in\mathcal{S}$, it holds that:
* •
if $\mbox{sd}_{k}(\varphi)=0$, $\mathcal{A}_{s}^{\varphi}$ has at most
$f_{\mathcal{S}}^{\varphi}$ states and 2 colours,
* •
if $\mbox{sd}_{k}(\varphi)\geq 1$, $\mathcal{A}_{s}^{\varphi}$ has at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid f_{\mathcal{S}}^{\varphi}\log
f_{\mathcal{S}}^{\varphi}\big{)}$ states, and its number of colours is at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid f_{\mathcal{S}}^{\varphi}\log
f_{\mathcal{S}}^{\varphi}\big{)}$,
with
$f_{\mathcal{S}}^{\varphi}=(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)}2^{K_{\psi}|\varphi|{\bf E}\mathrm{d}(\varphi)}$.
In addition, if $\mathcal{A}_{s}^{\varphi}$ has state set $Q$, for each $q\in
Q$ and $a\in 2^{\textnormal{AP}_{\exists}}$, we have
$|\delta(q,a)|\leq|\mathcal{S}||Q|^{|\mathcal{S}|}2^{H|\varphi|}$, where
$H=1+{\bf E}\mathrm{d}(\varphi)$.
###### Proof.
We prove the result by induction on $\varphi$.
$\bm{\varphi=p:}$ in this case
$\mbox{sd}_{k}(\varphi)=\exists\mathrm{d}(\varphi)={\bf
E}\mathrm{d}(\varphi)=0$. By construction, $\mathcal{A}_{s}^{\varphi}$ has one
state $q_{\iota}$ and two colours, so that the first part of the claim holds.
In addition, each formula of its transition function is of size one, so that
the second part of the claim also holds.
$\bm{\varphi=\neg\varphi^{\prime}:}$ Complementing an ATA does not change the
number of states, number of colours or size of formulas in the transition
function, so that the result follows by induction hypothesis and the fact that
$|\varphi^{\prime}|\leq|\varphi|$ and ${\bf E}\mathrm{d}(\varphi)={\bf
E}\mathrm{d}(\varphi^{\prime})$.
$\bm{\varphi=\varphi_{1}\vee\varphi_{2}:}$ To establish the claim about number
of states and colours we split cases. First we consider the case where
$\mbox{sd}_{k}(\varphi)=0$. In that case we also have
$\mbox{sd}_{k}(\varphi_{1})=\mbox{sd}_{k}(\varphi_{2})=0$. By induction
hypothesis, for $i\in\\{1,2\\}$, $\mathcal{A}_{s}^{\varphi_{i}}$ has at most
$f_{\mathcal{S}}^{\varphi_{i}}$ states and $2$ colours. These automata are
then narrowed down, but the narrowing operation leaves the size of formulas in
the transition function unchanged (in fact they may become smaller, but not
bigger, see (Kupferman and Vardi, 1999)). Therefore, by construction
$\mathcal{A}_{s}^{\varphi}$ has at most
$1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}$ states and two
colours.
Now we have that
$\displaystyle 1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}=1+\sum_{i\in\\{1,2\\}}(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi_{i})}|\varphi_{i}||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi_{i})}2^{K_{\psi}|\varphi_{i}|{\bf E}\mathrm{d}(\varphi_{i})}$
$\displaystyle=1+(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}\sum_{i\in\\{1,2\\}}2^{K_{\psi}|\varphi_{i}|{\bf E}\mathrm{d}(\varphi)}$
$\displaystyle\leq 1+(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}(|\varphi_{1}|+|\varphi_{2}|){\bf E}\mathrm{d}(\varphi)}$
$\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf E}\mathrm{d}(\varphi)}2^{K_{\psi}(|\varphi_{1}|+|\varphi_{2}|+1){\bf E}\mathrm{d}(\varphi)}$
We get that
(8) $1+f_{\mathcal{S}}^{\varphi_{1}}+f_{\mathcal{S}}^{\varphi_{2}}\leq
f_{\mathcal{S}}^{\varphi}$
which concludes the claim about the number of states.
Now for the case where $\mbox{sd}_{k}(\varphi)\geq 1$. By definition of
nondeterminisation depth, for at least one $i\in\\{1,2\\}$ we have
$\mbox{sd}_{k}(\varphi_{i})\geq 1$. Also, the number of colours used in
$\mathcal{A}_{s}^{\varphi}$ is the maximum between the number of colours used
in $\mathcal{A}_{s}^{\varphi_{1}}$ and those used in
$\mathcal{A}_{s}^{\varphi_{2}}$. By induction hypothesis it is the case that
$\mathcal{A}_{s}^{\varphi_{i}}$ has at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})-1\mid
f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$
colours if $\mbox{sd}_{k}(\varphi_{i})\geq 1$, or 2 if
$\mbox{sd}_{k}(\varphi_{i})=0$. Therefore, the number of colours in
$\mathcal{A}_{s}^{\varphi}$ is at most
$\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})-1\mid
f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$ for
some $i$, which is less than $\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$.
For the number of states $|Q|$ in $\mathcal{A}_{s}^{\varphi}$, we have that
$|Q|=1+|Q_{1}|+|Q_{2}|$, where $Q_{i}$ is the set of states of
$\mathcal{A}_{s}^{\varphi_{i}}$. By induction hypothesis we get
$\displaystyle|Q|$ $\displaystyle\leq
1+\sum_{i\in\\{1,2\\}}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})\mid
f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$
$\displaystyle\leq
1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\sum_{i\in\\{1,2\\}}f_{\mathcal{S}}^{\varphi_{i}}\log
f_{\mathcal{S}}^{\varphi_{i}}\big{)}$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(\sum_{i\in\\{1,2\\}}f_{\mathcal{S}}^{\varphi_{i}}+1)\log
f_{\mathcal{S}}^{\varphi}\big{)}$ $\displaystyle|Q|$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ (using
Equation (8)),
which concludes the claim about the number of states.
Concerning the size of formulas in the transition function, for all states
from $\mathcal{A}_{s}^{\varphi_{1}}$ and $\mathcal{A}_{s}^{\varphi_{2}}$ the
transition function is unchanged and the result thus holds by induction
hypothesis. For the remaining state $q_{\iota}$, we have by definition
$\delta(q_{\iota},a)=\delta^{1}(q_{\iota}^{1},a)\vee\delta^{2}(q_{\iota}^{2},a)$
and thus
$|\delta(q_{\iota},a)|=|\delta^{1}(q_{\iota}^{1},a)|+|\delta^{2}(q_{\iota}^{2},a)|+1$.
By induction hypothesis we get that
$\displaystyle|\delta(q_{\iota},a)|$
$\displaystyle\leq|\mathcal{S}||Q_{1}|^{|\mathcal{S}|}2^{H(\varphi_{1})|\varphi_{1}|}+|\mathcal{S}||Q_{2}|^{|\mathcal{S}|}2^{H(\varphi_{2})|\varphi_{2}|}+1$
$\displaystyle\leq|\mathcal{S}|2^{H(\varphi)(|\varphi_{1}|+|\varphi_{2}|)}(|Q_{1}|^{|\mathcal{S}|}+|Q_{2}|^{|\mathcal{S}|})$
$\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}(|Q_{1}|+|Q_{2}|)^{|\mathcal{S}|}$
And thus
$|\delta(q_{\iota},a)|\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$
as required.
$\bm{\varphi={\bf E}\psi:}$ The word automaton built for the LTL skeleton of
$\psi$ is in fact a Büchi automaton, and thus uses only two colours. The
number of colours used by $\mathcal{A}_{s}^{\varphi}$ is therefore the maximum
number of colours used by the automata $\mathcal{A}_{s}^{\varphi_{i}}$ built
for the maximal state subformulas $\varphi_{i}$ in $\psi$, and the result
follows by induction hypothesis.
Concerning the number of states, let $|Q_{\varphi}|$ (resp. $|Q_{i}|$,
$|Q_{\psi}|$) be the number of states in $\mathcal{A}_{s}^{\varphi}$ (resp.
$\mathcal{A}_{s}^{\varphi_{i}}$, $\mathcal{W}^{\psi}$). Note that the number
of states in $\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$ does not depend on
$s^{\prime}$. Recall that $\max(\psi)=\\{\varphi_{1},\ldots,\varphi_{n}\\}$ is
the set of maximal state subformulas of $\psi$, and let $\psi^{\prime}$ be the
LTL skeleton of $\psi$, i.e., the LTL formula obtained from $\psi$ by
replacing maximal state subformulas $\varphi_{i}$ with propositions
$p_{\varphi_{i}}$. We thus have
$\displaystyle|Q|$
$\displaystyle=|Q_{\psi}||\mathcal{S}|+2|\mathcal{S}|\sum_{i\in[n]}|Q_{i}|$
$\displaystyle\leq
2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|+2|\mathcal{S}|\sum_{i\in[n]}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi_{i})\mid
f_{\mathcal{S}}^{\varphi_{i}}\log f_{\mathcal{S}}^{\varphi_{i}}\big{)}$
$\displaystyle|Q|$ $\displaystyle\leq
2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}\log
f_{\mathcal{S}}^{\varphi_{i}}\big{)}\right)$
And thus
(9) $|Q|\leq
2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid\log
f_{\mathcal{S}}^{\varphi}\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}\big{)}\right)$
Now observe that for each $i\in[n]$ we have that ${\bf
E}\mathrm{d}(\varphi_{i})\leq{\bf E}\mathrm{d}(\varphi)-1$, and
$\exists\mathrm{d}(\varphi_{i})=\exists\mathrm{d}(\varphi)$. Therefore,
$\displaystyle\sum_{i\in[n]}f_{\mathcal{S}}^{\varphi_{i}}$
$\displaystyle=(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}\sum_{i\in[n]}|\varphi_{i}||\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi_{i})}2^{K_{\psi}|\varphi_{i}|{\bf
E}\mathrm{d}(\varphi_{i})}$
$\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)-1}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf
E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}$
Using this in Equation (9) we get
$\displaystyle|Q|$ $\displaystyle\leq
2^{K_{\psi}|\psi^{\prime}|}|\mathcal{S}|\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)-1}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf
E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log
f_{\mathcal{S}}^{\varphi}\big{)}\right)$ $\displaystyle\leq
2^{K_{\psi}|\psi^{\prime}|}\left(1+\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)}(\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf
E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log
f_{\mathcal{S}}^{\varphi}\big{)}\right)$ $\displaystyle\leq
2^{K_{\psi}|\psi^{\prime}|}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)}(1+\sum_{i\in[n]}|\varphi_{i}|)2^{K_{\psi}({\bf
E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\log
f_{\mathcal{S}}^{\varphi}\big{)}$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)}|\varphi|2^{K_{\psi}B}\log
f_{\mathcal{S}}^{\varphi}\big{)},$
where $B=({\bf
E}\mathrm{d}(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|+|\psi^{\prime}|$. To
conclude it only remains to show that $B\leq|\varphi|{\bf
E}\mathrm{d}(\varphi)$. Because $\varphi={\bf E}\psi$, it holds that ${\bf
E}\mathrm{d}(\varphi)\geq 1$. If ${\bf E}\mathrm{d}(\varphi)=1$, we have
$B=|\psi^{\prime}|\leq|\varphi|{\bf E}\mathrm{d}(\varphi)$. Now if ${\bf
E}\mathrm{d}(\varphi)\geq 2$, we have
$B=({\bf
E}\mathrm{d}(\varphi)-2)\sum_{i\in[n]}|\varphi_{i}|+|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|$
Clearly, $\sum_{i\in[n]}|\varphi_{i}|\leq|\varphi|$, and
$|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq 2|\varphi|$, and the result
follows. Note that it could seem that
$|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq|\varphi|$. It is true if one
defines the size of a formula as the number of connectors, but not if one also
counts atomic propositions, as we do here. However it is true that
$|\psi^{\prime}|+\sum_{i\in[n]}|\varphi_{i}|\leq 2|\varphi|$, independently of
the definition of formulas’ size.
It remains to establish the claim about the size of transition formulas. By
definition, for every state $q$ of $\mathcal{A}_{s}^{\varphi}$ that comes from
some $\mathcal{A}^{i}_{s^{\prime}}$ or
$\overline{\mathcal{A}^{i}_{s^{\prime}}}$, the transition function is
unchanged and thus the result follows by induction hypothesis and the fact
that narrowing and complementation do not increase the size of formulas in
transition functions. Now for the remaining states, for each
$(q^{\psi},s^{\prime})\in Q$ and every $a\in
2^{{\textnormal{AP}_{\exists}}(\Phi)}$, we have
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$
$\displaystyle\leq\sum_{a^{\prime}\in
2^{\max(\psi)}}\left(|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|+1+\sum_{\varphi_{i}\in
a^{\prime}}(|\delta^{i}_{s^{\prime}}(q^{i}_{s^{\prime}},a)|+1)+\sum_{\varphi_{i}\notin
a^{\prime}}(|\overline{\delta^{i}_{s^{\prime}}}(\overline{q^{i}_{s^{\prime}}},a)|+1)\right)$
Now by induction hypothesis, and because complementation does not increase the
size of formulas, we get:
(10) $|\delta((q^{\psi},s^{\prime}),a)|\leq\sum_{a^{\prime}\in
2^{\max(\psi)}}\left(|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|+2\sum_{i\in[n]}|\mathcal{S}|2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2^{|\max(\psi)|}+2|\max(\psi)|2^{|\max(\psi)|},$
where $|Q_{i}|$ is the number of states in automaton
$\mathcal{A}_{s^{\prime}}^{\varphi_{i}}$. Now by definition,
$\displaystyle|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|$
$\displaystyle=\left(\sum_{q^{\prime}\in\Delta^{\psi}(q^{\psi},a^{\prime})}\sum_{s^{\prime\prime}\in
R(s^{\prime})}1\right)+|\Delta^{\psi}(q^{\psi},a^{\prime})||R(s^{\prime})|-1$
$\displaystyle|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|$
$\displaystyle\leq 2|\Delta^{\psi}(q^{\psi},a^{\prime})||R(s^{\prime})|-1$
We thus have
(11) $|\delta_{\psi}((q^{\psi},s^{\prime}),a^{\prime})|\leq
2|Q_{\psi^{\prime}}||\mathcal{S}|-1$
where $Q_{\psi^{\prime}}$ is the set of states of the word automaton
$\mathcal{W}^{\psi}$. Using this in Equation (10) we get:
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq
2^{|\max(\psi)|}\left(2|Q_{\psi^{\prime}}||\mathcal{S}|-1+2\sum_{i\in[n]}|\mathcal{S}|2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2^{|\max(\psi)|}+2|\max(\psi)|2^{|\max(\psi)|}$
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq
2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}\right)+2|\max(\psi)|2^{|\max(\psi)|}$
But for natural numbers $\\{a_{i},b_{i}\\}_{i\in[n]}$, it holds that
$\sum_{i\in[n]}2^{a_{i}}b_{i}=2^{\sum_{i\in[n]}a_{i}}\sum_{i\in[n]}b_{i}-\sum_{i\in[n]}2^{a_{i}}(2^{\sum_{j\neq
i}a_{j}}-1)b_{i}$
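(To see why this identity holds — a one-line check added here — note that
$2^{a_{i}}(2^{\sum_{j\neq i}a_{j}}-1)=2^{\sum_{j\in[n]}a_{j}}-2^{a_{i}}$, so
the subtracted sum equals
$2^{\sum_{i\in[n]}a_{i}}\sum_{i\in[n]}b_{i}-\sum_{i\in[n]}2^{a_{i}}b_{i}$, and
rearranging gives the stated equality.)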
Applying this to $a_{i}=H(\varphi_{i})|\varphi_{i}|$ and
$b_{i}=|Q_{i}|^{|\mathcal{S}|}$ we obtain
$\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}|Q_{i}|^{|\mathcal{S}|}=2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}-\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq
i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}$
We thus get that
$|\delta((q^{\psi},s^{\prime}),a)|\leq
2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)+C,$
with
$\displaystyle C$
$\displaystyle=2|\max(\psi)|2^{|\max(\psi)|}-2^{|\max(\psi)|+1}|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq
i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}$
$\displaystyle=2^{|\max(\psi)|}\left(2|\max(\psi)|-2|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq
i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}\right)$
If $n=|\max(\psi)|>1$, i.e., there are at least two maximal state subformulas,
then $\sum_{j\neq i}H(\varphi_{j})|\varphi_{j}|>0$, hence
$2|\mathcal{S}|\sum_{i\in[n]}2^{H(\varphi_{i})|\varphi_{i}|}(2^{\sum_{j\neq
i}H(\varphi_{j})|\varphi_{j}|}-1)|Q_{i}|^{|\mathcal{S}|}\geq
4n=4|\max(\psi)|$, which implies that $C\leq 0$, and thus
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq
2^{|\max(\psi)|+1}|\mathcal{S}|\left(|Q_{\psi^{\prime}}|+2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)$
$\displaystyle\leq
2^{|\max(\psi)|+1}|\mathcal{S}|2^{\sum_{i\in[n]}H(\varphi_{i})|\varphi_{i}|}\left(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+\sum_{i\in[n]}|Q_{i}|^{|\mathcal{S}|}\right)$
$\displaystyle\leq|\mathcal{S}|2^{|\max(\psi)|+1+(H(\varphi)-1)\sum_{i\in[n]}|\varphi_{i}|}\left(|Q_{\psi^{\prime}}|+\sum_{i\in[n]}|Q_{i}|\right)^{|\mathcal{S}|}$
$\displaystyle\leq|\mathcal{S}|2^{|\varphi|+(H(\varphi)-1)|\varphi|}|Q|^{|\mathcal{S}|}$
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$
$\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$
It remains to consider the case where $\max(\psi)=\\{\varphi_{1}\\}$. In that
case there are only two letters in the alphabet $2^{\max(\psi)}$, which are
$\emptyset$ and $\\{\varphi_{1}\\}$. The transition formulas then simplify and
one gets that
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$
$\displaystyle\leq|\delta_{\psi}((q^{\psi},s^{\prime}),\emptyset)|+1+|\overline{\delta^{1}_{s^{\prime}}}(\overline{q^{1}_{s^{\prime}}},a)|+1+|\delta_{\psi}((q^{\psi},s^{\prime}),\\{\varphi_{1}\\})|+1+|\delta^{1}_{s^{\prime}}(q^{1}_{s^{\prime}},a)|$
Using Equation (11) and the induction hypothesis we get
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$ $\displaystyle\leq
4|Q_{\psi^{\prime}}||\mathcal{S}|-2+2|\mathcal{S}|2^{H(\varphi_{1})|\varphi_{1}|}|Q_{1}|^{|\mathcal{S}|}+3$
$\displaystyle\leq
1+2|\mathcal{S}|(2|Q_{\psi^{\prime}}|+2^{H(\varphi_{1})|\varphi_{1}|}|Q_{1}|^{|\mathcal{S}|})$
$\displaystyle\leq
1+2|\mathcal{S}|2^{(H(\varphi)-1)|\varphi_{1}|}(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+|Q_{1}|^{|\mathcal{S}|})$
$\displaystyle\leq
1+|\mathcal{S}|2^{H(\varphi)|\varphi|}(|Q_{\psi^{\prime}}|^{|\mathcal{S}|}+|Q_{1}|^{|\mathcal{S}|})$
$\displaystyle|\delta((q^{\psi},s^{\prime}),a)|$
$\displaystyle\leq|\mathcal{S}|2^{H(\varphi)|\varphi|}|Q|^{|\mathcal{S}|}$
$\bm{\varphi=\exists}^{\bm{\textnormal{{o}}}}\bm{p.\,\varphi^{\prime}:}$ We
first establish the claim for states and colours, and we start with the case
$\mbox{sd}_{k}(\varphi)=\mbox{sd}_{k}(\varphi^{\prime})$. By definition we
necessarily have that $\mbox{sd}_{x}(\varphi^{\prime})=\mbox{nd}$, i.e.,
$\mathcal{A}_{s}^{\varphi^{\prime}}$ is nondeterministic, and
$\textnormal{{o}}=I_{\varphi^{\prime}}$, therefore there is no need to use
narrowing or nondeterminisation here. $\mathcal{A}_{s}^{\varphi}$ is obtained
by directly projecting $\mathcal{A}_{s}^{\varphi^{\prime}}$, an operation that
does not change the number of states or colours, so that the claim for states
and colours follows directly by induction hypothesis.
Now we consider the case where
$\mbox{sd}_{k}(\varphi)\neq\mbox{sd}_{k}(\varphi^{\prime})$, which implies
that $\mbox{sd}_{k}(\varphi)\geq 1$. Let $n$ be the number of states and $l$
the number of colours in $\mathcal{A}_{s}^{\varphi^{\prime}}$. In this case
$\mathcal{A}_{s}^{\varphi^{\prime}}$ is first narrowed down, which does not
change number of states or colours. The resulting automaton is then
nondeterminised, yielding an automaton with at most $2^{K_{1}nl\log nl}$
states and $K_{2}nl$ colours.
Again, we split cases: if $\mbox{sd}_{k}(\varphi^{\prime})=0$, by induction
hypothesis, $n\leq f_{\mathcal{S}}^{\varphi^{\prime}}$ and $l=2$. For the
number of colours, observing that
$\exists\mathrm{d}(\varphi)=\exists\mathrm{d}(\varphi^{\prime})+1$, we have
$\displaystyle K_{2}nl\leq 2K_{2}f_{\mathcal{S}}^{\varphi^{\prime}}$
$\displaystyle=2K_{2}(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi^{\prime})}|\varphi^{\prime}||\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi^{\prime})}2^{K_{\psi}|\varphi^{\prime}|{\bf
E}\mathrm{d}(\varphi^{\prime})}$
$\displaystyle\leq(4K_{1}+2K_{2})^{\exists\mathrm{d}(\varphi)}|\varphi||\mathcal{S}|^{{\bf
E}\mathrm{d}(\varphi)}2^{K_{\psi}|\varphi|{\bf E}\mathrm{d}(\varphi)}$
$\displaystyle K_{2}nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$
For the number of states, we have that
$\displaystyle 2^{K_{1}nl\log nl}$ $\displaystyle\leq
2^{2K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log(2f_{\mathcal{S}}^{\varphi^{\prime}})}\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid
f_{\mathcal{S}}^{\varphi}\log(f_{\mathcal{S}}^{\varphi})\big{)}$
Now for the final case, if
$\mbox{sd}_{k}(\varphi)=\mbox{sd}_{k}(\varphi^{\prime})+1$ and
$\mbox{sd}_{k}(\varphi^{\prime})\geq 1$, by induction hypothesis
$n\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid
f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ and
$l\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid
f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$. For the number of colours in
$\mathcal{A}_{s}^{\varphi}$ we thus get
$\displaystyle K_{2}nl$ $\displaystyle\leq
K_{2}\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid
f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}2^{f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid
2K_{2}f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{2}nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)-1\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$
Concerning the number of states, we observe that
$\displaystyle nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid
f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}2^{f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$ $\displaystyle nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid
2f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{1}nl\log nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})-1\mid
2K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}2^{2f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}}\big{)}$ $\displaystyle K_{1}nl\log nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid
4K_{1}f_{\mathcal{S}}^{\varphi^{\prime}}\log
f_{\mathcal{S}}^{\varphi^{\prime}}\big{)}$ $\displaystyle K_{1}nl\log nl$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi^{\prime})\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$ $\displaystyle
2^{K_{1}nl\log nl}$
$\displaystyle\leq\mathrm{exp}\big{(}\mbox{sd}_{k}(\varphi)\mid
f_{\mathcal{S}}^{\varphi}\log f_{\mathcal{S}}^{\varphi}\big{)}$
It only remains to establish the claim for the size of transition formulas.
Since $\mathcal{A}_{s}^{\varphi}$ is nondeterministic, formulas $\delta(q,a)$
are written in disjunctive normal form and for every direction $x\in
S_{\varphi}$ each disjunct contains exactly one element of $\\{x\\}\times Q$,
where $Q$ is the set of states in $\mathcal{A}_{s}^{\varphi}$. As a result,
each formula $\delta(q,a)$ is of size
$\displaystyle|\delta(q,a)|$
$\displaystyle\leq|Q|^{|S_{\varphi}|}(2|S_{\varphi}|-1)+|Q|^{|S_{\varphi}|}-1$
$\displaystyle\leq 2|S_{\varphi}||Q|^{|S_{\varphi}|}$
$\displaystyle|\delta(q,a)|$ $\displaystyle\leq
2^{H(\varphi)|\varphi|}|\mathcal{S}||Q|^{|\mathcal{S}|}$
∎
|
2024-09-04T02:54:58.721052 | 2020-03-06T13:14:33 | 2003.04738 | {
"authors": "Jens Hesse",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26135",
"submitter": "Jens Hesse",
"url": "https://arxiv.org/abs/2003.04738"
} | arxiv-papers | # EKOR strata on Shimura varieties with parahoric reduction
Jens Hesse, Technische Universität Darmstadt, <EMAIL_ADDRESS>
###### Abstract
We investigate the geometry of the special fiber of the integral model of a
Shimura variety with parahoric level at a given prime place.
To be more precise, we deal with the EKOR stratification which interpolates
between the Ekedahl-Oort and Kottwitz-Rapoport stratifications. In the Siegel
case we give a geometric description by suitably generalizing the theory of
$G$-zips of Moonen, Wedhorn, Pink and Ziegler to our context.
###### Contents
1. 1 Background
1. 1.1 Shimura data of Hodge type
2. 1.2 Bruhat-Tits buildings
3. 1.3 Bruhat-Tits group schemes
4. 1.4 Siegel integral models
5. 1.5 Local structure of the integral model
1. 1.5.1 Generizations and irreducible components
6. 1.6 The local model
1. 1.6.1 The Siegel case
2. 1.6.2 The relation between the integral and the local model
3. 1.6.3 The Pappas-Zhu construction
2. 2 EKOR strata and zips in the case of parahoric reduction
1. 2.1 The Ekedahl-Oort, Kottwitz-Rapoport and EKOR stratifications
1. 2.1.1 Iwahori-Weyl group and the admissible subset
2. 2.1.2 Kottwitz-Rapoport stratification
3. 2.1.3 Ekedahl-Oort stratification
4. 2.1.4 EKOR stratification
2. 2.2 $\overline{\mathcal{G}}_{K}$-zips in the Siegel case
1. 2.2.1 Preliminaries
2. 2.2.2 Lattice chains, zips, admissibility
3. 2.2.3 An explicit description of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$
4. 2.2.4 $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ in the Siegel case
5. 2.2.5 The example of $\operatorname{GSp}(4)$
## Introduction
Shimura varieties are objects of arithmetic geometry (namely varieties over
number fields) that naturally arise in the search for generalized, non-abelian
reciprocity laws (i.e., in the Langlands program) and as moduli spaces of
abelian varieties (with certain extra structures on them). One way of
approaching these objects is to try to understand their mod-$p$ reduction
(which has to be carefully defined first). Insofar as a moduli interpretation
in the above sense exists and continues to exist likewise for the mod-$p$
reduction111There need not be a _literal_ moduli interpretation, but in any
event the stratifications in question derive from a close connection to moduli
problems., it allows us to stratify the moduli space according to several
invariants of the abelian varieties parametrized, e.g., the isomorphism
classes of their $p$-torsion. (An important observation is that these
stratifications genuinely live in the characteristic $p$ world, making use of
Frobenius endomorphisms and so on.) This, very roughly, is the general theme
everything in this article revolves around.
More precisely, we will be dealing with Shimura varieties of Hodge type and
parahoric level structure, at some fixed prime $v\mid p$ of the number field
over which the Shimura variety is defined. Under some reasonably mild
assumptions, cf. 1.17, Kisin and Pappas [KP15] constructed a canonical
integral model for such a Shimura variety. We try to understand some aspects
of the geometry of the special fiber of said integral model, namely the EKOR
strata (an interpolation between the Ekedahl-Oort strata, which in the case of
hyperspecial level are roughly the patches where the isomorphism class of the
$p$-torsion associated with the abelian variety is constant, and the Kottwitz-
Rapoport strata, which roughly are the patches where the Hodge filtration
looks constant), and to define them in a geometrical way.
Let us now go into more detail.
On the integral model $\mathscr{S}_{K}$ ($K$ parahoric level) we have a
“universal” abelian scheme (the quotation marks indicating that it is not
really universal for some moduli problem on $\mathscr{S}_{K}$, but it comes
from a universal abelian scheme via pullback) and we have various kinds of
Hodge tensors. We also have a “universal” isogeny chain of abelian schemes
tightly connected to the “universal” abelian scheme.
The overarching goal (and what we meant above by “defining the EKOR strata in
a geometrical way”) is to construct a “nice” algebraic stack
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ and a “nice” morphism
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
from the mod-$p$ reduction of the Shimura variety to it, such that the fibers
are the EKOR strata. Shen, Yu and Zhang [SYZ19] solved this problem on
individual Kottwitz-Rapoport strata and globally after perfection, but not in
the form stated here (i.e., globally without passing to perfections). In the
Siegel case we propose a solution which specializes to that of Shen, Yu and
Zhang on Kottwitz-Rapoport strata, and should not be difficult to generalize
to many (P)EL cases. We show that
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
is surjective. However, we have to leave the question of whether
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
is smooth (which would be part of “nice”) an open conjecture.
For hyperspecial level, the EKOR stratification agrees with the Ekedahl-Oort
stratification, and the goal just set out is achieved by the stack of
$\overline{\mathcal{G}}_{K}$-zips, first defined in special cases by Moonen
and Wedhorn in [MW04] and then generally by Pink, Wedhorn and Ziegler in
[PWZ11, PWZ15]; the relation to Shimura varieties being established in
increasing generality in [MW04], by Viehmann and Wedhorn in [VW13], and
finally by Zhang in [Zha15].
One way of looking at the transition from hyperspecial level to general
parahoric level (at the very least in nice enough (P)EL cases) is from the
point of view of moduli problems of abelian varieties with extra structure,
where in the hyperspecial case we are really dealing just with that and in the
general case we are dealing with isogeny chains of abelian varieties with
extra structure, indexed by lattice chains coming from the Bruhat-Tits
building of the reductive $p$-adic Lie group in question. The basic idea in
generalizing zips from the hyperspecial to the general parahoric case then is
that one should be dealing with chains of zips in the old sense.
The zip of an abelian variety encodes the following information: the Hodge
filtration, the conjugate filtration, and the Cartier isomorphism relating the
two. In the general case, every abelian variety in the isogeny chain has a
Hodge filtration, a conjugate filtration and a Cartier isomorphism. Problems
now arise because we are dealing with $p$-primary isogenies on $p$-torsion
points, implying that the transition morphisms in these chains have non-
vanishing kernels. This introduces additional difficulty compared to the
hyperspecial case; there is a naive way of defining a zip stack, but
eventually we need to consider a certain admissible locus in it, which so far
suffers from the absence of a nice moduli description. Passing to perfections
however simplifies things and allows us to prove that the admissible locus is
closed. From here we arrive at the stack that we are really interested in by
dividing out a certain group action involving the unipotent radical of the
special fiber of the parahoric group scheme. A careful inspection shows that
on Kottwitz-Rapoport strata we arrive at the same result as in [SYZ19].
To sum up the results,
###### Theorem A:
In the Siegel case, there is an algebraic stack
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ and a surjective morphism
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$,
whose fibers are the EKOR strata and such that on Kottwitz-Rapoport strata,
one gets the stack and map constructed in [SYZ19].
For $\operatorname{GSp}(4)$ we do some calculations to illustrate the theory;
section 2.2.5.
### Acknowledgements
This article essentially is an extract of my doctoral thesis [Hes20] (another
extract222In particular, there is a large overlap between the “Background”
sections of the two articles., dealing with the foliation into central leaves,
is [Hes20a]). I thank Torsten Wedhorn for suggesting the topic of the
dissertation, his support and patiently answering my questions. Moreover I
thank Eva Viehmann and Paul Hamacher for their hospitality and helpful
discussions during a month-long stay in Munich at the TU München. I am also
grateful to Timo Richarz and Timo Henkel for numerous helpful discussions.
This research was supported by the Deutsche Forschungsgemeinschaft (DFG),
project number WE 2380/5.
## 1 Background
### 1.1 Shimura data of Hodge type
This article deals with aspects of the geometry of Shimura varieties (of Hodge
type), which are the (systems of) varieties associated with Shimura data (of
Hodge type).
###### Definition 1.1.
A _Shimura datum of Hodge type_ is a pair $(G,X)$, where $G$ is a reductive
algebraic group over $\mathbb{Q}$ and
$X\subseteq\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})$
is a $G(\mathbb{R})$-conjugacy class
($\mathbb{S}:=\operatorname{Res}_{\mathbb{C}/\mathbb{R}}\mathbb{G}_{m,\mathbb{C}}$
being the Deligne torus) subject to the following conditions:
1. (1)
For $h\in X$, the induced Hodge structure
$\mathbb{S}\xrightarrow{h}G_{\mathbb{R}}\xrightarrow{\mathrm{Ad}}\operatorname{GL}(\operatorname{Lie}(G_{\mathbb{R}}))$
satisfies
$\operatorname{Lie}(G_{\mathbb{C}})=\operatorname{Lie}(G_{\mathbb{C}})^{-1,1}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{0,0}\oplus\operatorname{Lie}(G_{\mathbb{C}})^{1,-1}$.
2. (2)
$\operatorname{int}(h(i))\colon G^{\mathrm{ad}}_{\mathbb{R}}\to
G^{\mathrm{ad}}_{\mathbb{R}}$ is a Cartan involution, i.e., $\\{g\in
G^{\mathrm{ad}}(\mathbb{C})\;|\;gh(i)=h(i)\overline{g}\\}$ is compact. Another
way of phrasing this condition: Every finite-dimensional real representation
$V$ of $G^{\mathrm{ad}}_{\mathbb{R}}$ carries a
$G^{\mathrm{ad}}_{\mathbb{R}}$-invariant bilinear form $\varphi$ such that
$(u,v)\mapsto\varphi(u,h(i)v)$ is symmetric and positive definite. It is
enough to show that this holds for one _faithful_ finite-dimensional real
representation $V$.
3. (3)
$G^{\mathrm{ad}}$ _cannot_ be non-trivially written as $G^{\mathrm{ad}}\cong
H\times I$ over $\mathbb{Q}$ with $\mathbb{S}\to
G_{\mathbb{R}}\xrightarrow{\mathrm{proj}}H_{\mathbb{R}}$ trivial.
4. (4)
There exists an embedding
$(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$, where
$(\operatorname{GSp}(V),S^{\pm})$ is the Shimura datum associated with a
finite-dimensional symplectic $\mathbb{Q}$-vector space $V$ (see below). That
is, we have an embedding $G\hookrightarrow\operatorname{GSp}(V)$ of
$\mathbb{Q}$-group schemes such that the induced map
$\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},G_{\mathbb{R}})\hookrightarrow\operatorname{Hom}_{\mathbb{R}\text{-grp}}(\mathbb{S},\operatorname{GSp}(V_{\mathbb{R}}))$
restricts to a map $X\hookrightarrow S^{\pm}$.
###### Example 1.2.
Let $W$ be a finite-dimensional $\mathbb{R}$-vector space.
$\mathbb{R}$-group homomorphisms $\mathbb{S}\to\operatorname{GL}(W)$ then
correspond to Hodge decompositions of $W$, i.e., to decompositions
$W_{\mathbb{C}}=\oplus_{(p,q)\in\mathbb{Z}^{2}}W_{\mathbb{C}}^{p,q}$, such
that $W_{\mathbb{C}}^{p,q}$ is the complex conjugate of $W_{\mathbb{C}}^{q,p}$
for all $(p,q)\in\mathbb{Z}^{2}$. Under this correspondence,
$h\colon\mathbb{S}\to\operatorname{GL}(W)$ corresponds to the Hodge
decomposition $W_{\mathbb{C}}^{p,q}=\\{w\in W_{\mathbb{C}}\;|\;\forall
z\in\mathbb{S}(\mathbb{R})=\mathbb{C}^{\times}\colon
h(z)w=z^{-p}\bar{z}^{-q}w\\}$. Hodge decompositions of $W$ of type
$(-1,0)+(0,-1)$ correspond to complex structures on $W$: If
$h\colon\mathbb{S}\to\operatorname{GL}(W)$ yields such a Hodge decomposition,
then $h(i)$ gives an $\mathbb{R}$-endomorphism $J$ of $W$ with $J\circ
J=-\operatorname{id}_{W}$.
Let $V=(V,\psi)$ be a finite-dimensional symplectic $\mathbb{Q}$-vector space.
We say that a complex structure $J$ on $V_{\mathbb{R}}$ is positive (resp.
negative) if $\psi_{J}:=\psi_{\mathbb{R}}(\\_,J\\_)$ is a positive definite
(resp. negative definite) symmetric bilinear form on $V_{\mathbb{R}}$. Define
$S^{+}$ (resp. $S^{-}$) to be the set of positive (resp. negative) complex
structures on $(V_{\mathbb{R}},\psi_{\mathbb{R}})$ and $S^{\pm}:=S^{+}\sqcup
S^{-}$.
We can make this more concrete: A symplectic basis of
$(V_{\mathbb{R}},\psi_{\mathbb{R}})$ is a basis
$e_{1},\dotsc,e_{g},e_{-g},\dotsc,e_{-1}$, such that $\psi_{\mathbb{R}}$ is of
the form
$\begin{pmatrix}&\tilde{I}_{g}\\\ -\tilde{I}_{g}&\end{pmatrix}$
with respect to this basis, where $\tilde{I}_{g}=\begin{pmatrix}&&1\\\
&\iddots&\\\ 1&&\end{pmatrix}$ is the antidiagonal identity
matrix.333Occasionally (in particular when doing concrete matrix
calculations), it is more convenient to number the basis vectors
$1,\dotsc,g,-1,\dotsc,-g$ instead of $1,\dotsc,g,-g,\dotsc,-1$. Then the
standard symplectic form is given by $\left(\begin{smallmatrix}&I_{g}\\\
-I_{g}&\end{smallmatrix}\right)$, $I_{g}$ being the $g\times g$ identity
matrix.
Let $J$ be the endomorphism of $V_{\mathbb{R}}$ of the form
$\begin{pmatrix}&-\tilde{I}_{g}\\\ \tilde{I}_{g}&\end{pmatrix}$
with respect to this basis. Then $J\in S^{+}$ and what we have described is a
surjective map
$\\{\text{symplectic bases of
}(V_{\mathbb{R}},\psi_{\mathbb{R}})\\}\twoheadrightarrow S^{+}.$
In particular we see that
$\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\\_),f(\\_))=\psi_{\mathbb{R}}\\}$
(by virtue of acting simply transitively on the symplectic bases) acts
transitively on
$S^{+}\cong\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})/\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J)$
(where we define
$\operatorname{SpO}(V_{\mathbb{R}},\psi_{\mathbb{R}},J):=\operatorname{Sp}(V_{\mathbb{R}},\psi_{\mathbb{R}})\cap
O(V_{\mathbb{R}},\psi_{J})=U((V_{\mathbb{R}},J),\psi_{J})$ for a fixed choice
of $J\in S^{+}$) and therefore the general symplectic group
$\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}}):=\\{f\in\operatorname{GL}(V_{\mathbb{R}})\;|\;\psi_{\mathbb{R}}(f(\\_),f(\\_))=c\cdot\psi_{\mathbb{R}}\text{
for some }c\in\mathbb{R}^{\times}\\}$ acts transitively on $S^{\pm}$ (note
that the element of the form $e_{\pm i}\mapsto e_{\mp i}$ of
$\operatorname{GSp}(V_{\mathbb{R}},\psi_{\mathbb{R}})$ for any given choice of
symplectic basis $\left(e_{i}\right)_{i}$ permutes $S^{+}$ and $S^{-}$).
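As a concrete sanity check (an illustration added here, not part of the
original text), one can verify numerically for, say, $g=2$ that the matrix $J$
above squares to $-\operatorname{id}$ and that $\psi_{J}$ is symmetric
positive definite, so that $J\in S^{+}$; a minimal sketch in Python:
```python
import numpy as np

g = 2
# Antidiagonal g x g identity matrix I~_g
I_t = np.fliplr(np.eye(g))
# Standard symplectic form psi = [[0, I~_g], [-I~_g, 0]] w.r.t. a symplectic basis
psi = np.block([[np.zeros((g, g)), I_t], [-I_t, np.zeros((g, g))]])
# The complex structure J = [[0, -I~_g], [I~_g, 0]] from the example above
J = np.block([[np.zeros((g, g)), -I_t], [I_t, np.zeros((g, g))]])

# J is a complex structure: J o J = -id
assert np.allclose(J @ J, -np.eye(2 * g))
# J is compatible with psi: psi(Ju, Jv) = psi(u, v)
assert np.allclose(J.T @ psi @ J, psi)
# psi_J(u, v) := psi(u, Jv) is symmetric and positive definite, hence J in S^+
psi_J = psi @ J
assert np.allclose(psi_J, psi_J.T)
assert np.all(np.linalg.eigvalsh(psi_J) > 0)
```
(In fact, $\psi_{J}$ is the identity matrix with respect to this basis.)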
###### Definition 1.3.
Condition (1) of Definition 1.1 implies that the action of
$\mathbb{G}_{m,\mathbb{R}}$ (embedded in $\mathbb{S}$ in the natural way) on
$\operatorname{Lie}(G_{\mathbb{R}})$ is trivial, so that $h$ induces a
homomorphism
${w\colon\mathbb{G}_{m,\mathbb{R}}\to\operatorname{Cent}(G_{\mathbb{R}})}$.
This homomorphism is independent of the choice of $h\in X$ and is called the
_weight homomorphism_ of $(G,X)$.
Moreover, we denote by $\\{\mu\\}$ the $G(\mathbb{C})$-conjugacy class of
the cocharacter
$\mu_{h}:=h\circ(\operatorname{id}_{\mathbb{G}_{m,\mathbb{C}}},1)\colon\mathbb{G}_{m,\mathbb{C}}\to\mathbb{G}_{m,\mathbb{C}}^{2}\cong\mathbb{S}_{\mathbb{C}}\to
G_{\mathbb{C}}$, where $h$ is as above. Obviously, the conjugacy class
$\\{\mu\\}$ is independent of the particular choice of $h\in X$.
###### Remark 1.4.
Let $L/\mathbb{Q}$ be a field extension such that $G_{L}$ contains a split
maximal torus $T$. Let $W:=\operatorname{Norm}_{G(L)}(T)/T$ be the Weyl group.
Then the natural map
$W\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},T)\to
G(L)\backslash\operatorname{Hom}_{L\text{-grp}}(\mathbb{G}_{m,L},G_{L})$
is bijective.
Since the left hand side remains unchanged if we go from $L=\bar{\mathbb{Q}}$
(where as usual $\bar{\mathbb{Q}}$ denotes an algebraic closure of
$\mathbb{Q}$) to $L=\mathbb{C}$, we see that $\\{\mu\\}$ contains a
cocharacter defined over $\bar{\mathbb{Q}}$ and that we may then also consider
$\\{\mu\\}$ as a $G(\bar{\mathbb{Q}})$-conjugacy class.
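As an illustration: for $G=\operatorname{GL}_{n}$ with $T$ the diagonal torus
(so that $W\cong S_{n}$ acts by permuting the diagonal entries), this says
that every cocharacter of $\operatorname{GL}_{n,L}$ is conjugate to exactly
one cocharacter of the form
$z\mapsto\operatorname{diag}(z^{a_{1}},\dotsc,z^{a_{n}})$ with
$a_{1}\geq\dotsb\geq a_{n}$.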
###### Definition 1.5.
The _reflex field_ $\mathbf{E}=\mathbf{E}(G,X)$ of $(G,X)$ is the field of
definition of $\\{\mu\\}$, i.e., the fixed field in $\bar{\mathbb{Q}}$ of
$\\{\gamma\in\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\;|\;\gamma(\\{\mu\\})=\\{\mu\\}\\}$.
###### Example 1.6.
The reflex field of the Shimura datum
$(\operatorname{GSp}_{2g,\mathbb{Q}},S^{\pm})$ of Example 1.2 is $\mathbb{Q}$.
To wit, one of the cocharacters in the conjugacy class $\\{\mu\\}$ is
$\mu(z)=\left(\begin{smallmatrix}z&&&&&\\\ &\ddots&&&&\\\ &&z&&&\\\ &&&1&&\\\
&&&&\ddots&\\\ &&&&&1\end{smallmatrix}\right).$
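As a quick check (added for illustration), with respect to a symplectic basis
as in Example 1.2 we have $\mu(z)=\operatorname{diag}(zI_{g},I_{g})$ in block
form, and
$\mu(z)^{t}\begin{pmatrix}&\tilde{I}_{g}\\\
-\tilde{I}_{g}&\end{pmatrix}\mu(z)=z\begin{pmatrix}&\tilde{I}_{g}\\\
-\tilde{I}_{g}&\end{pmatrix},$
so $\mu$ is indeed a cocharacter of $\operatorname{GSp}_{2g,\mathbb{Q}}$ (with
similitude factor $z$); since this $\mu$ is already defined over $\mathbb{Q}$,
its conjugacy class $\\{\mu\\}$ is Galois-stable.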
###### Notation 1.7.
We denote the ring of (rational) adeles by
$\mathbb{A}:=\mathbb{A}_{\mathbb{Q}}$, the subring of finite adeles by
$\mathbb{A}_{f}:=\mathbb{A}_{\mathbb{Q},f}$ and the subring of finite adeles
away from some fixed prime $p$ by $\mathbb{A}_{f}^{p}$.
###### Definition and Remark 1.8.
Let $K\subseteq G(\mathbb{A}_{f})$ be a compact open subgroup. The _Shimura
variety of level $K$ associated with $(G,X)$_ is the double coset space
$\operatorname{Sh}_{K}(G,X):=G(\mathbb{Q})\backslash(X\times(G(\mathbb{A}_{f})/K)).$
A priori, this is just a set, but if $K$ is sufficiently small (i.e., “neat”
in the sense of [Bor69, Pin90]), $\operatorname{Sh}_{K}(G,X)$ can be
canonically written as a finite disjoint union of hermitian symmetric
domains. (If $K$ fails to be sufficiently small, one might very reasonably
argue that our definition of the Shimura variety of level $K$ really is the
definition of the _coarse_ Shimura variety and that one should be working with
stacks instead. Since we will only be interested in sufficiently small level,
this is inconsequential for us.) In particular, this gives
$\operatorname{Sh}_{K}(G,X)$ the structure of a complex manifold.
In fact, by the theorem of Baily-Borel, this complex manifold attains the
structure of a quasi-projective complex variety in a canonical way.
By work of Deligne, Milne and Borovoi, this variety is defined already (and
again in a canonical way) over the reflex field $\mathbf{E}$. So in
particular, it is defined over a number field independent of $K$. This is
important when varying $K$ and it is the reason why we consider the whole
Shimura variety instead of its connected components over $\mathbb{C}$ on their
own. It is possible for the Shimura variety to have multiple connected
components over $\mathbb{C}$ while being connected over $\mathbf{E}$.
More detailed explanations may be found in [Mil05].
### 1.2 Bruhat-Tits buildings
Let $K$ be a complete discrete valuation field with ring of integers
$\mathcal{O}$, uniformizer $\varpi$ and perfect residue field
$\kappa:=\mathcal{O}/\varpi$.
###### Notation 1.9.
For a (connected) reductive group $G$ over $K$, we denote by
$\mathcal{B}(G,K)$ the extended (or enlarged) and by
$\mathcal{B}^{\mathrm{red}}(G,K)$ the reduced (i.e., non-extended) Bruhat-Tits
building of $G$ over $K$ [BT84]. Moreover,
$\mathcal{B}^{\mathrm{abstract}}(G,K)$ denotes the underlying abstract
simplicial complex.
###### Remark 1.10.
Let $V$ be a finite-dimensional $K$-vector space.
As described in [KP15, 1.1.9] (originally in [BT84a]), the points of
$\mathcal{B}(\operatorname{GL}(V),K)$ correspond to graded periodic lattice
chains $(\mathcal{L},c)$, i.e.,
* •
$\emptyset\neq\mathcal{L}$ is a totally ordered set of full
$\mathcal{O}$-lattices in $V$ stable under scalar multiplication (i.e.,
$\Lambda\in\mathcal{L}\iff\varpi\Lambda\in\mathcal{L}$),
* •
$c\colon\mathcal{L}\to\mathbb{R}$ is a strictly decreasing function such that
$c(\varpi^{n}\Lambda)=c(\Lambda)+n$.
###### Remark 1.11.
Fix such an $\mathcal{L}$ and let $\Lambda^{0}\in\mathcal{L}$. Then every
homothety class of lattices has a unique representative $\Lambda$ such that
$\Lambda\subseteq\Lambda^{0}$ and $\Lambda\not\subseteq\varpi\Lambda^{0}$.
Consider such representatives $\Lambda^{i}$ for all of the distinct homothety
classes of lattices that make up $\mathcal{L}$. Because $\mathcal{L}$ is
totally ordered and $\Lambda^{i}\not\subseteq\varpi\Lambda^{0}$, it follows
that $\Lambda^{i}\supseteq\varpi\Lambda^{0}$ for all $i$ and that
$\left\\{\Lambda^{i}/\varpi\Lambda^{0}\right\\}_{i}$ is a flag of non-trivial
linear subspaces of $\Lambda^{0}/\varpi\Lambda^{0}\cong\kappa^{n}$, where
$n:=\dim V$. Consequently, the number $r$ of homothety classes is in
$\\{1,\dotsc,n\\}$; it is called the _period length_ (or _rank_) of
$\mathcal{L}$. Numbering the $\Lambda^{i}$ in descending order we hence obtain
$r$ lattices $\Lambda^{0},\Lambda^{1},\dotsc,\Lambda^{r-1}$ such that
$\Lambda^{0}\supsetneqq\Lambda^{1}\supsetneqq\dotsb\supsetneqq\Lambda^{r-1}\supsetneqq\varpi\Lambda^{0}$
(1.12)
and $\mathcal{L}$ is given by the strictly descending sequence of lattices
$\Lambda^{qr+i}=\varpi^{q}\Lambda^{i},\quad q\in\mathbb{Z},\;0\leq i<r.$
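For a concrete illustration (an example we add here, with $n=2$): in
$V=K^{2}$, the chain generated by $\Lambda^{0}=\mathcal{O}^{2}$ alone, i.e.,
$\mathcal{L}=\\{\varpi^{q}\Lambda^{0}\\}_{q\in\mathbb{Z}}$, has period length
$r=1$ and gives a vertex of the abstract building of $\operatorname{GL}_{2}$,
whereas the chain generated by
$\Lambda^{0}=\mathcal{O}^{2}\supsetneqq\Lambda^{1}=\mathcal{O}\oplus\varpi\mathcal{O}\supsetneqq\varpi\Lambda^{0}$
has maximal period length $r=n=2$ and gives an alcove (an edge in this case,
the building being a tree).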
###### Remark 1.13.
Let $V$ be a finite-dimensional symplectic $K$-vector space.
$\mathcal{B}(\operatorname{GSp}(V),K)$ embeds into the subset of
$\mathcal{B}(\operatorname{GL}(V),K)$ consisting of those $(\mathcal{L},c)$
such that $\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}$.
Passing to the underlying abstract simplicial complexes means forgetting about
the grading $c$ and
$\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)=\\{\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GL}(V),K)\;|\;\Lambda\in\mathcal{L}\implies\Lambda^{\vee}\in\mathcal{L}\\}.$
If $\mathcal{L}\in\mathcal{B}^{\mathrm{abstract}}(\operatorname{GSp}(V),K)$
and $\\{\Lambda^{i}\\}_{i}$ is as in Remark 1.11, then there is an involution
$t\colon\mathbb{Z}\to\mathbb{Z}$ with
$\left(\Lambda^{i}\right)^{\vee}=\Lambda^{t(i)}$, $t(i+qr)=t(i)-qr$, and
$i<j\implies t(i)>t(j)$. So $-a:=t(0)>t(1)>\dotsb>t(r)=-a-r$, which implies
$t(i)=-i-a$. Thus $i_{0}-t(i_{0})=2i_{0}+a\in\\{0,1\\}$ for some unique
$i_{0}\in\mathbb{Z}$. Hence, upon renumbering the $\Lambda^{i}$, we may assume
that $a\in\\{0,1\\}$.
We therefore have
$\displaystyle\varpi\Lambda^{0}\subsetneqq\Lambda^{r-1}\subsetneqq\Lambda^{r-2}\subsetneqq\dotsb\subsetneqq\Lambda^{0}\subseteq\left(\Lambda^{0}\right)^{\vee}=\Lambda^{-a}\subsetneqq\left(\Lambda^{1}\right)^{\vee}=\Lambda^{-1-a}$
$\displaystyle\subsetneqq\dotsb\subsetneqq\left(\Lambda^{r-1}\right)^{\vee}=\Lambda^{-r+1-a}\subseteq\Lambda^{-r}=\varpi^{-1}\Lambda^{0}.$
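As a sanity check in the smallest symplectic case (where our normalization
$\Lambda^{\vee}=\\{v\in V\;|\;\psi(v,\Lambda)\subseteq\mathcal{O}\\}$ of the
dual lattice is an assumption on our part): let $V=K^{2}$ with
$\psi(x,y)=x_{1}y_{2}-x_{2}y_{1}$. Then $(\mathcal{O}^{2})^{\vee}=\mathcal{O}^{2}$
and
$(\mathcal{O}\oplus\varpi\mathcal{O})^{\vee}=\varpi^{-1}\mathcal{O}\oplus\mathcal{O}=\varpi^{-1}(\mathcal{O}\oplus\varpi\mathcal{O})$,
so for the chain of Remark 1.11 generated by $\Lambda^{0}=\mathcal{O}^{2}$ and
$\Lambda^{1}=\mathcal{O}\oplus\varpi\mathcal{O}$ we get $t(i)=-i$, i.e., $a=0$.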
###### Example 1.14.
See also section 2.2.5 for some elaborations on the building of
$\operatorname{GSp}_{4}(\mathbb{Q}_{p})$.
### 1.3 Bruhat-Tits group schemes
###### Notation 1.15.
Let $E$ be a finite field extension of $\mathbb{Q}_{p}$.
Denote by $\breve{E}$ the completion of the maximal unramified extension of
$E$ (hence $\breve{E}=E\cdot\breve{\mathbb{Q}}_{p}$).
###### Remark 1.16.
If $E/\mathbb{Q}_{p}$ is unramified, then ${{\cal
O}_{\breve{E}}}=W(\bar{\mathbb{F}}_{p})$, $\bar{\mathbb{F}}_{p}$ denoting an
algebraic closure of $\mathbb{F}_{p}$ and
$W\colon\mathrm{Ring}\to\mathrm{Ring}$ being the ($p$-adic) Witt vectors
functor. This generalizes to the ramified case using _ramified Witt vectors_
instead, see e.g. [Haz78, Chap. IV, (18.6.13)] or [Ahs11, Chapter 1].
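For orientation, the basic unramified examples are
$W(\mathbb{F}_{p})=\mathbb{Z}_{p},\qquad W(\mathbb{F}_{p^{n}})=\mathcal{O}_{\mathbb{Q}_{p^{n}}},\qquad W(\bar{\mathbb{F}}_{p})={{\cal O}_{\breve{\mathbb{Q}}_{p}}},$
where $\mathbb{Q}_{p^{n}}$ denotes the unramified extension of
$\mathbb{Q}_{p}$ of degree $n$.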
Let $(G,X)$ be a Shimura datum of Hodge type, let
$(G,X)\hookrightarrow(\operatorname{GSp}(V),S^{\pm})$ be an embedding as in
Definition 1.1 (4), and let $x\in\mathcal{B}(G,\mathbb{Q}_{p})$ be a point in
the Bruhat-Tits building of $G$ over $\mathbb{Q}_{p}$.
We consider the associated Bruhat-Tits scheme ${\cal G}_{x}$, i.e., the affine
smooth model of $G_{\mathbb{Q}_{p}}$ over $\mathbb{Z}_{p}$ such that ${\cal
G}_{x}(\breve{\mathbb{Z}}_{p})\subseteq G(\breve{\mathbb{Q}}_{p})$ is the
stabilizer of the facet of $x$ in ${\cal
B}(G,\breve{\mathbb{Q}}_{p})=\mathcal{B}(G,\mathbb{Q}^{\mathrm{ur}}_{p})$
(cf. [Lan00, Prop. 2.1.3]). Let $K_{p}:={\cal
G}_{x}(\mathbb{Z}_{p})\subseteq G(\mathbb{Q}_{p})$ and let $K^{p}\subseteq
G(\mathbb{A}_{f}^{p})$ be a sufficiently small open compact subgroup. Define
$K:=K_{p}K^{p}\subseteq G(\mathbb{A}_{f})$.
###### Assumptions 1.17.
From now on, we will always make the following assumptions:
* •
$\mathcal{G}_{x}=\mathcal{G}_{x}^{\circ}$ is connected.
* •
$G$ splits over a tamely ramified extension of $\mathbb{Q}_{p}$.
* •
$p\nmid\\#\pi_{1}(G^{\mathrm{der}})$.
###### Notation 1.18.
In order not to make notation overly cumbersome, we usually denote the base
change $G_{\mathbb{Q}_{p}}$ of $G$ to $\mathbb{Q}_{p}$ by $G$ again. (Later,
we will almost exclusively be dealing with $G_{\mathbb{Q}_{p}}$.)
### 1.4 Siegel integral models
With notation as above let
$\displaystyle N_{p}$
$\displaystyle:=\operatorname{Stab}_{\operatorname{GSp}(V)(\mathbb{Q}_{p})}(\mathcal{L})\quad\text{(as
before)},$ $\displaystyle J_{p}$
$\displaystyle:=\operatorname{Stab}_{\operatorname{GL}(V^{\S})(\mathbb{Q}_{p})}(\Lambda^{\S})\cap\operatorname{GSp}(V^{\S})(\mathbb{Q}_{p}).$
Let $N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$ and
$J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ be sufficiently
small open compact subgroups, and $N:=N_{p}N^{p}$, $J:=J_{p}J^{p}$.
In this subsection, we are going to describe integral models of
$\operatorname{Sh}_{N}(\operatorname{GSp}(V),S^{\pm})$ and of
$\operatorname{Sh}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ over
$\mathbb{Z}_{(p)}$ and relate the two.
###### Remark 1.19.
By [RZ96, Definition 6.9], the integral model
$\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})$ is given by the moduli
problem $(\mathbb{Z}_{(p)}\text{-scheme})\ni
S\mapsto\left\\{(A,\bar{\lambda},\eta^{p})\right\\}/{\scriptstyle\cong}$,
where:
1. (a)
$A=\left(A_{\Lambda}\right)_{\Lambda\in\mathcal{L}}$ is an $\mathcal{L}$-set
of abelian schemes, i.e.,
* •
for every $\Lambda\in\mathcal{L}$, an abelian $S$-scheme up to
$\mathbb{Z}_{(p)}$-isogeny $A_{\Lambda}$ (i.e., $A_{\Lambda}$ is an object of
the category $(\text{abelian }S\text{-schemes})\otimes\mathbb{Z}_{(p)}$, where
the category $\mathcal{A}\otimes R$ for $\mathcal{A}$ a preadditive category
and $R$ a ring has the same objects as $\mathcal{A}$ and
$\operatorname{Hom}_{\mathcal{A}\otimes
R}(X,Y)=\operatorname{Hom}(X,Y)\otimes_{\mathbb{Z}}R$ for all objects $X,Y$),
* •
for every inclusion $\Lambda_{1}\subseteq\Lambda_{2}$ a
$\mathbb{Z}_{(p)}$-isogeny $\rho_{\Lambda_{2},\Lambda_{1}}\colon
A_{\Lambda_{1}}\to A_{\Lambda_{2}}$,
* •
$\rho_{\Lambda_{3},\Lambda_{1}}=\rho_{\Lambda_{3},\Lambda_{2}}\circ\rho_{\Lambda_{2},\Lambda_{1}}$
if $\Lambda_{1}\subseteq\Lambda_{2}\subseteq\Lambda_{3}$ in $\mathcal{L}$,
* •
the height of $\rho_{\Lambda_{2},\Lambda_{1}}$ is
$\log_{p}|\Lambda_{2}/\Lambda_{1}|$. Here $\rho_{\Lambda_{2},\Lambda_{1}}$
gives rise to a well-defined homomorphism of $p$-divisible groups, and what we
mean is that the kernel of this homomorphism (which is a finite locally free
commutative group scheme, which we also refer to simply as the kernel of
$\rho_{\Lambda_{2},\Lambda_{1}}$) is to have order
$|\Lambda_{2}/\Lambda_{1}|$.
* •
For every $\Lambda\in\mathcal{L}$, there is an isomorphism (called
_periodicity isomorphism_) $\theta_{\Lambda}\colon A_{\Lambda}\to
A_{p\Lambda}$ such that
$\rho_{\Lambda,p\Lambda}\circ\theta_{\Lambda}=[p]\colon A_{\Lambda}\to
A_{\Lambda}$ is the multiplication-by-$p$ isogeny.
2. (b)
$\bar{\lambda}\colon A\to\tilde{A}$ is a $\mathbb{Q}$-homogeneous principal
polarization, i.e., a $\underline{\mathbb{Q}^{\times}}$-orbit of a principal
polarization $\lambda\colon A\to\tilde{A}$. Here $\tilde{A}$ is the
$\mathcal{L}$-set of abelian schemes over $S$ up to prime-to-$p$ isogeny given
by $\tilde{A}_{\Lambda}:=(A_{\Lambda^{\vee}})^{\vee}$. And being a
polarization $\lambda$ means being a quasi-isogeny of $\mathcal{L}$-sets
$\lambda\colon A\to\tilde{A}$ such that
$A_{\Lambda}\xrightarrow{\lambda_{\Lambda}}\tilde{A}_{\Lambda}=(A_{\Lambda^{\vee}})^{\vee}\xrightarrow{\varrho_{\Lambda^{\vee},\Lambda}^{\vee}}(A_{\Lambda})^{\vee}$
is a polarization of $A_{\Lambda}$ for all $\Lambda$. If $\lambda_{\Lambda}$
can be chosen to be an isomorphism up to prime-to-$p$ isogeny for all
$\Lambda$, then we speak of a principal polarization. In that case, when
referring to $\lambda_{\Lambda}$, we mean a $\lambda_{\Lambda}$ which is an
isomorphism up to prime-to-$p$ isogeny.
3. (c)
$\eta^{p}$ is a level-$N^{p}$-structure, i.e. (if $S$ is connected), it is a
$\pi_{1}(S,s)$-invariant $N^{p}$-orbit of symplectic similitudes
$\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to H_{1}(A_{s},\mathbb{A}_{f}^{p})$
(where $s$ is some geometric basepoint and $H_{1}(A_{s},\mathbb{A}_{f}^{p})$
with its $\pi_{1}(S,s)$-action corresponds to the Tate
$\mathbb{A}_{f}^{p}$-module of $A$ (cf. [RZ96, 6.8]), which is a smooth
$\mathbb{A}_{f}^{p}$-sheaf). Note that this forces the abelian schemes
$A_{\Lambda}$ to be $(\frac{1}{2}\dim_{\mathbb{Q}}V)$-dimensional.
###### Definition 1.20.
Set $\Lambda^{\S}_{\mathbb{Z}_{(p)}}:=\Lambda^{\S}_{\mathbb{Z}_{p}}\cap
V^{\S}_{\mathbb{Q}}=\prod_{i=-(r-1)-a}^{r-1}\Lambda_{\mathbb{Z}_{(p)}}^{i}$.
We choose a lattice $\Lambda^{\S}_{\mathbb{Z}}\subseteq V^{\S}$ such that
$\Lambda^{\S}_{\mathbb{Z}}\otimes_{\mathbb{Z}}\mathbb{Z}_{(p)}=\Lambda^{\S}_{\mathbb{Z}_{(p)}}$
and $\Lambda^{\S}_{\mathbb{Z}}\subseteq(\Lambda^{\S}_{\mathbb{Z}})^{\vee}$.
###### Remark 1.21.
Set
$d:=\bigl{|}\left(\Lambda_{\mathbb{Z}}^{\S}\right)^{\vee}/\Lambda_{\mathbb{Z}}^{\S}\bigr{|}$.
By [Kis10, 2.3.3, 3.2.4], the integral model
$\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$ is given by the
moduli problem $(\mathbb{Z}_{(p)}\text{-schemes})\ni
S\mapsto\left\\{(A^{\S},\lambda^{\S},\epsilon^{p})\right\\}/{\scriptstyle\cong}$,
where
1. (a)
$A^{\S}$ is an abelian scheme over $S$ up to $\mathbb{Z}_{(p)}$-isogeny,
2. (b)
$\lambda^{\S}\colon A^{\S}\to\left(A^{\S}\right)^{\vee}$ is a polarization of
degree $d$ (i.e., the polarization of the (well-defined) associated
$p$-divisible group has degree $d$),
3. (c)
$\epsilon^{p}$ is a level-$J^{p}$-structure, i.e. (if $S$ is connected), it is
a $\pi_{1}(S,s)$-invariant $J^{p}$-orbit of symplectic similitudes
$\epsilon^{p}\colon V^{\S}_{\mathbb{A}_{f}^{p}}\to
H_{1}(A^{\S}_{s},\mathbb{A}_{f}^{p})$. Note that this forces the abelian
scheme $A^{\S}$ to be $(\frac{1}{2}\dim_{\mathbb{Q}}V^{\S})$-dimensional.
This completes the descriptions of the moduli problems, and we turn to the
question of the relationship between the two. Consider (for appropriate
$N^{p},J^{p}$; see below) the morphism
$\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$
given on $S$-valued points by sending $(A,\bar{\lambda},\eta^{p})$ to
$(A^{\S},\lambda^{\S},\epsilon^{p})$, where
1. (a)
$\displaystyle A^{\S}:=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$,
2. (b)
$\displaystyle\lambda^{\S}:=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{\Lambda^{i}}\right)$,
3. (c)
$\epsilon^{p}$ is the product $\prod_{i=-(r-1)-a}^{r-1}\eta^{p}$, to be
interpreted as the product over $\eta^{p}\colon V_{\mathbb{A}_{f}^{p}}\to
H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong
H_{1}(A_{s},\mathbb{A}_{f}^{p})$, where the isomorphism
$H_{1}(A_{\Lambda^{i},s},\mathbb{A}_{f}^{p})\cong
H_{1}(A_{s},\mathbb{A}_{f}^{p})$ is by definition the identity for some fixed
$i=i_{0}$ and otherwise induced by the transition map
$\rho_{\Lambda^{i},\Lambda^{i_{0}}}$. We need that $N^{p}$ is mapped into
$J^{p}$ by $\operatorname{GSp}(V)\hookrightarrow\operatorname{GSp}(V^{\S})$
for this to make sense.
###### Lemma 1.22.
Let $S$ be a scheme, $\ell\neq p$ prime numbers. If $\ell$ does not appear as
a residue characteristic of $S$, then the Tate module functors
$\displaystyle H_{1}(\\_,\mathbb{Z}_{\ell})$
$\displaystyle\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Z}_{\ell}\text{-local systems on }S),$
$\displaystyle H_{1}(\\_,\mathbb{Q}_{\ell})$
$\displaystyle\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{Q}_{\ell}\text{-local systems on }S)$
(cf. [Gro74, III, 5.4 and 6.2] for precise definitions) are faithful.
If only $p$ and $0$ appear as residue characteristics of $S$, then the Tate
module functor
$H_{1}(\\_,\mathbb{A}_{f}^{p})\colon(\text{abelian }S\text{-schemes})\to(\text{étale }\mathbb{A}_{f}^{p}\text{-local systems on }S)$
is faithful.
###### Proof:
First note that the statements about $H_{1}(\\_,\mathbb{Q}_{\ell})$ and
$H_{1}(\\_,\mathbb{A}_{f}^{p})$ follow from the statement about
$H_{1}(\\_,\mathbb{Z}_{\ell})$, which is why it is enough to only look at
$H_{1}(\\_,\mathbb{Z}_{\ell})$.
A homomorphism of abelian $S$-schemes $f\colon A\to B$ vanishes if and only if
it vanishes over every (geometric) fiber of $S$: Indeed, if it vanishes
fiberwise, then it is flat by the fiber criterion for flatness. Applying that
criterion again, we see that the closed immersion and fiberwise isomorphism
$\ker(f)\hookrightarrow A$ is flat, which means that it is an isomorphism.
This way we are reduced to the case where the base is the spectrum of an
(algebraically closed) field of characteristic different from $\ell$. In this
setting the faithfulness is well-known (the salient point being that the
$\ell$-primary torsion is dense). □
###### Lemma 1.23.
Let $H$ be a totally disconnected locally compact group (i.e., a locally
profinite group; by our definition, locally compact implies Hausdorff) and
let $N\subseteq H$ be a compact subgroup. Then
$N=\bigcap_{\begin{subarray}{c}N\subseteq J\\\ J\subseteq H\text{ open compact
subgrp.}\end{subarray}}J.$
Note that this is (a variant of) a well-known theorem by van Dantzig if
$N=\\{1\\}$ [Dan36].
###### Proof:
We make use of the following fact [AT08, Prop. 3.1.7]: A Hausdorff space is
locally compact and totally disconnected if and only if the open compact sets
form a basis of the topology. (Van Dantzig’s theorem is the group version of
this, which talks only about a neighborhood basis of the identity and open
compact _subgroups_.)
First we show that $N$ is contained in some open compact subset $K\subseteq
H$. For every $x\in N$ choose a compact open neighborhood $x\in K_{x}\subseteq
H$. This is possible by the fact cited above. Then there is a finite subset
$I\subseteq N$ such that $N\subseteq\bigcup_{x\in I}K_{x}=:K$.
Next, for every $x\in N$ choose an open neighborhood of the identity $U_{x}$
such that $xU_{x}K\subseteq K$. With $N\subseteq U:=\bigcup_{x\in N}xU_{x}$ we
obtain $UK\subseteq K$. Replacing $U$ by $U\cap U^{-1}$, we may moreover
assume it is symmetric. The subgroup generated by $U$ is open (hence closed)
and contained in $K$, hence is an open compact subgroup.
Thus $N$ is even contained in an open compact subgroup; in other words, we
may assume that $H$ is compact, i.e., that it is a profinite group.
Then $H/N$ is compact (Hausdorff quotient spaces of compact spaces are
compact again, but for “locally compact” the analogous statement is not true
in general!) and totally disconnected, i.e., a Stone space. To see total
disconnectedness, take $x,y\in H$ such that $xN\neq yN$; we show that any
subspace $S\subseteq H/N$ containing both $xN$ and $yN$ is disconnected. Let
$U\subseteq H/N$ be a neighborhood of $xN$ not containing $yN$. Let $x\in
V\subseteq\pi^{-1}(U)$ be open and compact, where $\pi\colon H\to H/N$ is the
projection. Then $yN\notin\pi(V)\subseteq H/N$ is open and compact (hence
closed) and we have $S=(\pi(V)\cap S)\sqcup S\setminus\pi(V)$ where both
$\pi(V)\cap S$ and $S\setminus\pi(V)$ are open in $S$. This shows that $S$ is
disconnected. By the fact cited above,
$H/N\supseteq\\{1\\}=\bigcap_{1\in L\subseteq H/N\text{ open compact subset}}L.$
Observe that the quotient map $H\to H/N$ is proper to deduce
$N=\bigcap_{\begin{subarray}{c}N\subseteq M\\\ M\subseteq H\text{ open compact
subset}\end{subarray}}M.$
Say $M$ is an open and compact subset of $H$ containing $N$. As we have shown
above, there is an open compact subgroup $J\subseteq H$ in between $N$ and
$M$, and this is all we need to complete the proof. □
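As an illustration of the lemma in a case where $N$ is not open, take
$H=\operatorname{GL}_{2}(\mathbb{Q}_{p})$ and
$N=\operatorname{SL}_{2}(\mathbb{Z}_{p})$: the open compact subgroups
$J_{n}:=\\{g\in\operatorname{GL}_{2}(\mathbb{Z}_{p})\;|\;\det g\in 1+p^{n}\mathbb{Z}_{p}\\},\quad n\geq 1,$
all contain $N$, and already $\bigcap_{n\geq 1}J_{n}=N$ because
$\bigcap_{n\geq 1}(1+p^{n}\mathbb{Z}_{p})=\\{1\\}$.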
###### Proposition 1.24.
For every compact open subgroup
$N^{p}\subseteq\operatorname{GSp}(V)(\mathbb{A}_{f}^{p})$
$\chi\colon\mathscr{S}_{N}(\operatorname{GSp}(V),S^{\pm})\to\mathscr{S}_{J}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$
is a well-defined morphism for all compact open subgroups $N^{p}\subseteq
J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$ and is a closed
immersion for all sufficiently small compact open subgroups $N^{p}\subseteq
J^{p}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{A}_{f}^{p})$.
###### Proof:
That the morphism $\chi$ is well-defined is clear from the construction.
To show the second statement, as in [Del71, Prop. 1.15], it is enough to show
that
$\mathscr{S}_{N_{p}N^{p}}(\operatorname{GSp}(V),S^{\pm})\to\varprojlim_{J^{p}}\mathscr{S}_{J_{p}J^{p}}(\operatorname{GSp}(V^{\S}),S^{\S,\pm})$
is a closed immersion, i.e., a proper monomorphism.
We begin by proving that it is a monomorphism, i.e., injective on $S$-valued
points ($S$ arbitrary $\mathbb{Z}_{(p)}$-scheme). So, say
$(A_{1},\lambda_{1},\eta_{1}^{p})$ and $(A_{2},\lambda_{2},\eta_{2}^{p})$ both
map to $(A^{\S},\lambda^{\S},\epsilon_{J^{p}}^{p})$. That means precisely that
there is an isomorphism of abelian $S$-schemes up to
$\mathbb{Z}_{(p)}$-isogeny
$\phi\colon\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\cong}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}$
such that
$\phi^{\vee}\circ\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{2,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{2,\Lambda^{i}}\right)\circ\phi=\prod_{i=-(r-1)-a}^{r-1}\left(\rho_{1,\left(\Lambda^{i}\right)^{\vee},\Lambda^{i}}^{\vee}\circ\lambda_{1,\Lambda^{i}}\right)$
and
$H_{1}(\phi,\mathbb{A}_{f}^{p})\circ\epsilon_{1,J^{p}}^{p}=\epsilon_{2,J^{p}}^{p}\mod{J^{p}}.$
We claim that $\phi$ comes from isomorphisms
$\phi_{i}\colon A_{1,\Lambda^{i}}\xrightarrow{\cong}A_{2,\Lambda^{i}}.$
Certainly there is but one candidate for $\phi_{i}$: define $\phi_{i}$ to be
the composition
$A_{1,\Lambda^{i}}\xrightarrow{\mathrm{incl}}\prod_{i=-(r-1)-a}^{r-1}A_{1,\Lambda^{i}}\xrightarrow{\phi}\prod_{i=-(r-1)-a}^{r-1}A_{2,\Lambda^{i}}\xrightarrow{\mathrm{proj}}A_{2,\Lambda^{i}}.$
Our claim then is that
$\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}.$
Apply $H_{1}(\\_,\mathbb{A}_{f}^{p})$ on both sides. For the left hand side,
we have
$H_{1}(\phi,\mathbb{A}_{f}^{p})=\epsilon_{2,J^{p}}^{p}\circ\left(\epsilon_{1,J^{p}}^{p}\right)^{-1}\mod{J^{p}}.$
and the right hand side of this equation is block diagonal. So
$H_{1}(\phi,\mathbb{A}_{f}^{p})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{A}_{f}^{p})\mod{J^{p}}.$
Since (by Lemma 1.23)
$N^{p}=\bigcap_{\begin{subarray}{c}N_{\ell}\subseteq J_{\ell}\\\
J_{\ell}\subseteq\operatorname{GSp}(V^{\S})(\mathbb{Q}_{\ell})\text{ cpt. open
subgrp.}\end{subarray}}J_{\ell},$
it follows that (with $\ell\neq p$)
$H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})\mod{N_{\ell}},$
hence (since $N_{\ell}$ acts block-diagonally) that
$H_{1}(\phi,\mathbb{Q}_{\ell})=\prod_{i=-(r-1)-a}^{r-1}H_{1}(\phi_{i},\mathbb{Q}_{\ell})$.
Since $H_{1}(\\_,\mathbb{Q}_{\ell})$ is faithful (Lemma 1.22), this implies
$\phi=\prod_{i=-(r-1)-a}^{r-1}\phi_{i}$, as desired.
Next, consider the extension by zero of
$\left(H_{1}(\rho_{1/2,\Lambda^{j},\Lambda^{i}},\mathbb{A}_{f}^{p})\right)_{i,j}$
(where for “$1/2$” either “$1$” or “$2$” can be plugged in) to a map
$H_{1}(A^{\S},\mathbb{A}_{f}^{p})\to H_{1}(A^{\S},\mathbb{A}_{f}^{p})$. Under
the isomorphism given by the $J^{p}$-level structure this corresponds, up to
the $J^{p}$-action, to the map $V^{\S}_{\mathbb{A}_{f}^{p}}\to
V^{\S}_{\mathbb{A}_{f}^{p}}$ given by mapping the $i$’th copy of
$V_{\mathbb{A}_{f}^{p}}$ identically to the $j$’th copy and the rest to zero.
Thus $\rho_{1,\Lambda^{j},\Lambda^{i}}$ and $\rho_{2,\Lambda^{j},\Lambda^{i}}$
yield the same map up to $J^{p}$ after applying
$H_{1}(\\_,\mathbb{A}_{f}^{p})$, hence they are equal in the
$\mathbb{Z}_{(p)}$-isogeny category.
Consequently, $\chi$ is a monomorphism.
For properness, we will use the valuative criterion. Let $R$ be a discrete
valuation ring with field of fractions $K$ and assume that a $K$-point
$A^{\S}=\prod_{i=-(r-1)-a}^{r-1}A_{\Lambda^{i}}$ with its additional
structures coming from $(A_{\Lambda^{i}})_{i}$ extends to an $R$-point
$\mathcal{A}^{\S}$. Consider the map $A^{\S}\to A_{\Lambda^{i_{0}}}\to A^{\S}$
where the first map is a projection and the second an inclusion. By the Néron
mapping property, this extends to a map $\mathcal{A}^{\S}\to\mathcal{A}^{\S}$.
Define $\mathcal{A}_{\Lambda^{i_{0}}}$ to be the image of this map.
The Néron mapping property also allows us to extend the transition isogenies
$\rho_{\Lambda^{i_{0}},\Lambda^{j_{0}}}\colon\allowbreak{A_{\Lambda^{j_{0}}}\to
A_{\Lambda^{i_{0}}}}$, $i_{0}\leq j_{0}$, the periodicity isomorphisms, and
the polarization.
Since $\pi_{1}(\operatorname{Spec}K)$ surjects onto
$\pi_{1}(\operatorname{Spec}R)$ (see [Stacks, Tag 0BQM]), extending the level
structure away from $p$ is trivial. □
### 1.5 Local structure of the integral model
#### 1.5.1 Generizations and irreducible components
Let $\mathscr{X}\to\operatorname{Spec}\mathcal{O}_{\breve{E}}$ be a flat
scheme locally of finite type; denote the special fiber by
$X\to\operatorname{Spec}\bar{\mathbb{F}}_{p}$ and the generic fiber by
$\mathcal{X}\to\operatorname{Spec}\breve{E}$. We assume that $\mathcal{X}$ is
locally integral (e.g. smooth).
For example, we can consider
$(\mathscr{X},X,\mathcal{X})=(\mathscr{S}^{-}_{K}(G,X)_{{{\cal
O}_{\breve{E}}}},\mathscr{S}^{-}_{K}(G,X)_{{{\cal
O}_{\breve{E}}}}\otimes_{{{\cal
O}_{\breve{E}}}}\bar{\mathbb{F}}_{p},\allowbreak{\operatorname{Sh}_{K}(G,X)\otimes_{E}\breve{E}})$.
Let $\bar{x}\in X(\bar{\mathbb{F}}_{p})$.
###### Lemma 1.25.
There is a generization $x$ of $\bar{x}$ which lies in the generic fiber
$\mathcal{X}$, and is a closed point in there, i.e., $x\in\mathcal{X}(L)$ for
a finite extension $L/\breve{E}$.
###### Definition 1.26.
We shall call such a point $x$ a _closed point generization_ of $\bar{x}$ for
short.
###### Proof:
Due to flatness (going-down) there is _some_ generization in the generic
fiber; call it $x_{0}$.
By [Stacks, Tag 053U] the following set is dense (and in particular non-empty)
in the closure of $\\{x_{0}\\}$ in $\mathcal{X}$:
$\left\\{x\in\mathscr{X}\;|\;x\text{ is a specialization of }x_{0}\text{ and a
closed point generization of }\bar{x}\right\\}.$
□
###### Lemma 1.27.
Notation as in the preceding lemma.
The specialization $x\leadsto\bar{x}$ can be realized by an ${\cal
O}_{L}$-valued point of $\mathscr{X}$.
###### Proof:
First off, by [EGA2, 7.1.9], it can be realized by a morphism
$\operatorname{Spec}R=\\{\eta,s\\}\to\mathscr{X}$ of ${{\cal
O}_{\breve{E}}}$-schemes, where $R$ is a discrete valuation ring such that
$L\cong\kappa(\eta)=\operatorname{Quot}(R)$ as field extensions of
$\kappa(x)$.
We hence get local homomorphisms of local rings ${{\cal
O}_{\breve{E}}}\to{\cal O}_{\mathscr{X},\bar{x}}\to R$.
Thus the discrete valuation on $L$ defined by $R$ extends the discrete
valuation on $\breve{E}$. But there is but one such extension and its
valuation ring is ${\cal O}_{L}$ (by definition). □
###### Lemma 1.28.
Mapping $x$ to the unique irreducible component of $\mathscr{X}$ that contains
$x$ establishes a surjection from the set of closed point generizations $x$ of
$\bar{x}$ to the set of irreducible components of $\mathscr{X}$ containing
$\bar{x}$.
###### Proof:
If $x_{0}\in\mathcal{X}$ is a generization of $\bar{x}$, then $x_{0}$ lies in
a unique irreducible component of $\mathscr{X}$ because $\mathcal{X}$ is
locally irreducible. Hence the map described above is well-defined.
Now for surjectivity: Given an irreducible component $C$ of $\mathscr{X}$
containing $\bar{x}$, let $x_{0}\in C$ be the generic point. Then $x_{0}$ must
be in the generic fiber (else we would be able to find a generization in the
generic fiber by going-down). Now go through the proof of Lemma 1.25 with this
particular choice of $x_{0}$. □
### 1.6 The local model
To give a very rough idea of what the _local model_ to be discussed in this
section is supposed to accomplish: It should be an $\mathcal{O}_{E}$-scheme
that is étale-locally isomorphic to $\mathscr{S}_{K}(G,X)$, but easier to
understand by virtue of being of a more “linear-algebraic flavor”. In
actuality however, the theory of local models quickly gets quite complicated
once one departs from the simplest examples.
#### 1.6.1 The Siegel case
We do start with the simplest example.
We consider the standard Iwahori subgroup
$I_{p}\subseteq\operatorname{GSp}_{2g}(\mathbb{Z}_{p})$, defined as the
preimage of the standard Borel subgroup of
$\operatorname{GSp}_{2g}(\mathbb{F}_{p})$. In terms of the building (cf.
Remark 1.13), it corresponds to the lattice chain
$\mathcal{L}_{\mathrm{full}}$ given by
$\displaystyle\Lambda^{0}=\mathbb{Z}_{p}^{2g}$
$\displaystyle\supsetneqq\Lambda^{1}=\mathbb{Z}_{p}^{2g-1}\oplus
p\mathbb{Z}_{p}\supsetneqq\Lambda^{2}=\mathbb{Z}_{p}^{2g-2}\oplus
p\mathbb{Z}_{p}^{2}$ (1.29)
$\displaystyle\supsetneqq\dotsb\supsetneqq\Lambda^{2g-1}=\mathbb{Z}_{p}\oplus
p\mathbb{Z}_{p}^{2g-1}\supsetneqq p\Lambda^{0}=p\mathbb{Z}_{p}^{2g}$
of period length $2g$.
Consider a subset $J=\\{j_{0}>\dotsb>j_{m-1}\\}\subseteq\\{1,\dotsc,2g\\}$
such that for each $j\in J$ with $1\leq j\leq 2g-1$ also $2g-j\in J$, and let
$K_{p}$ be the parahoric subgroup associated with the partial lattice chain
$\mathcal{L}\subseteq\mathcal{L}_{\mathrm{full}}$ obtained from
$\left\\{\Lambda^{j}\;|\;j\in J\right\\}$.
Define a scheme $\tilde{\mathscr{S}}_{K}(G,X)$ over $\mathscr{S}_{K}(G,X)$ as
follows:
$\tilde{\mathscr{S}}_{K}(G,X)(S)=\left\\{(A,\bar{\lambda},\eta^{p},\tau)\;\middle|\;\begin{array}{l}(A,\bar{\lambda},\eta^{p})\in\mathscr{S}_{K}(\operatorname{GSp}_{2g},S^{\pm})(S),\\\ \tau\colon H_{\mathrm{dR}}^{1}(A)\xrightarrow{\sim}\mathcal{L}\otimes\mathcal{O}_{S}\text{ an isomorphism of lattice chains}\end{array}\right\\}$
for every $\mathbb{Z}_{p}$-scheme $S$.
By [RZ96, Appendix to Chap. 3],
$\tilde{\mathscr{S}}_{K}(G,X)\to\mathscr{S}_{K}(G,X)$ is a Zariski torsor
under the automorphism group of $\mathcal{L}$, i.e., the Iwahori group scheme.
This motivates the definition of the local model
$M^{\mathrm{loc}}_{K_{p}}\to\operatorname{Spec}\mathbb{Z}_{p}$ as the “moduli
space of Hodge filtrations”; more precisely:
###### Remark 1.30.
(See [Gör03, 91].) $M^{\mathrm{loc}}_{K_{p}}(S)$ is the set of isomorphism
classes of commutative diagrams
$\begin{array}{ccccccccc}\Lambda^{j_{0}}_{S}&\to&\Lambda^{j_{1}}_{S}&\to&\dotsb&\to&\Lambda^{j_{m-1}}_{S}&\xrightarrow{\cdot p}&\Lambda^{j_{0}}_{S}\\\ \cup&&\cup&&&&\cup&&\cup\\\ \mathcal{F}^{j_{0}}&\to&\mathcal{F}^{j_{1}}&\to&\dotsb&\to&\mathcal{F}^{j_{m-1}}&\to&\mathcal{F}^{j_{0}}\end{array}$
with $\Lambda^{j}_{S}:=\Lambda^{j}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{S}$ and
$\mathcal{F}^{j}\subseteq\Lambda^{j}_{S}$ a locally direct summand of rank $g$,
such that for all $j\in J$,
$\mathcal{F}^{j}\to\Lambda^{j}_{S}\overset{\psi}{\cong}(\Lambda^{2g-j}_{S})^{*}\to(\mathcal{F}^{2g-j})^{*}$
vanishes, $\psi$ being the symplectic pairing.
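For example, in the hyperspecial case $J=\\{2g\\}$ (so that the chain is
generated by the single lattice $\Lambda^{0}$ up to homothety) this datum
reduces to a single totally isotropic, locally direct summand
$\mathcal{F}\subseteq\Lambda^{0}_{S}$ of rank $g$; thus
$M^{\mathrm{loc}}_{K_{p}}$ is the Lagrangian Grassmannian of
$(\Lambda^{0},\psi)$ over $\mathbb{Z}_{p}$ and is in particular smooth.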
By Grothendieck-Messing theory, one obtains a diagram
$\mathscr{S}_{K}(G,X)\longleftarrow\tilde{\mathscr{S}}_{K}(G,X)\longrightarrow M^{\mathrm{loc}}_{K},$
in which the left-hand map is an $\operatorname{Aut}(\mathcal{L})$-torsor and
the right-hand map is smooth of relative dimension
$\dim\operatorname{Aut}(\mathcal{L})$.
Since both morphisms in this diagram are smooth of the same dimension, it
follows that for every finite field extension $\mathbb{F}_{q}/\mathbb{F}_{p}$
and every point $x\in\mathscr{S}_{K}(G,X)(\mathbb{F}_{q})$, there exists a
point $y\in M^{\mathrm{loc}}_{K}(\mathbb{F}_{q})$ and an isomorphism
$\mathcal{O}_{\mathscr{S}_{K}(G,X),x}^{h}\cong\mathcal{O}_{M^{\mathrm{loc}}_{K},y}^{h}$
of henselizations.
In many (P)EL situations one has similar descriptions with the obvious extra
structures. Sometimes however the so-called “naive” local models so obtained
additionally need to be flattened, which leaves one without any self-evident
moduli interpretation.
#### 1.6.2 The relation between the integral and the local model
Generalizing the Siegel example, we axiomatically characterize the
relationship between the integral model of the Shimura variety and its local
model: One wants a _local model diagram_ , i.e., a diagram of
$\mathcal{O}_{E}$-schemes functorial in $K$
$\mathscr{S}_{K}(G,X)\longleftarrow\tilde{\mathscr{S}}_{K}(G,X)\longrightarrow M^{\mathrm{loc}}_{K}$
(1.31)
in which the left-hand map is a $\mathcal{G}_{\mathcal{O}_{E}}$-torsor and the
right-hand map is $\mathcal{G}_{\mathcal{O}_{E}}$-equivariant and smooth of
relative dimension $\dim\mathcal{G}_{\mathcal{O}_{E}}$, and where
$M^{\mathrm{loc}}_{K}$ is a projective flat $\mathcal{O}_{E}$-scheme with an
action of $\mathcal{G}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{E}$ and generic
fiber the canonical model of $G_{\bar{\mathbb{Q}}_{p}}/P_{\mu^{-1}}$ over $E$.
By Kisin-Pappas [KP15] we do actually have such a diagram in our situation.
#### 1.6.3 The Pappas-Zhu construction
In [PZ13], Pappas and Zhu give a construction of the local model in quite a
general context, in particular with no assumptions going beyond our running
assumptions 1.17.
###### Remark 1.32.
To this end, they construct an affine smooth group scheme
$\underline{\mathcal{G}}_{K}\to\mathbb{A}^{1}_{\mathbb{Z}_{p}}=\operatorname{Spec}\mathbb{Z}_{p}[t]$
with the following key properties:
1. (1)
$\underline{\mathcal{G}}_{K}$ has connected fibers,
2. (2)
$\underline{\mathcal{G}}_{K}$ is reductive over
$\operatorname{Spec}\mathbb{Z}_{p}[t^{\pm 1}]$,
3. (3)
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto
p}\mathbb{Z}_{p}\cong\mathcal{G}_{K}$, in particular
* •
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto
p}\mathbb{Q}_{p}\cong G_{\mathbb{Q}_{p}}$ and
* •
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}:=\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto
0}\mathbb{F}_{p}\cong\mathcal{G}_{K}\otimes\mathbb{F}_{p}$,
4. (4)
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{Q}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]$
is parahoric for
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{Q}_{p}(\mkern-4.0mu(t)\mkern-4.0mu)$,
5. (5)
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]$
is parahoric for
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t]}\mathbb{F}_{p}(\mkern-4.0mu(t)\mkern-4.0mu)$.
###### Definition and Remark 1.33.
Let $X_{\mu}$ be the canonical model of
$G_{\bar{\mathbb{Q}}_{p}}/P_{\mu^{-1}}$ over $E$, where for a cocharacter
$\nu$ one defines $P_{\nu}:=\\{g\in G\;|\;\lim_{t\to
0}\nu(t)g\nu(t)^{-1}\text{ exists}\\}$.
Let $S_{\mu}$ be the closed subvariety of
$\operatorname{Gr}_{G}\times_{\mathbb{Q}_{p}}E$ with
$S_{\mu}(\bar{\mathbb{Q}}_{p})=G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu])\mu(t)G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu])/G(\bar{\mathbb{Q}}_{p}[\mkern-2.0mu[t]\mkern-2.0mu]).$
Then $S_{\mu}$ can be $G_{E}$-equivariantly identified with $X_{\mu}$.
###### Definition 1.34.
The local model $M^{\mathrm{loc}}_{G,\mu,K}$ now is defined to be the Zariski
closure of $X_{\mu}\subseteq\operatorname{Gr}_{G}\times_{\mathbb{Q}_{p}}E$ in
$\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{Z}_{p}}\otimes_{\mathbb{Z}_{p}}\mathcal{O}_{E}$,
where
$\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{Z}_{p}}:=\operatorname{Gr}_{\underline{\mathcal{G}}_{K},\mathbb{A}_{\mathbb{Z}_{p}}^{1}}\otimes_{\mathbb{A}^{1}_{\mathbb{Z}_{p}},\,t\mapsto
p}\mathbb{Z}_{p}$ is a base change of the global affine Graßmannian as defined
in [PZ13].
## 2 EKOR strata and zips in the case of parahoric reduction
###### Notation 2.1.
We still fix a Shimura datum $(G,X)$ of Hodge type, a parahoric subgroup
$K_{p}\subseteq G(\mathbb{Q}_{p})$ (associated with a Bruhat-Tits group scheme
$\mathcal{G}=\mathcal{G}_{K}=\mathcal{G}_{K_{p}}\to\operatorname{Spec}\mathbb{Z}_{p}$
associated with a facet $\mathfrak{f}$) and a sufficiently small open compact
subgroup $K^{p}\subseteq G(\mathbb{A}_{f}^{p})$. Define
$\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes_{\mathbb{Z}_{p}}\kappa$.
We also keep up our standard assumptions 1.17.
We now want to discuss the EKOR stratification on the special fiber of the
integral model with parahoric level structure. The EKOR stratification
interpolates between the Ekedahl-Oort (EO) and the Kottwitz-Rapoport (KR)
stratification (see Remark 2.22 below for a precise formulation). We begin by
explaining the basics about these stratifications and the combinatorics
involved in the first section of this chapter.
### 2.1 The Ekedahl-Oort, Kottwitz-Rapoport and EKOR stratifications
#### 2.1.1 Iwahori-Weyl group and the admissible subset
###### Notation 2.2.
1. (1)
We fix an Iwahori subgroup $I_{p}\subseteq K_{p}$, i.e., $I_{p}$ is the group
of $\mathbb{Z}_{p}$-points of the parahoric group scheme $\mathcal{I}$
associated with an alcove $\mathfrak{a}$ (facet of maximal dimension) such
that $\mathfrak{f}\subseteq\overline{\mathfrak{a}}$. As usual, we also define
$\breve{I}:=\mathcal{I}(\breve{\mathbb{Z}}_{p})\subseteq\breve{K}$.
2. (2)
Let $T\subseteq G$ be a maximal torus such that $T_{\breve{\mathbb{Q}}_{p}}$
is contained in a Borel subgroup of $G_{\breve{\mathbb{Q}}_{p}}$ (note that
by Steinberg’s theorem [Ser97, Chap. III, § 2], $G_{\breve{\mathbb{Q}}_{p}}$
is quasi-split) and let $S$ be the maximal $\breve{\mathbb{Q}}_{p}$-split
torus contained in $T_{\breve{\mathbb{Q}}_{p}}$. We can and do choose $T$ such
that the alcove $\mathfrak{a}$ is contained in the apartment associated with
$S$. By $N$ we denote the normalizer of $T$.
3. (3)
Let $(V,R)$ be the relative root system of
$(G_{\breve{\mathbb{Q}}_{p}},T_{\breve{\mathbb{Q}}_{p}})$, i.e., $V$ is the
$\mathbb{R}$-vector space
$X^{*}_{\breve{\mathbb{Q}}_{p}}(T_{\breve{\mathbb{Q}}_{p}})\otimes_{\mathbb{Z}}\mathbb{R}$
and $R\subseteq X^{*}_{\breve{\mathbb{Q}}_{p}}(T_{\breve{\mathbb{Q}}_{p}})$ is
(as usual) such that we have a decomposition
$\mathfrak{g}:=\operatorname{Lie}(G_{\bar{\mathbb{Q}}_{p}})=\operatorname{Lie}(T_{\bar{\mathbb{Q}}_{p}})\oplus\bigoplus_{\alpha\in
R}\mathfrak{g}_{\alpha}.$
Contrary to the absolute situation, $\dim\mathfrak{g}_{\alpha}$ may be greater
than $1$.
###### Definition 2.3.
1. (1)
The _(finite relative) Weyl group_ of $G$ (over $\breve{\mathbb{Q}}_{p}$) is
$W:=N(\breve{\mathbb{Q}}_{p})/T(\breve{\mathbb{Q}}_{p})$. It is the Weyl group
of the root system $(V,R)$, i.e., the group generated by the orthogonal
reflections through the hyperplanes defined by the elements of $R$.
2. (2)
As described in [Lan00, 1.2.3], one defines a set of affine roots
$R_{\mathrm{aff}}\supseteq R$ on $V$ using the valuation on
$\breve{\mathbb{Q}}_{p}$. By
$W_{a}\subseteq\mathrm{Aff}(V^{*})=\operatorname{GL}(V^{*})\ltimes V^{*}$ we
denote the _affine Weyl group_ of the affine root system
$(V,R_{\mathrm{aff}})$, i.e., the group generated by the orthogonal
reflections through the affine hyperplanes defined by the elements of
$R_{\mathrm{aff}}$.
3. (3)
$\widetilde{W}:=N(\breve{\mathbb{Q}}_{p})/(T(\breve{\mathbb{Q}}_{p})\cap\breve{I})$
is the _Iwahori-Weyl group_.
4. (4)
$W_{K}:=(N(\breve{\mathbb{Q}}_{p})\cap\breve{K})/(T(\breve{\mathbb{Q}}_{p})\cap\breve{I})\subseteq\widetilde{W}$.
(Recall that $\breve{K}=\mathcal{G}(\breve{\mathbb{Z}}_{p})$.)
###### Remarks 2.4.
1. (1)
We have $W\subseteq W_{a}$. With the systems of generators indicated above,
$W$ and $W_{a}$ become (affine) Coxeter groups; in particular we can talk
about reduced words and have length functions, cf. [BB05].
2. (2)
$W_{I}$ is the trivial group.
###### Proposition 2.5.
[HR08, Prop. 8] The Bruhat-Tits decomposition
$G(\breve{\mathbb{Q}}_{p})=\bigcup_{w\in\widetilde{W}}\breve{K}w\breve{K}$
identifies
$\breve{K}\backslash G(\breve{\mathbb{Q}}_{p})/\breve{K}\cong
W_{K}\backslash\widetilde{W}/W_{K}.$
###### Proposition 2.6.
[HR08, Prop. 13] Let $\breve{K}$ be the maximal parahoric subgroup of
$G(\breve{\mathbb{Q}}_{p})$ associated with a special vertex in the apartment
corresponding to $S$. Then $W_{K}\to W$ is an isomorphism and
$\widetilde{W}\cong W\ltimes
X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}$.
(Notation: for a group $\Gamma$ and a $\mathbb{Z}[\Gamma]$-module $M$,
$M_{\Gamma}:=\mathbb{Z}\otimes_{\mathbb{Z}[\Gamma]}M=M/\langle\gamma
m-m\;|\;\gamma\in\Gamma,\;m\in M\rangle$ is the module of
$\Gamma$-coinvariants of $M$.)
###### Notation 2.7.
We denote the map
$X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}\to\widetilde{W}$
of the proposition by $\nu\mapsto t_{\nu}$.
###### Proposition 2.8.
[HR08, Lemma 14] Let $\Omega\subseteq\widetilde{W}$ be the subgroup consisting
of those elements that preserve the base alcove $\mathfrak{a}$.
There is an exact sequence
$1\to W_{a}\to\widetilde{W}\to\Omega\to 1,$
with a canonical right splitting (namely the inclusion
$\Omega\hookrightarrow\widetilde{W}$), i.e., $\widetilde{W}\cong
W_{a}\rtimes\Omega$.
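For instance (a standard example, added here for illustration), for
$G=\operatorname{GL}_{2}$ with the standard Iwahori one has
$\Omega\cong\mathbb{Z}$, generated by the class of
$\tau=\left(\begin{smallmatrix}0&1\\\ p&0\end{smallmatrix}\right):$
this element normalizes the Iwahori subgroup, hence stabilizes the base
alcove, and accordingly has length $0$.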
###### Definition 2.9.
The semidirect product decomposition of the preceding proposition means that
$\widetilde{W}$ is a “quasi-Coxeter” group. In practical terms, this means:
1. (1)
We define a length function $\ell$ on $\widetilde{W}$ as follows:
$\ell(w_{a},\omega):=\ell(w_{a})$ for all $w_{a}\in W_{a}$ and
$\omega\in\Omega$, where on the right hand side we use the length function of
the affine Coxeter group $W_{a}$.
Note that $\Omega=\ell^{-1}(0)$.
2. (2)
Likewise, we extend the Bruhat partial order from $W_{a}$ to $\widetilde{W}$
by defining
$(w_{a,1},\omega_{1})\leq(w_{a,2},\omega_{2})\;:\Longleftrightarrow\;w_{a,1}\leq
w_{a,2}\text{ and }\omega_{1}=\omega_{2}.$
Note that $w_{1}\leq w_{2}$ ($w_{1},w_{2}\in\widetilde{W}$) implies
$\ell(w_{1})\leq\ell(w_{2})$.
###### Definition 2.10.
1. (1)
Let $\\{\mu\\}$ be a $W_{\mathrm{abs}}$-conjugacy class of geometric
cocharacters of $T$ (cf. Remark 1.4),
$W_{\mathrm{abs}}:=N(\bar{\mathbb{Q}}_{p})/T(\bar{\mathbb{Q}}_{p})$ being the
absolute Weyl group. Let $\bar{\mu}\in
X_{*}(T)_{\operatorname{Gal}(\bar{\mathbb{Q}}_{p}/\breve{\mathbb{Q}}_{p})}$ be
the image of a cocharacter in $\\{\mu\\}$ whose image in
$X_{*}(T)\otimes_{\mathbb{Z}}\mathbb{R}$ is contained in the closed Weyl
chamber corresponding to some Borel subgroup of $G$ containing $T$ and defined
over $\breve{\mathbb{Q}}_{p}$.
2. (2)
$\mathrm{Adm}(\mu):=\mathrm{Adm}(\\{\mu\\}):=\\{w\in\widetilde{W}\;|\;w\leq
qt_{\bar{\mu}}q^{-1}=t_{q\bar{\mu}}\text{ for some }q\in W\\}$ is the
_$\\{\mu\\}$ -admissible subset_ of $\widetilde{W}$.
3. (3)
$\mathrm{Adm}(\\{\mu\\})^{K}:=W_{K}\mathrm{Adm}(\\{\mu\\})W_{K}\subseteq\widetilde{W}$.
4. (4)
$\mathrm{Adm}(\\{\mu\\})_{K}:=\mathrm{KR}(K,\\{\mu\\}):=W_{K}\backslash\mathrm{Adm}(\\{\mu\\})^{K}/W_{K}\subseteq
W_{K}\backslash\widetilde{W}/W_{K}$.
5. (5)
Define ${}^{K}\widetilde{W}\subseteq\widetilde{W}$ to be the set of
representatives of minimal length for the quotient
$W_{K}\backslash\widetilde{W}$.
6. (6)
${}^{K}\mathrm{Adm}(\\{\mu\\}):=\mathrm{EKOR}(K,\\{\mu\\}):=\mathrm{Adm}(\\{\mu\\})^{K}\cap{}^{K}\widetilde{W}\subseteq{}^{K}\widetilde{W}$.
###### Lemma 2.11.
(See [SYZ19, Thm. 1.2.2].)
${}^{K}\mathrm{Adm}(\\{\mu\\})=\mathrm{Adm}(\\{\mu\\})\cap{}^{K}\widetilde{W}$.
#### 2.1.2 Kottwitz-Rapoport stratification
Recall from Section 1.6.2 that we have an integral model and a local model
diagram
$\mathscr{S}_{K}\leftarrow\tilde{\mathscr{S}}_{K}\to M^{\mathrm{loc}}_{K}$
or, equivalently, a (smooth) morphism of stacks
$\mathscr{S}_{K}\to[\mathcal{G}_{K}\backslash M^{\mathrm{loc}}_{K}]$ (over
$\mathcal{O}_{E}$).
As explained in Section 1.6.3, by the construction in [PZ13], the special
fiber $M^{\mathrm{loc}}_{K}\otimes\kappa$ of $M^{\mathrm{loc}}_{K}$ is a
closed subscheme of the affine flag variety
$\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}=\mathcal{F}l_{\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu]}$,
which is the ind-projective ind-scheme over $\kappa$ given as the fpqc
sheafification (which exists in this case!) of the presheaf
$R\mapsto\underline{\mathcal{G}}_{K}(R(\mkern-4.0mu(t)\mkern-4.0mu))/\underline{\mathcal{G}}_{K}(R[\mkern-2.0mu[t]\mkern-2.0mu])$.
###### Definition 2.12.
Define
$L^{+}(\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu])$
to be the $\kappa$-functor sending a $\kappa$-algebra $R$ to
$\underline{\mathcal{G}}_{K}(R[\mkern-2.0mu[t]\mkern-2.0mu])$.
We let
$L^{+}(\underline{\mathcal{G}}_{K}\otimes\kappa[\mkern-2.0mu[t]\mkern-2.0mu])$
act on
$\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}$
from the left and call this action $a$ (within this subsection). The orbits of
this action on
$\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\bar{\kappa}}$
are the _Schubert cells_.
###### Remarks 2.13.
1. (1)
The Schubert cells can be indexed by $W_{K}\backslash\widetilde{W}/W_{K}$ by
Proposition 2.5 with the following in mind: Strictly speaking, using the
Bruhat-Tits decomposition here, we arrive at something involving the Iwahori-
Weyl group of
$\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}(\mkern-4.0mu(t)\mkern-4.0mu)$.
However, by [PZ13, 9.2.2], this is isomorphic to the Iwahori-Weyl group of
$G_{\breve{\mathbb{Q}}_{p}}$.
2. (2)
$M^{\mathrm{loc}}_{K}\otimes\bar{\kappa}$ is a union of Schubert cells, namely
of those indexed by
$\mathrm{KR}(K,\\{\mu\\}):=W_{K}\backslash(W_{K}\mathrm{Adm}(\\{\mu\\})W_{K})/W_{K}$,
cf. [PZ13, Theorem 9.3].
###### Remark 2.14.
By construction, $M^{\mathrm{loc}}_{K}$ has an action $b$ of
$\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}[t],t\mapsto
p}\mathcal{O}_{E}\cong\mathcal{G}_{K}\otimes\mathcal{O}_{E}$.
For $w\in\widetilde{W}$ choose a representative $\dot{w}\in
L\underline{\mathcal{G}}_{K}(\bar{\kappa})$ (with Remark 2.13 (1) in mind) and
let
$e_{0}\in\operatorname{Gr}_{\underline{\mathcal{G}}_{K}\otimes_{\mathbb{Z}_{p}}\kappa}$
be the distinguished base point (associated with the identity). For $w\in
W_{K}\mathrm{Adm}(\\{\mu\\})W_{K}$, the orbit map of $\dot{w}\cdot e_{0}$ for
the action $a$ factors through the homomorphism
$L^{+}(\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}[\mkern-2.0mu[t]\mkern-2.0mu])\to\mathcal{G}_{K}\otimes\kappa\cong\underline{\mathcal{G}}_{K}\otimes\bar{\kappa}$.
The orbits associated with the two $\mathcal{G}_{K}\otimes\kappa$-actions $a$
and $b$ on $M^{\mathrm{loc}}_{K}\otimes\kappa$ agree. The orbits of the
$\mathcal{G}_{K}\otimes\kappa$-action on $M^{\mathrm{loc}}_{K}\otimes\kappa$
are indexed by $\mathrm{KR}(K,\\{\mu\\})$.
###### Definition 2.15.
The stratifications thus obtained on $M^{\mathrm{loc}}_{K}\otimes\kappa$ and
$\mathscr{S}_{K}\otimes\kappa$ are called _Kottwitz-Rapoport stratifications_.
That is to say that Kottwitz-Rapoport strata on $\mathscr{S}_{K}\otimes\kappa$
are by definition pullbacks of Kottwitz-Rapoport strata on
$M^{\mathrm{loc}}_{K}$, which in turn are
$\mathcal{G}_{K}\otimes\kappa$-orbits.
#### 2.1.3 Ekedahl-Oort stratification
The Ekedahl-Oort stratification is only defined in the case of good reduction,
i.e., if $K_{p}$ is hyperspecial or, equivalently, if $\mathcal{G}_{K}$ is a
_reductive_ model of $G_{\mathbb{Q}_{p}}$. Then $G_{\mathbb{Q}_{p}}$ splits
over $\breve{\mathbb{Q}}_{p}$ (by definition of “hyperspecial”, cf. [Tit79,
1.10.2]).
We therefore put ourselves in the situation of good reduction for this
subsection.
###### Remark 2.16.
Then $W$ as defined in Definition 2.3 (1) agrees with the absolute Weyl group
of $G_{\mathbb{Q}_{p}}=\mathcal{G}_{K}\otimes\mathbb{Q}_{p}$, which in turn
agrees with the absolute Weyl group of
$\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes\kappa$, cf. [VW13, App.
A.5].
###### Definition 2.17.
Define $I$ to be the type (interpreted as a subset of simple reflections) of
the parabolic subgroup of $G_{\mathbb{Q}_{p}}$ defined by $\mu^{-1}$ (cf.
Remark 1.33), and ${}^{I}W\subseteq W$ to be the system of representatives of
the quotient $W_{I}\backslash W$ containing the element of least length
of every coset.
###### Theorem 2.18.
[MW04, PWZ15, PWZ11, Zha18] There is a smooth algebraic stack
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{\kappa}:=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\mu}_{\kappa}$
over $\kappa$ with underlying topological space ${}^{I}W$ together with a
smooth morphism
$\mathscr{S}_{K}\otimes\kappa\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\mu}_{\kappa}.$
The stratification of $\mathscr{S}_{K}\otimes\kappa$ thus obtained is the
_Ekedahl-Oort stratification_.
#### 2.1.4 EKOR stratification
###### Definition 2.19.
Let $L$ be a valued field extension of $\mathbb{Q}_{p}$ with ring of integers
$\mathcal{O}$, maximal ideal $\mathfrak{m}$ and residue field $\lambda$.
The _pro-unipotent radical_ of $\mathcal{G}_{K}(\mathcal{O})$ is
$\mathcal{G}_{K}(\mathcal{O})_{1}:=\\{g\in\mathcal{G}_{K}(\mathcal{O})\;|\;(g\mod\mathfrak{m})\in\bar{R}_{K}(\lambda)\\},$
where $\bar{R}_{K}$ is the unipotent radical of
$\mathcal{G}_{K}\otimes_{\mathbb{Z}_{p}}\lambda$.
In particular, if $K$ is hyperspecial, then
$\mathcal{G}_{K}(\mathcal{O})_{1}=\ker(\mathcal{G}_{K}(\mathcal{O})\to\mathcal{G}_{K}(\lambda))$.
Also,
$\overline{\breve{K}}:=\breve{K}/\breve{K}_{1}\cong\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}(\bar{\mathbb{F}}_{p})$,
where $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the maximal reductive
quotient of $\overline{\mathcal{G}}_{K}:=\mathcal{G}_{K}\otimes\kappa$.
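To illustrate (an example we add for orientation): for the standard Iwahori
$\mathcal{I}$ of $\operatorname{GL}_{2}$, i.e.,
$\mathcal{I}(\mathbb{Z}_{p})=\\{g\in\operatorname{GL}_{2}(\mathbb{Z}_{p})\;|\;g\bmod p\text{ upper triangular}\\}$,
the maximal reductive quotient of the special fiber is the diagonal torus, so
$\mathcal{I}(\mathbb{Z}_{p})_{1}=\\{g\in\mathcal{I}(\mathbb{Z}_{p})\;|\;g\bmod p\text{ unipotent upper triangular}\\}$
and
$\mathcal{I}(\mathbb{Z}_{p})/\mathcal{I}(\mathbb{Z}_{p})_{1}\cong(\mathbb{F}_{p}^{\times})^{2}$.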
###### Remark 2.20.
[HR17, after Cor. 6.2] We have a commutative diagram
$\begin{array}{ccc}{}^{K}\widetilde{W}&\longrightarrow&G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})\\\ \downarrow&&\downarrow\\\ W_{K}\backslash\widetilde{W}/W_{K}&\overset{\sim}{\longrightarrow}&\breve{K}\backslash G(\breve{\mathbb{Q}}_{p})/\breve{K},\end{array}$
the bottom horizontal map being the bijection of Proposition 2.5.
Consider the map
$v_{K}\colon\mathscr{S}_{K}\otimes\kappa\to
G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1}),$
which is the composition of the central leaves map
$\Upsilon_{K}\colon\mathscr{S}_{K}\otimes\kappa\to
G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$ (see [Hes20a]) with the
projection $G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}\to
G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$.
The Kottwitz-Rapoport map
$\lambda_{K}\colon{\mathscr{S}_{K}\otimes\kappa}\to\breve{K}\backslash
G(\breve{\mathbb{Q}}_{p})/\breve{K}$ factors through this map.
The fibers of $v_{K}$ are called _EKOR strata_. By [HR17, Thm. 6.15], they are
locally closed subsets of $\mathscr{S}_{K}\otimes\kappa$.
###### Remarks 2.21.
1. (1)
One can explicitly express the image of an EKOR stratum under a change-of-
parahoric map as a union of EKOR strata on the target [HR17, Prop. 6.11].
2. (2)
The closure of an EKOR stratum is a union of EKOR strata and one can
explicitly describe the associated order relation [HR17, Thm. 6.15].
###### Remark 2.22.
In the hyperspecial case, the EKOR stratification agrees with the Ekedahl-Oort
stratification. In the Iwahori case, it agrees with the Kottwitz-Rapoport
stratification
(${}^{K}\widetilde{W}=\widetilde{W}=W_{K}\backslash\widetilde{W}/W_{K}$ in
that case).
By definition, the EKOR stratification is always a refinement of the Kottwitz-
Rapoport stratification. So one way of approaching the EKOR stratification is
to look at a fixed Kottwitz-Rapoport stratum and try to understand how it is
subdivided into EKOR strata.
To get this started, let us recall some calculations from the proof of [HR17,
Thm. 6.1].
Fixing a Kottwitz-Rapoport stratum means restricting our view to
$\breve{K}w\breve{K}/\breve{K}_{\sigma}$ rather than the whole of
$G(\breve{\mathbb{Q}}_{p})/\breve{K}_{\sigma}$, for some fixed
$w\in\mathrm{KR}(K,\\{\mu\\})$. The EKOR strata in the Kottwitz-Rapoport
stratum associated with $w$ are therefore indexed by
$\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$.
Define $\sigma^{\prime}:=\sigma\circ\operatorname{Ad}(w)$ and consider the
bijection
$\displaystyle\breve{K}/(\breve{K}\cap w^{-1}\breve{K}w)_{\sigma^{\prime}}$
$\displaystyle\xrightarrow{\sim}\breve{K}w\breve{K}/\breve{K}_{\sigma},$
$\displaystyle k$ $\displaystyle\mapsto wk,$ $\displaystyle
k_{2}\sigma(k_{1})$ $\displaystyle\mapsfrom k_{1}wk_{2}.$
Let $J$ be the set of simple affine reflections in $W_{K}$, let $\bar{B}$ be
the image of $\breve{I}$ in $\overline{\breve{K}}$ and
$\bar{T}\subseteq\bar{B}$ the maximal torus. Set $J_{1}:=J\cap w^{-1}Jw$.
###### Proposition 2.23.
(See [Mor93, Lemma 3.19].) The image of $\breve{K}\cap w^{-1}\breve{K}w$ in
$\overline{\breve{K}}$ is $\bar{P}_{J_{1}}$, i.e., the standard parabolic
subgroup of $\overline{\breve{K}}$ associated with $J_{1}$.
###### Remark 2.24.
He and Rapoport invoke Carter’s book [Car93] at this point, which primarily
pertains to the case of (usual) BN-pairs attached to reductive groups. Morris
[Mor93] shows that the relevant results carry over likewise to the case of
generalized (or affine) BN-pairs.
Then we get a map
$\displaystyle\breve{K}w\breve{K}/\breve{K}_{\sigma}\to\breve{K}/(\breve{K}\cap
w^{-1}\breve{K}w)_{\sigma^{\prime}}$
$\displaystyle\to\overline{\breve{K}}/(\bar{P}_{J_{1}})_{\sigma^{\prime}}$
$\displaystyle\to\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}})_{\sigma^{\prime}}\to\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}}\times\bar{U}_{\sigma^{\prime}(J_{1})}),$
which factors through a bijection
$\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})\xrightarrow{\sim}\overline{\breve{K}}/(\bar{L}_{J_{1}})_{\sigma^{\prime}}(\bar{U}_{J_{1}}\times\bar{U}_{\sigma^{\prime}(J_{1})})\cong\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}(\bar{\mathbb{F}}_{p})/{{\scriptstyle\cong}}.$
Here, $\mathcal{Z}_{w}$ is the (connected) algebraic zip datum
$\mathcal{Z}_{w}=(\overline{\mathcal{G}}^{\mathrm{rdt}},\bar{P}_{J_{1}},\bar{P}_{\sigma^{\prime}(J_{1})},\sigma^{\prime})$,
as described in [SYZ19]. In [SYZ19], Shen, Yu and Zhang show that this
observation “globalizes” in a pleasant way, with the drawback that “global”
here still just refers to the Kottwitz-Rapoport stratum. (They also give
another “globalization”; the drawback there being that it only works after
perfection.) To wit, one gets a smooth morphism [SYZ19,
Theorem A]
$\zeta_{w}\colon\overline{\mathscr{S}}_{K}^{w}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}_{\kappa}$
(the source being a Kottwitz-Rapoport stratum).
### 2.2 $\overline{\mathcal{G}}_{K}$-zips in the Siegel case
Here we work with the Siegel Shimura datum, cf. Example 1.2.
#### 2.2.1 Preliminaries
###### Notation 2.25.
Fix $p\neq 2$ (as in [RZ96], the principal reason for this restriction is our
use of the equivalence between alternating and skew-symmetric; see Definition
2.30 (e)), $g\in\mathbb{Z}_{\geq 1}$ and a subset $J\subseteq\mathbb{Z}$ with
$J=-J$ and $J+2g\mathbb{Z}=J$. Associated with $J$
is the partial lattice chain $\left\\{\Lambda^{j}\;|\;j\in J\right\\}$, where
$\Lambda^{j}$ are defined as in equation (1.29). Let $K_{p}$ be the
corresponding parahoric subgroup of $\operatorname{GSp}_{2g}(\mathbb{Q}_{p})$,
i.e., the stabilizer of said lattice chain. It contains the Iwahori subgroup
$I_{p}$ associated with the full lattice chain (1.29). For the maximal torus
$T$ we take the usual diagonal (split) torus.
###### Remark 2.26.
The Weyl group is
$\displaystyle W$ $\displaystyle=\\{\pi\in S_{2g}=\operatorname{Aut}(\\{\pm
1,\pm 2,\dotsc,\pm g\\})\;|\;\pi(-n)=-\pi(n)\text{ for }n=1,2,\dotsc,g\\}$
$\displaystyle\cong S_{g}\ltimes\\{\pm 1\\}^{g}.$
Here the transposition $(n\quad m)$ of
$S_{g}=\operatorname{Aut}(\\{1,2,\dotsc,g\\})$ corresponds to the element
${(n\quad m)(-n\quad{-m})}$ of $\operatorname{Aut}(\\{\pm 1,\pm 2,\dotsc,\pm
g\\})$ and the element of $\\{\pm 1\\}^{g}$ which has a $-1$ in position $i$
and $1$ everywhere else corresponds to $(i\quad{-i})$.
The affine Weyl group is $W_{a}=W\ltimes Y_{0}$ and the Iwahori-Weyl group
$\widetilde{W}=W\ltimes Y$ with
$\displaystyle\mathbb{Z}^{g+1}$ $\displaystyle\cong
Y=\\{(\nu_{1},\dotsc,\nu_{g},\nu_{-g},\dotsc,\nu_{-1})\in\mathbb{Z}^{2g}:\nu_{1}+\nu_{-1}=\dotsb=\nu_{g}+\nu_{-g}\\}$
$\displaystyle\supseteq
Y_{0}=\\{(\nu_{1},\dotsc,\nu_{g},\nu_{-g},\dotsc,\nu_{-1})\in\mathbb{Z}^{2g}:0=\nu_{1}+\nu_{-1}=\dotsb=\nu_{g}+\nu_{-g}\\}\cong\mathbb{Z}^{g}.$
The simple affine roots (whose walls bound the base alcove $\mathfrak{a}$) are
$\displaystyle 1-2e_{-1}+e_{0}=1+2e_{1}-e_{0},$ $\displaystyle
e_{-1}-e_{-2}=e_{2}-e_{1},e_{-2}-e_{-3},\dotsc,e_{-(g-1)}-e_{-g},$
$\displaystyle 2e_{-g}-e_{0}=e_{0}-2e_{g},$
where $e_{1},\dotsc,e_{g},e_{-g},\dotsc,e_{-1}\colon T\to\mathbb{G}_{m}$ are
the obvious cocharacters and $e_{0}=e_{1}+e_{-1}=\dotsb=e_{g}+e_{-g}$.
The reflections corresponding to the simple affine roots are
$((1\quad{-1}),\left(\begin{smallmatrix}-1\\\ 0\\\ \vdots\\\ 0\\\
1\end{smallmatrix}\right)),(-1\quad{-2})(1\quad
2),\dotsc,(-g\quad{-(g-1)})(g\quad{g-1}),(g\quad{-g}).$
The length zero subgroup $\Omega\subseteq\widetilde{W}$ is generated by
$((w_{0},\epsilon),y)\in(S_{g}\ltimes\\{\pm 1\\}^{g})\ltimes Y$, where
$w_{0}\in S_{g}$ is the longest element, $\epsilon=(-1,-1,\dotsc,-1)$ and
$y=(0^{g},1^{g})$.
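As a sanity check for $g=1$ (where $\operatorname{GSp}_{2}=\operatorname{GL}_{2}$):
$W=\\{\operatorname{id},(1\quad{-1})\\}\cong\\{\pm 1\\}$, the defining
condition on $Y$ is vacuous, so $Y\cong\mathbb{Z}^{2}$ and
$Y_{0}=\\{(\nu,-\nu)\\}\cong\mathbb{Z}$; hence $W_{a}=W\ltimes Y_{0}$ is the
infinite dihedral group and $\Omega\cong\widetilde{W}/W_{a}\cong\mathbb{Z}$.
In general, $|W|=2^{g}\cdot g!$.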
###### Remark 2.27.
One can also choose $\dotsb\subseteq
p\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}^{2g-1}\subseteq\mathbb{Z}_{p}^{2g}\subseteq\dotsb$
instead of $\dotsb\subseteq\mathbb{Z}_{p}^{2g-1}\oplus
p\mathbb{Z}_{p}\subseteq\mathbb{Z}_{p}^{2g}\subseteq\dotsb$ as the standard
lattice chain. Then the simple affine roots would be
$1-2e_{1}+e_{0},\;e_{1}-e_{2}=e_{-2}-e_{-1},\;e_{2}-e_{3},\dotsc,e_{g-1}-e_{g},\;2e_{g}-e_{0}.$
###### Remark 2.28.
$\widetilde{W}=W\ltimes Y=N(\mathbb{Q}_{p})/T(\mathbb{Z}_{p})$ and
$N(\mathbb{Q}_{p})\to W\ltimes Y$ has a section $W\ltimes Y\to
N(\mathbb{Q}_{p})$, which sends $(\pi,\underline{\nu})\in W\ltimes Y$ to
$T_{\underline{\nu}}P_{w}$, where
$T_{\underline{\nu}}=\left(\begin{smallmatrix}p^{\nu_{1}}&&&&\\\
&p^{\nu_{2}}&&&\\\ &&\ddots&&\\\ &&&p^{\nu_{-2}}&\\\
&&&&p^{\nu_{-1}}\end{smallmatrix}\right)$ and $P_{w}$ is the permutation
matrix with $P_{w}(e_{i})=e_{w(i)}$.
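For example, for $g=1$, $\pi=(1\quad{-1})$ and $\underline{\nu}=(0,1)$ the
section yields
$T_{\underline{\nu}}P_{w}=\left(\begin{smallmatrix}1&0\\\ 0&p\end{smallmatrix}\right)\left(\begin{smallmatrix}0&1\\\ 1&0\end{smallmatrix}\right)=\left(\begin{smallmatrix}0&1\\\ p&0\end{smallmatrix}\right),$
a representative of the length zero generator of $\Omega$ for
$\operatorname{GSp}_{2}=\operatorname{GL}_{2}$.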
###### Remark 2.29.
Using the results of [KR00] we also easily can compute
$\mathrm{Adm}(\\{\mu\\})$. One potential source of confusion at this point is
that, due to our choice of the base alcove (cf. Remark 2.27), in our setup we
need to use $\omega_{i}:=(0^{2g-i},1^{i})$ instead of
$\omega_{i}:=(1^{i},0^{2g-i})$ (notation of [KR00]), cf. [Yu08, 1268]. With
that convention in place, we have that $x\in\widetilde{W}$ is
$\\{\mu\\}$-admissible ($\mu=(1^{g},0^{g})$) if and only if $x\in W_{a}t_{\mu}$
and
$(0,\dotsc,0)\leq x(\omega_{i})-\omega_{i}\leq(1,\dotsc,1)\quad\text{for all
}0\leq i<2g$
(component-wise comparison; the condition $x\in W_{a}t_{\mu}$ fixes the
$\Omega$-component and rules out, e.g., $x=\operatorname{id}$).
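For instance, for a pure translation $x=t_{\nu}$ we have
$x(\omega_{i})-\omega_{i}=\nu$ for every $i$, so the displayed criterion just
reads $0\leq\nu\leq(1,\dotsc,1)$ component-wise; e.g., for $g=1$ and
$\mu=(1,0)$ the translations $t_{(1,0)}$ and $t_{(0,1)}$, i.e., the
$t_{q\bar{\mu}}$ for $q\in W$, satisfy it, as they must.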
#### 2.2.2 Lattice chains, zips, admissibility
###### Definition 2.30.
Let $S$ be a $\mathbb{Z}_{p}$-scheme.
A _Siegel lattice chain in the weak sense on $S$ of type $J$_ is a tuple
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$,
where
1. (a)
for all $j\in J$, $\mathcal{V}^{j}$ is a vector bundle on $S$ of rank $2g$,
2. (b)
$\mathcal{L}$ is a line bundle on $S$,
3. (c)
for all $i,j\in J$ with $j>i$,
$\alpha_{j,i}\colon\mathcal{V}^{j}\to\mathcal{V}^{i}$ is a vector bundle
homomorphism, such that the $\bigl{(}\alpha_{j,i}\bigr{)}$ satisfy the obvious
cocycle condition (and we also define $\alpha_{i,i}:=\operatorname{id}$),
4. (d)
for all $j\in J$,
$\theta_{j}\colon\mathcal{V}^{j}\xrightarrow{\sim}\mathcal{V}^{j-2g}$ is a
vector bundle isomorphism such that the $\bigl{(}\theta_{j}\bigr{)}$ are
compatible with the $\bigl{(}\alpha_{j,i}\bigr{)}$ in that
$\theta_{i}\circ\alpha_{j,i}=\alpha_{j-2g,i-2g}\circ\theta_{j}$ and
$\alpha_{j,j-2g}=p\cdot\theta_{j}$,
5. (e)
for all $j\in J$ a vector bundle isomorphism
$\psi_{j}\colon\mathcal{V}^{j}\xrightarrow{\sim}(\mathcal{V}^{-j})^{*}\otimes\mathcal{L}$
compatible with $\bigl{(}\theta_{j}\bigr{)}$ and
$\bigl{(}\alpha_{j,i}\bigr{)}$, such that $-\psi_{j}(x,y)=\psi_{-j}(y,x)$ for
all $(x,y)\in\mathcal{V}^{j}\times\mathcal{V}^{-j}$. (By
“$(x,y)\in\mathcal{V}^{j}\times\mathcal{V}^{-j}$” we of course mean that there
is an open subset $U\subseteq S$ such that
$(x,y)\in(\mathcal{V}^{j}\times\mathcal{V}^{-j})(U)$.)
We also have a _standard_ Siegel lattice chain in the weak sense on
$\operatorname{Spec}\mathbb{Z}_{p}$ (and hence by base change on every
$\mathbb{Z}_{p}$-scheme $S$) of type $J$, namely the one given by the lattice
chain $\left\\{\Lambda^{j}\;|\;j\in J\right\\}$. We can think of the standard
Siegel lattice chain as either having varying $\mathcal{V}^{j}$ with the
$\alpha_{j,i}$ being the obvious inclusion maps (e.g., if $\\{0,1\\}\subseteq
J$, then $\mathcal{V}^{1}={\mathbb{Z}_{p}^{2g-1}\oplus
p\mathbb{Z}_{p}}\xrightarrow{\alpha_{1,0}=\mathrm{inclusion}}\mathbb{Z}_{p}^{2g}=\mathcal{V}^{0}$)
or as having constant $\mathcal{V}^{j}=\mathbb{Z}_{p}^{2g}$ with the
$\alpha_{j,i}$ being diagonal matrices with all entries either $p$ or $1$
(e.g.,
$\mathcal{V}^{1}=\mathbb{Z}_{p}^{2g}\xrightarrow{\alpha_{1,0}=\operatorname{diag}(1,1,\dotsc,1,p)}\mathbb{Z}_{p}^{2g}=\mathcal{V}^{0}$).
Usually the latter point of view is more convenient.
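As a sanity check on the second (constant) point of view, the diagonal transition matrices and their cocycle condition can be tested directly. A small Python sketch (ours; within one period $0\leq i\leq j\leq 2g$ the diagonal of $\alpha_{j,i}$ is $(1^{2g-j},p^{j-i},1^{i})$, which for $(j,i)=(1,0)$ gives $\operatorname{diag}(1,\dotsc,1,p)$ as above):

```python
def alpha(j, i, g, p):
    """Diagonal of the transition map V^j -> V^i for 2g >= j >= i >= 0."""
    return [1] * (2 * g - j) + [p] * (j - i) + [1] * i

# cocycle condition: alpha_{k,i} = alpha_{j,i} composed with alpha_{k,j} for k >= j >= i
g, p = 2, 5
for i in range(2 * g + 1):
    for j in range(i, 2 * g + 1):
        for k in range(j, 2 * g + 1):
            lhs = alpha(k, i, g, p)
            rhs = [a * b for a, b in zip(alpha(j, i, g, p), alpha(k, j, g, p))]
            assert lhs == rhs
```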
A _Siegel lattice chain on $S$ of type $J$_ (or _Siegel lattice chain in the
strong sense on $S$ of type $J$_) then is a Siegel lattice chain in the weak
sense on $S$ of type $J$ that Zariski-locally on $S$ is isomorphic to the
standard chain.
###### Remarks 2.31.
1. (1)
Let
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$
be a Siegel lattice chain in the weak sense on $S$ of type $J$. Then
$\tilde{\psi}_{j}:=(\tilde{\alpha}_{j,-j}^{*}\otimes\operatorname{id}_{\mathcal{L}})\circ\psi_{j}\colon\mathcal{V}^{j}\otimes\mathcal{V}^{j}\to\mathcal{L}$
is alternating.
Here $\tilde{\alpha}_{j,-j}$ is defined as follows: Let $n\in\mathbb{Z}$ be
maximal with $j-2gn\geq-j$. Then
$\tilde{\alpha}_{j,-j}:=\alpha_{j-2gn,-j}\circ\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j}$.
2. (2)
Note that this means that $\tilde{\psi}_{j}$ is (twisted) symplectic if $-j\in
j+2g\mathbb{Z}$, i.e., if $j\in g\mathbb{Z}$.
###### Proof:
(of (1)) Let $x,y\in\mathcal{V}^{j}$. Then
$\displaystyle\tilde{\psi}_{j}(x,y)$
$\displaystyle=\psi_{j}(x,\tilde{\alpha}_{j,-j}(y))$
$\displaystyle=\psi_{j}(x,(\alpha_{j-2gn,-j}\circ\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j})(y))$
$\displaystyle=\psi_{2gn-j}(\alpha_{j,2gn-j}(x),(\theta_{j-2g(n-1)}\circ\dotsb\circ\theta_{j})(y))$
$\displaystyle=\psi_{2g(n-1)-j}((\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x),(\theta_{j-2g(n-2)}\circ\dotsb\circ\theta_{j})(y))$
$\displaystyle=\dotsb$
$\displaystyle=\psi_{-j}((\theta_{-j+2g}\circ\dotsb\circ\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x),y)$
$\displaystyle=-\psi_{j}(y,(\theta_{-j+2g}\circ\dotsb\circ\theta_{2gn-j}\circ\alpha_{j,2gn-j})(x))$
$\displaystyle=-\tilde{\psi}_{j}(y,x).$
□
###### Reminder 2.32.
$\mathcal{G}_{K}$ is the automorphism group of the standard Siegel lattice
chain.
The following definition is a generalization of [VW13, Definition 3.1] in the
Siegel case.
###### Definition 2.33.
Let $S$ be an $\mathbb{F}_{p}$-scheme.
A $\overline{\mathcal{G}}_{K}$-zip over $S$ is a tuple
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet},\varphi_{\mathcal{L}})$,
where
1. (a)
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$
is a Siegel lattice chain on $S$ of type $J$,
2. (b)
for all $j\in J$, $\mathcal{C}^{j}\subseteq\mathcal{V}^{j}$ are locally direct
summands of rank $g$ compatible with
$\alpha_{\bullet\bullet},\theta_{\bullet}$, such that
$\mathcal{C}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{V}^{-j})^{*}\otimes\mathcal{L}\to(\mathcal{C}^{-j})^{*}\otimes\mathcal{L}$
vanishes. (cf. Remark 1.30 for the origins of this condition.)
3. (c)
$\mathcal{D}^{\bullet}\subseteq\mathcal{V}^{\bullet}$ satisfies the same
conditions as $\mathcal{C}^{\bullet}\subseteq\mathcal{V}^{\bullet}$,
4. (d)
$\varphi_{0}^{j}\colon(\mathcal{C}^{j})^{(p)}\xrightarrow{\sim}\mathcal{V}^{j}/\mathcal{D}^{j}$
and
$\varphi_{1}^{j}\colon(\mathcal{V}^{j}/\mathcal{C}^{j})^{(p)}\xrightarrow{\sim}\mathcal{D}^{j}$
are isomorphisms of vector bundles compatible with $\alpha_{\bullet\bullet}$
and $\theta_{\bullet}$ and
$\varphi_{\mathcal{L}}\colon\mathcal{L}^{(p)}\xrightarrow{\sim}\mathcal{L}$ is
an isomorphism of line bundles, such that
$\begin{array}{ccc}(\mathcal{C}^{j})^{(p)}&\xrightarrow{\psi_{j}^{(p)}}&(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*,(p)}\otimes\mathcal{L}^{(p)}\\\ {\scriptstyle\varphi_{0}^{j}}\downarrow&&\uparrow{\scriptstyle(\varphi_{1}^{-j})^{*}\otimes\varphi_{\mathcal{L}}^{-1}}\\\ \mathcal{V}^{j}/\mathcal{D}^{j}&\xrightarrow{\psi_{j}}&(\mathcal{D}^{-j})^{*}\otimes\mathcal{L}\end{array}$
commutes, i.e.,
${\psi_{j}(\varphi_{0}^{j}(\\_),\varphi_{1}^{-j}(\\_))=\varphi_{\mathcal{L}}\circ\psi_{j}^{(p)}(\\_,\\_)\colon}{(\mathcal{C}^{j})^{(p)}\times(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{(p)}\to\mathcal{L}^{(p)}\to\mathcal{L}}.$
Since $\varphi_{\mathcal{L}}$ evidently is uniquely determined by the other
data, we sometimes leave it out.
We obtain a fibered category
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to\mathrm{Sch}_{\mathbb{F}_{p}}$.
###### Remark 2.34.
$\psi_{j}$ gives rise to isomorphisms
$\displaystyle\mathcal{C}^{j}$
$\displaystyle\xrightarrow{\sim}(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*}\otimes\mathcal{L},$
$\displaystyle\mathcal{V}^{j}/\mathcal{C}^{j}$
$\displaystyle\xrightarrow{\sim}(\mathcal{C}^{-j})^{*}\otimes\mathcal{L},$
$\displaystyle\mathcal{D}^{j}$
$\displaystyle\xrightarrow{\sim}(\mathcal{V}^{-j}/\mathcal{D}^{-j})^{*}\otimes\mathcal{L},$
$\displaystyle\mathcal{V}^{j}/\mathcal{D}^{j}$
$\displaystyle\xrightarrow{\sim}(\mathcal{D}^{-j})^{*}\otimes\mathcal{L}.$
This way
$\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ and
$\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$
become Siegel lattice chains in the weak(!) sense of type $J$. The Cartier
isomorphism then is an isomorphism in the category of Siegel lattice chains in
the weak sense of type $J$. Over an algebraically closed field, we call the
isomorphism type of the Siegel lattice chain in the weak sense
$\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ the
_Kottwitz-Rapoport type of the $\overline{\mathcal{G}}_{K}$-zip_.
We also define a linearly rigidified version of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ as follows.
###### Definition 2.35.
We define the fibered category
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\to\mathrm{Sch}_{\mathbb{F}_{p}}$
just like $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ but with the extra
condition that
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$
be the standard Siegel lattice chain (rather than just locally isomorphic to
it).
###### Lemma 2.36.
We always have a closed embedding of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ into a product of
(classical) $\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$’s, and
therefore $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ is a scheme.
###### Proof:
Set $J^{\prime}:=J\cap\\{0,\dotsc,2g-1\\}$. Let
$\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}=E_{(1^{g},0^{g})}\backslash(\operatorname{GL}_{2g}\times\operatorname{GL}_{2g})$
be the $\mathbb{F}_{p}$-scheme of trivialized $\operatorname{GL}_{2g}$-zips
(so that
$[\operatorname{GL}_{2g}\backslash\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}]=\operatorname{GL}_{2g}\text{-}\mathrm{Zip}$)
with respect to the cocharacter $(1^{g},0^{g})$, and $\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ the product of
$\\#J^{\prime}$ copies of this scheme. On $J^{\prime}$ we define $-j:=2g-j$
for $1\leq j\leq 2g-1$ and $-0:=0$.
Then we get a monomorphism
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\hookrightarrow\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ (2.37)
by sending
$(\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet})$
to
$\left(\mathcal{C}^{j},\mathcal{D}^{j},\varphi_{0}^{j},\varphi_{1}^{j}\right)_{j\in
J^{\prime}}$.
The extra conditions for an element of $\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ to be in
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ are as in Definition
2.33:
1. (1)
$\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet}$
are compatible with the transition maps (or, to put it differently,
${(\mathcal{C}^{j}\oplus\mathcal{V}^{j}/\mathcal{C}^{j})^{(p)}}\xrightarrow[\cong]{\varphi_{0}^{j}\oplus\varphi_{1}^{j}}{\mathcal{V}^{j}/\mathcal{D}^{j}\oplus\mathcal{D}^{j}}$
is compatible with the transition maps),
2. (2)
$\mathcal{C}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{V}^{-j})^{*}\to(\mathcal{C}^{-j})^{*}$
vanishes.
3. (3)
$\mathcal{D}^{j}\hookrightarrow\mathcal{V}^{j}\overset{\psi_{j}}{\cong}(\mathcal{V}^{-j})^{*}\to(\mathcal{D}^{-j})^{*}$
vanishes.
4. (4)
There is a (necessarily unique) isomorphism
$\varphi_{\mathcal{L}}\colon\mathcal{L}^{(p)}\xrightarrow{\sim}\mathcal{L}=\mathcal{O}_{S}$
of line bundles, such that
$\begin{array}{ccc}(\mathcal{C}^{j})^{(p)}&\xrightarrow{\psi_{j}^{(p)}}&(\mathcal{V}^{-j}/\mathcal{C}^{-j})^{*,(p)}\\\ {\scriptstyle\varphi_{0}^{j}}\downarrow&&\uparrow{\scriptstyle(\varphi_{1}^{-j})^{*}\otimes\varphi_{\mathcal{L}}^{-1}}\\\ \mathcal{V}^{j}/\mathcal{D}^{j}&\xrightarrow{\psi_{j}}&(\mathcal{D}^{-j})^{*}\end{array}$
commutes.
We claim that the conditions are closed on $\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ (hence the
monomorphism is a closed immersion).
To see this, we recall the construction of the scheme
$\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ as executed in [MW04,
(3.10), (3.11), (4.3)].
Recall our notational convention regarding the parabolic subgroup associated
with a cocharacter $\chi$ from Definition 1.33. As in [MW04], we denote by
$\mathrm{Par}_{\chi}$ the scheme of parabolic subgroups of type $\chi$.
There is a group scheme $H$ defined by the cartesian diagram
$\begin{array}{ccc}H&\longrightarrow&\mathcal{P}_{((-1)^{g},0^{g})}/\mathcal{U}_{((-1)^{g},0^{g})}\\\ \downarrow&\square&\downarrow\\\ \mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}&\xrightarrow{(\;)^{(p)}\circ\operatorname{pr}_{1}}&\mathrm{Par}_{((-1)^{g},0^{g})}\end{array}$
where $\mathcal{P}_{((-1)^{g},0^{g})}\to\mathrm{Par}_{((-1)^{g},0^{g})}$ is
the universal parabolic group scheme and $\mathcal{U}_{((-1)^{g},0^{g})}$ its
unipotent radical, such that
$\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$ is an $H$-Zariski torsor
over $\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$,
where
$\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}\to\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$
is given by $(C,D,\varphi_{0},\varphi_{1})\mapsto(C,D)$.
Clearly, compatibility of $\mathcal{C}^{\bullet},\mathcal{D}^{\bullet}$ with
the transition maps is a closed condition on $\prod_{j\in
J^{\prime}}\mathrm{Par}_{((-1)^{g},0^{g})}\times\mathrm{Par}_{(1^{g},0^{g})}$
and then also on $\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$. Similarly for
conditions (2) and (3).
Locally, we can choose complements (not necessarily compatible with the
transition maps) and then $\varphi_{\bullet}^{j}$ yield sections $g^{j}$ of
$\operatorname{GL}_{2g}$ as in [MW04, definition of $g\in G(S)$ in the proof
of (4.3)]. The $g^{j}$ are well-defined up to
$\mathcal{U}_{((-1)^{g},0^{g})}^{(p)}\times\mathcal{U}_{(1^{g},0^{g})}$, and
we want them to be compatible with the transition maps coming from the Siegel
lattice chains in the weak sense
$\mathcal{C}^{j}\oplus\mathcal{V}^{j}/\mathcal{C}^{j}$ and
$\mathcal{V}^{j}/\mathcal{D}^{j}\oplus\mathcal{D}^{j}$, respectively. With our
complements in place, these transition maps correspond to maps
$\mathcal{V}^{j}\to\mathcal{V}^{j-n}$. The question of whether $g^{j}$ is
compatible with these maps is independent of the choice of complements
(basically because the transition maps $\mathcal{V}^{j}\to\mathcal{V}^{j-n}$
depend on the choice of complements similar to how $g^{j}$ depends on that
choice).
So in effect we can view the conditions on
$\varphi_{0}^{\bullet},\varphi_{1}^{\bullet}$ of (1) as closed conditions on
$\prod_{j\in
J^{\prime}}\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}$, where
$\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}\to\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim}$
(an fpqc quotient map) additionally comes with complementary spaces of $C,D$
($\operatorname{GL}_{2g}\text{-}\mathrm{Zip}^{\sim\sim}=\tilde{X}_{\tau}$ in
the notation of [MW04, proof of (4.3)]).
We also can reformulate condition (4) in those terms. □
###### Corollary 2.38.
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ is the algebraic quotient
stack
$[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}]$.
Here by definition an element $\phi\in\overline{\mathcal{G}}_{K}$ acts on
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ by replacing
$(\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet},\varphi_{\mathcal{L}})$
by
$(\phi\mathcal{C}^{\bullet},\phi\mathcal{D}^{\bullet},\phi\varphi_{0}^{\bullet}\phi^{-(p)},\phi\varphi_{1}^{\bullet}\phi^{-(p)},\varphi_{\mathcal{L}})$.
###### Definition 2.39.
We let an element
$(X,Y)\in\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ act on
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ by replacing
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},\mathcal{C}^{\bullet},\mathcal{D}^{\bullet},\varphi_{0}^{\bullet},\varphi_{1}^{\bullet})$
by
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet},X\mathcal{C}^{\bullet},Y\mathcal{D}^{\bullet},Y\varphi_{0}^{\bullet}X^{-(p)},Y\varphi_{1}^{\bullet}X^{-(p)}).$
###### Notation 2.40.
Let $\mathscr{S}_{K}\to\operatorname{Spec}\mathbb{Z}_{p}$ be the integral
model of the Siegel Shimura variety of level $K$ (where $K=K_{p}K^{p}$ with
$K^{p}$ sufficiently small), and recall $\tilde{\mathscr{S}}_{K}$ from Section
1.6.1. Moreover, define
$\overline{\mathscr{S}}_{K}:=\mathscr{S}_{K}\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p}$
and
$\tilde{\overline{\mathscr{S}}}_{K}:=\tilde{\mathscr{S}}_{K}\otimes_{\mathbb{Z}_{p}}\mathbb{F}_{p}$.
So $\tilde{\overline{\mathscr{S}}}_{K}\to\overline{\mathscr{S}}_{K}$ is a
$\overline{\mathcal{G}}_{K}$-torsor.
###### Remark 2.41.
We have morphisms
$\tilde{\overline{\mathscr{S}}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$
(take first de Rham cohomology with Frobenius and Verschiebung) and
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}\to\overline{M}^{\mathrm{loc}}_{K}$
(take the $\mathcal{C}^{\bullet}$-filtration) and therefore
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}].$
###### Remark 2.42.
In particular, $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$ has a
Kottwitz-Rapoport stratification, which agrees with the notion of
Kottwitz-Rapoport type as defined in Remark 2.34.
For $w\in\mathrm{KR}(K,\\{\mu\\})$ denote the associated Kottwitz-Rapoport
stratum by $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}$, i.e., we
interpret $w$ as a $\bar{\mathbb{F}}_{p}$-valued point of
$[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}]$ and
form $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}$ as a fiber product.
###### Construction 2.43.
Fix $w\in\mathrm{Adm}(\\{\mu\\})^{K}\subseteq\widetilde{W}$ (so that
$W_{K}wW_{K}\in\mathrm{KR}(K,\\{\mu\\})$). We define a standard
$\overline{\mathcal{G}}_{K}$-zip of KR type $W_{K}wW_{K}$.
Using Remark 2.28, we interpret $w$ as an element of
$N(\mathbb{Q}_{p})\subseteq G(\mathbb{Q}_{p})$. The admissibility condition
implies that we can interpret it as an endomorphism $w^{\bullet}$ of the
standard lattice chain $\mathcal{V}^{\bullet}$ over $\mathbb{Z}_{p}$. (Here
we take up the second point of view described in Definition 2.30 regarding
$\mathcal{V}^{\bullet}$.) Define
$\underline{\nu}^{(0)}:=\underline{\nu}$,
$\underline{\nu}^{(1)}:=\underline{\nu}+\left(\begin{smallmatrix}0\\\
\vdots\\\ 0\\\ 0\\\ -1\end{smallmatrix}\right)+w\left(\begin{smallmatrix}0\\\
\vdots\\\ 0\\\ 0\\\ 1\end{smallmatrix}\right)$,
$\underline{\nu}^{(2)}:=\underline{\nu}+\left(\begin{smallmatrix}0\\\
\vdots\\\ 0\\\ -1\\\ -1\end{smallmatrix}\right)+w\left(\begin{smallmatrix}0\\\
\vdots\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)$, and so on. Then
$w^{j}=T_{\underline{\nu}^{(j)}}P_{w}$ for $0\leq j<2g$. From the formulation
of the admissibility condition as in Remark 2.29, we see that
$w\in\mathrm{Adm}(\\{\mu\\})^{K}$ is equivalent to the condition that
$\underline{\nu}^{(j)}$ be a permutation of $(1^{g},0^{g})$ for all relevant
$j$.
We denote the standard Siegel lattice chain over $\mathbb{Z}_{p}$ by
$\mathscr{V}^{\bullet}$ and its base change to $\mathbb{F}_{p}$ by
$\mathcal{V}^{\bullet}$. Define
$\mathscr{C}_{w}^{\bullet}:=pw^{\bullet,-1}\mathscr{V}^{\bullet}$ and
$\mathscr{D}_{w}^{\bullet}:=\sigma(w^{\bullet})\mathscr{V}^{\bullet}$. Then
$\mathcal{C}_{w}^{\bullet}:=\mathscr{C}_{w}^{\bullet}\otimes\mathbb{F}_{p}=\ker(w^{\bullet}\colon\mathcal{V}^{\bullet}\to\mathcal{V}^{\bullet})$,
so
$(\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet})^{(p)}\xrightarrow{\sim}\mathcal{D}_{w}^{\bullet}:=\mathscr{D}_{w}^{\bullet}\otimes\mathbb{F}_{p}$
via $\sigma(w^{\bullet})$ and
$(\mathcal{C}_{w}^{\bullet})^{(p)}\xrightarrow{\sim}\mathcal{V}^{\bullet}/\mathcal{D}_{w}^{\bullet}$
via $p^{-1}\sigma(w^{\bullet})$.
This defines a standard element $\widetilde{\mathrm{Std}}(w)$ of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(\mathbb{F}_{p})$ and a
standard element $\mathrm{Std}(w)$ of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}_{w}(\mathbb{F}_{p})$.
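Concretely, $(T_{\underline{\nu}}P_{w})e_{i}=p^{\nu_{w(i)}}e_{w(i)}$, so the reduction mod $p$ kills exactly the basis vectors $e_{i}$ with $\nu_{w(i)}=1$; in particular $\mathcal{C}_{w}^{\bullet}$ visibly has rank $g$ when $\underline{\nu}^{(j)}$ is a permutation of $(1^{g},0^{g})$. A minimal Python sketch (ours, purely illustrative) of this kernel computation:

```python
def kernel_labels(w, nu_of, g):
    """Basis labels of C_w = ker(T_nu P_w mod p): the i in {±1,...,±g} with nu_{w(i)} = 1."""
    idx = list(range(1, g + 1)) + list(range(-g, 0))
    return [i for i in idx if nu_of(w(i)) == 1]

# example for g = 2: w = id and nu = mu = (1, 1, 0, 0) in the order (1, 2, -2, -1)
nu = {1: 1, 2: 1, -2: 0, -1: 0}
print(kernel_labels(lambda i: i, nu.get, 2))  # [1, 2]: a Lagrangian of rank g
```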
###### Definition and Remark 2.44.
(See also [SYZ19, Lemma 3.3.2].)
$\mathcal{G}_{w}:=\operatorname{Aut}(\mathscr{C}_{w}^{\bullet}\subseteq\mathscr{V}^{\bullet})$
is a Bruhat-Tits group scheme with generic fiber $G_{\mathbb{Q}_{p}}$ and
$\breve{\mathbb{Z}}_{p}$-points $\breve{K}\cap w^{-1}\breve{K}w$; and
similarly for
$\mathcal{G}_{\sigma(w)^{-1}}:=\operatorname{Aut}(\mathscr{D}_{w}^{\bullet}\subseteq\mathscr{V}^{\bullet})$
with $\breve{K}\cap\sigma(w)\breve{K}\sigma(w)^{-1}$.
###### Definition 2.45.
We keep $w\in\mathrm{Adm}(\\{\mu\\})^{K}\subseteq\widetilde{W}$ fixed and
define
$\widetilde{E}_{w}\subseteq\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$
to be the stabilizer of $\widetilde{\mathrm{Std}}(w)$.
So $\widetilde{E}_{w}$ consists of those
$(X^{\bullet},Y^{\bullet})\in\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$
such that $X^{\bullet}\mathcal{C}_{w}^{\bullet}=\mathcal{C}_{w}^{\bullet}$,
$Y^{\bullet}\mathcal{D}_{w}^{\bullet}=\mathcal{D}_{w}^{\bullet}$, and
$Y^{\bullet}\circ\varphi_{j}^{\bullet}\circ
X^{\bullet,-(p)}=\varphi_{j}^{\bullet}$ for $j=0,1$.
In the notation of [SYZ19, Lemma 3.3.2] we have
$\widetilde{E}_{w}=\overline{\mathcal{G}}_{w}\times_{\overline{\mathcal{G}}_{w}^{L,(p)}}\overline{\mathcal{G}}_{\sigma(w)^{-1}}.$
(2.46)
Here $\overline{\mathcal{G}}_{w}^{L}$ is the image of
$\overline{\mathcal{G}}_{w}$ in
$\mathrm{DiagAut}(\mathcal{C}_{w}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet})$
(the automorphisms of
$\mathcal{C}_{w}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet}$
respecting both $\mathcal{C}_{w}^{\bullet}$ and
$\mathcal{V}^{\bullet}/\mathcal{C}_{w}^{\bullet}$).
The orbit of $\widetilde{\mathrm{Std}}(w)$ in
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$ is the fppf quotient
$(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/\widetilde{E}_{w}$,
cf. [DG80, II, § 5, no. 3].
###### Lemma 2.47.
We have commutative diagrams
$\begin{array}{ccc}\overline{\mathcal{G}}_{w}&\longrightarrow&\overline{\mathcal{G}}\\\ \downarrow&&\downarrow\\\ \bar{P}_{J_{1}}&\longrightarrow&\overline{\mathcal{G}}^{\mathrm{rdt}}\end{array}\qquad\text{and}\qquad\begin{array}{ccc}\overline{\mathcal{G}}_{\sigma(w)^{-1}}&\longrightarrow&\overline{\mathcal{G}}\\\ \downarrow&&\downarrow\\\ \bar{P}_{\sigma^{\prime}(J_{1})}&\longrightarrow&\overline{\mathcal{G}}^{\mathrm{rdt}}\end{array}\qquad\text{and}\qquad\begin{array}{ccc}\overline{\mathcal{G}}_{w}^{L}&\longrightarrow&\overline{\mathcal{G}}\\\ \downarrow&&\downarrow\\\ \bar{L}_{J_{1}}&\longrightarrow&\overline{\mathcal{G}}^{\mathrm{rdt}}\end{array}$
i.e., the images of $\overline{\mathcal{G}}_{w}$,
$\overline{\mathcal{G}}_{\sigma(w)^{-1}}$ and $\overline{\mathcal{G}}_{w}^{L}$
in $\overline{\mathcal{G}}^{\mathrm{rdt}}$ land in $\bar{P}_{J_{1}}$,
$\bar{P}_{\sigma^{\prime}(J_{1})}$ and $\bar{L}_{J_{1}}$, respectively.
###### Proof:
This follows from Proposition 2.23. □
###### Lemma 2.48.
The image of $\widetilde{E}_{w}$ under
$\overline{\mathcal{G}}\times\overline{\mathcal{G}}\to\overline{\mathcal{G}}^{\mathrm{rdt}}\times\overline{\mathcal{G}}^{\mathrm{rdt}}$
is $E_{\mathcal{Z}_{w}}$.
###### Proof:
This follows from Lemma 2.47. □
###### Lemma 2.49.
Assume $0\in J$. The
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbit of
$\widetilde{\mathrm{Std}}(w)$ for $w\in\mathrm{Adm}(\\{\mu\\})^{K}$ depends
only on $W_{K}wW_{K}$.
###### Proof:
Let $x,y\in W_{K}\subseteq W$. As above we get endomorphisms
$x^{\bullet},y^{\bullet}$ of $\mathcal{V}^{\bullet}$, which in this case are
in fact automorphisms. Now
$\widetilde{\mathrm{Std}}(xwy)=((y^{\bullet})^{-1},\sigma(x^{\bullet}))\cdot\widetilde{\mathrm{Std}}(w)$.
□
###### Definition 2.50.
Define $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ to be the
union of the
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbits of the
standard zips $\widetilde{\mathrm{Std}}(w)$ for
$w\in\mathrm{Adm}(\\{\mu\\})^{K}$. Here an orbit by definition is the image of
the orbit map endowed with the reduced subscheme structure, and—as we prove
just below—the union of orbits just referred to is a closed subset, which we
again endow with the reduced subscheme structure.
Define
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}:=[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]\subseteq[\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}]=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$.
###### Lemma 2.51.
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ is a closed subset
of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$.
###### Proof:
This being a purely topological question, we may freely pass to perfections,
which will be convenient since Dieudonné theory is simpler over perfect
rings. By “perfection” we mean the inverse perfection in the terminology of
[BG18, Section 5].
Consider therefore
$(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$ as a
sheaf on $\mathrm{Perf}_{\mathbb{F}_{p}}$, the fpqc site of affine perfect
$\mathbb{F}_{p}$-schemes. Again denoting the standard Siegel lattice chain
over $\mathbb{Z}_{p}$ by $\mathscr{V}^{\bullet}$ and its base change to
$\mathbb{F}_{p}$ by $\mathcal{V}^{\bullet}$, we can describe the elements of
$(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}(R)=\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(R)$,
where $R$ is a perfect $\mathbb{F}_{p}$-algebra as being given by
> homomorphisms
> $\mathcal{V}_{R}^{\bullet,(p)}\xrightarrow{F^{\bullet}}\mathcal{V}_{R}^{\bullet}\xrightarrow{V^{\bullet}}\mathcal{V}_{R}^{\bullet,(p)}$
> such that
> $\ker(F^{\bullet})=:\mathcal{C}^{\bullet,(p)}=\operatorname{im}(V^{\bullet})$
> and
> $\operatorname{im}(F^{\bullet})=:\mathcal{D}^{\bullet}=\ker(V^{\bullet})$
> and $\psi_{j}(F^{j}\\_,\\_)=u\sigma(\psi_{j}(\\_,V^{-j}\\_))$ for some $u\in
> R^{\times}$ and $\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ have the
> same rank (namely $g$).
To see that $\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ are direct
summands of $\mathcal{V}_{R}^{\bullet,(p)},\mathcal{V}_{R}^{\bullet}$ (which
makes the last part of the characterization given above meaningful), one
argues as in [Lau14, Lemma 2.4] (since both are finitely presented, it is
enough to show flatness and to that end, one looks at the fiber dimensions).
Define a presheaf $\mathcal{X}$ on $\mathrm{Sch}_{\mathbb{Z}_{p}}$ in the same
way but for the following changes: $\mathcal{V}^{\bullet}$ is replaced by
$\mathscr{V}^{\bullet}$, and we impose the condition that both compositions
$F^{\bullet}\circ V^{\bullet}$ and $V^{\bullet}\circ F^{\bullet}$ are
multiplication by $p$, and the $\ker=\operatorname{im}$-conditions are only
required to hold modulo $p$. We also slightly reformulate these
$\ker=\operatorname{im}$-conditions: We impose the condition that the
reductions $\bar{F}^{\bullet},\bar{V}^{\bullet}$ be fiberwise of rank $g$ over
$R/p$. (Note that the argument that
$\mathcal{C}^{\bullet,(p)},\mathcal{D}^{\bullet}$ are direct summands only
works over reduced rings.)
Then $\mathcal{X}$ is a separated $\mathbb{Z}_{p}$-scheme. To see this, we
build it up from scratch as follows. $\operatorname{End}(\mathscr{V}^{j})$
obviously is a $\mathbb{Z}_{p}$-scheme (an affine space), hence so is
$\operatorname{Hom}(\mathscr{V}^{j,(p)},\mathscr{V}^{j})$ since
$\mathscr{V}^{j,(p)}\cong\mathscr{V}^{j}$.
$\operatorname{Hom}(\mathscr{V}^{\bullet,(p)},\mathscr{V}^{\bullet})$ is a
locally closed subscheme of a finite product of such schemes. Homomorphisms
$\mathscr{V}^{\bullet,(p)}\xrightarrow{F^{\bullet}}\mathscr{V}^{\bullet}\xrightarrow{V^{\bullet}}\mathscr{V}^{\bullet,(p)}$
such that both compositions are multiplication by $p$ form a closed subscheme
$\mathcal{X}^{\prime}$ of
$\operatorname{Hom}(\mathscr{V}^{\bullet,(p)},\mathscr{V}^{\bullet})\times\operatorname{Hom}(\mathscr{V}^{\bullet},\mathscr{V}^{\bullet,(p)})$.
In the special fiber $\mathcal{X}^{\prime}_{\mathbb{F}_{p}}$ we now consider
the $\ker=\operatorname{im}$-conditions and show that they define an open
subscheme $\bar{\mathcal{X}}^{\prime\prime}$. Then
$\mathcal{X}=\mathcal{X}^{\prime}\times_{\mathcal{X}^{\prime}_{\mathbb{F}_{p}}}\bar{\mathcal{X}}^{\prime\prime}$.
Indeed, the extra conditions are that all $F^{\bullet},V^{\bullet}$ have some
non-vanishing $g$-minor—evidently open conditions.
The upshot is that we defined a $\mathbb{Z}_{p}$-scheme $\mathcal{X}$ such
that
$(\mathcal{X}\times_{\mathbb{Z}_{p}}\mathbb{F}_{p})^{\mathrm{perf}}=(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$
and such that we have an obvious morphism
$\tilde{\mathscr{S}}_{K}\to\mathcal{X}$, which takes a principally polarized
isogeny chain of abelian schemes to the evaluation of the Dieudonné crystal
on the trivial thickening. (This makes use of the crystalline-de Rham
comparison to turn a trivialization of the de Rham cohomology into a
trivialization of the crystalline cohomology.)
Observe that $\mathcal{X}$ also has a natural
$\mathcal{G}_{K}\times\mathcal{G}_{K}$-action: We interpret $\mathcal{G}_{K}$
as $\operatorname{Aut}(\mathscr{V}^{\bullet})$ and the action of
$(X^{\bullet},Y^{\bullet})$ transforms $(F^{\bullet},V^{\bullet})$ into
$(Y^{\bullet}\circ F^{\bullet}\circ X^{\bullet,-(p)},X^{\bullet,(p)}\circ
V^{\bullet}\circ Y^{\bullet,-1})$. The identity
$(\mathcal{X}\times_{\mathbb{Z}_{p}}\mathbb{F}_{p})^{\mathrm{perf}}=(\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim})^{\mathrm{perf}}$
is an identity of
$\overline{\mathcal{G}}_{K}^{\mathrm{perf}}\times\overline{\mathcal{G}}_{K}^{\mathrm{perf}}$-varieties.
Now we claim that
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}=\left(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}}\right)^{\mathrm{perf}}$
topologically, where $\overline{\mathcal{X}_{\mathbb{Q}_{p}}}$ is the flat
closure of the generic fiber in $\mathcal{X}$. This of course implies
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\subseteq\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}$
being closed.
Both sets are constructible, so it suffices to check it on a very dense
subset, say the $\bar{\mathbb{F}}_{p}$-valued points.
Using Lemmas 1.25 and 1.27, we see that
$(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})$
consists precisely of those elements
$\bar{x}\in\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}^{\sim}(\bar{\mathbb{F}}_{p})$
such that there exists a finite field extension $L/\breve{\mathbb{Q}}_{p}$ and
a point $x\in\mathcal{X}(\mathcal{O}_{L})$ lifting $\bar{x}$. (We’ll also say
that $\bar{x}$ is _liftable_ in this situation.)
Since $\mathcal{G}_{K}$ is flat over $\mathbb{Z}_{p}$, this liftability
condition for $\mathcal{G}_{K}$ (in lieu of $\mathcal{X}$) is always
satisfied. Consequently,
$(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})$
is stable under the
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-action.
Also, the standard zips clearly are liftable. Thus,
$(\mathcal{X}_{\mathbb{F}_{p}}\times_{\mathcal{X}}\overline{\mathcal{X}_{\mathbb{Q}_{p}}})(\bar{\mathbb{F}}_{p})\supseteq\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}(\bar{\mathbb{F}}_{p})$.
For the converse inclusion, there are injective maps from
$\mathcal{X}(\mathcal{O}_{L})$ to $\mathcal{X}(L)$ to $\mathcal{G}_{K}(L)$
such that the corresponding Schubert cell (in the local model) is indexed by
the image mod
$\mathcal{G}_{K}(\mathcal{O}_{L})\times\mathcal{G}_{K}(\mathcal{O}_{L})^{\mathrm{op}}$,
cf. Proposition 2.5.151515Note that
$\mathcal{G}_{K}(\mathcal{O}_{L})\backslash\mathcal{G}_{K}(L)/\mathcal{G}_{K}(\mathcal{O}_{L})\cong
W_{K}\backslash\widetilde{W}/W_{K}$ for every strictly henselian discretely
valued field $L$ by [HR08, Prop. 8]. (And also in the construction of
$\widetilde{W}$ and $W_{K}$ any such field, not just
$L=\breve{\mathbb{Q}}_{p}$, can be used.) This proves it since we know which
Schubert cells belong to the local model. □
###### Remark 2.52.
Regarding the orbit closure relations for
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$, let us point out
that
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\to\bar{M}^{\mathrm{loc}}_{K}$
is $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-equivariant,
where the action of
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$ on
$M^{\mathrm{loc}}_{K}$ factors through the first projection map, and this map
is a bijection on orbits. Writing $w^{\prime}\preceq w$ if
$(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\mathrm{Std}(w^{\prime})\subseteq\overline{(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\mathrm{Std}(w)}$,
it follows from these observations that $w^{\prime}\leq w$ implies
$w^{\prime}\preceq w$. Here $\leq$ is the Bruhat order on
$W_{K}\backslash\widetilde{W}/W_{K}$ as explained in [PRS13, section 4.2].
It appears reasonable to suspect that $\preceq$ and $\leq$ in fact agree.
###### Conjecture 2.53.
The closure of
${(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\widetilde{\mathrm{Std}}(w)}$
is given by the disjoint union of
${(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})\cdot\widetilde{\mathrm{Std}}(w^{\prime})}$
for $w^{\prime}\leq w$.
###### Lemma 2.54.
The map
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}$
factors through $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}$.
###### Proof:
It is sufficient to check this on $k=\bar{\mathbb{F}}_{p}$-valued points.
The map
$\overline{\mathscr{S}}_{K}(k)\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}(k)$
factors through
$\Upsilon_{K}\colon\overline{\mathscr{S}}_{K}(k)\to\bigcup_{w\in\mathrm{KR}(K,\\{\mu\\})}\breve{K}w\breve{K}/\breve{K}_{\sigma}$
with
$\breve{K}w\breve{K}/\breve{K}_{\sigma}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}(k)$
given by sending $xwy$ to $(\bar{y}^{-1},\sigma(\bar{x}))\cdot\mathrm{Std}(w)$
(similar to Lemma 2.49). □
#### 2.2.3 An explicit description of
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$
In order to get a better feeling for the passage from
$\overline{\mathcal{G}}_{K}$ to the maximal reductive quotient
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}=\overline{\mathcal{G}}_{K}/R_{u}\overline{\mathcal{G}}_{K}$
(with $R_{u}\overline{\mathcal{G}}_{K}$ being the unipotent radical of
$\overline{\mathcal{G}}_{K}$), which is key in the definition of the EKOR
stratification, we describe $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ in
explicit, linear-algebraic terms in the Siegel case.
Let
$(\mathcal{V}^{\bullet},\mathcal{L},\alpha_{\bullet\bullet},\theta_{\bullet},\psi_{\bullet})$
be the standard Siegel lattice chain on $S$ of type $J$. Assume $0\in J$. In
what follows, we sometimes use $j$ as a shorthand for $\mathcal{V}^{j}$.
By a _symmetric transition map_ , we mean a transition map from $j^{\prime}$
to $j^{\prime\prime}$, where $n\in\mathbb{Z}$, $j^{\prime},j^{\prime\prime}\in
J$, $ng\geq j^{\prime}\geq j^{\prime\prime}>(n-2)g$, and
$j^{\prime}+j^{\prime\prime}\in 2g\mathbb{Z}$. We will also call this the
symmetric transition map of $(j^{\prime},n)$ (or of $j^{\prime}$ if $n$
doesn’t matter).
By a _one-sided transition map_ , we mean a transition map from $j^{\prime}$
to $j^{\prime\prime}$, where $n\in\mathbb{Z}$, $j^{\prime},j^{\prime\prime}\in
J$, $ng\geq j^{\prime}\geq j^{\prime\prime}\geq(n-1)g$. Call it right-anchored
if $j^{\prime}=ng$ and left-anchored if $j^{\prime\prime}=(n-1)g$. We then
also speak of the right-anchored transition map of $j^{\prime\prime}$ and the
left-anchored transition map of $j^{\prime}$, respectively.
The kernels of the symmetric transition maps are symplectic subbundles of
$\mathcal{O}_{S}^{2g}$ (even of the form $\mathcal{O}_{S}^{I}$, where
$I\subseteq\\{\pm 1,\dotsc,\pm g\\}$ is symmetric (i.e., $-I=I$)), and the
kernels of the one-sided transition maps are totally isotropic subbundles
(even of the form $\mathcal{O}_{S}^{I}$, where $I\subseteq\\{1,\dotsc,g\\}$ or
$I\subseteq\\{-1,\dotsc,-g\\}$).
Let $\mathcal{O}_{S}^{I_{j}}$ be the kernel of the symmetric transition map of
$j$. Then $I_{j}\sqcup I_{-j}=\\{\pm 1,\dotsc,\pm g\\}$.
Every kernel of a one-sided transition map is a subbundle of a kernel of an
anchored transition map inside of which it is complemented by the kernel of
another one-sided transition map.
The kernel of the left-anchored transition map of $j$ is a subbundle of the
kernel of the symmetric transition map of $-j$ inside of which it is
complemented by the kernel of the right-anchored transition map of $-j$.
Likewise, the kernel of the right-anchored transition map of $j$ is a
subbundle of the kernel of the symmetric transition map of $j$ inside of which
it is complemented by the kernel of the left-anchored transition map of $-j$.
Now consider the standard symplectic bundle $\mathcal{O}_{S}^{2g}$ together
with the kernels of all the symmetric transition maps and all the one-sided
transition maps. So we have a symplectic bundle with a bunch of symplectic
subbundles coming in complementary pairs, some of which come with a further
decomposition into complementary Lagrangians, some of which come with further
decompositions into complementary subbundles (of course still totally
isotropic). We will also call these kernels _distinguished subspaces_.
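These index sets can be generated mechanically. The following Python sketch (ours, purely illustrative) computes the labels spanning the kernel of the transition map from $j^{\prime}$ to $j^{\prime\prime}$: within one period these are the positions $2g-j^{\prime}<m\leq 2g-j^{\prime\prime}$, taken cyclically, in the coordinate order $(1,\dotsc,g,-g,\dotsc,-1)$. It reproduces, for instance, the index sets listed in Example 2.56 below.

```python
def label(m, g):
    """Position 1..2g -> label in {±1,...,±g} for the order (1,...,g,-g,...,-1)."""
    return m if m <= g else -(2 * g - m + 1)

def kernel_labels(jp, jpp, g):
    """Labels spanning ker of the transition map from j' to j'' (j' >= j'' > j' - 2g)."""
    ms = [((2 * g - jp + k - 1) % (2 * g)) + 1 for k in range(1, jp - jpp + 1)]
    return sorted(label(m, g) for m in ms)

g = 8
print(kernel_labels(3, -3, g))   # [-3, -2, -1, 1, 2, 3]               (symmetric)
print(kernel_labels(13, 3, g))   # [-8, -7, -6, -5, -4, 4, 5, 6, 7, 8] (symmetric)
print(kernel_labels(3, 0, g))    # [-3, -2, -1]                        (one-sided, left-anchored)
```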
Below we prove that $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the
automorphism group scheme $\mathcal{A}$ of these data. Clearly, $\mathcal{A}$
is reductive; in fact it is a Levi subgroup of a parabolic of
$\operatorname{GSp}_{2g}$.
We have a map $\overline{\mathcal{G}}_{K}\to\mathcal{A}$; the image of an
$S$-point $f^{\bullet}$ under $\overline{\mathcal{G}}_{K}\to\mathcal{A}$ on
the kernel of a transition map starting at $j$ is given by $f^{j}$. Note that
$f^{j}=\tau\circ f^{j}$ on $\ker(\tau)$ for every transition map $\tau$
starting at $j$.
$\overline{\mathcal{G}}_{K}\to\mathcal{A}$ has a natural section
$\mathcal{A}\to\overline{\mathcal{G}}_{K}$, where in the image all the $f^{j}$
are the same as automorphisms of $\mathcal{O}_{S}^{2g}$. (This is well-
defined!)
###### Proposition 2.55.
$\mathcal{A}=\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$.
###### Proof:
Let us show that $\mathcal{K}:=\ker(\overline{\mathcal{G}}_{K}\to\mathcal{A})$
is unipotent. Consider $\overline{\mathcal{G}}_{K}$ as a subgroup of
$\prod_{j\in
J/2g\mathbb{Z}}\operatorname{GL}_{2g}\subseteq\operatorname{GL}_{N}$. We claim
that said kernel is contained in $\prod_{j\in J/2g\mathbb{Z}}U^{(j)}$,
$U^{(j)}$ being a conjugate of the standard unipotent subgroup
$\left(\begin{smallmatrix}1&\ast&\ast&\dotsb&\ast\\\ &1&\ast&\dotsb&\ast\\\
&&\ddots&\dotsb&\vdots\end{smallmatrix}\right)$ of $\operatorname{GL}_{2g}$.
Indeed, say $f^{\bullet}$ is in the kernel. Then $f^{j}$ acts as the identity
on the kernel of the symmetric transition map of $j$ and $f^{-j}$ acts as the
identity on the kernel of the symmetric transition map of $-j$. On the image
of the symmetric transition map $\tau_{j}$ of $j$, $f^{-j}$ agrees with
$\tau_{j}\circ f^{j}$. Note that
$\operatorname{im}(\tau_{j})=\ker(\tau_{-j})$. So $\tau_{j}\circ f^{j}$ is the
identity on $\ker(\tau_{-j})$. Hence, if $x\in\ker(\tau_{-j})$, then
$x=\tau_{j}(x)$ and $f^{j}(x)\equiv x\mod\ker(\tau_{j})$. Thus with respect to
the decomposition $\ker(\tau_{j})\oplus\ker(\tau_{-j})$, $f^{j}$ is of the
form $\begin{pmatrix}1&\ast\\\ &1\end{pmatrix}$.
Now we have $\overline{\mathcal{G}}_{K}=\mathcal{A}\ltimes\mathcal{K}$, in
particular
$\overline{\mathcal{G}}_{K}\cong\mathcal{A}\times_{\mathbb{F}_{p}}\mathcal{K}$
as schemes. Since both $\overline{\mathcal{G}}_{K}$ and $\mathcal{A}$ are
reduced and connected, so is $\mathcal{K}$.
All in all, we see that $\mathcal{A}$ is indeed
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ and
$\mathcal{K}=R_{u}\overline{\mathcal{G}}_{K}$ is the unipotent radical of
$\overline{\mathcal{G}}_{K}$. □
###### Example 2.56.
* •
If $J=\mathbb{Z}$, then
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}=\mathbb{G}_{m}^{g+1}$ is the
standard maximal torus of $\operatorname{GSp}_{2g}$.
* •
If $g=2$ and $J=2\mathbb{Z}$, then $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$
is the automorphism group of the standard twisted symplectic space
$\mathbb{F}_{p}^{4}$ with its standard Lagrangian decomposition, i.e.,
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\cong\operatorname{GL}_{2}\times\mathbb{G}_{m}$.
* •
If $g=2$ and $J/2g\mathbb{Z}=\\{-1,0,1\\}$, then
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is the automorphism group of the
standard twisted symplectic space $\mathbb{F}_{p}^{4}$ with its standard
decomposition in twisted symplectic subspaces and the totally isotropic
rank-$1$ subbundles generated by $e_{\pm 1}$, i.e.,
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\cong\operatorname{GL}_{2}\times\mathbb{G}_{m}$.
* •
Let $g=8$. We have the local Dynkin diagram (of affine type $\tilde{C}_{8}$;
picture omitted), where we labelled the simple affine roots as follows:
$1-2e_{-1}+e_{0}$ is
labelled 0, $e_{-i}-e_{-(i+1)}$ is labelled $i$ for $1\leq i\leq 7$, and
$2e_{-8}-e_{0}$ is labelled 8.
Consider $J/2g\mathbb{Z}=\\{0,\pm 3,\pm 5\\}$. Then the Dynkin diagram of
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ should (according to [Tit79,
3.5.1]) be the one we get by removing $0,3,5$ and the adjacent edges. So we
expect something along the lines of (i.e., having the same Dynkin diagram as)
$\operatorname{GSp}(6)\times\operatorname{GL}(2)\times\operatorname{GL}(3)$.
We have the following (bases of) kernels of symmetric transition maps:
$\\{\pm 1,\pm 2,\pm 3\\},\\{\pm 4,\pm 5,\pm 6,\pm 7,\pm 8\\},\\{\pm 1,\pm
2,\pm 3,\pm 4,\pm 5\\},\\{\pm 6,\pm 7,\pm 8\\},$
and the following kernels of one-sided transition maps:
$\displaystyle\\{-3,-2,-1\\},\\{-5,-4\\},\\{-5,-4,-3,-2,-1\\},$
$\displaystyle\\{4,5\\},\\{1,2,3,4,5\\},\\{1,2,3\\}.$
So an element $A$ of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is given by
specifying linear automorphisms $A_{123}$ of $\langle 1,2,3\rangle$ and
$A_{45}$ of $\langle 4,5\rangle$ and a symplectic similitude $A_{\pm 6,\pm
7,\pm 8}$ of $\langle\pm 6,\pm 7,\pm 8\rangle$, such that
$\left.A\right|_{\langle 1,2,3\rangle}=A_{123}$, $\left.A\right|_{\langle
4,5\rangle}=A_{45}$, $\left.A\right|_{\langle\pm 6,\pm 7,\pm 8\rangle}=A_{\pm
6,\pm 7,\pm 8}$, where $\left.A\right|_{\langle-1,-2,-3\rangle}$ is uniquely
determined by $A_{123}$, $c(A_{\pm 6,\pm 7,\pm 8})$ ($c$ being the multiplier
character) and the imposition that $A$ be a symplectic similitude, and
similarly for $\left.A\right|_{\langle-4,-5\rangle}$.
If for example we consider $J/2g\mathbb{Z}=\\{0,\pm 2,\pm 3,\pm 5\\}$ instead,
we expect something along the lines of
$\operatorname{GSp}(6)\times\operatorname{GL}(2)\times\operatorname{GL}(2)$
and indeed we additionally get the subbundles
$\displaystyle\\{-2,-1\\},\\{1,2\\},\\{-3\\},\\{-5,-4,-3\\},$
$\displaystyle\\{3,4,5\\},\\{3\\},\\{3,4,5,6,7,8,-8,-7,-6,-5,-4,-3\\},\\{1,2,-2,-1\\}.$
So an element $A$ of $\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}$ is given by
specifying linear automorphisms $A_{12}$ of $\langle 1,2\rangle$ and $A_{45}$
of $\langle 4,5\rangle$ and a symplectic similitude $A_{\pm 6,\pm 7,\pm 8}$ of
$\langle\pm 6,\pm 7,\pm 8\rangle$ in a similar way to above.
#### 2.2.4 $\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$ in the Siegel
case
Recall that we denote the unipotent radical of $\overline{\mathcal{G}}_{K}$ by
$R_{u}\overline{\mathcal{G}}_{K}$.
We divide out of $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$
the action of the smooth normal subgroup
$R_{u}\overline{\mathcal{G}}_{K}\times
R_{u}\overline{\mathcal{G}}_{K}\subseteq\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$
and observe that $\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$
still acts on $[R_{u}\overline{\mathcal{G}}_{K}\times
R_{u}\overline{\mathcal{G}}_{K}\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]=:\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$
(not a scheme).
We also define
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}:=[(\Delta({\overline{\mathcal{G}}_{K}})\cdot(R_{u}\overline{\mathcal{G}}_{K}\times
R_{u}\overline{\mathcal{G}}_{K}))\backslash\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}]$.
###### Proposition 2.57.
We have well-defined morphisms
$\displaystyle(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/\widetilde{E}_{w}$
$\displaystyle\to(\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\times\overline{\mathcal{G}}_{K}^{\mathrm{rdt}})/E_{\mathcal{Z}_{w}},$
$\displaystyle\quad(X,Y)$
$\displaystyle\mapsto(X^{\mathrm{rdt}},Y^{\mathrm{rdt}}),$
$\displaystyle\overline{\mathcal{G}}_{K}/\widetilde{E}_{w}$
$\displaystyle\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}/E_{\mathcal{Z}_{w}},$
$\displaystyle\quad X$ $\displaystyle\mapsto X^{\mathrm{rdt}},$
and a bijection
$(\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K})/(\widetilde{E}_{w}\cdot(R_{u}\overline{\mathcal{G}}_{K}\times
R_{u}\overline{\mathcal{G}}_{K}))\to(\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\times\overline{\mathcal{G}}_{K}^{\mathrm{rdt}})/E_{\mathcal{Z}_{w}}.$
###### Proof:
The first assertion follows from the definition of $E_{\mathcal{Z}_{w}}$ and
equation (2.46). The second then follows from Lemma 2.48. □
###### Lemma 2.58.
Assume $0\in J$. The underlying topological spaces of the stacks in
consideration are as follows:
1. (1)
$|[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}]|=\mathrm{KR}(K,\\{\mu\\})\overset{\text{def.}}{=}W_{K}\backslash(W_{K}\mathrm{Adm}(\\{\mu\\})W_{K})/W_{K}$.
2. (2)
$|\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}|=\mathrm{EKOR}(K,\\{\mu\\})=\mathrm{Adm}(\\{\mu\\})^{K}\cap{}^{K}\widetilde{W}\cong\bigcup_{w\in\mathrm{KR}(K,\\{\mu\\})}\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$.
###### Proof:
(1) is well-known as explained in Section 2.1.2.
(2): By Lemma 2.49, the
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbits in
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}$ are indexed by
$\mathrm{Adm}(\\{\mu\\})_{K}=\mathrm{KR}(K,\\{\mu\\})$.
Let us further investigate the
$\overline{\mathcal{G}}_{K}\times\overline{\mathcal{G}}_{K}$-orbit of
$\widetilde{\mathrm{Std}}(w)$ in
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$ for some fixed
$w\in\mathrm{Adm}(\\{\mu\\})^{K}$.
By Proposition 2.57, its underlying topological space agrees with that of
$\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\sim,\mathcal{Z}_{w}}$.
By [SYZ19] we know that
$|\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}|\cong\breve{K}w\breve{K}/\breve{K}_{\sigma}(\breve{K}_{1}\times\breve{K}_{1})$,
whence the lemma. □
###### Corollary 2.59.
We have a morphism
$\left(\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}_{w}\right)_{\mathrm{red}}=\text{orbit
of
}\mathrm{Std}(w)\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}.$
This defines the EKOR stratification on
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}_{w}$. All in all, we get an
EKOR stratification on $\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}$.
The morphism factors through
$(\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}_{w})_{\mathrm{red}}$, and
$(\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}_{w})_{\mathrm{red}}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}$
is an isomorphism.
###### Corollary 2.60.
For every point of $[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$,
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
is smooth as a map between the associated reduced fiber of
$\overline{\mathscr{S}}_{K}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$
and the associated reduced fiber of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$.
###### Proof:
This follows from the preceding corollary by [SYZ19, Theorem A] (which says
that the map
$\overline{\mathscr{S}}_{K}^{w}\to\overline{\mathcal{G}}_{K}^{\mathrm{rdt}}\text{-}\mathrm{Zip}^{\mathcal{Z}_{w}}$
is smooth, cf. subsection 2.1.4). □
The key obstacle in going forward toward proving smoothness of
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
now is that we do not know whether the fibers of
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$
are reduced.
###### Conjecture 2.61.
We conjecture that the answer is affirmative. In fact, we conjecture that
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\to[\overline{\mathcal{G}}_{K}\backslash\overline{M}_{K}]$
is smooth.
###### Corollary 2.62.
$\overline{\mathscr{S}}_{K}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}$
is surjective.
###### Proof:
This follows from the description of the topological space and what is already
known from [HR17, first paragraph of section 6.3]. □
We get a commutative diagram
$\begin{array}{ccccc}\tilde{\overline{\mathscr{S}}}_{K}&\longrightarrow&\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}&\longrightarrow&\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}\\\ \downarrow&&\downarrow&&\downarrow\\\ \overline{\mathscr{S}}_{K}&\longrightarrow&\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}&\longrightarrow&\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}\end{array}$
in which everything maps compatibly to
$[\overline{\mathcal{G}}_{K}\backslash\overline{M}^{\mathrm{loc}}_{K}]$.
###### Remark 2.63.
Since $R_{u}\overline{\mathcal{G}}_{K}$ is smooth,
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{AdmZip}^{\sim}\to\overline{\mathcal{G}}_{K}\text{-}\mathrm{EKORZip}^{\sim}$
is smooth.
###### Remark 2.64.
Another open question at this point is: what is the relationship between
$\overline{\mathcal{G}}_{K}\text{-EKORZip}^{\mathrm{perf}}$ and the shtuka
approach of [SYZ19, Section 4]?
###### Remark 2.65.
It should be straightforward to generalize (taking into account the extra
structure) our constructions to those (P)EL cases where the local model is the
“naive” local model of Rapoport-Zink [RZ96].
#### 2.2.5 The example of $\operatorname{GSp}(4)$
To illustrate some aspects, we look at the example $2g=4$.
##### The apartment.
We describe the (extended) apartment. We follow the general outline of
[Lan00], in particular as far as notation is concerned.
The roots are
$\pm(2e_{1}-e_{0}),\pm(2e_{2}-e_{0}),\pm(e_{1}-e_{2}),\pm(e_{1}+e_{2}-e_{0})$.
The simple affine roots and the (various variants of the) Weyl group are as
described in Remark 2.26. The root one-parameter subgroups (the parameter
being additive here; i.e., we are talking about homomorphisms
$\mathbb{G}_{a}\to G$) are given as follows:
$\displaystyle u_{e_{1}-e_{2}}(x)$ $\displaystyle=\begin{pmatrix}1&x&&\\\
&1&&\\\ &&1&-x\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{2}-e_{1}}(x)$
$\displaystyle=\begin{pmatrix}1&&&\\\ x&1&&\\\ &&1&\\\ &&-x&1\end{pmatrix},$
$\displaystyle u_{2e_{1}-e_{0}}(x)$ $\displaystyle=\begin{pmatrix}1&&&x\\\
&1&&\\\ &&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{0}-2e_{1}}(x)$
$\displaystyle=\begin{pmatrix}1&&&\\\ &1&&\\\ &&1&\\\ x&&&1\end{pmatrix},$
$\displaystyle u_{2e_{2}-e_{0}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\
&1&x&\\\ &&1&\\\ &&&1\end{pmatrix},$ $\displaystyle u_{e_{0}-2e_{2}}(x)$
$\displaystyle=\begin{pmatrix}1&&&\\\ &1&&\\\ &x&1&\\\ &&&1\end{pmatrix},$
$\displaystyle u_{e_{1}+e_{2}-e_{0}}(x)$
$\displaystyle=\begin{pmatrix}1&&x&\\\ &1&&x\\\ &&1&\\\ &&&1\end{pmatrix},$
$\displaystyle u_{e_{0}-e_{1}-e_{2}}(x)$ $\displaystyle=\begin{pmatrix}1&&&\\\
&1&&\\\ x&&1&\\\ &x&&1\end{pmatrix}$
For each root $a$ define $w_{a}(x):=u_{a}(x)u_{-a}(-x^{-1})u_{a}(x)$.
###### Remark 2.66.
$N(\mathbb{Q}_{p})$ is generated by $T(\mathbb{Q}_{p})$ and all $w_{a}(x)$ as
above.
###### Remark 2.67.
$w_{a}(x)=m(u_{-a}(-x^{-1}))$ in Landvogt’s notation [Lan00].
We have
$V_{1}:=X_{*}(T)\otimes\mathbb{R}=\\{(x_{1},x_{2},x_{-2},x_{-1})\in\mathbb{R}^{4}\;|\;x_{1}+x_{-1}=x_{2}+x_{-2}\\}$
and
$\nu_{1}\colon T(\mathbb{Q}_{p})\to V_{1},\;\begin{pmatrix}d_{1}&&&\\\
&d_{2}&&\\\ &&cd_{2}^{-1}&\\\
&&&cd_{1}^{-1}\end{pmatrix}\mapsto\begin{pmatrix}-v_{p}(d_{1})\\\
-v_{p}(d_{2})\\\ -v_{p}(cd_{2}^{-1})\\\ -v_{p}(cd_{1}^{-1})\end{pmatrix}.$
Also, $V_{0}=\\{v\in V_{1}\;|\;a(v)=0\;\forall
a\in\Phi\\}=\mathbb{R}(1,1,1,1)$, $V:=V_{1}/V_{0}$.
The extended apartment $A=A^{\mathrm{ext}}$ now is an affine $V_{1}$-space
together with the map $\nu_{1}\colon
N(\mathbb{Q}_{p})\to\operatorname{Aff}(A)=\operatorname{GL}(V_{1})\ltimes
V_{1}$, whose restriction to $T(\mathbb{Q}_{p})$ is given as above and (cf.
Remark 2.66)
$\displaystyle\nu_{1}(w_{2e_{1}-e_{0}}(x))$
$\displaystyle=(\left(\begin{smallmatrix}&&&1\\\ &1&&\\\ &&1&\\\
1&&&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\ 0\\\ 0\\\
v_{p}(x)\end{smallmatrix}\right)),$
$\displaystyle\nu_{1}(w_{2e_{2}-e_{0}}(x))$
$\displaystyle=(\left(\begin{smallmatrix}1&&&\\\ &&1&\\\ &1&&\\\
&&&1\end{smallmatrix}\right),\left(\begin{smallmatrix}0\\\ -v_{p}(x)\\\
v_{p}(x)\\\ 0\end{smallmatrix}\right)),$
$\displaystyle\nu_{1}(w_{e_{1}-e_{2}}(x))$
$\displaystyle=(\left(\begin{smallmatrix}&1&&\\\ 1&&&\\\ &&&1\\\
&&1&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\ v_{p}(x)\\\
-v_{p}(x)\\\ v_{p}(x)\end{smallmatrix}\right)),$
$\displaystyle\nu_{1}(w_{e_{1}+e_{2}-e_{0}}(x))$
$\displaystyle=(\left(\begin{smallmatrix}&&1&\\\ &&&1\\\ 1&&&\\\
&1&&\end{smallmatrix}\right),\left(\begin{smallmatrix}-v_{p}(x)\\\
-v_{p}(x)\\\ v_{p}(x)\\\ v_{p}(x)\end{smallmatrix}\right)),$
etc. (Recipe: Write $w_{a}(x)$ as a product of a diagonal matrix
$\operatorname{diag}(d_{1},d_{2},d_{-2},d_{-1})$ and a permutation matrix $P$
(this need not be a factorization in $\operatorname{GSp}(4)$); then
$\nu_{1}(w_{a}(x))=(P,\left(\begin{smallmatrix}-v_{p}(d_{1})\\\
-v_{p}(d_{2})\\\ -v_{p}(d_{-2})\\\ -v_{p}(d_{-1})\end{smallmatrix}\right)).)$
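This recipe is easy to carry out by machine. The following self-contained Python sketch (ours; exact rational arithmetic, with the arbitrary concrete choice $p=5$, $x=p$, so $v_{p}(x)=1$) computes $w_{2e_{1}-e_{0}}(x)$ and reads off $\nu_{1}$:

```python
from fractions import Fraction

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def elem(i, j, v, n=4):
    """The unipotent matrix I + v*E_{ij} (1-based indices)."""
    M = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
    M[i - 1][j - 1] += v
    return M

def vp(q, p):
    """p-adic valuation of a nonzero Fraction."""
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p, x = 5, Fraction(5)
u_a = lambda t: elem(1, 4, t)                # u_{2e_1 - e_0}(t)
u_ma = lambda t: elem(4, 1, t)               # u_{e_0 - 2e_1}(t)
w = mul(mul(u_a(x), u_ma(-1 / x)), u_a(x))   # w_a(x) = u_a(x) u_{-a}(-x^{-1}) u_a(x)

# factor w = D * P (one nonzero entry per column); nu_1(w_a(x)) = (P, (-v_p(d_i))_i)
perm, v = {}, [0] * 4
for j in range(4):
    i = next(r for r in range(4) if w[r][j] != 0)
    perm[j + 1] = i + 1
    v[i] = -vp(w[i][j], p)
print(perm, v)  # {1: 4, 2: 2, 3: 3, 4: 1} and [-1, 0, 0, 1], as in the formula above
```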
The reduced apartment $A^{\mathrm{red}}$ is the affine $V$-space together with
$\nu\colon
N(\mathbb{Q}_{p})\to\operatorname{Aff}(A^{\mathrm{red}})=\operatorname{GL}(V)\ltimes
V$ given by the same formulas.
The walls (or rather, wall conditions) are given as follows
($n\in\mathbb{Z}$):
$\displaystyle 2e_{1}-e_{0}$ $\displaystyle:n=x_{0}-2x_{1},$ $\displaystyle
2e_{2}-e_{0}$ $\displaystyle:n=x_{0}-2x_{2},$ $\displaystyle e_{1}-e_{2}$
$\displaystyle:n=x_{2}-x_{1},$ $\displaystyle e_{1}+e_{2}-e_{0}$
$\displaystyle:n=x_{0}-x_{1}-x_{2}.$
Figure 1: The reduced apartment with the base alcove highlighted.
##### Lattice chains and parahoric subgroups.
By [BT84a], the extended building
$\mathcal{B}(\operatorname{GL}(X),\mathbb{Q}_{p})$ is in bijection with
norms $\alpha\colon X\to\mathbb{R}\cup\\{\infty\\}$ (the defining conditions
for a norm being $\alpha(tx)=\alpha(x)+\operatorname{ord}_{p}(t)$,
$\alpha(x+y)\geq\min(\alpha(x),\alpha(y))$, and $\alpha(x)=\infty\iff x=0$).
Norms in turn are in bijection
with graded lattice chains (cf. Remark 1.10). Indeed, if $\alpha$ is a norm,
define $\Delta_{\alpha}$ to be the set of its balls centered around zero and
$c_{\alpha}(\Lambda):=\inf_{\lambda\in\Lambda}\alpha(\lambda)$. Conversely,
given a graded lattice chain $(\Delta,c)$, define a norm $\alpha$ by
$\alpha(x):=c(\Lambda)$ for the smallest $\Lambda\in\Delta$ with
$x\in\Lambda$.
To go from the extended apartment of $\operatorname{GL}(X)$, an affine
$\mathbb{R}^{n}$-space, where $n=\dim X$, to norms, fix a basis
$e_{1},\dotsc,e_{n}$ of $X$. Then $v\in\mathbb{R}^{n}$ corresponds to the norm
$\alpha_{v}$ with
$\alpha_{v}(\sum t_{i}e_{i})=\min_{i}(\operatorname{ord}_{p}(t_{i})-v_{i}).$
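As a concrete check of this dictionary, the following Python sketch (ours; exact arithmetic, with the arbitrary choice $p=5$) evaluates the norm attached to the Siegel point $x_{\mathrm{Siegel}}=(-1/4,-1/4,1/4,1/4)$ from the list below and recovers the gradings $c_{\mathrm{Siegel}}$ recorded further down:

```python
from fractions import Fraction as F

def ordp(t, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while t % p == 0:
        t //= p
        v += 1
    return v

def alpha(v, coords, p):
    """alpha_v(sum t_i e_i) = min_i (ord_p(t_i) - v_i), for integer t_i not all zero."""
    return min(ordp(t, p) - vi for t, vi in zip(coords, v) if t != 0)

p = 5
x_siegel = (F(-1, 4), F(-1, 4), F(1, 4), F(1, 4))
print(alpha(x_siegel, (0, 0, 0, 1), p))  # -1/4 = c(Z_p^4)
print(alpha(x_siegel, (1, 0, 0, 0), p))  #  1/4 = c(Z_p^2 + p Z_p^2): e_1 lies one step deeper
print(alpha(x_siegel, (0, 0, 0, p), p))  #  3/4 = c(p Z_p^4)
```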
There are seven types of points in the extended apartment (in each case we
choose one in the base alcove to represent all of its type) corresponding to
the vertices, edges and interior of the base alcove:
* •
standard hyperspecial: $x_{\mathrm{hs}}=(0,0,0,0)$
* •
paramodular: $x_{\mathrm{paramod}}=(-1/2,0,0,1/2)$
* •
Klingen: $x_{\mathrm{Klingen}}=(-1/4,0,0,1/4)$
* •
Siegel: $x_{\mathrm{Siegel}}=(-1/4,-1/4,1/4,1/4)$
* •
Iwahori: $x_{\mathrm{Iwahori}}=(-1/4,-1/8,1/8,1/4)$
* •
another hyperspecial: $x=(-1/2,-1/2,1/2,1/2)$
* •
another parahoric: $x=(-1/2,-1/4,1/4,1/2)$
The last two are conjugates (by the Atkin-Lehner element) of the standard
hyperspecial and the Klingen parahoric, respectively (see e.g. [Rös18, 151]);
therefore we will neglect them in the sequel.
For a set of lattices $S$ denote by $\langle S\rangle$ the closure under
homotheties, i.e., $\langle S\rangle:=\\{p^{n}s\;|\;n\in\mathbb{Z},\;s\in
S\\}$.
Then:
* •
$\Delta_{\mathrm{hs}}=\langle\mathbb{Z}_{p}^{4}\rangle$
and $c_{\mathrm{hs}}(\mathbb{Z}_{p}^{4})=0$.
* •
$\Delta_{\mathrm{paramod}}=\langle\mathbb{Z}_{p}^{3}\oplus
p\mathbb{Z}_{p},\;\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$
and $c_{\mathrm{paramod}}(\mathbb{Z}_{p}^{3}\oplus
p\mathbb{Z}_{p})=-\frac{1}{2}$, $c_{\mathrm{paramod}}(\mathbb{Z}_{p}\oplus
p\mathbb{Z}_{p}^{3})=0$.
* •
$\Delta_{\mathrm{Klingen}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{3}\oplus
p\mathbb{Z}_{p},\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$
and $c_{\mathrm{Klingen}}(\mathbb{Z}_{p}^{4})=-1/4$,
$c_{\mathrm{Klingen}}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})=0$,
$c_{\mathrm{Klingen}}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})=1/4$.
* •
$\Delta_{\mathrm{Siegel}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{2}\oplus
p\mathbb{Z}_{p}^{2}\rangle$
and $c_{\mathrm{Siegel}}(\mathbb{Z}_{p}^{4})=-1/4$,
$c_{\mathrm{Siegel}}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})=1/4$.
* •
$\Delta_{\mathrm{Iwahori}}=\langle\mathbb{Z}_{p}^{4},\mathbb{Z}_{p}^{3}\oplus
p\mathbb{Z}_{p},\mathbb{Z}_{p}^{2}\oplus
p\mathbb{Z}_{p}^{2},\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}\rangle$
and $c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{4})=-1/4$,
$c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})=-1/8$,
$c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})=1/8$,
$c_{\mathrm{Iwahori}}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})=1/4$.
The associated parahoric subgroups are
* •
hyperspecial: $\operatorname{GSp}_{4}(\mathbb{Z}_{p})$
* •
paramodular:
$\operatorname{GSp}_{4}(\mathbb{Q}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&p^{-1}\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$
* •
Klingen:
$\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$
* •
Siegel:
$\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$
* •
Iwahori:
$\operatorname{GSp}_{4}(\mathbb{Z}_{p})\cap\begin{pmatrix}\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}&\mathbb{Z}_{p}\\\
p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&p\mathbb{Z}_{p}&\mathbb{Z}_{p}\end{pmatrix}$
###### Remark 2.68.
Dualizing with respect to the symplectic form, we have
$\displaystyle(\mathbb{Z}_{p}^{4})^{\vee}$ $\displaystyle=\mathbb{Z}_{p}^{4},$
$\displaystyle(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3})^{\vee}$
$\displaystyle=p^{-1}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p}),$
$\displaystyle(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})^{\vee}$
$\displaystyle=p^{-1}(\mathbb{Z}_{p}\oplus p\mathbb{Z}_{p}^{3}),$
$\displaystyle(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2})^{\vee}$
$\displaystyle=p^{-1}(\mathbb{Z}_{p}^{2}\oplus p\mathbb{Z}_{p}^{2}).$
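As a quick verification of the second line (assuming, as usual, that the symplectic form pairs $e_{i}$ with $e_{-i}$ up to sign, so that $\Lambda^{\vee}=\\{x\,|\,\langle x,\Lambda\rangle\subseteq\mathbb{Z}_{p}\\}$): for $\Lambda=\mathbb{Z}_{p}e_{1}\oplus p\mathbb{Z}_{p}e_{2}\oplus p\mathbb{Z}_{p}e_{-2}\oplus p\mathbb{Z}_{p}e_{-1}$ the only nonzero pairings are between opposite coordinates, so dualizing negates the exponent of the opposite coordinate: $\Lambda^{\vee}=p^{-1}\mathbb{Z}_{p}e_{1}\oplus p^{-1}\mathbb{Z}_{p}e_{2}\oplus p^{-1}\mathbb{Z}_{p}e_{-2}\oplus\mathbb{Z}_{p}e_{-1}=p^{-1}(\mathbb{Z}_{p}^{3}\oplus p\mathbb{Z}_{p})$, as claimed.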
##### Admissible set.
We compute the admissible set in the way outlined in Remark 2.29. The
cocharacter $\mu$ is $(1,1,0,0)$.
We obtain
$\displaystyle\mathrm{Adm}(\\{\mu\\})=\bigl{\\{}$
$\displaystyle\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\
0\\\
0\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(2\quad{-2}),\left(\begin{smallmatrix}1\\\
0\\\ 1\\\
0\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}1\\\
0\\\ 1\\\ 0\end{smallmatrix}\right)\Bigr{)},$
$\displaystyle\Bigl{(}(1\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\
1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad
2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\
1\end{smallmatrix}\right)\Bigr{)},$ $\displaystyle\Bigl{(}(1\quad
2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\
1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}0\\\
1\\\ 0\\\ 1\end{smallmatrix}\right)\Bigr{)},$
$\displaystyle\Bigl{(}(1\quad{-1})(2\quad{-2}),\left(\begin{smallmatrix}0\\\
0\\\ 1\\\
1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad{-1}),\left(\begin{smallmatrix}0\\\
0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},$
$\displaystyle\Bigl{(}(1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\
0\\\ 1\\\
1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}(1\quad{-2}\quad{-1}\quad
2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)},$
$\displaystyle\Bigl{(}(2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\
1\end{smallmatrix}\right)\Bigr{)},\quad\Bigl{(}\operatorname{id},\left(\begin{smallmatrix}0\\\
0\\\ 1\\\ 1\end{smallmatrix}\right)\Bigr{)}\bigr{\\}},$
or, in terms of Frobenii (cf. Construction 2.43)
$\displaystyle\Bigl{\\{}$ $\displaystyle\left(\begin{smallmatrix}p&0&0&0\\\
0&p&0&0\\\ 0&0&1&0\\\
0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}p&0&0&0\\\ 0&0&1&0\\\
0&p&0&0\\\ 0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}p&0&0&0\\\
0&1&0&0\\\ 0&0&p&0\\\
0&0&0&1\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\ 0&p&0&0\\\
0&0&1&0\\\ p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&1&0\\\
p&0&0&0\\\ 0&0&0&1\\\ 0&p&0&0\end{smallmatrix}\right),$
$\displaystyle\left(\begin{smallmatrix}0&1&0&0\\\ p&0&0&0\\\ 0&0&0&1\\\
0&0&p&0\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\ 0&p&0&0\\\
0&0&1&0\\\ 0&0&0&p\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\
0&0&1&0\\\ 0&p&0&0\\\
p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&0&1\\\ 0&1&0&0\\\
0&0&p&0\\\ p&0&0&0\end{smallmatrix}\right),\left(\begin{smallmatrix}0&0&1&0\\\
0&0&0&1\\\ p&0&0&0\\\ 0&p&0&0\end{smallmatrix}\right),$
$\displaystyle\left(\begin{smallmatrix}0&1&0&0\\\ 0&0&0&1\\\ p&0&0&0\\\
0&0&p&0\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\ 0&0&1&0\\\
0&p&0&0\\\ 0&0&0&p\end{smallmatrix}\right),\left(\begin{smallmatrix}1&0&0&0\\\
0&1&0&0\\\ 0&0&p&0\\\ 0&0&0&p\end{smallmatrix}\right)\Bigr{\\}}.$
##### Siegel level.
From now on, we consider the Siegel level structure. Denote the Siegel
parahoric by $K$ and the standard hyperspecial subgroup by $H$. Here $W_{K}$
is generated by $({-1}\quad{-2})(1\quad 2)$, while $W_{H}$ is generated by
$W_{K}$ and $(2\quad{-2})$.
Recalling Remark 2.31 (2), we note that one has a natural morphism
$\overline{\mathcal{G}}_{K}\text{-}\mathrm{Zip}\to\overline{\mathcal{G}}_{H}\text{-}\mathrm{Zip}\times\overline{\mathcal{G}}_{H}\text{-}\mathrm{Zip}$.
We have
$\displaystyle\mathrm{KR}(K,\\{\mu\\})$
$\displaystyle=\Bigl{\\{}(\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\
0\\\
0\end{smallmatrix}\right)),((1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\
0\\\ 1\\\ 1\end{smallmatrix}\right)),((1\quad
2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\
1\end{smallmatrix}\right)),$
$\displaystyle(\operatorname{id},\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\
1\end{smallmatrix}\right)),((1\quad{-2}\quad{-1}\quad
2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right)),((1\quad
2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\
1\end{smallmatrix}\right))\Bigr{\\}},$
$\displaystyle\mathrm{EKOR}(K,\\{\mu\\})$
$\displaystyle=\mathrm{KR}(K,\\{\mu\\})\cup\Bigl{\\{}(\operatorname{id},\left(\begin{smallmatrix}0\\\
1\\\ 0\\\
1\end{smallmatrix}\right)),\quad((2\quad{-2}),\left(\begin{smallmatrix}0\\\
0\\\ 1\\\
1\end{smallmatrix}\right)),\quad((1\quad{-1}),\left(\begin{smallmatrix}0\\\
1\\\ 0\\\ 1\end{smallmatrix}\right))\Bigr{\\}}.$
In the following table, $w^{j}$ is the isomorphism type of the
$\overline{\mathcal{G}}_{H}$-zip at position $j$. For
$\mathcal{C}^{\bullet},\mathcal{D}^{\bullet}$ we give (indices of) basis
vectors. “$\leftarrow$” means “same as in the column adjacent to the left”.
$\alpha_{0}\colon\bar{\mathbb{F}}_{p}^{4}\to\bar{\mathbb{F}}_{p}^{4}$ is the
projection onto the plane spanned by the $1,2$-coordinates, $\alpha_{2}$ the
projection onto the plane spanned by the $-2,-1$-coordinates. By
$\alpha_{j,\mathcal{C}^{\bullet}/\mathcal{D}^{\bullet}}$ we denote the induced
maps on
$\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ and
$\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$,
respectively. Each $\mathcal{C}^{j}\subseteq\mathcal{V}^{j}$ has a canonical
complement in terms of standard basis vectors. Importantly, however, we will
not always have a complementary _chain_ of linear subspaces. In any event,
below we state onto what the $\alpha_{j,\mathcal{C}^{\bullet}/\mathcal{D}^{\bullet}}$
project when interpreted as described. For instance, the
projection onto $\emptyset$ is the zero map. So in that case
$\mathcal{V}^{\bullet}/\mathcal{C}^{\bullet}\oplus\mathcal{C}^{\bullet}$ (or
$\mathcal{D}^{\bullet}\oplus\mathcal{V}^{\bullet}/\mathcal{D}^{\bullet}$) is a
chain of vector spaces with zero transition maps.
$w$ | KR-type | $\mathcal{C}^{0}$ | $\mathcal{D}^{0}$ | $\mathcal{C}^{2}$ | $\mathcal{D}^{2}$ | $w^{0}$ | $w^{2}$ | $\alpha_{2,\mathcal{C}^{\bullet}}$ | $\alpha_{0,\mathcal{C}^{\bullet}}$ | $\alpha_{2,\mathcal{D}^{\bullet}}$ | $\alpha_{0,\mathcal{D}^{\bullet}}$
---|---|---|---|---|---|---|---|---|---|---|---
$(\operatorname{id},\left(\begin{smallmatrix}1\\\ 1\\\ 0\\\ 0\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{-2,-1\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((1\quad{-2})(2\quad{-1}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\operatorname{id}$ | $\leftarrow$ | $\emptyset$ | $\emptyset$ | $\emptyset$ | $\emptyset$
$((1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $(-2\quad 2)$ | $\leftarrow$ | $\\{1\\}$ | $\\{-1\\}$ | $\\{2\\}$ | $\\{-2\\}$
$(\operatorname{id},\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,-1\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{1,2\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $(-2\quad 2)$ | $\leftarrow$ | $\\{2\\}$ | $\\{-2\\}$ | $\\{1\\}$ | $\\{-1\\}$
$((1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\leftarrow$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $\operatorname{id}$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$(\operatorname{id},\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $((1\quad 2)({-2}\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\\{-1,2\\}$ | $\\{-1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,1\\}$ | $(-2\quad 1)({-1}\quad 2)$ | $\leftarrow$ | $\\{1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,-1\\}$
$((2\quad{-2}),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $((1\quad{-2}\quad{-1}\quad 2),\left(\begin{smallmatrix}0\\\ 0\\\ 1\\\ 1\end{smallmatrix}\right))$ | $\\{-1,2\\}$ | $\\{-2,-1\\}$ | $\\{1,2\\}$ | $\\{-2,1\\}$ | $(-2\quad{-1}\quad 2\quad 1)$ | $\leftarrow$ | $\\{1\\}$ | $\\{-1\\}$ | $\\{1\\}$ | $\\{-1\\}$
$((1\quad{-1}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $((1\quad 2\quad{-1}\quad{-2}),\left(\begin{smallmatrix}0\\\ 1\\\ 0\\\ 1\end{smallmatrix}\right))$ | $\\{1,2\\}$ | $\\{-1,2\\}$ | $\\{-2,1\\}$ | $\\{-2,-1\\}$ | $(-2\quad{-1}\quad 2\quad 1)$ | $\leftarrow$ | $\\{2\\}$ | $\\{-2\\}$ | $\\{2\\}$ | $\\{-2\\}$
###### Observations 2.69.
* •
We always have $w^{0}=w^{2}$. This is explained by the fact that the Ekedahl-
Oort stratification in this case agrees with the Newton stratification (and
isogenous abelian varieties by definition lie in the same Newton stratum).
* •
Consider the Kottwitz-Rapoport strata containing more than one EKOR stratum
(i.e., containing two EKOR strata). Then we can distinguish among the EKOR
strata by looking at the Ekedahl-Oort stratum. In other words, the EKOR
stratification is in this case the coarsest common refinement of the Kottwitz-
Rapoport and Ekedahl-Oort stratifications.
## References
* [KP18] M. Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” In _Publ. Math., Inst. Hautes Étud. Sci._ 128 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 2018, pp. 121–218
* [Ahs11] Tobias Ahsendorf “$\mathcal{O}$-displays and $\pi$-divisible formal $\mathcal{O}$-modules”, 2011 URL: http://nbn-resolving.de/urn:nbn:de:hbz:361-24713520
* [AT08] Alexander Arhangel’skii and Mikhail Tkachenko “Topological groups and related structures” Hackensack, NJ: World Scientific; Paris: Atlantis Press, 2008
* [BB05] Anders Björner and Francesco Brenti “Combinatorics of Coxeter groups” 231, Graduate Texts in Mathematics Springer, New York, 2005, pp. xiv+363
* [BG18] Alessandra Bertapelle and Cristian D. González-Avilés “On the perfection of schemes” In _Expo. Math._ 36.2 Elsevier, Munich, 2018, pp. 197–220
* [Bor69] Armand Borel “Introduction aux groupes arithmétiques”, Publications de l’Institut de Mathématique de l’Université de Strasbourg, XV. Actualités Scientifiques et Industrielles, No. 1341 Hermann, Paris, 1969
* [BT84] François Bruhat and Jacques Tits “Groupes réductifs sur un corps local. II. Schémas en groupes. Existence d’une donnée radicielle valuée.” In _Publ. Math., Inst. Hautes Étud. Sci._ 60 Springer, Berlin/Heidelberg; Institut des Hautes Études Scientifiques, Bures-sur-Yvette, 1984, pp. 1–194
* [BT84a] François Bruhat and Jacques Tits “Schémas en groupes et immeubles des groupes classiques sur un corps local” In _Bull. Soc. Math. Fr._ 112 Société Mathématique de France (SMF), Paris, 1984, pp. 259–301 DOI: 10.24033/bsmf.2006
* [Car93] Roger W. Carter “Finite groups of Lie type” Conjugacy classes and complex characters, Reprint of the 1985 original, A Wiley-Interscience Publication, Wiley Classics Library John Wiley & Sons, Ltd., Chichester, 1993, pp. xii+544
* [Dan36] D. van Dantzig “Zur topologischen Algebra. III: Brouwersche und Cantorsche Gruppen” In _Compos. Math._ 3 Cambridge University Press, Cambridge; London Mathematical Society, London, 1936, pp. 408–426
* [Del71] Pierre Deligne “Travaux de Shimura” In _Séminaire Bourbaki, 23ème année (1970/71), Exp. No. 389_ 244, Lecture Notes in Math. Springer, Berlin, 1971, pp. 123–165
* [DG80] Michel Demazure and Peter Gabriel “Introduction to algebraic geometry and algebraic groups” Translated from the French by J. Bell 39, North-Holland Mathematics Studies North-Holland Publishing Co., Amsterdam-New York, 1980, pp. xiv+357
* [EGA2] A. Grothendieck “Éléments de géométrie algébrique. II. Étude globale élémentaire de quelques classes de morphismes” In _Inst. Hautes Études Sci. Publ. Math._ , 1961 URL: http://www.numdam.org/item?id=PMIHES_1961__8__222_0
* [Gör03] Ulrich Görtz “On the flatness of local models for the symplectic group” In _Adv. Math._ 176.1 Elsevier (Academic Press), San Diego, CA, 2003, pp. 89–115
* [Gro74] Alexandre Grothendieck “Groupes de Barsotti-Tate et cristaux de Dieudonné” Séminaire de Mathématiques Supérieures, No. 45 (Été, 1970) Les Presses de l’Université de Montréal, Montreal, Que., 1974
* [Hai05] Thomas J. Haines “Introduction to Shimura varieties with bad reduction of parahoric type” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 583–642
* [Haz78] Michiel Hazewinkel “Formal groups and applications”, Pure and Applied Mathematics, 78. New York-San Francisco-London: Academic Press. XXII, 573 p., 1978
* [Hes20] Jens Hesse “Central leaves and EKOR strata on Shimura varieties with parahoric reduction”, 2020 URL: http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-115430
* [Hes20a] Jens Hesse “Central leaves on Shimura varieties with parahoric reduction” Preprint, 2020 arXiv:2003.03175 [math.AG]
* [HR08] Thomas J. Haines and Michael Rapoport “Appendix: On parahoric subgroups” In _Advances in Mathematics_ 219.1, 2008, pp. 188–198 DOI: https://doi.org/10.1016/j.aim.2008.04.020
* [HR17] X. He and M. Rapoport “Stratifications in the reduction of Shimura varieties.” In _Manuscr. Math._ 152.3-4 Springer, Berlin/Heidelberg, 2017, pp. 317–343 DOI: 10.1007/s00229-016-0863-x
* [Ill85] Luc Illusie “Déformations de groupes de Barsotti-Tate (d’après A. Grothendieck)” Seminar on arithmetic bundles: the Mordell conjecture (Paris, 1983/84) In _Astérisque_ , 1985, pp. 151–198
* [Kis10] Mark Kisin “Integral models for Shimura varieties of abelian type” In _J. Amer. Math. Soc._ 23.4, 2010, pp. 967–1012 DOI: 10.1090/S0894-0347-10-00667-3
* [Kot92] Robert E. Kottwitz “Points on some Shimura varieties over finite fields” In _J. Amer. Math. Soc._ 5.2, 1992, pp. 373–444 DOI: 10.2307/2152772
* [KP15] M. Kisin and G. Pappas “Integral models of Shimura varieties with parahoric level structure” Preprint, 2015 arXiv:1512.01149v2 [math.AG]
* [KR00] R. Kottwitz and M. Rapoport “Minuscule alcoves for $\mathrm{GL}_{n}$ and $\mathrm{GSp}_{2n}$” In _Manuscripta Math._ 102.4, 2000, pp. 403–428 DOI: 10.1007/s002290070034
* [Lan00] Erasmus Landvogt “Some functorial properties of the Bruhat-Tits building” In _J. Reine Angew. Math._ 518 De Gruyter, Berlin, 2000, pp. 213–241 DOI: 10.1515/crll.2000.006
* [Lan96] Erasmus Landvogt “A compactification of the Bruhat-Tits building” 1619, Lecture Notes in Mathematics Springer-Verlag, Berlin, 1996, pp. viii+152 DOI: 10.1007/BFb0094594
* [Lau13] Eike Lau “Smoothness of the truncated display functor.” In _J. Am. Math. Soc._ 26.1 American Mathematical Society (AMS), Providence, RI, 2013, pp. 129–165
* [Lau14] Eike Lau “Relations between Dieudonné displays and crystalline Dieudonné theory” In _Algebra Number Theory_ 8.9, 2014, pp. 2201–2262 DOI: 10.2140/ant.2014.8.2201
* [Mil05] J. S. Milne “Introduction to Shimura varieties.” In _Harmonic analysis, the trace formula, and Shimura varieties. Proceedings of the Clay Mathematics Institute 2003 summer school, Toronto, Canada, June 2–27, 2003_ Providence, RI: American Mathematical Society (AMS), 2005, pp. 265–378
* [Mor93] Lawrence Morris “Tamely ramified intertwining algebras” In _Invent. Math._ 114.1, 1993, pp. 1–54 DOI: 10.1007/BF01232662
* [MW04] Ben Moonen and Torsten Wedhorn “Discrete invariants of varieties in positive characteristic” In _Int. Math. Res. Not._ , 2004, pp. 3855–3903 DOI: 10.1155/S1073792804141263
* [Pin90] Richard Pink “Arithmetical compactification of mixed Shimura varieties” 209, Bonner Mathematische Schriften [Bonn Mathematical Publications] Universität Bonn, Mathematisches Institut, 1990
* [PRS13] Georgios Pappas, Michael Rapoport and Brian Smithling “Local models of Shimura varieties, I. Geometry and combinatorics” In _Handbook of moduli. Vol. III_ 26, Adv. Lect. Math. (ALM) Int. Press, Somerville, MA, 2013, pp. 135–217
* [PWZ11] Richard Pink, Torsten Wedhorn and Paul Ziegler “Algebraic zip data” In _Doc. Math._ 16 Deutsche Mathematiker-Vereinigung, Berlin, 2011, pp. 253–300
* [PWZ15] Richard Pink, Torsten Wedhorn and Paul Ziegler “$F$-zips with additional structure” In _Pac. J. Math._ 274.1 Mathematical Sciences Publishers (MSP), Berkeley, CA; Pacific Journal of Mathematics c/o University of California, Berkeley, CA, 2015, pp. 183–236 DOI: 10.2140/pjm.2015.274.183
* [PZ13] Georgios Pappas and Xinwen Zhu “Local models of Shimura varieties and a conjecture of Kottwitz” In _Invent. Math._ 194.1 Springer, Berlin/Heidelberg, 2013, pp. 147–254 DOI: 10.1007/s00222-012-0442-z
* [Rap05] Michael Rapoport “A guide to the reduction modulo $p$ of Shimura varieties.” In _Formes automorphes (I). Actes du Semestre du Centre Émile Borel, Paris, France, 17 février au 11 juillet 2000_ Paris: Société Mathématique de France, 2005, pp. 271–318
* [Rös18] Mirko Rösner “Parahoric restriction for $\mathrm{GSp}(4)$.” In _Algebr. Represent. Theory_ 21.1 Springer Netherlands, Dordrecht, 2018, pp. 145–161
* [RZ96] M. Rapoport and Th. Zink “Period spaces for $p$-divisible groups” 141, Annals of Mathematics Studies Princeton University Press, Princeton, NJ, 1996 DOI: 10.1515/9781400882601
* [Ser97] Jean-Pierre Serre “Galois cohomology” Translated from the French by Patrick Ion Springer-Verlag, Berlin, 1997, pp. x+210 DOI: 10.1007/978-3-642-59141-9
* [Stacks] The Stacks Project Authors “Stacks Project”, http://stacks.math.columbia.edu, 2020
* [SYZ19] Xu Shen, Chia-Fu Yu and Chao Zhang “EKOR strata for Shimura varieties with parahoric level structure” Preprint, 2019 arXiv:1910.07785v1 [math.AG]
* [Tit79] J. Tits “Reductive groups over local fields” In _Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1_, Proc. Sympos. Pure Math., XXXIII Amer. Math. Soc., Providence, R.I., 1979, pp. 29–69
* [VW13] Eva Viehmann and Torsten Wedhorn “Ekedahl-Oort and Newton strata for Shimura varieties of PEL type” In _Math. Ann._ 356.4, 2013, pp. 1493–1550 DOI: 10.1007/s00208-012-0892-z
* [Wor13] D. Wortmann “The $\mu$-ordinary locus for Shimura varieties of Hodge type” Preprint, 2013 arXiv:1310.6444v1 [math.AG]
* [Yu08] Chia-Fu Yu “Irreducibility and $p$-adic monodromies on the Siegel moduli spaces” In _Adv. Math._ 218.4 Elsevier (Academic Press), San Diego, CA, 2008, pp. 1253–1285
* [Zha15] C. Zhang “Stratifications and foliations for good reductions of Shimura varieties of Hodge type” Preprint, 2015 arXiv:1512.08102v1 [math.AG]
* [Zha18] Chao Zhang “Ekedahl-Oort strata for good reductions of Shimura varieties of Hodge type” In _Canad. J. Math._ 70.2, 2018, pp. 451–480 DOI: 10.4153/CJM-2017-020-5
* [Zin01] Thomas Zink “A Dieudonné theory for $p$-divisible groups.” In _Class field theory – its centenary and prospect. Proceedings of the 7th MSJ International Research Institute of the Mathematical Society of Japan, Tokyo, Japan, June 3–12, 1998_ Tokyo: Mathematical Society of Japan, 2001, pp. 139–160
* [Zin02] Thomas Zink “The display of a formal $p$-divisible group.” In _Cohomologies $p$-adiques et applications arithmétiques (I)_ Paris: Société Mathématique de France, 2002, pp. 127–248
2024-09-04T02:54:58.744700 | 2020-03-07T09:19:04 | 2003.04739 | {
"authors": "Gioele Zardini, Nicolas Lanzetti, Mauro Salazar, Andrea Censi, Emilio Frazzoli, Marco Pavone",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26136",
"submitter": "Gioele Zardini",
"url": "https://arxiv.org/abs/2003.04739"
} | arxiv-papers
# On the Co-Design of AV-Enabled Mobility Systems
Gioele Zardini1,3, Nicolas Lanzetti2,3, Mauro Salazar3,4, Andrea Censi1,
Emilio Frazzoli1, and Marco Pavone3
1Institute for Dynamic Systems and Control, ETH Zürich. 2Automatic Control Laboratory, ETH Zürich. 3Department of Aeronautics and Astronautics, Stanford University. 4Control Systems Technology Group, Eindhoven University of Technology. A preliminary version of this paper was presented at the 99th Annual Meeting of the Transportation Research Board [1]. This research was supported by the National Science Foundation under CAREER Award CMMI-1454737, the Toyota Research Institute (TRI), and ETH Zürich. This article solely reflects the opinions and conclusions of its authors and not NSF, TRI, or any other entity.
The design of autonomous vehicles (AVs) and the design of AV-enabled mobility
systems are closely coupled. Indeed, knowledge about the intended service of
AVs would impact their design and deployment process, whilst insights about
their technological development could significantly affect transportation
management decisions. This calls for tools to study such a coupling and co-
design AVs and AV-enabled mobility systems in terms of different objectives.
In this paper, we instantiate a framework to address such co-design problems.
In particular, we leverage the recently developed theory of co-design to frame
and solve the problem of designing and deploying an intermodal Autonomous
Mobility-on-Demand system, whereby AVs service travel demands jointly with
public transit, in terms of fleet sizing, vehicle autonomy, and public transit
service frequency. Our framework is modular and compositional, allowing one to
describe the design problem as the interconnection of its individual
components and to tackle it from a system-level perspective. To showcase our
methodology, we present a real-world case study for Washington D.C., USA. Our
work suggests that it is possible to create user-friendly optimization tools
to systematically assess costs and benefits of interventions, and that such
analytical techniques might gain a momentous role in policy-making in the
future.
## I Introduction
Arguably, the current design process for AVs largely suffers from the lack of
clear, specific requirements in terms of the service such vehicles will be
providing. Yet, knowledge about their intended service (e.g., last-mile versus
point-to-point travel) might dramatically impact how the AVs are designed,
and, critically, significantly ease their development process. For example, if
for a given city we knew that for an effective on-demand mobility system
autonomous cars only need to drive up to 25 mph and only on relatively easy
roads, their design would be greatly simplified and their deployment could
certainly be accelerated. At the same time, from the system-level perspective
of transportation management, knowledge about the trajectory of technology
development for AVs would certainly impact decisions on infrastructure
investments and provision of service. In other words, the design of the AVs
and the design of a mobility system leveraging AVs are intimately coupled.
This calls for methods to reason about such a coupling, and in particular to
_co-design_ the AVs and the associated AV-enabled mobility system. A key
requirement in this context is the ability to account for a range of
heterogeneous objectives that are often not directly comparable (consider, for
instance, travel time and emissions).
Accordingly, the goal of this paper is to lay the foundations for a framework
through which one can co-design future AV-enabled mobility systems.
Specifically, we show how one can leverage the recently developed mathematical
theory of co-design [2, 3, 4], which provides a general methodology to co-
design complex systems in a modular and compositional fashion. This tool
delivers the set of rational design solutions lying on the Pareto front,
allowing one to reason about costs and benefits of the individual design
options. The framework is instantiated in the setting of co-designing
intermodal AMoD systems [5], whereby fleets of self-driving vehicles provide
on-demand mobility jointly with public transit. Aspects subject to co-design
include fleet size, AV-specific characteristics, and public transit service
frequency.
### I-A Literature Review
Our work lies at the interface of the design of urban public transportation
services and the design of AMoD systems. The first research stream is reviewed
in [6, 7], and comprises _strategic_ long-term infrastructure modifications
and _operational_ short-term scheduling. The joint design of traffic network
topology and control infrastructure has been presented in [8]. Public
transportation scheduling has been solved jointly with the design of the
transit network in a passengers’ and operators’ cost-optimal fashion in [9],
using demand-driven approaches in [10], and in an energy-efficient way in
[11]. However, these works only focus on the public transit system and do not
consider its joint design with an AMoD system. The research on the design of
AMoD systems is reviewed in [12] and mainly pertains their fleet sizing. In
this regard, studies range from simulation-based approaches [13, 14, 15, 16]
to analytical methods [17]. In [18], the authors jointly design the fleet size
and the charging infrastructure, and formulate the arising design problem as a
mixed integer linear program. The authors of [19] solve the fleet sizing
problem together with the vehicle allocation problem. Finally, [20] co-designs
the AMoD fleet size and its composition. More recently, the joint design of
multimodal transit networks and AMoD systems was formulated in [21] as a
bilevel optimization problem and solved with heuristics. Overall, the problem-
specific structure of existing design methods for AMoD systems is not amenable
to a modular and compositional problem formulation. Moreover, previous work
does not capture important aspects of AV-enabled mobility systems, such as
other transportation modes and AV-specific design parameters (e.g., the level
of autonomy).
### I-B Statement of Contribution
In this paper we lay the foundations for the systematic study of the design of
AV-enabled mobility systems. Specifically, we leverage the mathematical theory
of co-design [2] to devise a framework to study the design of intermodal AMoD
(I-AMoD) systems in terms of fleet characteristics and public transit service,
enabling the computation of the _rational_ solutions lying on the Pareto front
of minimal travel time, transportation costs, and emissions. Our framework
allows one to structure the design problem in a modular way, in which each
different transportation option can be “plugged in” in a larger model. Each
model has minimal assumptions: Rather than properties such as linearity and
convexity, we ask for very general monotonicity assumptions. For example, we
assume that the cost of automation increases monotonically with the speed
achievable by the AV. We are able to obtain the full Pareto front of
_rational_ solutions, or, given policies, to weigh incomparable costs (such as
travel time and emissions) and to present actionable information to the
stakeholders of the mobility ecosystem. We showcase our methodology through a
real-world case study of Washington D.C., USA. We show how, given the model,
we can easily formulate and answer several questions regarding the
introduction of new technologies and investigate possible infrastructure
interventions.
### I-C Organization
The remainder of this paper is structured as follows: Section II reviews the
mathematical theory of co-design. Section III presents the co-design problem
for AV-enabled mobility systems. We showcase our approach with real-world case
studies for Washington D.C., USA, in Section IV. Section V concludes the paper
with a discussion and an overview on future research directions.
## II Background
This paper builds on the mathematical theory of co-design, presented in [2].
In this section, we review the main concepts needed for this work.
### II-A Orders
We will use basic facts from order theory, which we review in the following.
###### Definition II.1 (Poset).
A partially ordered set (poset) is a tuple
$\langle\mathcal{P},\preceq_{\mathcal{P}}\rangle$, where $\mathcal{P}$ is a
set and $\preceq_{\mathcal{P}}$ is a partial order, defined as a reflexive,
transitive, and antisymmetric relation.
Given a poset, we can formalize the idea of “Pareto front” through antichains.
###### Definition II.2 (Antichains).
A subset $S\subseteq\mathcal{P}$ is an antichain iff no elements are
comparable: For $x,y\in S$, $x\preceq y$ implies $x=y$. We denote by
$\textsf{A}\mathcal{P}$ the set of all antichains in $\mathcal{P}$.
###### Definition II.3 (Directed set).
A subset $S\subseteq\mathcal{P}$ is directed if each pair of elements in $S$
has an upper bound: For all $a,b\in S$, there exists a $c\in S$ such that
$a\preceq c$ and $b\preceq c$.
###### Definition II.4 (Completeness).
A poset is a complete partial order (CPO) if it has a least element and each
of its directed subsets has a supremum.
For instance, the poset $\langle\mathbb{R}_{+},\leq\rangle$, with
$\mathbb{R}_{+}\coloneqq\\{x\in\mathbb{R}\,|\,x\geq 0\\}$, is not complete, as
its directed subset $\mathbb{R}_{+}\subseteq\mathbb{R}_{+}$ does not have an
upper bound (and therefore a supremum). Nonetheless, we can make it complete
by artificially adding a top element $\top$, i.e., by defining
$\langle\overline{\mathbb{R}}_{+},\leq\rangle$ with
$\overline{\mathbb{R}}_{+}\coloneqq\mathbb{R}_{+}\cup\\{\top\\}$ and
$a\leq\top$ for all $a\in\mathbb{R}_{+}$. Similarly, we can complete
$\mathbb{N}$ to $\overline{\mathbb{N}}$.
In this setting, Scott-continuous maps will play a key role. Intuitively,
Scott-continuity can be understood as a stronger notion of monotonicity.
###### Definition II.5 (Scott continuity).
A map $f:\mathcal{P}\rightarrow\mathcal{Q}$ between two posets
$\langle\mathcal{P},\preceq_{\mathcal{P}}\rangle$ and
$\langle\mathcal{Q},\preceq_{\mathcal{Q}}\rangle$ is Scott-continuous iff for
each directed set $D\subseteq\mathcal{P}$ the image $f(D)$ is directed and
$\sup f(D)=f(\sup D)$.
### II-B Mathematical Theory of Co-Design
We start by presenting design problems with implementation (DPIs), which can
then be composed and interconnected to form a co-design problem with
implementation (CDPI).
###### Definition II.6 (DPI).
A DPI is a tuple
$\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$:
* •
$\mathcal{F}$ is a poset, called functionality space;
* •
$\mathcal{R}$ is a poset, called resource space;
* •
$\mathcal{I}$ is a set, called implementation space;
* •
the map $\textsf{{exe}}:\mathcal{I}\to\mathcal{F}$ maps an implementation to
the functionality it provides;
* •
the map $\textsf{{eva}}:\mathcal{I}\to\mathcal{R}$ maps an implementation to
the resources it requires.
Given a DPI we can define a map which, given a functionality
$\textsf{{f}}\in\mathcal{F}$, returns all the non-comparable resources (i.e.,
the antichain) which provide f.
###### Definition II.7 (Functionality to resources map).
Given a DPI
$\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$
define the map $h:\mathcal{F}\to\textsf{{A}}\mathcal{R}$ as
$h:\mathcal{F}\to\textsf{{A}}\mathcal{R},\qquad\textsf{{f}}\mapsto\min_{\preceq_{\mathcal{R}}}\\{\textsf{{eva}}(\textsf{{i}})\,|\,\textsf{{i}}\in\mathcal{I}\wedge\textsf{{f}}\preceq\textsf{{exe}}(\textsf{{i}})\\}.$ (1)
In particular, if a functionality is infeasible, then
$h(\textsf{{f}})=\emptyset$. We now turn our attention to “monotone” DPIs.
###### Definition II.8 (Monotone DPI).
We say a DPI
$\langle\mathcal{F},\mathcal{R},\mathcal{I},\textsf{{exe}},\textsf{{eva}}\rangle$
is monotone if:
1. 1.
The posets $\mathcal{F}$ and $\mathcal{R}$ are CPOs.
2. 2.
The map $h$ (see Definition II.7) is Scott-continuous.
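For intuition, a DPI over a finite implementation space, together with the map $h$ of Definition II.7, can be prototyped in a few lines. This is a toy sketch reusing `min_antichain` from above; all names and numbers are illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class DPI:
    """Design problem with implementation (Definition II.6) on finite
    data: exe gives the provided functionality, eva the required
    resources (a tuple, ordered componentwise)."""
    implementations: Iterable
    exe: Callable  # implementation -> functionality (a float here)
    eva: Callable  # implementation -> resources (a tuple of floats)

def h(dpi, f):
    """Definition II.7: antichain of minimal resources among
    implementations providing at least the functionality f."""
    feasible = [dpi.eva(i) for i in dpi.implementations
                if dpi.exe(i) >= f]
    return min_antichain(feasible)  # empty set: f is infeasible

# Toy AV design problem: two implementations with speed and costs.
avs = DPI(
    implementations=[("slow", 20.0, (30e3, 0.08)),
                     ("fast", 50.0, (180e3, 0.09))],
    exe=lambda i: i[1],  # achievable speed [mph]
    eva=lambda i: i[2],  # (fixed cost [USD], op. cost [USD/mile])
)
print(h(avs, 25.0))  # {(180000.0, 0.09)}: only the fast AV suffices
```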
Individual DPIs can be composed in series (i.e., the functionality of a DPI is
the resource of a second DPI) and in parallel (i.e., two DPIs share the same
resource or functionality) to obtain a CDPI. Notably, such compositions
preserve monotonicity and, thus, all related algorithmic properties. For
further details we refer to [2].
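For instance, the $h$-map of a series interconnection (the resources of the first DPI become the functionalities demanded of the second) admits the following sketch, valid for monotone DPIs:

```python
def h_series(h1, h2, f):
    """h-map of the series composition of two monotone DPIs: collect
    the minimal resources of the second DPI over every minimal
    intermediate resource of the first, then re-minimize."""
    candidates = set()
    for r1 in h1(f):          # minimal intermediate resources for f
        candidates |= h2(r1)  # minimal final resources providing r1
    return min_antichain(candidates)
```

Parallel composition admits a similar treatment.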
## III Co-Design of AV-enabled Mobility Systems
### III-A Intermodal AMoD Framework
#### III-A1 Multi-Commodity Flow Model
The transportation system and its different modes are modeled using the
digraph $\mathcal{G}=\left(\mathcal{V},\mathcal{A}\right)$, shown in Figure 1.
Figure 1: The I-AMoD network consists of a road, a walking, and a public
transportation digraph. The coloured circles represent stops or intersections
and the black arrows denote road links, pedestrian pathways, or public transit
arcs. Dashed lines connect nodes which are close geographically, while grey
arrows denote the mode-switching arcs connecting them (we thank Ms. Sonia
Monti for the illustration).
It is described through a set of nodes $\mathcal{V}$ and a set of arcs
$\mathcal{A}\subseteq\mathcal{V}\times\mathcal{V}$. Specifically, it contains
a road network layer
$\mathcal{G}_{\mathrm{R}}=\left(\mathcal{V}_{\mathrm{R}},\mathcal{A}_{\mathrm{R}}\right)$,
a public transportation layer
$\mathcal{G}_{\mathrm{P}}=\left(\mathcal{V}_{\mathrm{P}},\mathcal{A}_{\mathrm{P}}\right)$,
and a walking layer
$\mathcal{G}_{\mathrm{W}}=\left(\mathcal{V}_{\mathrm{W}},\mathcal{A}_{\mathrm{W}}\right)$.
The road network is characterized through intersections
$i\in\mathcal{V}_{\mathrm{R}}$ and road segments
$(i,j)\in\mathcal{A}_{\mathrm{R}}$. Similarly, public transportation lines are
modeled through station nodes $i\in\mathcal{V}_{\mathrm{P}}$ and line segments
$(i,j)\in\mathcal{A}_{\mathrm{P}}$. The walking network contains walkable
streets $(i,j)\in\mathcal{A}_{\mathrm{W}}$, connecting intersections
$i\in\mathcal{V}_{\mathrm{W}}$. Our model allows mode-switching arcs
$\mathcal{A}_{\mathrm{C}}\subseteq\mathcal{V}_{\mathrm{R}}\times\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{W}}\times\mathcal{V}_{\mathrm{R}}\cup\mathcal{V}_{\mathrm{P}}\times\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{W}}\times\mathcal{V}_{\mathrm{P}}$,
connecting the road and the public transportation layers to the walking layer.
Consequently,
$\mathcal{V}=\mathcal{V}_{\mathrm{W}}\cup\mathcal{V}_{\mathrm{R}}\cup\mathcal{V}_{\mathrm{P}}$
and
$\mathcal{A}=\mathcal{A}_{\mathrm{W}}\cup\mathcal{A}_{\mathrm{R}}\cup\mathcal{A}_{\mathrm{P}}\cup\mathcal{A}_{\mathrm{C}}$.
Consistently with the structural properties of road and walking networks in
urban environments, we assume the graph $\mathcal{G}$ to be strongly
connected. We represent customer movements by means of travel requests. A
travel request refers to a customer flow starting its trip at a node
$o\in\mathcal{V}$ and ending it at a node $d\in\mathcal{V}$.
###### Definition III.1 (Travel request).
A travel request $\rho$ is a triple
$(o,d,\alpha)\in\mathcal{V}\times\mathcal{V}\times\mathbb{R}_{+}$, described
by an origin node $o\in\mathcal{V}$, a destination node $d\in\mathcal{V}$, and
the request rate $\alpha>0$, namely, the number of customers who want to
travel from $o$ to $d$ per unit time.
To ensure that a customer is not forced to use a given transportation mode, we
assume all requests to lie on the walking digraph, i.e.,
$o_{m},d_{m}\in\mathcal{V}_{\mathrm{W}}$ for all
$m\in\mathcal{M}\coloneqq\\{1,\ldots,M\\}$.
The flow $f_{m}\left(i,j\right)\geq 0$ represents the number of customers per
unit time traversing arc $(i,j)\in\mathcal{A}$ and satisfying a travel request
$m$. Furthermore, $f_{0}\left(i,j\right)\geq 0$ denotes the flow of empty AVs
on road arcs $(i,j)\in\mathcal{A}_{\mathrm{R}}$. This accounts for rebalancing
flows of AVs between a customer’s drop-off and the next customer’s pick-up.
Assuming AVs to carry one customer at a time, the flows satisfy
$\displaystyle\sum_{i:(i,j)\in\mathcal{A}}f_{m}\left(i,j\right)+\mathds{1}_{j=o_{m}}\cdot\alpha_{m}=\sum_{k:(j,k)\in\mathcal{A}}f_{m}\left(j,k\right)+\mathds{1}_{j=d_{m}}\cdot\alpha_{m}$
$\displaystyle\hskip 136.5733pt\forall m\in\mathcal{M},\,j\in\mathcal{V}$ (2a)
$\displaystyle\sum_{i:(i,j)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(i,j\right)=\sum_{k:(j,k)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(j,k\right)\quad\forall
j\in\mathcal{V}_{\mathrm{R}},$ (2b)
where $\mathds{1}_{j=x}$ denotes the boolean indicator function and
$f_{\mathrm{tot}}\left(i,j\right)\coloneqq
f_{0}\left(i,j\right)+\sum_{m\in\mathcal{M}}f_{m}\left(i,j\right)$.
Specifically, (2a) guarantees flow conservation for every transportation
demand, and (2b) preserves flow conservation for AVs at every road node.
Combining conservation of customers (2a) with the conservation of AVs (2b)
guarantees rebalancing AVs to match the demand.
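For concreteness, the layered digraph of Figure 1 can be assembled by namespacing nodes per layer and adding the mode-switching arcs explicitly. A minimal sketch with networkx (toy topology; all travel times are illustrative and anticipate Section III-B):

```python
import networkx as nx

G = nx.DiGraph()
# Road layer: two intersections joined by a drivable segment.
G.add_edge(("R", 1), ("R", 2), t=40.0)                # seconds
# Walking layer: sidewalks are walkable in both directions.
G.add_edge(("W", 1), ("W", 2), t=300.0)
G.add_edge(("W", 2), ("W", 1), t=300.0)
# Public transit layer: one line segment between two stations.
G.add_edge(("P", 1), ("P", 2), t=120.0)
# Mode-switching arcs connect road/transit nodes to the walking layer.
G.add_edge(("W", 1), ("R", 1), t=300.0)               # AV pick-up wait
G.add_edge(("R", 2), ("W", 2), t=60.0)                # AV drop-off
G.add_edge(("W", 1), ("P", 1), t=60.0 + 0.5 * 360.0)  # station access
G.add_edge(("P", 2), ("W", 2), t=60.0)                # station egress
# Requests live on the walking layer, so no mode is ever forced.
print(nx.shortest_path(G, ("W", 1), ("W", 2), weight="t"))
```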
### III-B Travel Time and Travel Speed
The variable $t_{ij}$ denotes the time needed to traverse an arc $(i,j)$ of
length $s_{ij}$. We assume a constant walking speed on walking arcs and infer
travel times on public transportation arcs from the public transit schedules.
Assuming that the public transportation system at node $j$ operates with
frequency $\varphi_{j}$, switching from a pedestrian vertex
$i\in\mathcal{V}_{\mathrm{W}}$ to a public transit station
$j\in\mathcal{V}_{\mathrm{P}}$ takes, on average,
$t_{ij}=t_{\mathrm{WS}}+0.5\cdot
1/\varphi_{j}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{C}}\text{ with }i\in\mathcal{V}_{\mathrm{W}},\,j\in\mathcal{V}_{\mathrm{P}},$ (3)
where $t_{\mathrm{WS}}$ is a constant sidewalk-to-station travel time. We
assume that the average waiting time for AMoD vehicles is $t_{\mathrm{WR}}$,
and that switching from the road graph and the public transit graph to the
walking graph takes the transfer times $t_{\mathrm{RW}}$ and
$t_{\mathrm{SW}}$, respectively. While each road arc
$(i,j)\in\mathcal{A}_{\mathrm{R}}$ is characterized by a speed limit
$v_{\mathrm{L,V},ij}$, AV safety protocols impose a maximum achievable
velocity $v_{\mathrm{V,a}}$. To prevent overly slow and therefore
dangerous driving behaviours, we only consider road arcs through which the AVs
can drive at least at a fraction $\beta$ of the speed limit: Arc
$(i,j)\in\mathcal{A}_{\mathrm{R}}$ is kept in the road network iff
$v_{\mathrm{V,a}}\geq\beta\cdot v_{\mathrm{L,V},ij},$ (4)
where $\beta\in(0,1]$. We set the velocity of all arcs fulfilling condition
(4) to $v_{\mathrm{V},ij}=\min\\{v_{\mathrm{V,a}},v_{\mathrm{L,V},ij}\\}$ and
compute the travel time to traverse them as
$t_{ij}=s_{ij}/v_{\mathrm{V},ij}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$
(5)
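A direct transcription of the arc-filtering rule (4) and of the travel times (5) follows (a sketch; lengths, speed limits, and the 20 mph test speed are illustrative):

```python
def road_arc_times(arcs, v_av, beta=1 / 1.3):
    """Keep road arcs satisfying Eq. (4) and compute their travel
    times via Eq. (5). `arcs` maps (i, j) to (length s_ij [m],
    speed limit v_L [m/s]); returns (i, j) -> t_ij [s]."""
    times = {}
    for (i, j), (s_ij, v_lim) in arcs.items():
        if v_av >= beta * v_lim:         # Eq. (4): arc is kept
            v_ij = min(v_av, v_lim)      # realized free-flow speed
            times[(i, j)] = s_ij / v_ij  # Eq. (5)
    return times

# A 25 mph (11.2 m/s) arc survives for a 20 mph (8.9 m/s) AV,
# while a 50 mph (22.4 m/s) arc is dropped by Eq. (4).
arcs = {(1, 2): (500.0, 22.4), (2, 3): (400.0, 11.2)}
print(road_arc_times(arcs, v_av=8.9))  # {(2, 3): 44.94...}
```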
### III-C Road Congestion
We capture congestion effects with a threshold model. The total flow on each
road arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$, given by the sum of the AVs flow
$f_{\mathrm{tot}}\left(i,j\right)$ and the baseline usage $u_{ij}$ (e.g.,
private vehicles), must remain below the nominal capacity $c_{ij}$ of the arc:
$f_{\mathrm{tot}}\left(i,j\right)+u_{ij}\leq
c_{ij}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$ (6)
### III-D Energy Consumption
We compute the energy consumption of AVs for each road link considering an
urban driving cycle, scaled so that the average speed $v_{\mathrm{avg,cycle}}$
matches the free-flow speed on the link. The energy consumption is then scaled
as
$e_{ij}=e_{\mathrm{cycle}}\cdot
s_{ij}/s_{\mathrm{cycle}}\quad\forall(i,j)\in\mathcal{A}_{\mathrm{R}}.$ (7)
For the public transportation system, we assume a constant energy consumption
per unit time. This approximation is reasonable in urban environments, as the
operation of the public transportation system is independent of the number
of customers serviced, and its energy consumption is therefore customer-
invariant.
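The corresponding computation of Eq. (7) is a one-liner (cycle length and energy below are illustrative):

```python
def arc_energy(s_ij, e_cycle, s_cycle):
    """Eq. (7): scale the consumption e_cycle [kJ] of a driving cycle
    of length s_cycle [m] (speed-matched to the arc) to an arc of
    length s_ij [m]."""
    return e_cycle * s_ij / s_cycle

print(arc_energy(500.0, 5_000.0, 10_000.0))  # 250.0 kJ on a 500 m arc
```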
### III-E Fleet Size
We consider a fleet of $n_{\mathrm{V,max}}$ AVs. In a time-invariant setting,
the number of vehicles on arc $(i,j)\in\mathcal{A}_{\mathrm{R}}$ is expressed
as the product of the total vehicles flow on the arc and its travel time.
Therefore, we constrain the number of used AVs as
$n_{\mathrm{V,u}}=\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}f_{\mathrm{tot}}\left(i,j\right)\cdot
t_{ij}\leq n_{\mathrm{V,max}}.$ (8)
### III-F Discussion
A few comments are in order. First, we assume the demand to be time-invariant
and allow flows to have fractional values. This assumption is in line with the
mesoscopic and system-level planning perspective of our study. Second, we
model congestion effects using a threshold model. This approach can be
interpreted as a municipality preventing AVs from exceeding the critical flow
density on road arcs. AVs can therefore be assumed to travel at free-flow
speed [22]. This assumption is realistic for an initial low penetration of
AMoD systems in the transportation market, especially when the AV fleet is of
limited size. Finally, we allow AVs to transport one customer at a time
[23].
### III-G Co-Design Framework
We integrate the I-AMoD framework presented in Section III-A in the co-design
formalism, allowing one to decompose the CDPI of a complex system in the DPIs
of its individual components in a modular, compositional, and systematic
fashion. We aim at computing the antichain of resources, quantified in terms
of costs, average travel time per trip, and emissions required to provide the
mobility service to a set of customers. In order to achieve this, we decompose
the CDPI in the DPIs of the individual AVs (Section III-G1), of the AV fleet
(Section III-G3), and of the public transportation system (Section III-G2).
The interconnection of the presented DPIs is presented in Section III-G4.
#### III-G1 The Autonomous Vehicle Design Problem
The AV DPI consists of selecting the maximal speed of the AVs. Under the
rationale that driving safely at higher speed requires more advanced sensing
and algorithmic capabilities, we model the achievable speed of the AVs
$v_{\mathrm{V,a}}$ as a monotone function of the vehicle fixed costs
$C_{\mathrm{V,f}}$ (resulting from the cost of the vehicle $C_{\mathrm{V,v}}$
and the cost of its automation $C_{\mathrm{V,a}}$) and of the mileage-
dependent operational costs $C_{\mathrm{V,o}}$ (accounting for maintenance,
cleaning, energy consumption, depreciation, and opportunity costs [24]). In
this setting, the AV DPI provides the functionality $v_{\mathrm{V,a}}$ and
requires the resources $C_{\mathrm{V,f}}$ and $C_{\mathrm{V,o}}$.
Consequently, the functionality space is
$\mathcal{F}_{\mathrm{V}}=\overline{\mathbb{R}}_{+}$, and the resources space
is
$\mathcal{R}_{\mathrm{V}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.
#### III-G2 The Subway Design Problem
We design the public transit infrastructure by means of the service frequency
introduced in Section III-B. Specifically, we assume that the service
frequency $\varphi_{j}$ scales linearly with the size of the train fleet
$n_{\mathrm{S}}$ as
$\varphi_{j}/\varphi_{j,\mathrm{base}}=n_{\mathrm{S}}/n_{\mathrm{S,base}}.$
(9)
We relate a train fleet of size $n_{\mathrm{S}}$ to the fixed costs
$C_{\mathrm{S,f}}$ (accounting for train and infrastructural costs) and to the
operational costs $C_{\mathrm{S,o}}$ (accounting for energy consumption,
vehicles depreciation, and train operators’ wages). Given the passenger-
independent public transit operation in today’s cities, we reasonably assume
the operational costs $C_{\mathrm{S,o}}$ to be mileage independent and to only
vary with the size of the fleet. Formally, the number of acquired trains
$n_{\mathrm{S,a}}=n_{\mathrm{S}}-n_{\mathrm{S,base}}$ is a functionality,
whereas $C_{\mathrm{S,f}}$ and $C_{\mathrm{S,o}}$ are resources. The
functionality space is $\mathcal{F}_{\mathrm{S}}=\overline{\mathbb{N}}$ and
the resources space is
$\mathcal{R}_{\mathrm{S}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.
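A sketch of this DPI with the case-study values of Table I below (we additionally assume the operational cost to scale exactly linearly with the fleet size, which is consistent with the figures reported there):

```python
def subway_dpi(n_s, n_s_base=112, c_train=14.5e6, life_yr=30,
               c_op_base=148e6):
    """Subway DPI sketch: a fleet of n_s trains provides the service
    frequency scaling of Eq. (9) (functionality) and requires yearly
    fixed and operational costs (resources), cf. Eq. (16)."""
    freq_scaling = n_s / n_s_base             # Eq. (9)
    n_acquired = n_s - n_s_base               # acquired trains
    c_fixed = c_train * n_acquired / life_yr  # amortized, per year
    c_op = c_op_base * freq_scaling           # assumed linear
    return freq_scaling, n_acquired, c_fixed, c_op

print(subway_dpi(149))  # ~33 % more frequency from 37 acquired trains
```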
#### III-G3 The I-AMoD Framework Design Problem
The I-AMoD DPI considers demand satisfaction as a functionality. Formally,
$\mathcal{F}_{\mathrm{O}}=2^{\mathcal{V}\times\mathcal{V}\times\overline{\mathbb{R}}_{+}}$
with the partial order $\preceq_{\mathcal{F}_{\mathrm{O}}}$ defined by
$\mathcal{D}_{1}\coloneqq\\{(o^{1}_{i},d^{1}_{i},\alpha^{1}_{i})\\}_{i=1}^{M_{1}}\preceq_{\mathcal{F}_{\mathrm{O}}}\\{(o^{2}_{i},d^{2}_{i},\alpha^{2}_{i})\\}_{i=1}^{M_{2}}\eqqcolon\mathcal{D}_{2}$
iff for all $(o^{1},d^{1},\alpha^{1})\in\mathcal{D}_{1}$ there is some
$(o^{2},d^{2},\alpha^{2})\in\mathcal{D}_{2}$ with $o^{1}=o^{2}$,
$d^{1}=d^{2}$, and $\alpha^{2}_{i}\geq\alpha^{1}_{i}$. In other words,
$\mathcal{D}_{1}\preceq_{\mathcal{F}_{\mathrm{O}}}\mathcal{D}_{2}$ if every
travel request in $\mathcal{D}_{1}$ is in $\mathcal{D}_{2}$ too. To
successfully satisfy a given set of travel requests, we require the following
resources: (i) the achievable speed of the AVs $v_{\mathrm{V,a}}$, (ii) the
number of available AVs per fleet $n_{\mathrm{V,max}}$, (iii) the number of
trains $n_{\mathrm{S,a}}$ acquired by the public transportation system, and
(iv) the average travel time of a trip
$t_{\mathrm{avg}}\coloneqq\frac{1}{\alpha_{\mathrm{tot}}}\cdot\sum_{m\in\mathcal{M},(i,j)\in\mathcal{A}}t_{ij}\cdot
f_{m}\left(i,j\right),$ (10)
with $\alpha_{\mathrm{tot}}\coloneqq\sum_{m\in\mathcal{M}}\alpha_{m}$, (v) the
total distance driven by the AVs per unit time
$s_{\mathrm{V,tot}}\coloneqq\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}s_{ij}\cdot
f_{\mathrm{tot}}\left(i,j\right),$ (11)
(vi) the total AVs CO2 emissions per unit time
$m_{\mathrm{CO_{2},V,tot}}\coloneqq\gamma\cdot\sum_{(i,j)\in\mathcal{A}_{\mathrm{R}}}e_{ij}\cdot
f_{\mathrm{tot}}\left(i,j\right),$ (12)
where $\gamma$ relates the energy consumption and the CO2 emissions. We assume
that customers’ trips and AMoD rebalancing strategies are chosen to maximize
customers’ welfare, defined through the average travel time
$t_{\mathrm{avg}}$. Hence, we link the functionality and resources of the
I-AMoD DPI through the following optimization problem:
$\begin{split}\min_{\begin{subarray}{c}f_{m}\left(\cdot,\cdot\right)\geq 0,\\ f_{0}\left(\cdot,\cdot\right)\geq 0\end{subarray}}t_{\mathrm{avg}}&=\frac{1}{\alpha_{\mathrm{tot}}}\sum_{m\in\mathcal{M},(i,j)\in\mathcal{A}}t_{ij}\cdot f_{m}\left(i,j\right)\\ &\mathrm{s.t.}\ \text{Eqs.~(2), (6), and (8)},\end{split}$ (13)
Formally, $\mathcal{F}_{\mathrm{O}}=2^{\mathcal{V}\times\mathcal{V}\times\overline{\mathbb{R}}_{+}}$ as above, and
$\mathcal{R}_{\mathrm{O}}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{N}}\times\overline{\mathbb{N}}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.
###### Remark.
In general, the optimization problem (13) might possess multiple optimal
solutions, making the relation between resources and functionality ill-posed.
To overcome this subtlety, if two solutions share the same average travel
time, we select the one incurring the lowest mileage.
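Once the network and the demand are fixed, the inner problem (13) is a linear program. A toy, self-contained sketch of its structure (flow conservation (2), the congestion threshold (6), and the fleet-size bound (8)) with cvxpy, on a two-node network where one request can either walk or ride; all arc names and numbers are hypothetical:

```python
import cvxpy as cp

# Toy instance of Eq. (13): one request (rate 1 customer/s) from W_o
# to W_d, which can either walk directly or ride an AV from R_o to R_d.
t = {"Wo>Wd": 600.0, "Wo>Ro": 300.0, "Ro>Rd": 60.0,
     "Rd>Wd": 60.0, "Rd>Ro": 60.0}           # arc travel times [s]
road = ["Ro>Rd", "Rd>Ro"]                    # "Rd>Ro" allows rebalancing
alpha, n_max, cap, base = 1.0, 150.0, 10.0, 9.0

f = {a: cp.Variable(nonneg=True) for a in t}      # customer flows
f0 = {a: cp.Variable(nonneg=True) for a in road}  # empty-AV flows
ftot = {a: f[a] + f0[a] for a in road}

cons = [
    f["Wo>Wd"] + f["Wo>Ro"] == alpha,       # Eq. (2a) at W_o (source)
    f["Wo>Ro"] + f["Rd>Ro"] == f["Ro>Rd"],  # Eq. (2a) at R_o
    f["Ro>Rd"] == f["Rd>Wd"] + f["Rd>Ro"],  # Eq. (2a) at R_d
    ftot["Rd>Ro"] == ftot["Ro>Rd"],         # Eq. (2b): AV conservation
]                                           # (balance at W_d is implied)
cons += [ftot[a] + base <= cap for a in road]         # Eq. (6)
cons += [sum(ftot[a] * t[a] for a in road) <= n_max]  # Eq. (8)

t_avg = sum(t[a] * f[a] for a in t) / alpha
cp.Problem(cp.Minimize(t_avg), cons).solve()
print(round(t_avg.value))  # 420: riding (300+60+60 s) beats walking
```

Here one empty AV per second flows back over the rebalancing arc to match the demand; the mileage-based tie-breaking of the Remark could be added as a secondary objective and is omitted for brevity.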
#### III-G4 The Monotone Co-Design Problem
The functionality of the system is to provide mobility service to the
customers. Formally, the functionality provided by the CDPI is the set of
travel requests. To provide the mobility service, the following three
resources are required. First, on the customers’ side, we require an average
travel time, defined in (10). Second, on the municipality side, the resource
is the total transportation cost of the intermodal mobility system. Assuming
an average vehicles’ life $l_{\mathrm{V}}$, an average trains’ life
$l_{\mathrm{S}}$, and a baseline subway fleet of $n_{\mathrm{S,base}}$ trains,
we express the total costs as
$C_{\mathrm{tot}}=C_{\mathrm{V}}+C_{\mathrm{S}},$ (14)
where $C_{\mathrm{V}}$ is the AV-related cost
$C_{\mathrm{V}}=\frac{C_{\mathrm{V,f}}}{l_{\mathrm{V}}}\cdot
n_{\mathrm{V,max}}+C_{\mathrm{V,o}}\cdot s_{\mathrm{V,tot}},$ (15)
and $C_{\mathrm{S}}$ is the public transit-related cost
$C_{\mathrm{S}}=\frac{C_{\mathrm{S,f}}}{l_{\mathrm{S}}}\cdot
n_{\mathrm{S,a}}+C_{\mathrm{S,o}}.$ (16)
Third, on the environmental side, the resources are the total CO2 emissions
$m_{\mathrm{CO_{2},tot}}=m_{\mathrm{CO_{2},V,tot}}+m_{\mathrm{CO_{2},S}}\cdot
n_{\mathrm{S}},$ (17)
where $m_{\mathrm{CO_{2},S}}$ represents the CO2 emissions of a single train.
Formally, the set of travel requests $\\{\rho_{m}\\}_{m\in\mathcal{M}}$ is the
CDPI functionality, whereas $t_{\mathrm{avg}}$, $C_{\mathrm{tot}}$, and
$m_{\mathrm{CO_{2},tot}}$ are its resources. Consistently, the functionality
space is $\mathcal{F}=2^{\mathcal{V}\times\mathcal{V}\times\overline{\mathbb{R}}_{+}}$ and the resources space is
$\mathcal{R}=\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}$.
Note that the resulting CDPI (Figure 2) is indeed monotone, since it consists
of the interconnection of monotone DPIs [2].
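To make the interconnection concrete, the following sketch evaluates Eqs. (14)-(17) for candidate designs and prunes them with the `min_antichain` filter of Section II. The Table I cost figures are converted to monthly units (vehicle and train lives of 60 and 360 months, $C_{\mathrm{S,o}}\approx 148\cdot 10^{6}/12$, $m_{\mathrm{CO_{2},S}}\approx 140{,}000/12$), while `t_avg`, `s_V_tot`, and `m_CO2_V` are illustrative placeholders for the outputs of (13):

```python
def cdpi_resources(d):
    """Map one candidate design to the CDPI resources
    (C_tot [USD/month], t_avg [min], m_CO2_tot [kg/month]),
    following Eqs. (14)-(17)."""
    c_v = d["C_Vf"] / d["l_V"] * d["n_V"] + d["C_Vo"] * d["s_V_tot"]
    c_s = d["C_Sf"] / d["l_S"] * d["n_S_a"] + d["C_So"]
    m = d["m_CO2_V"] + d["m_CO2_S"] * d["n_S"]
    return (c_v + c_s, d["t_avg"], m)

# Monthly-unit parameters; C_Vf = 32,000 + 15,000 USD (case C1).
base = dict(C_Vf=47e3, l_V=60, C_Vo=0.084, C_Sf=14.5e6, l_S=360,
            C_So=12.3e6, m_CO2_S=11.7e3, n_S=112)
candidates = [  # t_avg, s_V_tot, m_CO2_V: illustrative placeholders
    dict(base, n_V=0, s_V_tot=0.0, n_S_a=0, m_CO2_V=0.0, t_avg=24.2),
    dict(base, n_V=4000, s_V_tot=2.1e7, n_S_a=0, m_CO2_V=5.0e6,
         t_avg=18.7),
]
print(min_antichain({cdpi_resources(d) for d in candidates}))
```

Both candidates survive the filter: one is cheaper and cleaner, the other faster, so they are rational and non-comparable.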
#### III-G5 Discussion
A few comments are in order. First, we lump the autonomy functionalities into
the achievable velocity. We leave more elaborate AV models, accounting for
instance for accident rates [25] and safety levels, to future research.
Second, we assume the service frequency of the subway system to scale
linearly with the number of trains. We inherently rely on the assumption that
the existing infrastructure can homogeneously accommodate the acquired train
cars. To justify the assumption, we include an upper bound on the number of
potentially acquirable trains in our case study design in Section IV. Third,
we highlight that the I-AMoD framework is only one of the many feasible ways
to map total demand to travel time, costs, and emissions. Specifically,
practitioners can easily replace the corresponding DPI with more sophisticated
models (e.g., simulation-based frameworks like AMoDeus [26]), as long as the
monotonicity of the DPI is preserved. In our setting, we conjecture the
customers’ and vehicles’ routes to be centrally controlled by the municipality
in a socially-optimal fashion. Implicitly, we rely on the existence of
effective incentives aligning private and societal interests. The study of
such incentives represents an avenue for future research. Fourth, we assume a
homogeneous fleet of AVs. Nevertheless, our model is readily extendable to
capture heterogeneous fleets. Finally, we consider a fixed travel demand, and
compute the antichain of resources providing it. Nonetheless, our
formalization can be readily extended to arbitrary demand models preserving
the monotonicity of the CDPI (accounting for instance for elastic effects). We
leave this topic to future research.
[Figure 2 diagram: interconnection of the I-AMoD, Vehicle, and Subway design problems, linking the travel demand (total request rate $\alpha_{\mathrm{tot}}$) and the functionalities $v_{\mathrm{V,a}}$ and $n_{\mathrm{S,a}}$ with the resources $C_{\mathrm{tot}}$ (total cost), $t_{\mathrm{avg}}$ (average travel time), and $m_{\mathrm{CO_{2},tot}}$ (total emissions) through co-design constraints.]
Figure 2: Schematic representation of the CDPI. In solid green the provided
functionalities and in dashed red the required resources. The edges represent
co-design constraints: The resources required by a first design problem are
the lower bound for the functionalities provided by the second one.
## IV Results
In this section, we leverage the framework presented in Section III to perform
a real-world case study of Washington D.C., USA. Section IV-A details the case
study. We then present numerical results in Sections IV-B and IV-C.
### IV-A Case Study
We base our studies on a real-world case of the urban area of Washington D.C.,
USA. We import the road network and its features from OpenStreetMap [27]. The
public transit network and its schedules are extracted from the GTFS data
[28]. Demand data is obtained by merging the origin-destination pairs of the
morning peak of May 31st 2017 provided by taxi companies [29] and the
Washington Metropolitan Area Transit Authority (WMATA) [23]. Given the lack of
reliable demand data for the MetroBus system, we focus our studies on the
MetroRail system and its design, inherently assuming MetroBus commuters to be
unaffected by our design methodology. To conform with the large presence of
ride-hailing companies, we scale the taxi demand rate by a factor of 5 [30].
Overall, the demand dataset includes 15,872 travel requests, corresponding to
a demand rate of 24.22 $\nicefrac{{requests}}{{s}}$. To account for congestion
effects, we compute the nominal road capacity as in [31] and assume an average
baseline road usage of 93%, in line with [32]. We summarize the main
parameters together with their bibliographic sources in Table I. In the
remainder of this section, we tailor and solve the co-design problem presented
in Section III through the PyMCDP solver [33], and investigate the influence
of different AVs costs on the design objectives and strategies.
| | Parameter | Symbol | Value (C1 / C2.1 / C2.2) | Units | Source
---|---|---|---|---|---
| Road usage | $u_{ij}$ | 93 | % | [32]
Vehicle | Operational cost | $C_{\mathrm{V,o}}$ | 0.084 / 0.084 / 0.062 | $\nicefrac{{USD}}{{mile}}$ | [34, 35]
Vehicle | Cost | $C_{\mathrm{V}}$ | 32,000 / 32,000 / 26,000 | $\nicefrac{{USD}}{{car}}$ | [34]
Vehicle | Automation cost, 20 mph | $C_{\mathrm{V,a}}$ | 15,000 / 20,000 / 3,700 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 25 mph | $C_{\mathrm{V,a}}$ | 15,000 / 30,000 / 4,400 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 30 mph | $C_{\mathrm{V,a}}$ | 15,000 / 55,000 / 6,200 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 35 mph | $C_{\mathrm{V,a}}$ | 15,000 / 90,000 / 8,700 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 40 mph | $C_{\mathrm{V,a}}$ | 15,000 / 115,000 / 9,800 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 45 mph | $C_{\mathrm{V,a}}$ | 15,000 / 130,000 / 12,000 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Automation cost, 50 mph | $C_{\mathrm{V,a}}$ | 15,000 / 150,000 / 13,000 | $\nicefrac{{USD}}{{car}}$ | [35, 36, 37, 38, 39]
Vehicle | Vehicle life | $l_{\mathrm{V}}$ | 5 | years | [34]
Vehicle | CO2 per Joule | $\gamma$ | 0.14 | $\nicefrac{{g}}{{kJ}}$ | [40]
Vehicle | Time $\mathcal{G}_{\mathrm{W}}$ to $\mathcal{G}_{\mathrm{R}}$ | $t_{\mathrm{WR}}$ | 300 | s | -
Vehicle | Time $\mathcal{G}_{\mathrm{R}}$ to $\mathcal{G}_{\mathrm{W}}$ | $t_{\mathrm{RW}}$ | 60 | s | -
| Speed fraction | $\beta$ | $\frac{1}{1.3}$ | - | -
Public transit | Operational cost, 100 % service | $C_{\mathrm{S,o}}$ | 148,000,000 | $\nicefrac{{USD}}{{year}}$ | [41]
Public transit | Operational cost, 133 % service | $C_{\mathrm{S,o}}$ | 197,000,000 | $\nicefrac{{USD}}{{year}}$ | [41]
Public transit | Operational cost, 200 % service | $C_{\mathrm{S,o}}$ | 295,000,000 | $\nicefrac{{USD}}{{year}}$ | [41]
Public transit | Fixed cost | $C_{\mathrm{S,f}}$ | 14,500,000 | $\nicefrac{{USD}}{{train}}$ | [42]
Public transit | Train life | $l_{\mathrm{S}}$ | 30 | years | [42]
Public transit | Emissions per train | $m_{\mathrm{CO_{2},S}}$ | 140,000 | $\nicefrac{{kg}}{{year}}$ | [43]
Public transit | Fleet baseline | $n_{\mathrm{S,base}}$ | 112 | trains | [42]
Public transit | Service frequency | $\varphi_{j,\mathrm{base}}$ | $1/6$ | $\nicefrac{{1}}{{min}}$ | [44]
Public transit | Time $\mathcal{G}_{\mathrm{W}}$ to $\mathcal{G}_{\mathrm{P}}$ | $t_{\mathrm{WS}}$ | $60$ | s | -
Public transit | Time $\mathcal{G}_{\mathrm{P}}$ to $\mathcal{G}_{\mathrm{W}}$ | $t_{\mathrm{SW}}$ | $60$ | s | -
TABLE I: Parameters, variables, numbers, and units for the case studies.
### IV-B Case 1 - Constant Cost of Automation
In line with [35, 36, 37, 38, 39], we first assume an average cost of
automation that is independent of the achievable velocity. As discussed in Section III, we
design the system by means of subway service frequency, AV fleet size, and
achievable free-flow speed. Specifically, we allow the municipality to (i)
increase the subway service frequency $\varphi_{j}$ by 0%, 33%, or
100%, (ii) deploy an AV fleet of size
$n_{\mathrm{V,max}}\in\\{0,500,1000,\ldots,6000\\}$ vehicles, and (iii) design
the single AV achievable velocity
$v_{\mathrm{V,a}}\in\\{20\,\mathrm{mph},25\,\mathrm{mph},\ldots,50\,\mathrm{mph}\\}$.
We assume the AVs fleet to be composed of battery electric BEV-250 mile AVs
[34]. In Figure 3(a), we show the solution of the co-design problem by
reporting the antichain consisting of the total transportation cost, average
travel time, and total CO2 emissions. These solutions are _rational_ (and not
comparable) in the sense that there exists no instance which simultaneously
yields lower cost, average travel time, and emissions.
(a) Left: Three-dimensional representation of antichain elements and their
projection in the cost-time space. Right: Two-dimensional projections.
(b) Results for constant automation costs. On the left, the two-dimensional
representation of the antichain elements: In red are the unfeasible
strategies, in orange the feasible but irrational solutions, and in green the
Pareto front. On the right, the implementations corresponding to the
highlighted antichain elements, quantified in terms of achievable vehicle
speed, AV fleet size, and train fleet size.
Figure 3: Solution of the CDPI: state-of-the-art case.
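As an illustrative sketch (our own, not part of the PyMCDP toolchain [33]), the rationality notion behind the antichain can be expressed as a plain dominance filter over evaluated designs, each scored by the triple (cost, travel time, emissions):

```python
# Hypothetical helper, not from the paper's codebase: keep only the
# non-dominated ("rational") designs, minimizing every objective.

def is_dominated(a, b):
    """True if b is at least as good as a everywhere and strictly better once."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

def antichain(designs):
    return [d for d in designs if not any(is_dominated(d, o) for o in designs)]

# Toy scores: (cost [MilUSD/month], avg. time [min], emissions [arbitrary]).
designs = [(43.0, 17.1, 3.0), (12.9, 24.3, 1.0), (23.0, 18.6, 2.0), (45.0, 18.0, 3.5)]
print(antichain(designs))  # the last design is dominated by the first and dropped
```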
For the sake of clarity, we opt for a two-dimensional antichain
representation, by translating and including the emissions in the total cost.
To do so, we consider the conversion factor 40 $\nicefrac{{USD}}{{kg}}$ [45].
Note that since this transformation preserves the monotonicity of the CDPI it
smoothly integrates in our framework. Doing so, we can conveniently depict the
co-design strategies through the two-dimensional antichain (Figure 3(b), left)
and the corresponding municipality actions (Figure 3(b), right). Generally, as
the municipality budget increases, the average travel time per trip required
to satisfy the given demand decreases, reaching a minimum of about 17.1 min
with an expense of around 43 $\nicefrac{{MilUSD}}{{month}}$. This
configuration corresponds to a fleet of 5,500 AVs able to drive at 50 mph and
to the doubling of the current MetroRail train fleet. On the other hand, the
smallest rational investment of 12.9 $\nicefrac{{MilUSD}}{{month}}$ leads to a
42 % higher average travel time, corresponding to a nonexistent autonomous
fleet and an unchanged subway infrastructure. Notably, an expense of 23
$\nicefrac{{MilUSD}}{{month}}$ (48 % lower than the highest rational
investment) only increases the minimal required travel time by 9 %, requiring
a fleet of 4,000 vehicles able to drive at 35 mph and no acquisition of
trains. Conversely, an investment of 15.6 $\nicefrac{{MilUSD}}{{month}}$ (just
2 $\nicefrac{{MilUSD}}{{month}}$ more than the minimal rational investment)
provides a 3 min shorter travel time. Remarkably, the design of AVs able to
exceed 40 mph improves the average travel time by only 6 %, and becomes rational
only above an expense of 22.8 $\nicefrac{{MilUSD}}{{month}}$. This
suggests that the design of faster vehicles mainly results in higher emission
rates and costs, without substantially contributing to a more time-efficient
demand satisfaction. Finally, it is rational to improve the subway system only
starting from a budget of 28.5 $\nicefrac{{MilUSD}}{{month}}$, leading to a
travel time improvement of just 4 %. This trend can be explained with the high
train acquisition and increased operation costs, related to the subway
reinforcement. We expect this phenomenon to be more marked for other cities,
considering the moderate operation costs of the MetroRail subway system due to
its automation [44] and related benefits [46].
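For concreteness, the monotone scalarization used above can be sketched in a few lines (the 40 $\nicefrac{{USD}}{{kg}}$ factor is from [45]; the function itself is our illustration, not the paper's code):

```python
# Fold emissions into the monetary objective at 40 USD/kg [45]. The map is
# monotone in both arguments, so it preserves which designs dominate which,
# and hence the antichain structure of the CDPI.
CO2_PRICE_USD_PER_KG = 40.0

def scalarized_cost(cost_usd, co2_kg):
    return cost_usd + CO2_PRICE_USD_PER_KG * co2_kg
```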
### IV-C Case 2 - Speed-Dependent Automation Costs
To relax the potentially unrealistic assumption of a velocity-independent
automation cost, we consider a performance-dependent cost structure. The large
variance in sensing technologies and their reported performances [47] suggests
that this rationale is reasonable. Indeed, the technology required today to
safely operate an autonomous vehicle at 50 mph is substantially more
sophisticated, and therefore more expensive, than the one needed at 20 mph. To
this end, we adopt the cost structure reported in Table I. Furthermore, the
frenetic evolution of automation techniques complicates their monetary
quantification. Therefore, we perform our studies with current (2020) costs as
well as with their projections for the upcoming decade (2025) [48, 34].
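A minimal sketch of the resulting cost model (our transcription of Table I, under the assumption that column C2.1 holds the 2020 costs and C2.2 the 2025 projections):

```python
# Speed-dependent automation cost in USD/car, transcribed from Table I.
AUTOMATION_COST_USD = {
    # mph: (2020 cost, 2025 projection)
    20: (20_000, 3_700),
    25: (30_000, 4_400),
    30: (55_000, 6_200),
    35: (90_000, 8_700),
    40: (115_000, 9_800),
    45: (130_000, 12_000),
    50: (150_000, 13_000),
}

def automation_cost(speed_mph, year=2020):
    cost_2020, cost_2025 = AUTOMATION_COST_USD[speed_mph]
    return cost_2020 if year == 2020 else cost_2025
```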
#### IV-C1 Case 2.1 - 2020
We study the hypothetical case of an immediate AV fleet deployment. We
introduce the aforementioned velocity-dependent automation cost structure and
obtain the results reported in Figure 4(a). Comparing these results with the
state-of-the-art parameters presented in Figure 3 confirms the previously
observed trend concerning high vehicle speeds. Indeed, spending 24.9
$\nicefrac{{MilUSD}}{{month}}$ (55 % lower than the highest rational expense)
only increases the average travel time by 10 %, requiring a fleet of 3,000 AVs
at 40 mph and no subway interventions. Nevertheless, the comparison shows two
substantial differences. First, the budget required to reach the minimum
travel time of 17.1 min is 28 % higher than in the previous case, and
corresponds to the same strategy for the municipality, i.e., doubling the train
fleet and having a fleet of 5,500 AVs at 50 mph. Second, the higher vehicle
costs result in an average AV fleet growth of 5 %, an average velocity
reduction of 9 %, and an average train fleet growth of 7 %. This trend
suggests that, compared to Case 1, rational design strategies foster larger
fleets of lower-performing AVs.
(a) Results for speed-dependent automation costs in 2020.
(b) Results for speed-dependent automation costs in 2025.
Figure 4: Results for the speed-dependent automation costs. On the left, the
two-dimensional representation of the antichain elements: In red are the
unfeasible strategies, in orange the feasible but irrational solutions, and in
green the Pareto front. On the right, the implementations corresponding to the
highlighted antichain elements.
#### IV-C2 Case 2.2 - 2025
Experts forecast a substantial decrease of automation costs (up to 90 %) in
the next decade, mainly due to mass-production of the AVs sensing technology
[48, 49]. In line with this prediction, we inspect the futuristic scenario by
solving the CDPI for the adapted automation costs, and report the results in
Figure 4(b). Two comments are in order. First, the maximal rational budget is
25 % lower than in the immediate adoption case. Second, the reduction in
autonomy costs clearly eases the acquisition of higher-performing AVs,
increasing the average vehicle speed by 10 %. As a direct consequence, the AV
and train fleets are reduced in size by 5 % and 10 %, respectively.
### IV-D Discussion
We conclude the analysis of our case study with two final comments. First, the
presented case studies illustrate the ability of our framework to extract the
set of rational design strategies for an AV-enabled mobility system. This way,
stakeholders such as AVs companies, transportation authorities, and policy
makers can get transparent and interpretable insights on the impact of future
interventions. Second, we perform a sensitivity analysis through the variation
of the autonomy cost structures. On the one hand, this reveals a clear
transition from small fleets of fast AVs (in the case of low autonomy costs)
to large fleets of slower AVs (in the case of high autonomy costs). On the
other hand, our studies highlight that investments in the public transit
infrastructure are rational only when large budgets are available. Indeed, the
onerous train acquisition and operation costs lead to a comparative advantage
of AV-based mobility.
## V Conclusion
In this paper, we leveraged the mathematical theory of co-design to propose a
design framework for AV-enabled mobility systems. Specifically, the nature of
our framework allows both for the modular and compositional interconnection of
the DPIs of different mobility options and for multiple objectives. Starting
from the multi-commodity flow model of an I-AMoD system, we optimize the
design of AVs and public transit both from a vehicle-centric and fleet-level
perspective. In particular, we studied the problem of deploying a fleet of AVs
providing on-demand mobility in cooperation with public transit, optimizing
the speed achievable by the vehicles, the fleet size, and the service
frequency of the subway lines. Our framework allows the stakeholders involved
in the mobility ecosystem, from vehicle developers all the way to mobility-as-
a-service companies and governmental authorities, to characterize rational
trajectories for technology and investment development. We showcased our
methodology on a real-world case study of Washington D.C., USA. Notably, our
problem formulation allows for a systematic analysis of incomparable
objectives, providing stakeholders with analytical insights for the socio-
technical design of AV-enabled mobility systems. This work opens the field for
the following future research streams:
_Modeling:_ First, we would like to extend the presented framework to capture
additional modes of transportation, such as micromobility, and heterogeneous
fleets with different self-driving infrastructures, propulsion systems, and
passenger capacity. Second, we would like to investigate variable demand
models. Finally, we would like to analyze the interactions between multiple
stakeholders, characterizing the equilibrium arising from their conflicting
interests.
_Algorithms:_ It is of interest to tailor co-design algorithmic frameworks to
the particular case of transportation DPIs, possibly leveraging their specific
structure.
_Application:_ Finally, we would like to devise a user-friendly web interface
which supports mobility stakeholders to reason about strategic interventions
in urban areas.
## References
* [1] G. Zardini, N. Lanzetti, M. Salazar, A. Censi, E. Frazzoli, and M. Pavone, “Towards a co-design framework for future mobility systems,” in _Annual Meeting of the Transportation Research Board_ , Washington D.C., United States, 2020.
* [2] A. Censi, “A mathematical theory of co-design,” _arXiv preprint arXiv:1512.08055v7_ , 2015.
* [3] ——, “Monotone co-design problems; or, everything is the same,” in _American Control Conference_ , 2016.
* [4] ——, “A class of co-design problems with cyclic constraints and their solution,” _IEEE Robotics and Automation Letters_ , vol. 2, pp. 96–103, 2017.
* [5] M. Salazar, N. Lanzetti, F. Rossi, M. Schiffer, and M. Pavone, “Intermodal autonomous mobility-on-demand,” _IEEE Transactions on Intelligent Transportation Systems_ , 2019.
* [6] R. Z. Farahani, E. Miandoabchi, W. Y. Szeto, and H. Rashidi, “A review of urban transportation network design problems,” _European Journal of Operational Research_ , vol. 229, pp. 281–302, 2013.
* [7] V. Guihaire and J.-K. Hao, “Transit network design and scheduling: A global review,” _Transportation Research Part B: Methodological_ , vol. 42, pp. 1251–1273, 2008.
* [8] Z. Cong, B. De Schutter, and R. Babuska, “Co-design of traffic network topology and control measures,” _Transportation Research Part C: Emerging Technologies_ , vol. 54, pp. 56–73, 2015.
* [9] R. O. Arbex and C. B. da Cunha, “Efficient transit network design and frequencies setting multi-objective optimization by alternating objective genetic algorithm,” _Transportation Research Part B: Methodological_ , vol. 81, pp. 355–376, 2015.
* [10] L. Sun, J. G. Jin, D.-H. Lee, K. W. Axhausen, and A. Erath, “Demand-driven timetable design for metro services,” _Transportation Research Part C: Emerging Technologies_ , vol. 46, pp. 284–299, 2014.
* [11] S. Su, X. Li, T. Tang, and Z. Gao, “A subway train timetable optimization approach based on energy-efficient operation strategy,” _IEEE Transactions on Intelligent Transportation Systems_ , vol. 14, no. 2, pp. 883–893, 2013.
* [12] S. Narayanan, E. Chaniotakis, and C. Antoniou, “Shared autonomous vehicle services: A comprehensive review,” _Transportation Research Part C: Emerging Technologies_ , vol. 111, pp. 255–293, 2020.
* [13] J. A. Barrios and J. D. Godier, “Fleet sizing for flexible carsharing systems: Simulation-based approach,” _Transportation Research Record: Journal of the Transportation Research Board_ , vol. 2416, pp. 1–9, 2014.
* [14] D. J. Fagnant and K. M. Kockelman, “Dynamic ride-sharing and fleet sizing for a system of shared autonomous vehicles in austin, texas,” _Transportation_ , vol. 45, no. 1, pp. 143–158, 2018.
* [15] M. M. Vazifeh, P. Santi, G. Resta, S. H. Strogatz, and C. Ratti, “Addressing the minimum fleet problem in on-demand urban mobility,” _Nature_ , vol. 557, no. 7706, p. 534, 2018.
* [16] P. M. Boesch, F. Ciari, and K. W. Axhausen, “Autonomous vehicle fleet sizes required to serve different levels of demand,” _Transportation Research Record: Journal of the Transportation Research Board_ , vol. 2542, no. 1, pp. 111–119, 2016.
* [17] K. Spieser, K. Treleaven, R. Zhang, E. Frazzoli, D. Morton, and M. Pavone, “Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in singapore,” in _Road vehicle automation_ , 2014, pp. 229–245.
* [18] H. Zhang, C. J. R. Sheppard, T. Lipman, and S. Moura, “Joint fleet sizing and charging system planning for autonomous electric vehicles,” _IEEE Transactions on Intelligent Transportation Systems_ , 2019.
  * [19] G. J. Beaujon and M. A. Turnquist, “A model for fleet sizing and vehicle allocation,” _Transportation Science_, vol. 25, no. 1, pp. 19–45, 1991.
* [20] A. Wallar, W. Schwarting, J. Alonso-Mora, and D. Rus, “Optimizing multi-class fleet compositions for shared mobility-as-a-service,” in _Proc. IEEE Int. Conf. on Intelligent Transportation Systems_. IEEE, 2019, pp. 2998–3005.
  * [21] H. K. R. F. Pinto, M. F. Hyland, H. S. Mahmassani, and I. O. Verbas, “Joint design of multimodal transit networks and shared autonomous mobility fleets,” _Transportation Research Part C: Emerging Technologies_, 2019.
* [22] C. F. Daganzo and N. Geroliminis, “An analytical approximation for the macroscopic fundamental diagram of urban traffic,” _Transportation Research Part B: Methodological_ , vol. 42, no. 9, pp. 771–781, 2008.
  * [23] PIM. (2012) Metrorail ridership by origin and destination. Plan It Metro.
* [24] A. Mas-Colell, M. D. Whinston, and J. R. Green, _Microeconomic Theory_. Oxford Univ. Press, 1995.
* [25] D. C. Richards, “Relationship between speed and risk of fatal injury: Pedestrians and car occupants,” Department for Transport: London, Tech. Rep., 2010.
* [26] C. Ruch, S. Hörl, and E. Frazzoli, “Amodeus, a simulation-based testbed for autonomous mobility-on-demand systems,” in _Proc. IEEE Int. Conf. on Intelligent Transportation Systems_ , 2018, pp. 3639–3644.
* [27] M. Haklay and P. Weber, “OpenStreetMap: User-generated street maps,” _IEEE Pervasive Computing_ , vol. 7, no. 4, pp. 12–18, 2008.
* [28] GTFS. (2019) Gtfs: Making public transit data universally accessible.
  * [29] ODDC. (2017) Taxicab trips in 2016. Open Data DC. Available online at https://opendata.dc.gov/search?q=taxicabs.
  * [30] F. Siddiqui. (2018) As ride hailing booms in d.c., it’s not just eating in the taxi market – it’s increasing vehicle trips. The Washington Post. Available online.
* [31] DoA, Ed., _Military Police Traffic Operations_. Department of the Army, 1977.
* [32] S. Dixon, H. Irshad, and V. White, “Deloitte city moblity index – washington d.c.” Deloitte, Tech. Rep., 2018.
* [33] A. Censi. (2019) Monotone co-design problems. Available online: https://co-design.science/index.html.
* [34] N. Pavlenko, P. Slowik, and N. Lutsey, “When does electrifying shared mobility make economic sense?” The International Council on Clean Transportation, Tech. Rep., 2019.
* [35] P. M. Boesch, F. Becker, H. Becker, and K. W. Axhausen, “Cost-based analysis of autonomous mobility services,” _Transport Policy_ , vol. 64, pp. 76–91, 2018.
* [36] D. J. Fagnant and K. Kockelman, “Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations,” _Transportation Research Part A: Policy and Practice_ , vol. 77, pp. 167–181, 2015.
* [37] G. S. Bauer, J. B. Greenblatt, and B. F. Gerke, “Cost, energy, and environmental impact of automated electric taxi fleets in manhattan,” _Environmental Science & Technology_, vol. 52, no. 8, pp. 4920–4928, 2018\.
* [38] Z. Wadud, “Fully automated vehicles: A cost of ownership analysis to inform early adoption,” _Transportation Research Part A: Policy and Practice_ , vol. 101, pp. 163–176, 2017.
* [39] T. Litman, “Autonomous vehicle implementation predictions – implications for transport planning,” Victoria Transport Policy Institute, Tech. Rep., 2019.
* [40] W. Time. (2018, Mar.) Carbon footprint data. Wired.
* [41] WMATA, “Fy2018 proposed budget,” Washington Metropolitan Area Transit Authority, Tech. Rep., 2017.
* [42] L. Aratani. (2015) Metro to debut first of its 7000-series cars on blue line on april 14. The Washington Post. available online.
* [43] WMATA, “Sustainability report 2018,” Washington Metropolitan Area Transit Authority, Tech. Rep., 2018.
  * [44] E. Jaffe. (2015) The case for driverless trains, by the numbers. Citylab. Available online.
* [45] P. Howard and D. Sylvan, “Expert consensus on the economics of climate change,” Institute for Policy Integrity – New York University School of Law, Tech. Rep., 2015.
* [46] Y. Wang, J. Zhang, M. Ma, and X. Zhoum, “Survey on driverless train operation for urban rail transit systems,” _Urban Rail Transit_ , vol. 2, no. 3–4, p. 106–113, 2016.
* [47] J. H. Gawron, G. A. Keoleian, R. D. De Kleine, T. J. Wallington, and K. Hyung Chul, “Life cycle assessment of connected and automated vehicles: Sensing and computing subsystem and vehicle level effects,” _Environmental Science & Technology_, vol. 52, pp. 3249–3256, 2018.
* [48] P. Lienert. (2019) Cost of driverless vehicles to drop dramatically: Delphi ceo. Insurance Journal. available online.
  * [49] WCP, “The automotive lidar market,” Woodside Capital Partners, Tech. Rep., 2018.
|
2024-09-04T02:54:58.766292 | 2020-02-29T21:59:52 | 2003.04805 | {
"authors": "Razvan V. Marinescu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26137",
"submitter": "Razvan Marinescu",
"url": "https://arxiv.org/abs/2003.04805"
} | arxiv-papers |
|
2024-09-04T02:54:58.774191 | 2020-03-10T15:54:53 | 2003.04819 | {
"authors": "Benedek Rozemberczki, Oliver Kiss, Rik Sarkar",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26138",
"submitter": "Benedek Rozemberczki",
"url": "https://arxiv.org/abs/2003.04819"
} | arxiv-papers |
Karate Club: An API Oriented Open-Source Python Framework for Unsupervised Learning on Graphs
Benedek Rozemberczki
The University of Edinburgh
United Kingdom
Oliver Kiss
Central European University
Rik Sarkar
The University of Edinburgh
United Kingdom
Graphs encode important structural properties of complex systems. Machine learning on graphs has therefore emerged as an important technique in research and applications. We present Karate Club – a Python framework combining more than 30 state-of-the-art graph mining algorithms. These unsupervised techniques make it easy to identify and represent common graph features. The primary goal of the package is to make community detection, node and whole graph embedding available to a wide audience of machine learning researchers and practitioners.
Karate Club is designed with an emphasis on a consistent application interface, scalability, ease of use, sensible out-of-the-box model behaviour, standardized dataset ingestion, and output generation. This paper discusses the design principles behind the framework with practical examples. We demonstrate Karate Club's learning performance on a wide range of real-world clustering and classification tasks, along with supporting evidence of its competitive speed.
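A minimal usage sketch (assuming the public `karateclub` package API, in which every unsupervised model exposes `fit` and `get_embedding`):

```python
import networkx as nx
from karateclub import DeepWalk

graph = nx.karate_club_graph()     # nodes must be consecutively indexed from 0
model = DeepWalk(dimensions=32)    # any Karate Club embedding model fits here
model.fit(graph)                   # all models share this interface
embedding = model.get_embedding()  # NumPy array of shape (n_nodes, 32)
print(embedding.shape)
```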
§ ACKNOWLEDGEMENTS
Benedek Rozemberczki was supported by the Centre for Doctoral Training in Data Science, funded by EPSRC (grant EP/L016427/1).
|
2024-09-04T02:54:58.783178 | 2020-03-10T16:16:15 | 2003.04827 | {
"authors": "David I. Spivak, David Jaz Myers",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26139",
"submitter": "David Spivak",
"url": "https://arxiv.org/abs/2003.04827"
} | arxiv-papers |
# Dirichlet Polynomials form a Topos

David I. Spivak and David Jaz Myers
###### Abstract
One can think of power series or polynomials in one variable, such as
$P(\mathcal{y})=2\mathcal{y}^{3}+\mathcal{y}+5$, as functors from the category
$\mathsf{Set}$ of sets to itself; these are known as polynomial functors.
Denote by $\mathsf{Poly}_{\mathsf{Set}}$ the category of polynomial functors
on $\mathsf{Set}$ and natural transformations between them. The constants
$0,1$ and operations $+,\times$ that occur in $P(\mathcal{y})$ are actually
the initial and terminal objects and the coproduct and product in
$\mathsf{Poly}_{\mathsf{Set}}$.
Just as the polynomial functors on $\mathsf{Set}$ are the copresheaves that
can be written as sums of representables, one can express any Dirichlet
series, e.g. $\sum_{n=0}^{\infty}n^{\mathcal{y}}$, as a coproduct of
representable presheaves. A Dirichlet polynomial is a finite Dirichlet series,
that is, a finite sum of representables $n^{\mathcal{y}}$. We discuss how both
polynomial functors and their Dirichlet analogues can be understood in terms
of bundles, and go on to prove that the category of Dirichlet polynomials is
an elementary topos.
## Chapter 0 Introduction
Polynomials $P(\mathcal{y})$ and finite Dirichlet series $D(\mathcal{y})$ in
one variable $\mathcal{y}$, with natural number coefficients
$a_{i}\in\mathbb{N}$, are respectively functions of the form
$\displaystyle P(\mathcal{y})$
$\displaystyle=a_{n}\mathcal{y}^{n}+\cdots+a_{2}\mathcal{y}^{2}+a_{1}\mathcal{y}^{1}+a_{0}\mathcal{y}^{0},$
(1) $\displaystyle D(\mathcal{y})$
$\displaystyle=a_{n}n^{\mathcal{y}}+\cdots+a_{2}2^{\mathcal{y}}+a_{1}1^{\mathcal{y}}+a_{0}0^{\mathcal{y}}.$
The first thing we should emphasize is that the algebraic expressions in (1)
can in fact be regarded as _objects in a category_ , in fact two categories:
$\mathsf{Poly}$ and $\mathsf{Dir}$. We will explain the morphisms later, but
for now we note that in $\mathsf{Poly}$,
$\mathcal{y}^{2}=\mathcal{y}\times\mathcal{y}$ is a product and
$2\mathcal{y}=\mathcal{y}+\mathcal{y}$ is a coproduct, and similarly for
$\mathsf{Dir}$. The operators—in both the polynomial and the Dirichlet
case—are not just algebraic, they are category-theoretic. Moreover, these
categories have a rich structure.
The category $\mathsf{Poly}$ is well studied (see [GK12]). In particular, the
following are equivalent:
###### Theorem 1.
[GK12] For a functor $P\colon\mathsf{Fin}\to\mathsf{Fin}$, the following are
equivalent:
1. 1.
$P$ is polynomial.
2. 2.
$P$ is a sum of representables.
3. 3.
$P$ preserves connected limits – or equivalently, wide pullbacks.
In Theorem 8 we prove an analogous result characterizing Dirichlet
polynomials:
###### Theorem 2.
For a functor $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$, the
following are equivalent:
1. 1.
$D$ is a Dirichlet polynomial.
2. 2.
$D$ is a sum of representables.
3. 3.
$D$ sends connected colimits to limits – or equivalently, $D$ preserves wide
pushouts.
We will also show that $\mathsf{Dir}$ is equivalent to the arrow category of
finite sets,
$\mathsf{Dir}\simeq\mathsf{Fin}^{\to},$
and in particular that $\mathsf{Dir}$ is an elementary topos.
If one allows _arbitary_ sums of functors represented by finite sets, one gets
_analytic_ functors in the covariant case—first defined by Joyal in his
seminal paper on combinatorial species [Joy81]—and _Dirichlet_ functors in the
contravariant case—first defined by Baez and Dolan and appearing in Baez’s
_This Week’s Finds_ blog [BD]. Baez and Dolan also drop the traditional
negative sign in the exponent (that is, they use $n^{s}$ where $n^{-s}$
usually appears), but also find a nice way to bring it back by moving to
groupoids. Here, we drop the negative sign and work with finite sets to keep
things as simple as possible. Similar considerations hold with little extra
work for infinite Dirichlet series or power series, and even more generally,
by replacing $\mathsf{Fin}$ with $\mathsf{Set}$.
## Chapter 1 Polynomial and Dirichlet functors
Recall that a _co-representable functor_ $\mathsf{Fin}\to\mathsf{Fin}$ is one
of the form $\mathsf{Fin}(k,-)$ for a finite set
$k=\\{`1\text{'},`2\text{'},\ldots,`k\text{'}\\}.$
We denote this functor by $\mathcal{y}^{k}$ and say it is _represented by_
$k\in\mathsf{Fin}$. Similarly, a _(contra-) representable functor_
$\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is contravariant functor of
the form $\mathsf{Fin}(-,k)$; we denote this functor by $k^{\mathcal{y}}$. The
functors $\mathcal{y}^{-}$ and $-^{\mathcal{y}}$ are the contravariant and
covariant Yoneda embeddings,
$\mathcal{y}^{k}\coloneqq\mathsf{Fin}(k,-)\qquad\text{and}\qquad
k^{\mathcal{y}}\coloneqq\mathsf{Fin}(-,k).$
For example $\mathcal{y}^{3}(2)\cong 8$ and $3^{\mathcal{y}}(2)\cong 9$.
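Since these values are just counts of functions, they can be checked mechanically; a small Python sketch (ours, for illustration only):

```python
# |Fin(k, X)| = |X|**k and |Fin(X, k)| = k**|X|, so y^k and k^y evaluate
# to exponentials in opposite directions.
def y_power(k, x):   # the co-representable y^k = Fin(k, -)
    return x ** k

def power_y(k, x):   # the representable k^y = Fin(-, k)
    return k ** x

assert y_power(3, 2) == 8 and power_y(3, 2) == 9
```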
Note that the functor $0^{\mathcal{y}}\not\cong 0$ is not the initial object
in $\mathsf{Dir}$; it is given by
$0^{\mathcal{y}}(s)=\begin{cases}1&\textnormal{ if }s=0\\\ 0&\textnormal{ if
}s\geq 1.\end{cases}$
The coefficient $a_{0}$ of $1=\mathcal{y}^{0}$ in a polynomial $P$ is called
its _constant_ term. We refer to the coefficient $D_{\text{zc}}\coloneqq
a_{0}$ of $0^{\mathcal{y}}$ in a Dirichlet series $D$ as its _zero-content_
term. Rather than having no content, the content of the functor
$D_{\text{zc}}{\cdot}0^{\mathcal{y}}$ becomes significant exactly when it is
applied to zero.
###### Example 1.
The reader can determine which Dirichlet polynomial
$D(\mathcal{y})\in\mathsf{Dir}$ as in Eq. 1 has the following terms
$\begin{array}[]{c|ccccccc}\mathcal{y}&\cdots&5&4&3&2&1&0\\\ \hline\cr
D(\mathcal{y})&\cdots&96&48&24&12&6&7\end{array}$
Hint: its zero-content term is $D_{\text{zc}}=4$.
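One way to check a guess against the table is to evaluate Dirichlet polynomials directly from a coefficient list; the following sketch (ours) uses the convention $0^{0}=1$, which Python's `**` already follows:

```python
def dirichlet_eval(coeffs, y):
    """coeffs[k] is the coefficient a_k of the summand k**y."""
    return sum(a * k**y for k, a in enumerate(coeffs))

table = {0: 7, 1: 6, 2: 12, 3: 24, 4: 48, 5: 96}
guess = [4, 0, 3]  # a candidate: 4*0^y + 0*1^y + 3*2^y
assert all(dirichlet_eval(guess, y) == v for y, v in table.items())
```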
The set $P(1)$ (resp. the set $D(0)$) has particular importance; it is the set
of pure-power terms $\mathcal{y}^{k}$ in $P$ (resp. the pure-exponential terms
$k^{\mathcal{y}}$ in $D$). For example if $P=\mathcal{y}^{2}+4\mathcal{y}+4$
and $D=2^{\mathcal{y}}+4+4{\cdot}0^{\mathcal{y}}$ then $P(1)=D(0)=9$.
###### Definition 2.
A _polynomial functor_ is a functor $P\colon\mathsf{Fin}\to\mathsf{Fin}$ that
can be expressed as a sum of co-representable functors. Similarly, we define a
_Dirichlet functor_ to be a functor
$D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ that can be expressed
as a sum of representable presheaves (contravariant functors):
$P=\sum_{i=1}^{P(1)}\mathcal{y}^{p_{i}}\qquad\text{and}\qquad
D=\sum_{i=1}^{D(0)}(d_{i})^{\mathcal{y}}.$ (1)
That is, $P(X)=\sum_{i=1}^{P(1)}\mathsf{Fin}(p_{i},X)$ and
$D(X)=\sum_{i=1}^{D(0)}\mathsf{Fin}(X,d_{i})$ as functors applied to
$X\in\mathsf{Fin}$.
See Theorem 1 above for well-known equivalent conditions in $\mathsf{Poly}$
and Theorem 8 below for a Dirichlet analogue.
## Chapter 2 The categories $\mathsf{Poly}$ and $\mathsf{Dir}$
For any small category $C$, let $\mathsf{Fin}^{C}$ denote the category whose
objects are the functors $C\to\mathsf{Fin}$ and whose morphisms are the
natural transformations between them.
###### Definition 1.
The _category of polynomial functors_ , denoted $\mathsf{Poly}$, is the
(skeleton of the) full subcategory of $\mathsf{Fin}^{\mathsf{Fin}}$ spanned by
sums $P$ of representable functors. The _category of Dirichlet functors_ ,
denoted $\mathsf{Dir}$, is the (skeleton of the) full subcategory of
$\mathsf{Fin}^{(\mathsf{Fin}^{\textnormal{op}})}$ spanned by the sums $D$ of
representable presheaves.
While we will not pursue it here, one can take $\mathsf{Poly}_{\mathsf{Set}}$
to be the full subcategory of functors $\mathsf{Set}\to\mathsf{Set}$ spanned
by small coproducts of representables, and similarly for
$\mathsf{Dir}_{\mathsf{Set}}$.
###### Lemma 2.
The set of polynomial maps $P\to Q$ and Dirichlet maps $D\to E$ are given by
the following formulas:
$\mathsf{Poly}(P,Q)\coloneqq\prod_{i\in
P(1)}Q(p_{i})\qquad\text{and}\qquad\mathsf{Dir}(D,E)\coloneqq\prod_{i\in
D(0)}E(d_{i}).$
###### Example 3.
Let $P=2\mathcal{y}^{2}$, $Q=\mathcal{y}+1$, and let $D=2\cdot
2^{\mathcal{y}}$ and $E=1+0^{\mathcal{y}}$. Then there are nine ($9$)
polynomial morphisms $P\to Q$, zero ($0$) polynomial morphisms $Q\to P$, one
($1$) Dirichlet morphism $D\to E$, and eight ($8$) Dirichlet morphisms $E\to
D$.
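These counts follow mechanically from Lemma 2; here is a short Python sketch (ours), encoding a polynomial or Dirichlet polynomial as the list of exponents of its representable summands:

```python
from math import prod

def poly_eval(P, x):   # P = [p_1, ..., p_n] encodes y^{p_1} + ... + y^{p_n}
    return sum(x ** p for p in P)

def dir_eval(D, x):    # D = [d_1, ..., d_n] encodes d_1^y + ... + d_n^y
    return sum(d ** x for d in D)

def hom_poly(P, Q):    # |Poly(P, Q)| = product over i in P(1) of Q(p_i)
    return prod(poly_eval(Q, p) for p in P)

def hom_dir(D, E):     # |Dir(D, E)| = product over i in D(0) of E(d_i)
    return prod(dir_eval(E, d) for d in D)

P, Q = [2, 2], [1, 0]  # P = 2y^2, Q = y + 1
D, E = [2, 2], [1, 0]  # D = 2*2^y, E = 1 + 0^y
assert (hom_poly(P, Q), hom_poly(Q, P)) == (9, 0)
assert (hom_dir(D, E), hom_dir(E, D)) == (1, 8)
```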
###### Remark 4.
Sums and products of polynomials in the usual algebraic sense agree exactly
with sums and products in the categorical sense: if $P$ and $Q$ are
polynomials, i.e. objects in $\mathsf{Poly}$, then their coproduct is the
usual algebraic sum $P+Q$ of polynomials, and similarly their product is the
usual algebraic product $PQ$ of polynomials. The same is true for
$\mathsf{Dir}$: sums and products of Dirichlet polynomials in the usual
algebraic sense agree exactly with sums and products in the categorical sense.
#### Formal structures.
We review some formal structures of the categories $\mathsf{Poly}$ and
$\mathsf{Dir}$; all are straightforward to prove. There is an adjoint
quadruple and adjoint 5-tuple as follows, labeled by where they send objects
$n\in\mathsf{Fin}$, $P\in\mathsf{Poly}$, $D\in\mathsf{Dir}$:
$\displaystyle n\mathcal{y}\;\dashv\;P(1)\;\dashv\;n\;\dashv\;P(0)\qquad(\text{between }\mathsf{Fin}\text{ and }\mathsf{Poly}),$
$\displaystyle n{\cdot}0^{\mathcal{y}}\;\dashv\;D(0)\;\dashv\;n\;\dashv\;D(1)\;\dashv\;n^{\mathcal{y}}\qquad(\text{between }\mathsf{Fin}\text{ and }\mathsf{Dir}).$
(1)
All five of the displayed functors out of $\mathsf{Fin}$ are fully faithful.
For each $k:\mathsf{Fin}$ the functors $P\mapsto P(k)$ and $D\mapsto D(k)$
have left adjoints, namely $n\mapsto n\mathcal{y}^{k}$ and $n\mapsto
n{\cdot}k^{\mathcal{y}}$ respectively. These are functorial in $k$ and in fact
extend to two-variable adjunctions
$\mathsf{Fin}\times\mathsf{Poly}\to\mathsf{Poly}$ and
$\mathsf{Fin}\times\mathsf{Dir}\to\mathsf{Dir}$. Indeed, for
$n\in\mathsf{Fin}$ and $P,Q\in\mathsf{Poly}$ (respectively
$D,E\in\mathsf{Dir}$), we have
$\displaystyle\mathsf{Poly}(nP,Q)\cong\mathsf{Poly}(P,Q^{n})\cong\mathsf{Fin}(n,\mathsf{Poly}(P,Q)),$
$\displaystyle\mathsf{Dir}(nD,E)\cong\mathsf{Dir}(D,E^{n})\cong\mathsf{Fin}(n,\mathsf{Dir}(D,E)),$
where $nP$ and $nD$ denote $n$-fold coproducts and $P^{n}$ and $D^{n}$ denote
$n$-fold products.
Consider the unique function $0\to 1$. The natural transformation induced by
it, denoted $\pi_{D}\colon D(1)\to D(0)$, is equivalent to two natural
transformations on $\mathsf{Dir}$ via the adjunctions in Eq. 1:
$n{\cdot}0^{\mathcal{y}}\to n,\qquad D(1)\xrightarrow{\pi_{D}}D(0),\qquad n\to
n^{\mathcal{y}}.$ (2)
The one labeled $\pi_{D}$ is also $D(0!)$, where $0!\colon 0\to 1$ is the
unique function of that type.
The composite of two polynomial functors $\mathsf{Fin}\to\mathsf{Fin}$ is
again polynomial, $(P\circ Q)(n)\coloneqq P(Q(n))$; this gives a nonsymmetric
monoidal structure on $\mathsf{Poly}$. The monoidal unit is $\mathcal{y}$.
Day convolution for the cartesian product monoidal structure provides
symmetric monoidal structure
$\otimes\colon\mathsf{Poly}\times\mathsf{Poly}\to\mathsf{Poly}$, for which the
monoidal unit is $\mathcal{y}$. This monoidal structure—like the Cartesian
monoidal structure—distributes over $+$ We can write an explicit formula for
$P\otimes Q$, with $P,Q$ as in Eq. 1:
$P\otimes Q=\sum_{i=1}^{P(1)}\sum_{j=1}^{Q(1)}\mathcal{y}^{p_{i}q_{j}}$ (3)
We call this the _Dirichlet product_ of polynomials, for reasons we will see
in Remark 1.
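On exponent lists, Eq. 3 says the Dirichlet product simply multiplies exponents pairwise; a two-line sketch (ours):

```python
def dirichlet_product(P, Q):
    """Summand exponents of the Dirichlet product, per Eq. 3: one y^{p*q} per pair."""
    return sorted((p * q for p in P for q in Q), reverse=True)

# (y^2 + y) tensor (y^3 + 1) = y^6 + y^3 + 2, and y = [1] is the unit.
assert dirichlet_product([2, 1], [3, 0]) == [6, 3, 0, 0]
assert dirichlet_product([2, 1], [1]) == [2, 1]
```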
The Dirichlet monoidal structure is closed; that is, for any
$A,Q:\mathsf{Poly}$ we define
$[A,Q]\coloneqq\prod_{i:A(1)}Q\circ(a_{i}\mathcal{y}),$ (4)
for example $[n\mathcal{y},\mathcal{y}]\cong\mathcal{y}^{n}$ and
$[\mathcal{y}^{n},\mathcal{y}]\cong n\mathcal{y}$. For any polynomial $A$
there is an $(-\otimes A)\dashv[A,-]$ adjunction
$\mathsf{Poly}(P\otimes A,Q)\cong\mathsf{Poly}(P,[A,Q]).$ (5)
In particular we recover Lemma 2 using Eqs. 4 and 1. The cartesian monoidal
structure on $\mathsf{Poly}$ is also closed, $\mathsf{Poly}(P\times
A,Q)\cong\mathsf{Poly}(P,Q^{A})$, and the formula for $Q^{A}$ is similar to
Eq. 4:
$Q^{A}\coloneqq\prod_{i:A(1)}Q\circ(a_{i}+\mathcal{y}).$
If we define the _global sections_ functor
$\Gamma\colon\mathsf{Poly}\to\mathsf{Fin}^{\textnormal{op}}$ by $\Gamma
P\coloneqq\mathsf{Poly}(P,\mathcal{y})$, or explicitly
$\Gamma(P)=[P,\mathcal{y}](1)=\prod_{i}p_{i}$, we find that it is left adjoint
to the Yoneda embedding
$\Gamma\;\dashv\;(n\mapsto\mathcal{y}^{n})\colon\qquad\mathsf{Fin}^{\textnormal{op}}(\Gamma P,\,n)\;\cong\;\mathsf{Poly}(P,\,\mathcal{y}^{n}).$
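Numerically, this adjunction amounts to the identity $\prod_{i}p_{i}^{n}=(\prod_{i}p_{i})^{n}$; a quick check (our sketch):

```python
from math import prod

P, n = [3, 2, 2], 4            # P = y^3 + 2y^2, mapping into y^n
lhs = prod(p ** n for p in P)  # |Poly(P, y^n)|, by Lemma 2
rhs = prod(P) ** n             # |Fin(n, Gamma P)| with Gamma P = prod_i p_i
assert lhs == rhs == 12 ** 4
```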
Each of the categories $\mathsf{Poly}$ and $\mathsf{Dir}$ has pullbacks, which
we denote using “fiber product notation” $A\times_{C}B$. We can use pullbacks
in combination with monad units $\eta_{P}\colon P\to P(1)$ and $\eta_{D}\colon
D\to D(0)$ arising from Eq. 1 to recover Eq. 1:
$P=\sum_{i=1}^{P(1)}P\times_{P(1)}`i\text{'}\qquad\text{and}\qquad
D=\sum_{i=1}^{D(0)}D\times_{D(0)}`i\text{'}.$
###### Remark 5.
By a result of Rosebrugh and Wood [RW94], the category of finite sets is
characterized amongst locally finite categories by the existence of the five
left adjoints to its Yoneda embedding $k\mapsto
y^{k}\colon\mathsf{Fin}\to\mathsf{Fin}^{\mathsf{Fin}^{\textnormal{op}}}$. The
adjoint sextuple displayed in (1) is just the observation that five of these
six functors restrict to the subcategory $\mathsf{Dir}$.
## Chapter 3 $\mathsf{Poly}$ and $\mathsf{Dir}$ in terms of bundles
There is a bijection between the respective object-sets of these two
categories
$\displaystyle\operatorname{Ob}(\mathsf{Poly})$
$\displaystyle\xrightarrow{\cong}\operatorname{Ob}(\mathsf{Dir})$
$\displaystyle\sum_{i=1}^{n}\mathcal{y}^{k_{i}}$
$\displaystyle\mapsto\sum_{i=1}^{n}(k_{i})^{\mathcal{y}}.$ (1)
We call this mapping the _Dirichlet transform_ and denote it using an over-
line $P\mapsto\overline{P}$. We will see in Theorem 6 that this bijection
extends to an equivalence
$\mathsf{Poly}_{\textnormal{cart}}\cong\mathsf{Dir}_{\textnormal{cart}}$
between the subcategories of cartesian maps.
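In concrete terms, a polynomial and its Dirichlet transform are determined by the same multiset of exponents, so the transform is immediate to realize on data. The following Python sketch (an added illustration; the function names are ours) evaluates $P(\mathcal{y})=\sum_{i}\mathcal{y}^{k_{i}}$ and $\overline{P}(\mathcal{y})=\sum_{i}k_{i}^{\mathcal{y}}$ from a shared exponent list, using the polynomial $2\mathcal{y}^{3}+\mathcal{y}^{2}+3$ of Example 3 below.

```python
# A polynomial sum_i y^(k_i) and its Dirichlet transform sum_i k_i^y
# are both encoded by the multiset of exponents [k_1, ..., k_n].
def eval_poly(ks, y):
    """Evaluate P(y) = sum_i y^(k_i)."""
    return sum(y ** k for k in ks)

def eval_dirichlet(ks, y):
    """Evaluate the Dirichlet transform P-bar(y) = sum_i k_i^y."""
    return sum(k ** y for k in ks)

# 2y^3 + y^2 + 3 has exponent multiset [3, 3, 2, 0, 0, 0]:
ks = [3, 3, 2, 0, 0, 0]
assert eval_poly(ks, 1) == 6        # P(1) = number of summands
assert eval_dirichlet(ks, 0) == 6   # D(0) = number of summands
assert eval_dirichlet(ks, 1) == 8   # D(1) = total size of the fibers
```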
###### Remark 1.
With the Dirichlet transform in hand, we see why $P\otimes Q$ can be called
the Dirichlet product, e.g. in Eq. 3. Namely, the Dirichlet transform is
strong monoidal with respect to $\otimes$ and the cartesian monoidal structure
$\times$ in $\mathsf{Dir}$:
$\overline{P\otimes Q}=\overline{P}\times\overline{Q}.$
###### Proposition 2.
There is a one-to-one correspondence between the set of polynomials in one
variable, the set of Dirichlet polynomials, and the set of (isomorphism
classes of) functions $\pi\colon s\to t$ between finite sets.
###### Proof.
We already established a bijection $P\mapsto\overline{P}$ between polynomials
and finite Dirichlet series in Eq. 1.
Given a finite Dirichlet series $D$, we have a function $\pi_{D}\colon D(1)\to
D(0)$ as in Eq. 2. And given a function $\pi\colon s\to t$, define
$D_{\pi}\coloneqq\sum_{i=1}^{t}(d_{i})^{\mathcal{y}}$, where
$d_{i}\coloneqq\pi^{-1}(i)$ for each $1\leq i\leq t$. (N.B. Rather than
constructing $D_{\pi}$ from $\pi$ by hand, one could instead use a certain
orthogonal factorization system on $\mathsf{Dir}$.)
It is easy to see that the roundtrip on Dirichlet series is identity, and that
the round-trip for functions is a natural isomorphism. ∎
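The bijection in this proof is easy to realize computationally; the following sketch (ours, with names chosen for illustration) encodes a function $\pi\colon s\to t$ as a Python dictionary, extracts the exponents $d_{i}=|\pi^{-1}(i)|$ of $D_{\pi}$, rebuilds a function from a list of exponents, and checks that the roundtrip is the identity.

```python
from collections import Counter

def dirichlet_of_function(pi, t):
    """Fiber sizes d_i = |pi^{-1}(i)| of pi: s -> t, for i = 1, ..., t."""
    fibers = Counter(pi.values())
    return [fibers.get(i, 0) for i in range(1, t + 1)]

def function_of_dirichlet(ds):
    """Rebuild a function s -> t (up to isomorphism) from fiber sizes ds."""
    pi, elem = {}, 1
    for i, d in enumerate(ds, start=1):
        for _ in range(d):
            pi[elem] = i
            elem += 1
    return pi

# The bundle of (2): fibers of sizes 3, 3, 2, 0, 0, 0 over t = 6.
ds = [3, 3, 2, 0, 0, 0]
pi = function_of_dirichlet(ds)
assert dirichlet_of_function(pi, 6) == ds   # the roundtrip is the identity
```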
We will upgrade Proposition 2 to an equivalence
$\mathsf{Poly}_{\textnormal{cart}}\simeq\mathsf{Dir}_{\textnormal{cart}}$
between certain subcategories of $\mathsf{Poly}$ and $\mathsf{Dir}$ in Theorem
6.
###### Example 3.
Under the identification from Proposition 2, both the polynomial
$2\mathcal{y}^{3}+\mathcal{y}^{2}+3$ and the Dirichlet series
$2{\cdot}3^{\mathcal{y}}+1{\cdot}2^{\mathcal{y}}+3{\cdot}0^{\mathcal{y}}$
correspond to the function
[Diagram: the function $\pi\colon D(1)\to D(0)$, with $8\cong D(1)$ given by the pairs $(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2)$ and $6\cong D(0)$ given by the points $1,\dots,6$; the fibers have sizes $3,3,2,0,0,0$.] (2)
We can think of a function $\pi\colon s\to t$, e.g. that shown in (2), as a
_bundle_ of fibers $\pi^{-1}(`i\text{'})$, one for each element $`i\text{'}\in
t$. In Definition 4 we define two different notions of morphism between
bundles. We will see in Theorem 6 that they correspond to morphisms in the
categories $\mathsf{Poly}$ and $\mathsf{Dir}$.
For any function $\pi^{\prime}\colon s^{\prime}\to t^{\prime}$ and function
$f\colon t\to t^{\prime}$, denote by $f^{*}(\pi^{\prime})$ the pullback
function as shown
$\begin{array}{ccc}s\times_{t^{\prime}}s^{\prime}&\longrightarrow&s^{\prime}\\ \downarrow{\scriptstyle f^{*}(\pi^{\prime})}&{\lrcorner}&\downarrow{\scriptstyle\pi^{\prime}}\\ t&\xrightarrow{f}&t^{\prime}\end{array}$
###### Definition 4.
Let $\pi\colon s\to t$ and $\pi^{\prime}\colon s^{\prime}\to t^{\prime}$ be
functions between finite sets.
* •
a _bundle morphism_ consists of a pair $(f,f_{\sharp})$ where $f\colon t\to
t^{\prime}$ is a function and $f_{\sharp}\colon\pi\to f^{*}(\pi^{\prime})$ is
a morphism in the slice category over $t$;
* •
a _container morphism_ consists of a pair $(f,f^{\sharp})$ where $f\colon t\to
t^{\prime}$ is a function and $f^{\sharp}\colon f^{*}(\pi^{\prime})\to\pi$ is
a morphism in the slice category over $t$.
We say a bundle morphism $(f,f_{\sharp})$ (resp. a container morphism
$(f,f^{\sharp})$) is _cartesian_ if $f_{\sharp}$ (resp. $f^{\sharp})$ is an
isomorphism.
[Figure 1 diagrams: in each, the pullback $f^{*}\pi^{\prime}\colon t\times_{t^{\prime}}s^{\prime}\to t$ of $\pi^{\prime}\colon s^{\prime}\to t^{\prime}$ along $f\colon t\to t^{\prime}$ sits beside $\pi\colon s\to t$; on the left, a bundle morphism supplies $f_{\sharp}\colon\pi\to f^{*}\pi^{\prime}$ over $t$, and on the right, a container morphism supplies $f^{\sharp}\colon f^{*}\pi^{\prime}\to\pi$ over $t$.]
Figure 1: The categories $\mathsf{Bun}$ and $\mathsf{Cont}$ have the same
objects, functions $\pi\colon s\to t$. Here a morphism
$(f,f_{\sharp})\colon\pi\to\pi^{\prime}$ in $\mathsf{Bun}$ and a morphism
$(f,f^{\sharp})\colon\pi\to\pi^{\prime}$ in $\mathsf{Cont}$ are shown.
Define $\mathsf{Bun}$ (resp. $\mathsf{Cont}$) to be the category for which an
object is a function between finite sets and a morphism is a bundle morphism
(resp. container morphism); see Fig. 1. Denote by
$\mathsf{Bun}_{\textnormal{cart}}$ (resp. $\mathsf{Cont}_{\textnormal{cart}}$)
the subcategory of cartesian bundle morphisms.
One may note that $\mathsf{Bun}$ is the Grothendieck construction of the self-
indexing
$\mathsf{Fin}_{/(-)}\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Cat}$,
while $\mathsf{Cont}$ is the Grothendieck construction of its point-wise
opposite
$(\mathsf{Fin}_{/(-)})^{\textnormal{op}}\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Cat}$.
The name _container_ comes from the work of Abbott, Altenkirch, and Ghani [AAG03, AAG05, Abb03] (see Remark 2.18 in [GK12] for a discussion of the precise relationship between the notion of container and the notion of polynomial and polynomial functor).
###### Remark 5.
By the universal property of pullbacks, $\mathsf{Bun}\simeq\mathsf{Fin}^{\to}$
is equivalent (in fact isomorphic) to the category of morphisms and commuting
squares in $\mathsf{Fin}$. Furthermore, $\mathsf{Bun}_{\textnormal{cart}}$ is
equivalent to the category of morphisms and pullback squares in
$\mathsf{Fin}$, and
$\mathsf{Bun}_{\textnormal{cart}}\simeq\mathsf{Cont}_{\textnormal{cart}}$ (as
in both cases a cartesian morphism $(f,f_{\sharp})$ or $(f,f^{\sharp})$ is
determined by $f$ alone).
Next we show that $\mathsf{Bun}\simeq\mathsf{Dir}$ is also equivalent to the
category of Dirichlet functors, from Definition 1. Recall that a natural
transformation is called _cartesian_ if its naturality squares are pullbacks.
###### Theorem 6.
We have equivalences of categories
$\mathsf{Poly}\simeq\mathsf{Cont}\qquad\text{and}\qquad\mathsf{Dir}\simeq\mathsf{Bun}.$
In particular, this gives an equivalence
$\mathsf{Poly}_{\textnormal{cart}}\simeq\mathsf{Dir}_{\textnormal{cart}}$
between the category of polynomial functors and cartesian natural
transformations and the category of Dirichlet functors and cartesian natural
transformations.
###### Proof.
The functors $P_{-}\colon\mathsf{Cont}\to\mathsf{Poly}$ and
$D_{-}\colon\mathsf{Bun}\to\mathsf{Dir}$ are defined on each object, i.e.
function $\pi\colon s\to t$, by the formula $\pi\mapsto P_{\pi}$ and
$\pi\mapsto D_{\pi}\coloneqq\overline{P_{\pi}}$ as in Proposition 2. For each
$1\leq i\leq t$, denote the fiber of $\pi$ over $i$ by
$k_{i}\coloneqq\pi^{-1}(i)$.
For any finite set $X$, consider the unique map $X!\colon X\to 1$. Applying
$P_{-}$ and $D_{-}$ to it, we obtain the corresponding representable:
$P_{X!}\cong\mathcal{y}^{X}$ and $D_{X!}\cong X^{\mathcal{y}}$. We next check
that there are natural isomorphisms
$\displaystyle\mathsf{Poly}(P_{X!},P_{\pi})\cong P_{\pi}(X)=\sum_{i=1}^{t}X^{k_{i}}\cong\mathsf{Cont}(X!,\pi),$
$\displaystyle\mathsf{Dir}(D_{X!},D_{\pi})\cong
D_{\pi}(X)=\sum_{i=1}^{t}(k_{i})^{X}\cong\mathsf{Bun}(X!,\pi).$ (3)
In both lines, the first isomorphism is the Yoneda lemma and the second is a
computation using Definition 4 (see Fig. 1). Thus we define $P_{-}$ on
morphisms by sending $f\colon\pi\to\pi^{\prime}$ in $\mathsf{Cont}$ to the
“compose-with-$f$” natural transformation, i.e. having $X$-component
$\mathsf{Cont}(X!,f)\colon\mathsf{Cont}(X!,\pi)\to\mathsf{Cont}(X!,\pi^{\prime})$,
which is clearly natural in $X$. We define $D_{-}$ on morphisms similarly: for
$f$ in $\mathsf{Bun}$, use the natural transformation $\mathsf{Bun}(-!,f)$.
By definition, every object in $\mathsf{Poly}$ and $\mathsf{Dir}$ is a
coproduct of representables, so to prove that we have the desired
equivalences, one first checks that coproducts in $\mathsf{Cont}$ and
$\mathsf{Bun}$ are taken pointwise:
$(\pi\colon s\to t)+(\pi^{\prime}\colon s^{\prime}\to
t^{\prime})\cong(\pi+\pi^{\prime})\colon(s+s^{\prime})\to(t+t^{\prime}),$
and then that $P_{\pi+\pi^{\prime}}=P_{\pi}+P_{\pi^{\prime}}$ and
$D_{\pi+\pi^{\prime}}=D_{\pi}+D_{\pi^{\prime}}$; see Remark 4.
By Remark 5, we know that
$\mathsf{Bun}_{\textnormal{cart}}\simeq\mathsf{Cont}_{\textnormal{cart}}$, and
we have just established the equivalences $\mathsf{Poly}\simeq\mathsf{Cont}$
and $\mathsf{Dir}\simeq\mathsf{Bun}$. It thus remains to check that the latter
equivalences identify cartesian natural transformations in $\mathsf{Poly}$
with cartesian morphisms in $\mathsf{Cont}$, and similarly for $\mathsf{Dir}$
and $\mathsf{Bun}$. For polynomial functors, we may refer to [GK12, Section
2].
Turning to Dirichlet functors, we want to show that for any $f\colon D\to
D^{\prime}$ the square
$\begin{array}{ccc}D(1)&\xrightarrow{f_{1}}&D^{\prime}(1)\\ \downarrow{\scriptstyle\pi}&&\downarrow{\scriptstyle\pi^{\prime}}\\ D(0)&\xrightarrow{f_{0}}&D^{\prime}(0)\end{array}$ (4)
is a pullback in $\mathsf{Set}$ iff for all functions $g\colon X\to
X^{\prime}$, the naturality square
$\begin{array}{ccc}D(X^{\prime})&\xrightarrow{f_{X^{\prime}}}&D^{\prime}(X^{\prime})\\ \downarrow{\scriptstyle D(g)}&&\downarrow{\scriptstyle D^{\prime}(g)}\\ D(X)&\xrightarrow{f_{X}}&D^{\prime}(X)\end{array}$ (5)
is a pullback in $\mathsf{Set}$; we will freely use the natural isomorphism
$D_{\pi}(X)\cong\mathsf{Bun}(X!,\pi)$ from Eq. 3. The square in Eq. 4 is a
special case of that in Eq. 5, namely for $g\coloneqq 0!$ the unique function
$0\to 1$; this establishes the only-if direction.
To complete the proof, suppose that Eq. 4 is a pullback, take an arbitrary
$g\colon X\to X^{\prime}$, and suppose given a commutative solid-arrow diagram
as shown:
[Diagram: a commutative cube with top face $g\colon X\to X^{\prime}$ over $1\to 1$ and bottom face $D(1)\to D^{\prime}(1)$ over $D(0)\to D^{\prime}(0)$ (the square of Eq. 4); the dotted diagonal arrows into $D(1)$ and $D(0)$ are the ones to be constructed.]
We can interpret the statement that Eq. 5 is a pullback as saying that there
are unique dotted arrows making the diagram commute, since
$DX\cong\mathsf{Bun}(X!,D0!)$ and similarly for the other corners of the
square in Eq. 5. So, we need to show that if the front face is a pullback,
then there are unique diagonal dotted arrows as shown, making the diagram
commute. This follows quickly from the universal property of the pullback. ∎
###### Corollary 7.
$\mathsf{Dir}$ is an elementary topos.
###### Proof.
For any finite category $C$, the functor category $\mathsf{Fin}^{C}$ is an
elementary topos. The result now follows from Remarks 5 and 6, noting that
$\mathsf{Dir}\simeq\mathsf{Fin}^{\to}$. ∎
As we mentioned in the introduction, this all goes through smoothly when one
drops all finiteness conditions. The general topos of Dirichlet functors is
the category of (arbitrary) sums of representables
$\mathsf{Set}^{\textnormal{op}}\to\mathsf{Set}$, and this is equivalent to the
arrow category $\mathsf{Set}^{\to}$ and so is itself a topos.
We conclude with the equivalence promised in Dirichlet Polynomials.
###### Theorem 8.
A functor $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is a
Dirichlet polynomial if and only if it preserves connected limits, or
equivalently wide pullbacks.
###### Proof.
Let $D(\mathcal{y})=\sum_{i:D(0)}(d_{i})^{\mathcal{y}}$, and suppose that $J$
is any connected category. Then for any diagram $X\colon J\to\mathsf{Fin}$, we
have
$\displaystyle D(\operatorname*{colim}X_{j})$
$\displaystyle=\sum_{i:D(0)}(d_{i})^{\operatorname*{colim}X_{j}}$
$\displaystyle\cong\sum_{i:D(0)}\lim(d_{i})^{X_{j}}$
$\displaystyle\cong\lim\sum_{i:D(0)}(d_{i})^{X_{j}}$ $\displaystyle=\lim
D(X_{j})$
since connected limits commute with sums in any topos (in particular
$\mathsf{Set}$).
Now suppose $D\colon\mathsf{Fin}^{\textnormal{op}}\to\mathsf{Fin}$ is any
functor that preserves connected limits; in particular, it sends wide pushouts
to wide pullbacks. Every finite set $X$ can be expressed as the wide pushout
[wide pushout diagram: copies of $1$, one for each element of $X$, glued along $0$]
of its elements. Therefore, we have the following limit diagram:
[wide pullback diagram: $D(X)$ over copies of $D(1)$ mapping to $D(0)$, one for each element of $X$]
That is, an element of $D(X)$ is a family of elements $a_{x}\in D(1)$, one for
each $x\in X$, such that the $D(0!)(a_{x})$ are all equal in $D(0)$. But this
is just a bundle map, i.e.
$D(X)\cong\mathsf{Bun}(X!,D(0!))$
where $X!\colon X\to 1$ and $D(0!)\colon D(1)\to D(0)$. Thus by Theorem 6, the
functor $D$ is the Dirichlet polynomial associated to the bundle $D(0!)$. ∎
### Acknowledgments
The authors thank Joachim Kock, André Joyal, and Brendan Fong for helpful
comments that improved the quality of this note. Spivak also appreciates
support by Honeywell Inc. as well as AFOSR grants FA9550-17-1-0058 and
FA9550-19-1-0113. Jaz Myers appreciates support by his advisor Emily Riehl and
the National Science Foundation grant DMS-1652600.
## References
* [AAG03] Michael Gordon Abbott, Thorsten Altenkirch and Neil Ghani “Categories of Containers” In _FoSSaCS_ , 2003
* [AAG05] Michael Abbott, Thorsten Altenkirch and Neil Ghani “Containers: Constructing strictly positive types” Applied Semantics: Selected Topics In _Theoretical Computer Science_ 342.1, 2005, pp. 3–27
* [Abb03] Michael Gordon Abbott “Categories of Containers”, 2003
* [BD] John Baez and James Dolan “This Week’s Finds 300” Accessed: 2020-02-16 URL: http://math.ucr.edu/home/baez/week300.html
* [GK12] Nicola Gambino and Joachim Kock “Polynomial functors and polynomial monads” In _Mathematical Proceedings of the Cambridge Philosophical Society_ 154.1 Cambridge University Press (CUP), 2012, pp. 153–192
* [Joy81] André Joyal “Une théorie combinatoire des séries formelles” In _Advances in Mathematics_ 42.1, 1981, pp. 1–82
* [RW94] Robert Rosebrugh and R.. Wood “An Adjoint Characterization of the Category of Sets” In _Proc. Amer. Math. Soc_ 122, 1994, pp. 409–413
# Image Restoration for Under-Display Camera
Yuqian Zhou1, David Ren2, Neil Emerton3, Sehoon Lim3, Timothy Large3
1IFP, UIUC, 2CIL, UC Berkeley, 3Microsoft
###### Abstract
The new trend of full-screen devices encourages us to position a camera behind
a screen. Removing the bezel and centralizing the camera under the screen
brings a larger display-to-body ratio and enhances eye contact in video chat, but it also causes image degradation. In this paper, we focus on the newly defined Under-Display Camera (UDC) as a novel real-world single image restoration problem. First, we take a 4K Transparent OLED (T-OLED) and a phone Pentile OLED (P-OLED) and analyze their optical systems to understand the degradation.
Second, we design a Monitor-Camera Imaging System (MCIS) for easier real pair
data acquisition, and a model-based data synthesizing pipeline to generate
Point Spread Function (PSF) and UDC data only from display pattern and camera
measurements. Finally, we resolve the complicated degradation using a deconvolution-based pipeline and learning-based methods. Our model
demonstrates a real-time high-quality restoration. The presented methods and
results reveal the promising research value and directions of UDC.
## 1 Introduction
Under-display Camera (UDC) is a new imaging system that mounts display screen
on top of a traditional digital camera lens, as shown in Figure 1. Such a
system has mainly two advantages. First, it brings a new product trend of
full-screen devices [11] with a larger screen-to-body ratio, which can provide a better perceptual and interactive user experience [12]. Without a bezel and extra buttons, users can easily access more functions by directly touching the screen. Second, it provides better human-computer interaction. By
putting the camera in the center of the display, it enhances teleconferencing
experiences with perfect gaze tracking, and it is increasingly relevant for
larger display devices such as laptops and TVs.
Unlike pressure or fingerprint sensors that can be easily integrated into a
display, it is relatively difficult to retain full functionality of an imaging
sensor after mounting it behind a display. The imaging quality of a camera
will be severely degraded due to lower light transmission rate and diffraction
effects. As a result, images captured will be noisy and blurry. Therefore,
while bringing better user experience and interaction, UDC may sacrifice the quality of photography and degrade face processing [35] and other downstream vision tasks. Restoring and enhancing the images captured by a UDC system is thus highly desirable.
Figure 1: The newly proposed imaging system named Under-Display Camera (UDC).
We mount display screen on top of a traditional digital camera lens. The
design brings new trend of full-screen devices.
Traditional image restoration approaches formulate the task as an inverse problem or an optimization problem like Maximum-a-Posteriori (MAP) estimation. For the UDC problem, for practical purposes, the proposed image restoration algorithm and system are expected to work in real time. Therefore, deconvolution-based methods like the Wiener filter [14] should be preferred. Deconvolution is the inverse process of convolution and recovers the original signal from the point-spread-function (PSF)-convolved image. The fidelity of the deconvolution process depends on the space-invariance of the PSF over the image field-of-view (FOV) and on a low condition number for the inverse of the PSF [19]. For strongly non-delta-function-like PSFs, such as those encountered when imaging through a display, the condition number can be large. For such PSFs an additional denoising step may be essential.
Another option is the emerging discriminative learning-based image restoration model. Data-driven discriminative learning-based image restoration models usually outperform traditional methods in specific tasks like image de-noising [42, 48, 47, 26, 43, 3], de-blurring [21, 28], de-raining [40, 39], de-hazing [13, 33], super-resolution [22, 37], and light enhancement [9]. However, since they work on synthetic data with a single degradation type, existing models can hardly be utilized to enhance real-world low-quality images with complicated or combined degradation types. To address complicated real degradation like the UDC problem, it is necessary to either directly collect real paired data or synthesize near-realistic data after fully understanding the degradation model.
In this paper, we present the first study to define and analyze the novel
Under-Display Camera (UDC) system from both optics and image restoration
viewpoints. For optics, we parse the optical system of the UDC pipeline and
analyze the characteristics of light transmission. Then we relate the obtained
intuitions and measurements to an image restoration pipeline, and propose two
ways of resolving the single-image restoration: a deconvolution-based Wiener filter [29] pipeline (DeP) and a data-driven learning-based approach.
Specifically, we regard UDC restoration as a combination of tasks such as low-
light enhancement, de-blurring, and de-noising.
Without loss of generality, our analysis focuses on two types of displays, a
4K Transparent Organic Light-Emitting Diode (T-OLED) and a phone Pentile OLED
(P-OLED), and a single camera type, a 2K FLIR RGB Point Grey research camera.
To obtain the real imaging data and measure the optical factors of the system,
we also propose a data acquisition system using the above optical elements.
In summary, the main contributions of our paper are: (1) A brand new imaging
system named Under-Display Camera (UDC) is defined, measured and analyzed.
Extensive experiments reveal the image degradation process of the system,
inspiring better approaches for restoring the captured images. (2) As a
baseline, two practical and potential solutions are proposed, including
conventional Wiener Filter and the recent learning-based method. (3) Adopting
the newly-assembled image acquisition system, we collect the first Under-
Display Camera (UDC) dataset which will be released and evaluated by the
public.
## 2 Related Work
#### Real-world Image Reconstruction and Restoration
Image restoration for UDC [46, 24, 23, 49] can be categorized as a real-world restoration problem [3, 45], an emerging concept in low-level vision. In the past decades, low-level vision has mostly worked on synthetic data (e.g., denoising with AWGN and super-resolution with bicubic downsampling), but such models are not effective for images with real degradations such as real noise or blur kernels. Making
models perform better on real-world inputs usually requires new problem
analysis and a more challenging data collection. Recently, researchers also
worked on challenging cases like lensless imaging problems [30, 27, 20], or
integrating optics theory with High Dynamic Range imaging [34]. Previously,
there have been two common ways to prepare adaptive training data for real-
world problems: real data collection and near-realistic data synthesis.
Recently, real noise datasets such as DND [31], SIDD [2, 28], and RENOIR [5] have been introduced to address practical denoising problems. Abdelhamed et al. [3] proposed to estimate ground truth from captured smartphone noise images, and utilized the paired data to train and evaluate real denoising
algorithms. In addition to noise, Chen et al. first introduced the SID dataset
[9] to resolve extreme low-light imaging. In the area of Single Image Super
Resolution (SISR), researchers considered collecting optical zoom data [45,
10] to learn better computational zoom. Other restoration problems including
reflection removal [36, 32] also follow the trend of real data acquisition.
Collecting real data suffers from limited scene variety, since most previous works acquire images of postcards, static objects, or color boards. In this paper, we propose a novel monitor-camera imaging system to add real degradation to existing natural image datasets like DIV2K [4].
A realistic dataset can be synthesized if the degradation model is fully
understood and resolved. One good practice of data synthesis is generating
real noises on raw sensors or RGB images. CBDNet [17] and Tim et al. [8]
synthesized realistic noise by unfolding the in-camera pipeline, and
Abdelhamed et al. [1] better fitted the real noise distribution with flow-
based generative models. Zhou et al. [48] adapted the AWGN-RVIN noise into
real RGB noise by analyzing the demosacing process. Other physics-based
synthesis was also explored in blur[7] or hazing[6]. For the UDC problem in
this paper, we either collected real paired data, or synthesized near-
realistic data from model simulation. In particular, we applied the theory of
Fourier optics to simulate the diffraction effects, and further adjusted the
data with other camera measurements. Our data synthesizing pipeline
demonstrates a promising performance for addressing real complicated
degradation problem.
(a)
(b)
Figure 2: Image formation pipeline of under-display camera (UDC) problem. (a)
Image Formation Pipeline. (b) Optics characteristics of UDC. The structure of
the 4K T-OLED has a grating-like pixel layout. P-OLED differs from T-OLED in
sub-pixel design. From left to right: Micrography of display patterns, PSFs
(red light only) and MTFs (red, green, and blue).
## 3 Formulation
In this section, we discuss the optical system and image formation process of
the proposed UDC imaging system. We analyze the degradation type, light
transmission rate and visualize the Point Spread Function (PSF). Moreover, we
formulate the image formation pipeline to compute simulated PSF from
measurements.
### 3.1 Optical System Analysis
Optical Elements. In our experiments, we focus on the Organic Light-Emitting
Diode (OLED) displays [38] as they have superior optical properties compared
to the traditional LCDs (Liquid Crystal Display). Due to confidentiality
reasons it is often difficult to obtain the sample materials used for demos
from commercial companies. In this case, we select the displays with different
transparencies to improve the generalization. Note that all the displays are
non-active in our experiments, since in a real scenario, the display can be
turned off locally by setting black pixels on local regions of the OLED
display when the camera is in operation to 1) reduce unnecessary difficulty
from display contents while not affecting user experience and 2) provide users
with the status of the device and thus ensure privacy.
Owing to the transparent materials used in OLED display panels, visible light can be transmitted better through OLEDs than LCDs. In the meantime, pixels are also arranged such that the open area is maximized. In particular, we focus on a 4K Transparent OLED (T-OLED) and a phone Pentile OLED (P-OLED). Figure 2 is a micrograph illustration of the pixel layout in the two types of OLED displays. The 4K T-OLED has a grating-like pixel layout, while the P-OLED differs from the T-OLED in its sub-pixel design, following the basic structure of an RGBG matrix.
Table 1: Comparison of two displays in terms of light transmission rate and
physical pixel layout and open areas.
Metrics | T-OLED | P-OLED
---|---|---
Pixel Layout Type | Stripe | Pentile
Open Area | 21$\%$ | 23$\%$
Transmission Rate | 20$\%$ | 2.9$\%$
Major Degradation | Blur, Noise | Low-light, Color Shift, Noise
Light Transmission Rate. We measure the transmission efficiency of the OLEDs
by using a spectrophotometer and white light source. Table 1 compares the
light transmission rate of the two displays. For T-OLED, the open area
occupies about 21$\%$, and the light transmission rate is around 20$\%$. For
P-OLED, although the open area can be as large as 23$\%$, the light
transmission rate is only 2.9$\%$.
The loss of photons can be attributed mainly to the structure of P-OLED.
First, the P-OLED has a finer pixel pitch, so photons are scattered to higher angles compared to the T-OLED. As a result, high-angle photons are not
collected by the lens. Second, P-OLED is a flexible/bendable display, which
has a polyamide substrate on which the OLED is formed. Such a substrate has
relatively low transmission efficiency, causing photons to be absorbed. The
absorbed light with certain wavelengths may make the images captured through a
polyamide-containing display panel by a UDC appear yellow. As a result,
imaging through a P-OLED results in lower signal-to-noise ratio (SNR)
comparing to using a T-OLED, and has a color shift issue. One real imaging
example is shown in Figure 4.
Diffraction Pattern and Point Spread Function (PSF). Light diffracts as it
propagates through obstacles with sizes that are similar to its wavelength.
Unfortunately, the size of the openings in the pixel layout is on the order of
wavelength of visible light, and images formed will be degraded due to
diffraction.
Here we characterize our system by measuring the point spread function (PSF).
We do so by pointing a collimated red laser beam ($\lambda=$ 650nm) at the
display panel and recording the image formed on the sensor, as demonstrated in
Figure 1 and 2. An ideal PSF shall resemble a delta function, which then forms
a perfect image of the scene. However, light greatly spreads out in UDC. For
T-OLED, light spreads mostly across the horizontal direction due to its nearly
one dimensional structure in the pixel layout, while for P-OLED, light is more
equally distributed as the pixel layout is complex. Therefore, images captured
by UDC are either blurry (T-OLED) or hazy (P-OLED).
Modulation Transfer Function (MTF). The MTF is another important metric for an imaging system, as it considers the effect of
finite lens aperture, lens performance, finite pixel size, noise, non-
linearities, quantization (spatial and bit depth), and diffraction in our
systems. We characterize the MTF of our systems by recording sinusoidal
patterns with increasing frequency in both lateral dimensions, and we report
them in Figure 2. For T-OLED, contrasts along the horizontal direction are
mostly lost in the mid-band frequency due to diffraction. This phenomenon is
due to the nearly one-dimensional pixel layout of the T-OLED. Figure 4 shows
severe smearing horizontally when putting T-OLED in front of the camera. While
for P-OLED, the MTF is almost identical to that of display-free camera, except
with severe contrast loss. Fortunately, however, nulls have not been observed
in any particular frequencies.
### 3.2 Image Formation Pipeline
In this section, we derive the image formation process of UDC based on the
analysis in the previous sections. Given a calibrated pixel layout and
measurements using a specific camera, degraded images can be simulated from a
scene. From the forward model, we can compute the ideal PSF and consequently
synthesize datasets from ground truth images.
Given an object in the scene $\mathbf{x}$, the degraded observation
$\mathbf{y}$ can be modeled by a convolution process,
$\mathbf{y}=(\gamma\mathbf{x})\otimes\mathbf{k}+\mathbf{n},$ (1)
where $\gamma$ is the intensity scaling factor under the current gain setting
and display type, $\mathbf{k}$ is the PSF, and $\mathbf{n}$ is the zero-mean
signal-dependent noise. Notice that this is a simple noise model that
approximately resembles the combination of shot noise and readout noise of the
camera sensor, and it will be discussed in a later section.
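For concreteness, the forward model of Eq. 1 can be prototyped in a few lines for a single raw channel. The sketch below is our own illustration; it anticipates the noise model of Eq. 8 below, and the parameter values in the usage lines are placeholders rather than the measured ones.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(x, k, gamma, lam_read, lam_shot, rng=np.random.default_rng(0)):
    """Simulate y = (gamma * x) (*) k + n for one raw channel (Eq. 1).

    x: clean channel in [0, 1]; k: normalized PSF; n: heteroscedastic
    Gaussian noise with variance lam_read + lam_shot * signal (Eq. 8).
    """
    w = fftconvolve(gamma * x, k, mode="same")       # dark, blurred signal
    sigma = np.sqrt(lam_read + lam_shot * np.clip(w, 0, None))
    return w + rng.normal(0.0, 1.0, w.shape) * sigma

# Toy usage with made-up parameters (real values are measured in Sec. 4).
x = np.random.rand(256, 256)
k = np.ones((1, 9)) / 9.0                            # horizontal smear, T-OLED-like
y = degrade(x, k, gamma=0.97, lam_read=1e-4, lam_shot=1e-3)
```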
Intensity Scaling Factor ($\gamma$). The intensity scaling factor measures the
changing ratio of the average pixel values after covering the camera with a
display. It simultaneously relates to the physical light transmission rate of
the display, as well as the digital gain $\delta$ setting of the camera.
$\gamma$ can be computed from the ratio of $\delta$-gain amplified average
intensity values $I_{d}(\delta,s)$ at position $s$ captured by the UDC, to the 0-gain average intensity values $I_{nd}(0,s)$ captured by the bare camera within an enclosed region $S$. It is represented by
$\gamma=\frac{\int_{S}I_{d}(\delta,s)ds}{\int_{S}I_{nd}(0,s)ds}$ (2)
Diffraction Model. We approximate the blur kernel $\mathbf{k}$, which is the Point Spread Function (PSF) of the UDC. As shown in Figure 1, in our model, we assume the display panel is at the principal plane of the lens. We also assume the input light is a monochromatic plane wave with wavelength $\lambda$ (i.e. perfectly coherent), or equivalently light from a distant object with unit amplitude. Let the display pattern be represented by a transparency with complex amplitude transmittance $g(m,n)$ at Cartesian coordinates $(m,n)$, and
let the camera aperture/pupil function $p(m,n)$ be 1 if $(m,n)$ lies inside
the lens aperture region and 0 otherwise, then the display pattern inside the
aperture range $g_{p}(m,n)$ becomes,
$g_{p}(m,n)=g(m,n)p(m,n).$ (3)
At the focal plane of the lens (i.e. one focal length away from the principal plane), the image measured is the intensity distribution of the complex field, which is proportional to the Fourier transform of the electric field at the principal plane [16]:
$I(u,v)\propto\left|{\iint}^{\infty}_{-\infty}g_{p}(m,n)\exp\left[-j\frac{2\pi}{\lambda
f}(mu+nv)\right]\text{d}m\text{d}n\right|^{2}.$ (4)
Suppose $G_{p}(v_{m},v_{n})=\mathscr{F}(g_{p}(m,n))$, where
$\mathscr{F}(\cdot)$ is the Fourier transform operator, then
$I(u,v)\propto\left|G_{p}(v_{m},v_{n})\right|^{2}=\left|G_{p}(\frac{u}{\lambda
f},\frac{v}{\lambda f})\right|^{2},$ (5)
which performs proper scaling on the Fourier transform of the display pattern
on the focal plane.
Therefore, to compute the PSF $\mathbf{k}$ for image $\mathbf{x}$, we start by computing the Discrete Fourier Transform (DFT) with squared magnitude
$M(a,b)=|\hat{G_{p}}(a,b)|^{2}$ of the $N\times N$ microscope transmission
images $\hat{g_{p}}$ of the display pattern and re-scaling it. Then, the
spatial down-sampling factor $r$ (denoted by $\downarrow r$) becomes,
$r=\frac{1}{\lambda f}\cdot{\delta_{N}N}\cdot{\rho},$ (6)
where $\delta_{N}$ is the pixel size of the $\hat{g_{p}}$ images, and $\rho$
is the pixel size of the sensor. Finally, $\mathbf{k}$ can be represented as
$k(i,j)=\frac{M_{\downarrow r}(i,j)}{\sum_{(\hat{i},\hat{j})}M_{\downarrow
r}(\hat{i},\hat{j})}.$ (7)
$\mathbf{k}$ is normalized to guarantee that it represents the density distribution of the diffracted intensity. Note that only
PSF for a single wavelength is computed for simplicity. However, scenes in the
real world are by no means monochromatic. Therefore, in order to calculate an
accurate color image from such UDC systems, PSF for multiple wavelengths shall
be computed. More details will be shown in Section 4.2.
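A minimal sketch of Eqs. 3–7 follows (our illustration; the aperture cropping is an assumed preprocessing step, and bilinear resampling is one possible choice for the downsampling): take the squared magnitude of the FFT of the aperture-masked display pattern, resample it by $1/r$, and normalize.

```python
import numpy as np
from scipy.ndimage import zoom

def psf_from_display(g_p, delta_N, rho, wavelength, f):
    """Eqs. 3-7: PSF as the rescaled, normalized |DFT|^2 of the
    aperture-masked display pattern g_p (one wavelength, one channel)."""
    N = g_p.shape[0]
    M = np.abs(np.fft.fftshift(np.fft.fft2(g_p))) ** 2   # |G_p|^2
    r = (delta_N * N * rho) / (wavelength * f)           # Eq. 6
    M_down = zoom(M, 1.0 / r, order=1)                   # spatial downsampling
    return M_down / M_down.sum()                         # Eq. 7: normalize

# Usage with the measurements of Section 4.2 (red channel, microns
# throughout): aperture diameter delta_N * N = 3333, per-channel pixel
# pitch rho = 3.1, wavelength 0.640, focal length 6000.
# g_p = crop_to_aperture(microscope_image)               # assumed preprocessing
# k_red = psf_from_display(g_p, delta_N=3333 / g_p.shape[0], rho=3.1,
#                          wavelength=0.640, f=6000.0)
```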
Adding Noise. We follow the commonly used shot-read noise model [8, 18, 25] to represent the real noise on the imaging sensor. Given the dark and blurred signal $w=(\gamma\mathbf{x})\otimes\mathbf{k}$, the shot and readout noise can be
modeled by a heteroscedastic Gaussian,
$\mathbf{n}\sim\mathcal{N}(\mu=0,\sigma^{2}=\lambda_{read}+\lambda_{shot}w),$
(8)
where the variance $\sigma^{2}$ is signal-dependent, and $\lambda_{read}$, $\lambda_{shot}$ are determined by the camera sensor and gain values.
## 4 Data Acquisition and Synthesis
We propose an image acquisition system called Monitor-Camera Imaging System
(MCIS). In particular, we display natural images with rich textures on a high-resolution monitor and capture them with a static camera. This method is a more controllable, efficient, and automatic way to capture a variety of scene contents than using mobile set-ups to capture limited static objects or real scenes.
### 4.1 Monitor-Camera Imaging System
Figure 3: Monitor-Camera Imaging System (MCIS). MCIS consists of a 4K LCD
monitor, the 2K FLIR RGB Point-Grey research camera, and a panel that is
either T-OLED, P-OLED, or glass (i.e. no display). The camera is mounted on the
center line of the 4K monitor, and adjusted to cover the full monitor range.
(a) Display-free
(b) T-OLED
(c) P-OLED
Figure 4: Real samples collected by the proposed MCIS. Images captured by
T-OLED are blurry and noisy, while those captured by P-OLED are low-light,
color-shifted and hazy.
The system architecture is shown in Figure 3. MCIS consists of a 4K LCD
monitor, the 2K FLIR RGB Point-Grey research camera, and a panel that is
either T-OLED, P-OLED, or glass (i.e. no display). The camera is mounted on the
center line of the 4K monitor, and adjusted to cover the full monitor range.
We calibrate the camera gain by measuring a $256\times 256$ white square shown
on the monitor and matching the RGB histogram. For fair comparison and
simplicity, we adjust the focus and fix the aperture to f/1.8. This guarantees a reasonable pixel intensity range without saturation while collecting data with no gain. For a real-time video system, the frame rate has to be at least 8 fps, so the longest shutter time is 125 ms, which we use for better image quality and higher Signal-to-Noise Ratio (SNR).
Table 2: Camera settings for the different sets of collected data.
Parameters | No-Display | T-OLED | P-OLED
---|---|---|---
Aperture | f/1.8
FPS/Shutter | 8/125ms
Brightness | 0
Gamma | 1
Gain | 1 | 6 | 25(Full)
White-balance | Yes | None | None
We select 300 images from the DIV2K dataset [4] and display them in turn on the 4K LCD in full-screen mode. We either rotate or resize the images to maintain the aspect ratio. For training purposes, we capture two sets of
images, which are the degraded images $\\{y_{i}\\}$, and the degradation-free
set $\\{x_{i}\\}$.
To capture $\\{x_{i}\\}$, we first cover the camera with a thin glass panel
which has the same thickness as a display panel. This process allows us to
avoid the pixel misalignment issues caused by light refraction inside the
panel. To eliminate the image noises in $\\{x_{i}\\}$, we average the 16
repeated captured frames. Then we replace the glass with a display panel
(T-OLED or P-OLED), calibrate the specific gain value avoiding saturation, and
capture $\\{y_{i}\\}$. For each set, we record both the 16-bit 1-channel
linear RAW CMOS sensor data as well as the 8-bit 3-channel linear RGB data
after in-camera pipeline that includes demosaicing. The collected pairs are
naturally well spatially-aligned in pixel-level. They can be directly used for
deep model training without further transformations.
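The ground-truth capture step reduces to a per-pixel average over the repeated frames; a small sketch (ours; the raw-file reader is an assumed helper) is:

```python
import numpy as np

def average_frames(frames):
    """Average 16 repeated 16-bit raw captures to suppress sensor noise."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)  # keep float; quantize only when exporting

# frames = [read_raw(f"scene_042_rep{i:02d}.raw") for i in range(16)]  # assumed I/O
# x_clean = average_frames(frames)
```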
Due to the yellow substrate inside the P-OLED, certain light colors,
especially blue, are filtered out, which changes the white balance significantly. We therefore did not further alter the white balance. The light transmission rate of P-OLED is extremely low, so we set the gain to its maximum value (25) for higher signal values. All the detailed camera settings for
the two display types are shown in Table 2. One real data sample is shown in
Figure 4. As discussed and analyzed in Section 3.1, images captured by T-OLED
are blurry and noisy, while those captured by P-OLED are low-light, color-
shifted and hazy.
Table 3: Measured parameters for data synthesis
Parameters | T-OLED | P-OLED
---|---|---
| R | G | B | R | G | B
$\gamma$ | 0.97 | 0.97 | 0.97 | 0.34 | 0.34 | 0.20
$\lambda$ (nm) | 640 | 520 | 450 | 640 | 520 | 450
r | 2.41 | 2.98 | 3.44 | 2.41 | 2.98 | 3.44
### 4.2 Realistic Data Synthesis Pipeline
We follow the image formation pipeline to simulate the degradation on the
collected $\\{x_{i}\\}$. A model-based data synthesis method will benefit
concept understanding and further generalization. Note that all the camera
settings are the same as those used while collecting real data. We first transform the 16-bit raw sensor data $\\{x_{i}\\}$ into four Bayer channels
$x_{r}$, $x_{gr}$, $x_{gl}$, and $x_{b}$. Then, we multiply the measured
intensity scaling factor $\gamma$, compute the normalized and scaled PSF $k$,
and add noises to the synthesize degraded data.
Measuring $\gamma$: To measure $\gamma$ for each channel using the MCIS, we
select the region of interest $S$ to be a square region of size $256\times
256$, and display the intensity value input from 0 to 255 with stride 10 on
the monitor. We then record the average intensity both with and without the
display for each discrete intensity value, and plot the relationship between
display-covered values and no-display-covered ones. Using linear regression,
we obtain the slope of the line for each RGGB channel. For T-OLED, the
measured $\gamma$ is 0.97, same for all the channels. For P-OLED,
$\gamma=0.20$ for the blue channel, and $\gamma=0.34$ for the other three
channels.
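This fit can be sketched as follows (our illustration; the arrays of per-level mean intensities are assumed to have been recorded already, and we use a zero-intercept least-squares slope):

```python
import numpy as np

def fit_gamma(mean_no_display, mean_with_display):
    """Slope of the display-covered mean intensity against the
    display-free one; the zero-intercept slope is gamma."""
    x = np.asarray(mean_no_display, dtype=np.float64)
    y = np.asarray(mean_with_display, dtype=np.float64)
    return float(x.dot(y) / x.dot(x))   # least-squares slope through origin

# One fit per RGGB channel over the 26 displayed levels (0, 10, ..., 250).
# gamma_r = fit_gamma(levels_nd[:, 0], levels_d[:, 0])  # assumed arrays
```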
Computing PSF: Following equation 3, we acquire the transmission microscope
images of the display pattern and crop them with the approximated circular
aperture shape with diameter $3333\mu m$, the size of the camera aperture. In
equation 6, $\delta_{N}N$ is $3333\mu m$. $\rho$ equals $1.55\mu m$/pixel for the Sony sensor. However, after re-arranging the raw image into four RGGB channels, $\rho$ becomes $3.1\mu m$ for each channel. The focal length is $6000\mu m$. $\lambda=(640,520,450)$ nm for the R, G, B channels, which are the approximate peak wavelengths of the R, G, B filters on the sensor. This yields the down-sampling ratios $r=(2.41,2.98,3.44)$ for the R, G, and B
channels.
Adding Noises: We measure $\lambda_{read}$ and $\lambda_{shot}$ to estimate
the noise statistics. We display random patterns within the $256\times 256$
window on the monitor, collect the paired noisy and noise-free RAW data, and
compute their differences. For each of the RGGB channels, we linearly regress the noise variance against the intensity value, and obtain the slope as the shot noise variance coefficient and the y-intercept as the readout noise variance.
We then repeat the process 100 times and collect pairs of data points.
Finally, we estimate the distribution and randomly sample $\lambda_{read}$ and
$\lambda_{shot}$. All the measurements are listed in Table 3.
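The regression can be sketched as follows (our illustration; the intensity and variance arrays are assumed to hold the per-patch statistics described above):

```python
import numpy as np

def fit_noise_params(intensity, variance):
    """Linear fit var = lam_read + lam_shot * intensity (per RGGB channel)."""
    lam_shot, lam_read = np.polyfit(np.asarray(intensity, dtype=np.float64),
                                    np.asarray(variance, dtype=np.float64),
                                    deg=1)
    return lam_read, lam_shot

# intensity[i], variance[i]: mean signal and noise variance of patch i,
# computed from (noisy - clean) differences on the random displayed patterns.
# lam_read, lam_shot = fit_noise_params(intensity, variance)  # assumed arrays
```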
Figure 5: Network structure of the proposed UNet. It takes a 4-channel RAW
sensor data observation $y$, and outputs the restored 3-channel RGB image $x$.
## 5 Image Restoration Baselines
We use the collected real paired data, synthetic paired data, simulated PSF,
and all the necessary measurements to perform image restoration. We split the
300 pairs of images in the UDC dataset into 200 for training, 40 for
validation and 60 images in the testing partition. All the images have a
resolution of $1024\times 2048$.
### 5.1 Deconvolution Pipeline (DeP)
The DeP is a general-purpose conventional pipeline concatenating denoising and
deconvolution (Wiener Filter), which is an inverse process of the analyzed
image formation. To better utilize the unsupervised Wiener Filter (WF) [29],
we first apply the BM3D denoiser to each RAW channel separately; afterwards, we divide the outputs by the measured $\gamma$ for intensity scaling. After that, the WF is applied to each channel given the pre-computed PSF $\mathbf{k}$. Finally, RAW images with a Bayer pattern are demosaiced by linear
interpolation. The restored results are evaluated on the testing partition of
the UDC dataset.
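A condensed sketch of the DeP is given below; it assumes the bm3d PyPI package for the denoiser and scikit-image's unsupervised Wiener filter, and leaves the noise level and the final demosaicing step as assumptions.

```python
import numpy as np
from skimage.restoration import unsupervised_wiener
import bm3d  # PyPI package assumed for the BM3D denoiser

def dep_restore(raw_rggb, psfs, gamma, sigma_est=0.05):
    """Denoise -> intensity rescale -> Wiener deconvolve, per RGGB channel."""
    out = []
    for c in range(4):
        ch = bm3d.bm3d(raw_rggb[..., c], sigma_psd=sigma_est)  # denoise
        ch = ch / gamma[c]                                     # undo scaling
        deconv, _ = unsupervised_wiener(ch, psfs[c])           # Wiener filter
        out.append(np.clip(deconv, 0.0, 1.0))
    rggb = np.stack(out, axis=-1)
    # Final step: re-pack into a Bayer mosaic and demosaic by linear
    # interpolation (e.g., an OpenCV Bayer conversion) to obtain RGB.
    return rggb
```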
### 5.2 Learning-based Methods
UNet. We propose a learning-based restoration network baseline as shown in
Figure 5. The proposed model takes a 4-channel RAW sensor data observation
$y$, and outputs the restored 3-channel RGB image $x$. The model conducts
denoising, de-blurring, white-balancing, intensity scaling, and demosaicing in a
single network, whose structure is basically a UNet. We split the encoder into
two sub-encoders, one of which is for computing residual details to add, and
the other one learns a content encoding from degraded images. By splitting the encoder, compared with doubling the width of each layer, we have fewer parameters and make inference and learning more efficient. To train the model from paired images, we apply the $L_{1}$ loss, which largely guarantees temporal stability compared with an adversarial loss [15]. Besides,
we also apply $SSIM$ and Perception Loss (VGG Loss) for ablation study.
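A minimal PyTorch sketch of this dual-encoder design follows; the depths, channel widths, additive fusion of the two encoders, and the final $\times 2$ upsampling from the packed half-resolution RGGB input to full-resolution RGB are our simplifications of Figure 5, not the exact published architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        self.blocks = nn.ModuleList(
            [conv_block(4 if i == 0 else widths[i - 1], w) for i, w in enumerate(widths)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for i, b in enumerate(self.blocks):
            x = b(x if i == 0 else self.pool(x))
            feats.append(x)
        return feats                         # multi-scale features, fine to coarse

class DualEncoderUNet(nn.Module):
    """4-channel packed RAW in, 3-channel RGB out (2x spatial upsampling)."""
    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        self.enc_content = Encoder(widths)   # content-encoding branch
        self.enc_residual = Encoder(widths)  # residual-detail branch
        self.up = nn.ModuleList([nn.ConvTranspose2d(widths[i], widths[i - 1], 2, stride=2)
                                 for i in range(len(widths) - 1, 0, -1)])
        self.dec = nn.ModuleList([conv_block(2 * widths[i - 1], widths[i - 1])
                                  for i in range(len(widths) - 1, 0, -1)])
        self.head = nn.Sequential(nn.ConvTranspose2d(widths[0], widths[0], 2, stride=2),
                                  nn.Conv2d(widths[0], 3, 3, padding=1))

    def forward(self, raw):
        fc = self.enc_content(raw)
        fr = self.enc_residual(raw)
        x = fc[-1] + fr[-1]                  # fuse the two encoders at the bottleneck
        for up, dec, skip_c, skip_r in zip(self.up, self.dec, fc[-2::-1], fr[-2::-1]):
            x = dec(torch.cat([up(x), skip_c + skip_r], dim=1))
        return self.head(x)                  # full-resolution RGB estimate

# y_raw: (B, 4, H/2, W/2) packed RGGB; x_hat: (B, 3, H, W) restored RGB.
# model = DualEncoderUNet(); x_hat = model(torch.randn(1, 4, 256, 256))
```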
We crop patches of $256\times 256$, and augment the training data using the
raw image augmentation [26] while preserving the RGGB Bayer pattern. We train the model for 400 epochs using the Adam optimizer ($\beta_{1}=0.9$, $\beta_{2}=0.999$ and $\epsilon=10^{-8}$) with learning rate $10^{-4}$ and decay factor 0.5 after 200 epochs. We also train the same structure using the
synthetic data (denoted as UNet(Syn)) generated by the pipeline proposed in
Section 4.2.
ResNet. Additionally, a data-driven ResNet trained with the same data is
utilized for evaluation. To our knowledge, UNet and ResNet-based structures
are two widely-used deep models for image restoration. We use 16 residual
blocks with a feature width of 64 for our ResNet architecture, just as Lim et
al. do for EDSR [22]. The model also takes 4-channel RAW data, and outputs
3-channel RGB images. Data-driven models cannot directly adapt to UDC inputs if trained only with bicubic degradation. We did not compare with other model structures because model novelty is not our main claim, and the two presented methods are the most general ones that achieve real-time inference, serving as baselines. Other model variants can be further explored in
future work.
(a) T-OLED
(b) DeP
(c) UNet(Syn)
(d) UNet
(e) GT
Figure 6: Restoration Results Comparison for T-OLED. GT: Ground Truth.
(a) P-OLED
(b) DeP
(c) UNet(Syn)
(d) UNet
(e) GT
Figure 7: Restoration Results Comparison for P-OLED. GT: Ground Truth.
Table 4: Pipeline Comparison.
Pipeline Structure | | | | 4K T-OLED | P-OLED
---|---|---|---|---|---
| $\#$P $\downarrow$ | GFLOPs $\downarrow$ | T $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$
DeP | - | - | - | 28.50/0.9117 | 0.4219 | 16.97/0.7084 | 0.6306
ResNet | 1.37M | 721.76 | 92.92 | 36.26/0.9703 | 0.1214 | 27.42/0.9176 | 0.2500
UNet(Syn) | 8.93M | 124.36 | 21.37 | 32.42/0.9343 | 0.1739 | 25.88/0.9006 | 0.3089
UNet | 8.93M | 124.36 | 21.37 | 36.71/0.9713 | 0.1209 | 30.45/0.9427 | 0.2219
Table 5: Ablation Study on UNet alternatives.
Alternatives | | | | 4K T-OLED | P-OLED
---|---|---|---|---|---
| $\#$P $\downarrow$ | GFLOPs $\downarrow$ | T $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$ | PSNR/SSIM $\uparrow$ | LPIPS $\downarrow$
UNet Baseline | 8.93M | 124.36 | 21.37 | 36.71/0.9713 | 0.1209 | 30.45/0.9427 | 0.2219
Double Width | 31.03M | 386.37 | 40.42 | 37.00/0.9730 | 0.1171 | 30.37/0.9425 | 0.2044
Single Encoder | 7.76M | 97.09 | 15.85 | 36.47/0.9704 | 0.1288 | 30.26/0.9387 | 0.2318
$L_{1}\rightarrow L_{1}+SSIM$ | 8.93M | 124.36 | 21.37 | 36.69/0.9714 | 0.1246 | 30.37/0.9403 | 0.2131
$L_{1}\rightarrow L_{1}+VGG$ | 8.93M | 124.36 | 21.37 | 36.31/0.9711 | 0.1130 | 30.37/0.9403 | 0.2130
## 6 Experimental Results
### 6.1 Qualitative and Quantitative Comparisons
The qualitative restoration results are shown in Figures 6 and 7. As shown, the Deconvolution Pipeline (DeP) successfully recovers image details but still introduces some artifacts, and suffers from the inaccuracy of the
computed ideal PSF. The UNet-based model achieves better visual quality and
denoising performance. The results of UNet trained with the synthetic data are
visually better than those of the DeP.
The quantitative results are listed in Table 4. We report the performance in
PSNR, SSIM, a perceptual metric LPIPS [44], inference time T (ms/MPixels) and
GFLOPs. The inference time is tested on a single Titan X, and the GFLOPs are computed for an input size of $512\times 1024\times 4$. ResNet achieves a performance comparable to UNet, but it requires more computation and a longer inference time. The proposed UNet-based structure is efficient and effective, and can therefore be deployed for real-time inference on high-resolution inputs with a single GPU. In Table 4, we demonstrate that a gap remains between the synthetic data and the real data, though the synthetic data has already greatly outperformed the DeP for the two display types. The domain gap mainly comes from the following aspects. First, due to the distance between the display and the lens, visible patterns of the display appear on the image plane in the real data; recall that the diffraction model assumes the display panel is exactly at the principal plane of the lens system. The cause of the visible bands is illustrated in the supplementary material. Second, the approximated light transmission rate may not be accurate, since the measured values may be influenced by other environmental light sources. Third, impulse noise caused by dead pixels or over-exposure in the camera sensor is widespread in the real dataset. These factors leave room for further improvement of this work.
Figure 8: Face detection performance before and after applying restoration.
Without a display, the original face recall rate is 60$\%$. Covering the camera with T-OLED or P-OLED decreases the recall rate to 8$\%$ and 0$\%$, respectively. After image restoration, the recall rates recover to 56$\%$ and 39$\%$.
### 6.2 Ablation Study
For the best-performing UNet structure, we compare different UNet alternatives in Table 5. Splitting the original encoder into two sub-encoders increases the parameter size and, with it, the performance. The added parameter size and inference time are far less than those from doubling the width of each UNet layer, but the performance improvement is comparable (T-OLED) or even better (P-OLED). The proposed UNet structure therefore both maintains a small number of parameters and operations and achieves real-time high-quality inference. To try alternative loss functions, we add $SSIM$ or $VGG$ loss in addition to the $L_{1}$ loss with a 1:1 ratio. However, the performance gains on either $SSIM$ or the perceptual metric LPIPS are not significant, and the results are not visually distinctive. An adversarial loss is not implemented due to the temporal instability of GAN-based training.
### 6.3 Downstream Applications
The proposed image restoration also enhances the performance of downstream
applications including face detection. Figure 8 shows an example of detecting
faces using MTCNN [41]. Without a display, the original face recall rate is 60$\%$. Covering the camera with T-OLED or P-OLED decreases the recall rate to 8$\%$ and 0$\%$, respectively. After image restoration, the recall rates are recovered to 56$\%$ and 39$\%$.
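Such a recall evaluation can be sketched as follows, assuming the mtcnn PyPI package (whose detect_faces returns one box dictionary per detected face) and a simple IoU-based matching of detections to ground-truth faces; the matching threshold is our choice.

```python
from mtcnn import MTCNN  # PyPI package assumed; detect_faces returns box dicts

def iou(a, b):
    """IoU of two [x, y, w, h] boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def face_recall(images, gt_boxes, thr=0.5):
    """Fraction of ground-truth faces recovered by MTCNN detections."""
    det = MTCNN()
    hits = total = 0
    for img, boxes in zip(images, gt_boxes):
        preds = [d["box"] for d in det.detect_faces(img)]
        hits += sum(any(iou(g, p) >= thr for p in preds) for g in boxes)
        total += len(boxes)
    return hits / max(total, 1)
```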
## 7 Conclusion and Limitations
This paper defined and presented a novel imaging system named Under-Display Camera (UDC). Deploying UDC in full-screen devices improves user interaction as well as the teleconferencing experience, but harms imaging quality and other downstream vision applications. We systematically analyzed the optical system and modelled the image formation pipeline of UDC, collected real data using a novel acquisition system, and synthesized realistic data and the PSF of the system using an optical model. We then proposed to address the image restoration of UDC using a Deconvolution-based Pipeline (DeP) and data-driven learning-based methods. Our experiments showed that the former achieves basic restoration while the latter demonstrates efficient high-quality restoration. The model trained with synthetic data also achieved remarkable performance, indicating its potential generalization ability.
The UDC problem has promising research value in complicated degradation analysis. In real-world applications, other factors like an active display, reflection, and lens flare remain very challenging. Future work can explore UDC-specific restoration models and work with aperture and display researchers to analyze the influential factors of image degradation. The ultimate goal is to make the restoration model generalize better for mass production and be more helpful for downstream tasks.
## Appendix A Appendices
(a) Display-free
(b) T-OLED
(c) P-OLED
Figure A.1: More real data samples acquired by our MCIS set-up. (a) The image
captured with camera covered by thin glass, (b)T-OLED, and (c) P-OLED.
### A.1 Real Data
More examples from the UDC real dataset, in 8-bit RGB form, are shown in Fig.
A.1. Each image has a high resolution of $1024\times 2048\times 3$. Images
captured by T-OLED demonstrate a blur effect along the horizontal direction.
Some spatial frequencies (i.e. vertical bands) are missing due to diffraction
effects. Images captured by P-OLED are yellow-shifted, dark, and noisy. We
also stored the 16-bit raw sensor data, which is mainly used for training and
testing in the paper.
### A.2 Synthetic Data
Figure A.2: Real and the computed point spread function (kernel).
We follow the image formation pipeline to synthesize the near-realistic data.
Given only the display pattern and some specific camera measurements, we can generate the blur kernels shown in Fig. A.2, along with the degraded images for training. Fig. A.3 compares the synthetic data with the real data. Perceptually, the two sets of data samples have similar visual characteristics.
### A.3 Visible Bands for T-OLED
(a) Real data samples.
(b) Synthetic data samples.
Figure A.3: Comparison of real data and synthetic data. First row: T-OLED.
Second row: P-OLED.
(a) Synthetic data.
(b) Real data with bands.
Figure A.4: Visible bands in real data.
In addition to the degradation formulated in the paper, there is another minor
image artifact caused by the periodic grating-like pixel structure (i.e.
T-OLED): the superposition of periodic bands over the image at low to moderate visibility levels. As shown in Fig. A.4, periodic bands are visible in the real data, but not in the synthetic data. We regard this as the main gap in our data synthesis. The bands are caused by the imperfect adhesion of the display to the camera lens. In the degradation model, we assume the display pattern is placed exactly against the lens, while in the practical set-up of our experiments there is still a small distance between them. We can consider the grating as being imaged very out of focus on the sensor plane: the sensor then records the grating convolved with the very-out-of-focus point spread function, which is a circle. This problem can be mitigated by an industrial manufacturing process, so we did not resolve it explicitly in our experimental settings. However, eliminating such real periodic noise remains an interesting problem for future work.
### A.4 More Restoration Results
We show more restoration results in Fig. A.5.
(a) Display
(b) DeP
(c) UNet(Syn)
(d) UNet
(e) GT
Figure A.5: More restoration results. For each two-row group, the first row is
for T-OLED, and the second one is for P-OLED.
## References
* [1] Abdelrahman Abdelhamed, Marcus A Brubaker, and Michael S Brown. Noise flow: Noise modeling with conditional normalizing flows. In Proceedings of the IEEE International Conference on Computer Vision, pages 3165–3173, 2019.
* [2] Abdelrahman Abdelhamed, Stephen Lin, and Michael S Brown. A high-quality denoising dataset for smartphone cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1692–1700, 2018.
* [3] Abdelrahman Abdelhamed, Radu Timofte, and Michael S Brown. Ntire 2019 challenge on real image denoising: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
* [4] Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 126–135, 2017.
* [5] Josue Anaya and Adrian Barbu. Renoir–a dataset for real low-light image noise reduction. Journal of Visual Communication and Image Representation, 51:144–154, 2018.
* [6] Codruta O Ancuti, Cosmin Ancuti, Mateu Sbert, and Radu Timofte. Dense haze: A benchmark for image dehazing with dense-haze and haze-free images. arXiv preprint arXiv:1904.02904, 2019.
* [7] Tim Brooks and Jonathan T Barron. Learning to synthesize motion blur. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6840–6848, 2019.
* [8] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T Barron. Unprocessing images for learned raw denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11036–11045, 2019.
* [9] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3291–3300, 2018.
* [10] Chang Chen, Zhiwei Xiong, Xinmei Tian, Zheng-Jun Zha, and Feng Wu. Camera lens super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1652–1660, 2019.
* [11] Dong-Ming Chen, Bin Xiong, and Zhen-Yu Guo. Full-screen smartphone, Sept. 3 2019. US Patent App. 29/650,323.
* [12] V David John Evans, Xinrui Jiang, Andrew E Rubin, Matthew Hershenson, and Xiaoyu Miao. Optical sensors disposed beneath the display of an electronic device, Oct. 17 2019. US Patent App. 16/450,727.
* [13] Raanan Fattal. Single image dehazing. ACM transactions on graphics (TOG), 27(3):72, 2008.
* [14] J Scott Goldstein, Irving S Reed, and Louis L Scharf. A multistage representation of the wiener filter based on orthogonal projections. IEEE Transactions on Information Theory, 44(7):2943–2959, 1998.
* [15] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
* [16] Joseph W Goodman. Introduction to Fourier optics. Roberts and Company Publishers, 2005.
* [17] Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, and Lei Zhang. Toward convolutional blind denoising of real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1712–1722, 2019.
* [18] Samuel W Hasinoff. Photon, poisson noise. Computer Vision: A Reference Guide, pages 608–610, 2014.
* [19] Michael T Heath. Scientific Computing: An Introductory Survey, Revised Second Edition. SIAM, 2018.
* [20] Salman S Khan, VR Adarsh, Vivek Boominathan, Jasper Tan, Ashok Veeraraghavan, and Kaushik Mitra. Towards photorealistic reconstruction of highly multiplexed lensless images. In Proceedings of the IEEE International Conference on Computer Vision, pages 7860–7869, 2019.
* [21] Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8183–8192, 2018.
* [22] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 136–144, 2017.
* [23] Sehoon Lim, Yuqian Zhou, Neil Emerton, and Tim Large. Aperture design for learning-based image restoration. In 3D Image Acquisition and Display: Technology, Perception and Applications, pages DF3A–2. Optical Society of America, 2020.
* [24] Sehoon Lim, Yuqian Zhou, Neil Emerton, Tim Large, and Steven Bathiche. 74-1: Image restoration for display-integrated camera. In SID Symposium Digest of Technical Papers, volume 51, pages 1102–1105. Wiley Online Library, 2020.
* [25] Ce Liu, Richard Szeliski, Sing Bing Kang, C Lawrence Zitnick, and William T Freeman. Automatic estimation and removal of noise from a single image. IEEE transactions on pattern analysis and machine intelligence, 30(2):299–314, 2007.
* [26] Jiaming Liu, Chi-Hao Wu, Yuzhi Wang, Qin Xu, Yuqian Zhou, Haibin Huang, Chuan Wang, Shaofan Cai, Yifan Ding, Haoqiang Fan, et al. Learning raw image denoising with bayer pattern unification and bayer preserving augmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
* [27] Kristina Monakhova, Joshua Yurtsever, Grace Kuo, Nick Antipa, Kyrollos Yanny, and Laura Waller. Learned reconstructions for practical mask-based lensless imaging. Optics express, 27(20):28075–28090, 2019.
* [28] Seungjun Nah, Radu Timofte, Sungyong Baik, Seokil Hong, Gyeongsik Moon, Sanghyun Son, and Kyoung Mu Lee. Ntire 2019 challenge on video deblurring: Methods and results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
* [29] François Orieux, Jean-François Giovannelli, and Thomas Rodet. Bayesian estimation of regularization and point spread function parameters for wiener–hunt deconvolution. JOSA A, 27(7):1593–1607, 2010.
* [30] Yifan Peng, Qilin Sun, Xiong Dun, Gordon Wetzstein, Wolfgang Heidrich, and Felix Heide. Learned large field-of-view imaging with thin-plate optics. ACM Trans. Graph., 38(6):219–1, 2019.
* [31] Tobias Plotz and Stefan Roth. Benchmarking denoising algorithms with real photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1586–1595, 2017.
* [32] Abhijith Punnappurath and Michael S Brown. Reflection removal using a dual-pixel sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1556–1565, 2019.
* [33] Wenqi Ren, Si Liu, Hua Zhang, Jinshan Pan, Xiaochun Cao, and Ming-Hsuan Yang. Single image dehazing via multi-scale convolutional neural networks. In European conference on computer vision, pages 154–169. Springer, 2016.
* [34] Qilin Sun, Ethan Tseng, Qiang Fu, Wolfgang Heidrich, and Felix Heide. Learning rank-1 diffractive optics for single-shot high dynamic range imaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1386–1396, 2020.
* [35] Jasper Tan, Li Niu, Jesse K Adams, Vivek Boominathan, Jacob T Robinson, Richard G Baraniuk, and Ashok Veeraraghavan. Face detection and verification using lensless cameras. IEEE Transactions on Computational Imaging, 5(2):180–194, 2018\.
* [36] Renjie Wan, Boxin Shi, Ling-Yu Duan, Ah-Hwee Tan, and Alex C Kot. Benchmarking single-image reflection removal algorithms. In Proceedings of the IEEE International Conference on Computer Vision, pages 3922–3930, 2017.
* [37] Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0–0, 2018.
* [38] Ing G Wenke. Organic light emitting diode (oled). Research gate, 2016.
* [39] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695–704, 2018.
* [40] He Zhang, Vishwanath Sindagi, and Vishal M Patel. Image de-raining using a conditional generative adversarial network. IEEE transactions on circuits and systems for video technology, 2019\.
* [41] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, 2016.
* [42] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, 2017.
* [43] Kai Zhang, Wangmeng Zuo, and Lei Zhang. Ffdnet: Toward a fast and flexible solution for cnn-based image denoising. IEEE Transactions on Image Processing, 27(9):4608–4622, 2018.
* [44] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586–595, 2018.
* [45] Xuaner Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun. Zoom to learn, learn to zoom. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3762–3770, 2019.
* [46] Zhenhua Zhang. Image deblurring of camera under display by deep learning. In SID Symposium Digest of Technical Papers, volume 51, pages 43–46. Wiley Online Library, 2020.
* [47] Yuqian Zhou, Jianbo Jiao, Haibin Huang, Jue Wang, and Thomas Huang. Adaptation strategies for applying awgn-based denoiser to realistic noise. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 10085–10086, 2019.
* [48] Yuqian Zhou, Jianbo Jiao, Haibin Huang, Yang Wang, Jue Wang, Honghui Shi, and Thomas Huang. When awgn-based denoiser meets real noises. arXiv preprint arXiv:1904.03485, 2019.
* [49] Yuqian Zhou, Michael Kwan, Kyle Tolentino, Neil Emerton, Sehoon Lim, Tim Large, Lijiang Fu, Zhihong Pan, Baopu Li, Qirui Yang, et al. Udc 2020 challenge on image restoration of under-display camera: Methods and results. In European Conference on Computer Vision, pages 337–351. Springer, 2020.
# Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity

https://multisimlex.com/
Ivan Vulić (LTL, University of Cambridge; equal contribution), Simon Baker (LTL, University of Cambridge; equal contribution), Edoardo Maria Ponti (LTL, University of Cambridge; equal contribution), Ulla Petti (LTL, University of Cambridge), Ira Leviant (Faculty of Industrial Engineering and Management, Technion, IIT), Kelly Wing (LTL, University of Cambridge), Olga Majewska (LTL, University of Cambridge), Eden Bar (Faculty of Industrial Engineering and Management, Technion, IIT), Matt Malone (LTL, University of Cambridge), Thierry Poibeau (LATTICE Lab, CNRS and ENS/PSL and Univ. Sorbonne nouvelle/USPC), Roi Reichart (Faculty of Industrial Engineering and Management, Technion, IIT), Anna Korhonen (LTL, University of Cambridge)
###### Abstract
We introduce Multi-SimLex, a large-scale lexical resource and evaluation
benchmark covering datasets for 12 typologically diverse languages, including
major languages (e.g., Mandarin Chinese, Spanish, Russian) as well as less-
resourced ones (e.g., Welsh, Kiswahili). Each language dataset is annotated
for the lexical relation of semantic similarity and contains 1,888
semantically aligned concept pairs, providing a representative coverage of
word classes (nouns, verbs, adjectives, adverbs), frequency ranks, similarity
intervals, lexical fields, and concreteness levels. Additionally, owing to the
alignment of concepts across languages, we provide a suite of 66 cross-lingual
semantic similarity datasets. Due to its extensive size and language coverage,
Multi-SimLex provides entirely novel opportunities for experimental evaluation
and analysis. On its monolingual and cross-lingual benchmarks, we evaluate and
analyze a wide array of recent state-of-the-art monolingual and cross-lingual
representation models, including static and contextualized word embeddings
(such as fastText, M-BERT and XLM), externally informed lexical
representations, as well as fully unsupervised and (weakly) supervised cross-
lingual word embeddings. We also present a step-by-step dataset creation
protocol for creating consistent, Multi-SimLex-style resources for additional
languages. We make these contributions - the public release of Multi-SimLex
datasets, their creation protocol, strong baseline results, and in-depth
analyses which can be helpful in guiding future developments in
multilingual lexical semantics and representation learning - available via a
website which will encourage community effort in the further expansion of
Multi-SimLex to many more languages. Such a large-scale semantic resource could
inspire significant further advances in NLP across languages.
## 1 Introduction
The lack of annotated training and evaluation data for many tasks and domains
hinders the development of computational models for the majority of the
world’s languages Snyder and Barzilay (2010); Adams et al. (2017); Ponti et
al. (2019a). The necessity to guide and advance multilingual and cross-lingual
NLP through annotation efforts that follow cross-lingually consistent
guidelines has been recently recognized by collaborative initiatives such as
the Universal Dependency (UD) project Nivre et al. (2019). The latest version
of UD (as of March 2020) covers more than 70 languages. Crucially, this
resource continues to steadily grow and evolve through the contributions of
annotators from across the world, extending the UD’s reach to a wide array of
typologically diverse languages. Besides steering research in multilingual
parsing Zeman et al. (2018); Kondratyuk and Straka (2019); Doitch et al.
(2019) and cross-lingual parser transfer Rasooli and Collins (2017); Lin et
al. (2019); Rotman and Reichart (2019), the consistent annotations and
guidelines have also enabled a range of insightful comparative studies focused
on the languages’ syntactic (dis)similarities Bjerva and Augenstein (2018);
Ponti et al. (2018a); Pires, Schlinger, and Garrette (2019).
Inspired by the UD work and its substantial impact on research in
(multilingual) syntax, in this article we introduce Multi-SimLex, a suite of
manually and consistently annotated semantic datasets for 12 different
languages, focused on the fundamental lexical relation of semantic similarity
Budanitsky and Hirst (2006); Hill, Reichart, and Korhonen (2015). For any pair
of words, this relation measures whether their referents share the same
(functional) features, as opposed to general cognitive association captured by
co-occurrence patterns in texts (i.e., the distributional information).
Datasets that quantify the strength of true semantic similarity between
concept pairs such as SimLex-999 Hill, Reichart, and Korhonen (2015) or
SimVerb-3500 Gerz et al. (2016) have been instrumental in improving models for
distributional semantics and representation learning. Discerning between
semantic similarity and relatedness/association is not only crucial for
theoretical studies on lexical semantics (see §2), but has also been shown to
benefit a range of language understanding tasks in NLP. Examples include
dialog state tracking Mrkšić et al. (2017); Ren et al. (2018), spoken language
understanding Kim et al. (2016); Kim, de Marneffe, and Fosler-Lussier (2016),
text simplification Glavaš and Vulić (2018); Ponti et al. (2018b); Lauscher et
al. (2019), dictionary and thesaurus construction Cimiano, Hotho, and Staab
(2005); Hill et al. (2016).
Despite the proven usefulness of semantic similarity datasets, they are
available only for a small and typologically narrow sample of resource-rich
languages such as German, Italian, and Russian Leviant and Reichart (2015),
whereas some language types and low-resource languages typically lack similar
evaluation data. Even if some resources do exist, they are limited in their
size (e.g., 500 pairs in Turkish Ercan and Yıldız (2018), 500 in Farsi
Camacho-Collados et al. (2017), or 300 in Finnish Venekoski and Vankka (2017))
and coverage (e.g., all datasets which originated from the original English
SimLex-999 contain only highly frequent concepts and are dominated by nouns).
This is why, as our departure point, we introduce a larger and more
comprehensive English word similarity dataset spanning 1,888 concept pairs
(see §4).
Most importantly, semantic similarity datasets in different languages have
been created using heterogeneous construction procedures with different
guidelines for translation and annotation, as well as different rating scales.
For instance, some datasets were obtained by directly translating the English
SimLex-999 in its entirety Leviant and Reichart (2015); Mrkšić et al. (2017)
or in part Venekoski and Vankka (2017). Other datasets were created from
scratch Ercan and Yıldız (2018) and yet others sampled English concept pairs
differently from SimLex-999 and then translated and reannotated them in target
languages Camacho-Collados et al. (2017). This heterogeneity makes these
datasets incomparable and precludes systematic cross-linguistic analyses. In
this article, consolidating the lessons learned from previous dataset
construction paradigms, we propose a carefully designed translation and
annotation protocol for developing monolingual Multi-SimLex datasets with
aligned concept pairs for typologically diverse languages. We apply this
protocol to a set of 12 languages, including a mixture of major languages
(e.g., Mandarin, Russian, and French) as well as several low-resource ones
(e.g., Kiswahili, Welsh, and Yue Chinese). We demonstrate that our proposed
dataset creation procedure yields data with high inter-annotator agreement
rates (e.g., the average mean inter-annotator agreement for Welsh is 0.742).
The unified construction protocol and alignment between concept pairs enable
a series of quantitative analyses. Preliminary studies on the influence that
polysemy and cross-lingual variation in lexical categories (see §2.3) have on
similarity judgments are provided in §5. Data created according to the Multi-
SimLex protocol also allow for probing whether similarity judgments are
universal across languages, or rather depend on linguistic affinity (in terms
of linguistic features, phylogeny, and geographical location). We investigate
this question in §5.4. Naturally, Multi-SimLex datasets can be used as an
intrinsic evaluation benchmark to assess the quality of lexical
representations based on monolingual, joint multilingual, and transfer
learning paradigms. We conduct a systematic evaluation of several state-of-
the-art representation models in §7, showing that there are large gaps between
human and system performance in all languages. The proposed construction
paradigm also supports the automatic creation of 66 cross-lingual Multi-SimLex
datasets by interleaving the monolingual ones. We outline the construction of
the cross-lingual datasets in §6, and then present a quantitative evaluation
of a series of cutting-edge cross-lingual representation models on this
benchmark in §8.
Contributions. We now summarize the main contributions of this work:
1) Building on lessons learned from prior work, we create a more comprehensive
lexical semantic similarity dataset for the English language spanning a total
of 1,888 concept pairs balanced with respect to similarity, frequency, and
concreteness, and covering four word classes: nouns, verbs, adjectives and,
for the first time, adverbs. This dataset serves as the main source for the
creation of equivalent datasets in several other languages.
2) We present a carefully designed and rigorous language-agnostic translation
and annotation protocol. These well-defined guidelines will facilitate the
development of future Multi-SimLex datasets for other languages. The proposed
protocol eliminates some crucial issues with prior efforts focused on the
creation of multi-lingual semantic resources, namely: i) limited coverage; ii)
heterogeneous annotation guidelines; and iii) concept pairs which are
semantically incomparable across different languages.
3) We offer to the community manually annotated evaluation sets of 1,888
concept pairs across 12 typologically diverse languages, and 66 large cross-
lingual evaluation sets. To the best of our knowledge, Multi-SimLex is the
most comprehensive evaluation resource to date focused on the relation of
semantic similarity.
4) We benchmark a wide array of recent state-of-the-art monolingual and cross-
lingual word representation models across our sample of languages. The results
can serve as strong baselines that lay the foundation for future improvements.
5) We present a first large-scale evaluation study on the ability of encoders
pretrained on language modeling (such as bert Devlin et al. (2019) and xlm
Conneau and Lample (2019)) to reason over word-level semantic similarity in
different languages. To our own surprise, the results show that monolingual
pretrained encoders, even when presented with word types out of context, are
sometimes competitive with static word embedding models such as fastText
Bojanowski et al. (2017) or word2vec Mikolov et al. (2013). The results also
reveal a huge gap in performance between massively multilingual pretrained
encoders and language-specific encoders in favor of the latter: our findings
support other recent empirical evidence related to the “curse of
multilinguality” Conneau et al. (2019); Bapna and Firat (2019) in
representation learning.
6) We make all of these resources available on a website which facilitates
easy creation, submission and sharing of Multi-Simlex-style datasets for a
larger number of languages. We hope that this will yield an even larger
repository of semantic resources that inspire future advances in NLP within
and across languages.
In light of the success of Universal Dependencies Nivre et al. (2019), we hope
that our initiative will instigate a collaborative public effort with
established and clear-cut guidelines that will result in additional Multi-
SimLex datasets in a large number of languages in the near future. Moreover,
we hope that it will provide means to advance our understanding of
distributional and lexical semantics across a large number of languages. All
monolingual and cross-lingual Multi-SimLex datasets–along with detailed
translation and annotation guidelines–are available online at:
https://multisimlex.com/.
## 2 Lexical Semantic Similarity
### 2.1 Similarity and Association
The focus of the Multi-SimLex initiative is on the lexical relation of pure
semantic similarity. For any pair of words, this relation measures whether
their referents share the same features. For instance, graffiti and frescos
are similar to the extent that they are both forms of painting and appear on
walls. This relation can be contrasted with the cognitive association between
two words, which often depends on how much their referents interact in the
real world, or are found in the same situations. For instance, a painter is
easily associated with frescos, although they lack any physical commonalities.
Association is also known in the literature under other names: relatedness
Budanitsky and Hirst (2006), topical similarity (McKeown et al., 2002), and
domain similarity (Turney, 2012).
Semantic similarity and association overlap to some degree, but do not
coincide Kiela, Hill, and Clark (2015); Vulić, Kiela, and Korhonen (2017). In
fact, there exist plenty of pairs that are intuitively associated but not
similar. Pairs where the converse is true can also be encountered, although
more rarely. An example is a pair of synonyms where one word is common and the other
infrequent, such as to seize and to commandeer. Hill, Reichart, and Korhonen
(2015) revealed that while similarity measures based on the WordNet graph (Wu
and Palmer, 1994) and human judgments of association in the University of
South Florida Free Association Database (Nelson, McEvoy, and Schreiber, 2004)
do correlate, a number of pairs follow opposite trends. Several studies on
human cognition also point in the same direction. For instance, semantic
priming can be triggered by similar words without association (Lucas, 2000).
On the other hand, a connection with cue words is established more quickly for
topically related words than for similar words in free association
tasks De Deyne and Storms (2008).
A key property of semantic similarity is its gradience: pairs of words can be
similar to a different degree. On the other hand, the relation of synonymy is
binary: pairs of words are synonyms if they can be substituted in all contexts
(or most contexts, in a looser sense), otherwise they are not. While synonyms
can be conceived as lying on one extreme of the semantic similarity continuum,
it is crucial to note that their definition is stated in purely relational
terms, rather than invoking their referential properties (Lyons, 1977; Cruse,
1986; Coseriu, 1967). This makes behavioral studies on semantic similarity
fundamentally different from lexical resources like WordNet Miller (1995),
which include paradigmatic relations (such as synonymy).
### 2.2 Similarity for NLP: Intrinsic Evaluation and Semantic Specialization
The ramifications of the distinction between similarity and association are
profound for distributional semantics. This paradigm of lexical semantics is
grounded in the distributional hypothesis, formulated by Firth (1957) and
Harris (1951). According to this hypothesis, the meaning of a word can be
recovered empirically from the contexts in which it occurs within a collection
of texts. Since both pairs of topically related words and pairs of purely
similar words tend to appear in the same contexts, their associated meaning
confounds the two distinct relations Hill, Reichart, and Korhonen (2015);
Schwartz, Reichart, and Rappoport (2015); Vulić et al. (2017b). As a result,
distributional methods obscure a crucial facet of lexical meaning.
This limitation also reflects onto word embeddings (WEs), representations of
words as low-dimensional vectors that have become indispensable for a wide
range of NLP applications (Collobert et al., 2011; Chen and Manning, 2014;
Melamud et al., 2016, inter alia). In particular, it involves both static WEs
learned from co-occurrence patterns Mikolov et al. (2013); Levy and Goldberg
(2014); Bojanowski et al. (2017) and contextualized WEs learned from modeling
word sequences (Peters et al., 2018; Devlin et al., 2019, inter alia). As a
result, in the induced representations, geometrical closeness (measured e.g.
through cosine distance) conflates genuine similarity with broad relatedness.
For instance, the vectors for antonyms such as sober and drunk, by definition
dissimilar, might be neighbors in the semantic space under the distributional
hypothesis. Turney (2012), Kiela and Clark (2014), and Melamud et al. (2016)
demonstrated that different choices of hyper-parameters in WE algorithms (such
as context window) emphasize different relations in the resulting
representations. Likewise, Agirre et al. (2009) and Levy and Goldberg (2014)
discovered that WEs learned from texts annotated with syntactic information
mirror similarity better than simple local bag-of-words neighborhoods.
The failure of WEs to capture semantic similarity, in turn, affects model
performance in several NLP applications where such knowledge is crucial. In
particular, Natural Language Understanding tasks such as statistical dialog
modeling, text simplification, or semantic text similarity Mrkšić et al.
(2016); Kim et al. (2016); Ponti et al. (2019c), among others, suffer the
most. As a consequence, resources providing clean information on semantic
similarity are key in mitigating the side effects of the distributional
signal. In particular, such databases can be employed for the intrinsic
evaluations of specific WE models as a proxy of their reliability for
downstream applications (Collobert and Weston, 2008; Baroni and Lenci, 2010;
Hill, Reichart, and Korhonen, 2015); intuitively, the more WEs are misaligned
with human judgments of similarity, the more their performance on actual tasks
is expected to be degraded. Moreover, word representations can be specialized
(a.k.a. retrofitted) by disentangling word relations of similarity and
association. In particular, linguistic constraints sourced from external
databases (such as synonyms from WordNet) can be injected into WEs (Faruqui et
al., 2015; Wieting et al., 2015; Mrkšić et al., 2017; Lauscher et al., 2019;
Kamath et al., 2019, inter alia) in order to enforce a particular relation in
a distributional semantic space while preserving the original adjacency
properties.
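To make the intrinsic evaluation protocol concrete, the following minimal Python sketch correlates the cosine similarities produced by a word embedding model with human similarity ratings via Spearman's $\rho$. The function name and data layout are illustrative assumptions, not part of any released tooling:

```python
import numpy as np
from scipy.stats import spearmanr

def intrinsic_eval(vectors, pairs, gold_scores):
    """Spearman correlation between human similarity ratings and the
    cosine similarities of word vectors over a set of concept pairs.

    vectors: dict mapping a word to a 1-D NumPy array.
    pairs: list of (word1, word2) tuples.
    gold_scores: human ratings aligned with `pairs`.
    """
    system, gold = [], []
    for (w1, w2), score in zip(pairs, gold_scores):
        if w1 in vectors and w2 in vectors:  # skip out-of-vocabulary pairs
            v1, v2 = vectors[w1], vectors[w2]
            system.append(float(v1 @ v2) /
                          (np.linalg.norm(v1) * np.linalg.norm(v2)))
            gold.append(score)
    return spearmanr(system, gold).correlation
```

The lower this correlation on datasets that isolate true similarity, the more the embedding space conflates similarity with broad relatedness.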
### 2.3 Similarity and Language Variation: Semantic Typology
In this work, we tackle the concept of (true) semantic similarity from a
multilingual perspective. While the same meaning representations may be shared
by all human speakers at a deep cognitive level, there is no one-to-one
mapping between the words in the lexicons of different languages. This makes
the comparison of similarity judgments across languages difficult, since the
meaning overlap of translationally equivalent words is sometimes far less than
exact. This results from the fact that the way languages ‘partition’ semantic
fields is partially arbitrary (Trier, 1931), although constrained cross-
lingually by common cognitive biases Majid et al. (2007). For instance,
consider the field of colors: English distinguishes between green and blue,
whereas Murle (South Sudan) has a single word for both (Kay and Maffi, 2013).
In general, semantic typology studies the variation in lexical semantics
across the world’s languages. According to Evans (2011), the ways languages
categorize concepts into the lexicon follow three main axes: 1) granularity:
what is the number of categories in a specific domain?; 2) boundary location:
where do the lines marking different categories lie?; 3) grouping and
dissection: what are the membership criteria of a category; which instances
are considered to be more prototypical? Different choices with respect to
these axes lead to different lexicalization patterns.111More formally,
colexification is a phenomenon when different meanings can be expressed by the
same word in a language François (2008). For instance, the two senses which
are distinguished in English as time and weather are co-lexified in Croatian:
the word vrijeme is used in both cases. For instance, distinct senses in a
polysemous word in English, such as skin (referring to both the body and
fruit), may be assigned separate words in other languages such as Italian
pelle and buccia, respectively (Rzymski et al., 2020). We later analyze
whether similarity scores obtained from native speakers also loosely follow
the patterns described by semantic typology.
## 3 Previous Work and Evaluation Data
Word Pair Datasets. Rich expert-created resources such as WordNet Miller
(1995); Fellbaum (1998), VerbNet Kipper Schuler (2005); Kipper et al. (2008),
or FrameNet Baker, Fillmore, and Lowe (1998) encode a wealth of semantic and
syntactic information, but are expensive and time-consuming to create. The
scale of this problem gets multiplied by the number of languages in
consideration. Therefore, crowd-sourcing with non-expert annotators has been
adopted as a quicker alternative to produce smaller and more focused semantic
resources and evaluation benchmarks. This alternative practice has had a
profound impact on distributional semantics and representation learning Hill,
Reichart, and Korhonen (2015). While some prominent English word pair datasets
such as WordSim-353 Finkelstein et al. (2002), MEN Bruni, Tran, and Baroni
(2014), or Stanford Rare Words Luong, Socher, and Manning (2013) did not
discriminate between similarity and relatedness, the importance of this
distinction was established by Hill, Reichart, and Korhonen (2015, see again
the discussion in §2.1) through the creation of SimLex-999. This inspired
other similar datasets which focused on different lexical properties. For
instance, SimVerb-3500 Gerz et al. (2016) provided similarity ratings for
3,500 English verbs, whereas CARD-660 Pilehvar et al. (2018) aimed at
measuring the semantic similarity of infrequent concepts.
Semantic Similarity Datasets in Other Languages. Motivated by the impact of
datasets such as SimLex-999 and SimVerb-3500 on representation learning in
English, a line of related work focused on creating similar resources in other
languages. The dominant approach is translating and reannotating the entire
original English SimLex-999 dataset, as done previously for German, Italian,
and Russian Leviant and Reichart (2015), Hebrew and Croatian Mrkšić et al.
(2017), and Polish Mykowiecka, Marciniak, and Rychlik (2018).
Venekoski and Vankka (2017) apply this process only to a subset of 300 concept
pairs from the English SimLex-999. On the other hand, Camacho-Collados et al.
(2017) sampled a new set of 500 English concept pairs to ensure wider topical
coverage and balance across similarity spectra, and then translated those
pairs to German, Italian, Spanish, and Farsi (SEMEVAL-500). A similar approach
was followed by Ercan and Yıldız (2018) for Turkish, by Huang et al. (2019)
for Mandarin Chinese, and by Sakaizawa and Komachi (2018) for Japanese.
Netisopakul, Wohlgenannt, and Pulich (2019) translated the concatenation of
SimLex-999, WordSim-353, and the English SEMEVAL-500 into Thai and then
reannotated it. Finally, Barzegar et al. (2018) translated English SimLex-999
and WordSim-353 to 11 resource-rich target languages (German, French, Russian,
Italian, Dutch, Chinese, Portuguese, Swedish, Spanish, Arabic, Farsi), but
they did not provide details concerning the translation process and the
resolution of translation disagreements. More importantly, they also did not
reannotate the translated pairs in the target languages. As we discussed in
§2.3 and reiterate later in §5, semantic differences among languages can have a
profound impact on the annotation scores; particularly, we show in §5.4 that
these differences even roughly define language clusters based on language
affinity.
A core issue with the current datasets concerns a lack of one unified
procedure that ensures the comparability of resources in different languages.
Further, concept pairs for different languages are sourced from different
corpora (e.g., direct translation of the English data versus sampling from
scratch in the target language). Moreover, the previous SimLex-based
multilingual datasets inherit the main deficiencies of the English original
version, such as the focus on nouns and highly frequent concepts. Finally,
prior work mostly focused on languages that are widely spoken and do not
account for the variety of the world’s languages. Our long-term goal is
devising a standardized methodology to extend the coverage also to languages
that are resource-lean and/or typologically diverse (e.g., Welsh, Kiswahili as
in this work).
Multilingual Datasets for Natural Language Understanding. The Multi-SimLex
initiative and corresponding datasets are also aligned with the recent efforts
on procuring multilingual benchmarks that can help advance computational
modeling of natural language understanding across different languages. For
instance, pretrained multilingual language models such as multilingual bert
Devlin et al. (2019) or xlm Conneau and Lample (2019) are typically probed on
XNLI test data Conneau et al. (2018b) for cross-lingual natural language
inference. XNLI was created by translating examples from the English MultiNLI
dataset, and projecting its sentence labels Williams, Nangia, and Bowman
(2018). Other recent multilingual datasets target the task of question
answering based on reading comprehension: i) MLQA Lewis et al. (2019) includes
7 languages; ii) XQuAD Artetxe, Ruder, and Yogatama (2019) covers 10 languages;
iii) TyDiQA Clark et al. (2020) covers 9 widely spoken, typologically diverse languages.
While MLQA and XQuAD result from the translation from an English dataset,
TyDiQA was built independently in each language. Another multilingual dataset,
PAWS-X Yang et al. (2019), focused on the paraphrase identification task and
was created by translating the original English PAWS Zhang, Baldridge, and He
(2019) into 6 languages. We believe that Multi-SimLex can substantially
contribute to this endeavor by offering a comprehensive multilingual benchmark
for the fundamental lexical level relation of semantic similarity. In future
work, Multi-SimLex also offers an opportunity to investigate the correlations
between word-level semantic similarity and performance in downstream tasks
such as QA and NLI across different languages.
## 4 The Base for Multi-SimLex: Extending English SimLex-999
In this section, we discuss the design principles behind the English (eng)
Multi-SimLex dataset, which is the basis for all the Multi-SimLex datasets in
other languages, as detailed in §5. We first argue that a new, more balanced,
and more comprehensive evaluation resource for lexical semantic similarity in
English is necessary. We then describe how the 1,888 word pairs contained in
the eng Multi-SimLex were selected in such a way as to represent various
linguistic phenomena within a single integrated resource.
Construction Criteria. The following criteria have to be satisfied by any
high-quality semantic evaluation resource, as argued by previous studies
focused on the creation of such resources (Hill, Reichart, and Korhonen, 2015;
Gerz et al., 2016; Vulić et al., 2017a; Camacho-Collados et al., 2017, inter
alia):
(C1) Representative and diverse. The resource must cover the full range of
diverse concepts occurring in natural language, including different word
classes (e.g., nouns, verbs, adjectives, adverbs), concrete and abstract
concepts, a variety of lexical fields, and different frequency ranges.
(C2) Clearly defined. The resource must provide a clear understanding of which
semantic relation exactly is annotated and measured, possibly contrasting it
with other relations. For instance, the original SimLex-999 and SimVerb-3500
explicitly focus on true semantic similarity and distinguish it from broader
relatedness captured by datasets such as MEN Bruni, Tran, and Baroni (2014) or
WordSim-353 Finkelstein et al. (2002).
(C3) Consistent and reliable. The resource must ensure consistent annotations
obtained from non-expert native speakers following simple and precise
annotation guidelines.
In choosing the word pairs and constructing eng Multi-SimLex, we adhere to
these requirements. Moreover, we follow good practices established by the
research on related resources. In particular, since the introduction of the
original SimLex-999 dataset Hill, Reichart, and Korhonen (2015), follow-up
works have improved its construction protocol across several aspects,
including: 1) coverage of more lexical fields, e.g., by relying on a diverse
set of Wikipedia categories Camacho-Collados et al. (2017), 2) infrequent/rare
words Pilehvar et al. (2018), 3) focus on particular word classes, e.g., verbs
Gerz et al. (2016), 4) annotation quality control Pilehvar et al. (2018). Our
goal is to make use of these improvements towards a larger, more
representative, and more reliable lexical similarity dataset in English and,
consequently, in all other languages.
The Final Output: English Multi-SimLex. In order to ensure that the criterion
C1 is satisfied, we consolidate and integrate the data already carefully
sampled in prior work into a single, comprehensive, and representative
dataset. This way, we can control for diversity, frequency, and other
properties while avoiding having to perform this time-consuming selection process
from scratch. Note that, on the other hand, the word pairs chosen for English
are scored from scratch as part of the entire Multi-SimLex annotation process,
introduced later in §5. We now describe the external data sources for the
final set of word pairs:
1) Source: SimLex-999. Hill, Reichart, and Korhonen (2015). The English Multi-
SimLex was initially conceived as an extension of the original SimLex-999
dataset. Therefore, we include all 999 word pairs from SimLex, which span 666
noun pairs, 222 verb pairs, and 111 adjective pairs. While SimLex-999 already
provides examples representing different POS classes, it does not have a
sufficient coverage of different linguistic phenomena: for instance, it
contains only very frequent concepts, and it does not provide a representative
set of verbs (Gerz et al., 2016).
2) Source: SemEval-17: Task 2 (henceforth SEMEVAL-500; Camacho-Collados et
al., 2017). We start from the full dataset of 500 concept pairs to extract a
total of 334 concept pairs for English Multi-SimLex a) which contain only
single-word concepts, b) which are not named entities, c) where POS tags of
the two concepts are the same, d) where both concepts occur in the top 250K
most frequent word types in the English Wikipedia, and e) do not already occur
in SimLex-999. The original concepts were sampled so as to span all the 34
domains available as part of BabelDomains Camacho-Collados and Navigli (2017),
which roughly correspond to the main high-level Wikipedia categories. This
ensures topical diversity in our sub-sample.
3) Source: CARD-660 Pilehvar et al. (2018). 67 word pairs are taken from this
dataset focused on rare word similarity, applying the same selection criteria
a) to e) employed for SEMEVAL-500. Words are controlled for frequency based on
their occurrence counts from the Google News data and the ukWaC corpus Baroni
et al. (2009). CARD-660 contains some words that are very rare (logboat),
domain-specific (erythroleukemia) and slang (2mrw), which might be difficult
to translate and annotate across a wide array of languages. Hence, we opt for
retaining only the concept pairs above the threshold of top 250K most frequent
Wikipedia concepts, as above.
4) Source: SimVerb-3500 Gerz et al. (2016). Since both CARD-660 and SEMEVAL-500
are heavily skewed towards noun pairs, and nouns also dominate the original
SimLex-999, we also extract additional verb pairs from the verb-specific
similarity dataset SimVerb-3500. We randomly sample 244 verb pairs from
SimVerb-3500 that represent all similarity spectra, as sketched in the code
example after this list. In particular, we add 61
verb pairs for each of the similarity intervals:
$[0,1.5),[1.5,3),[3,4.5),[4.5,6]$. Since verbs in SimVerb-3500 were originally
chosen from VerbNet Kipper, Snyder, and Palmer (2004); Kipper et al. (2008),
they cover a wide range of verb classes and their related linguistic
phenomena.
5) Source: University of South Florida (USF; Nelson, McEvoy, and Schreiber,
2004) norms, the largest database of free association for English. In order to
improve the representation of different POS classes, we sample additional
adjectives and adverbs from the USF norms following the procedure established
by Hill, Reichart, and Korhonen (2015); Gerz et al. (2016). This yields
an additional 122 adjective pairs, but only a limited number of adverb pairs
(e.g., later – never, now – here, once – twice). Therefore, we also create a
set of adverb pairs semi-automatically by sampling adjectives that can be
derivationally transformed into adverbs (e.g. adding the suffix -ly) from the
USF, and assessing the correctness of such derivation in WordNet. The
resulting pairs include, for instance, primarily – mainly, softly – firmly,
roughly – reliably, etc. We include a total of 123 adverb pairs into the final
English Multi-SimLex. Note that this is the first time adverbs have been
included in any semantic similarity dataset; the derivation step is also
sketched in the code example below.
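The two semi-automatic steps above, interval-stratified sampling of SimVerb-3500 verb pairs (source 4) and -ly adverb derivation checked against WordNet (source 5), can be sketched as follows. This is an illustrative reconstruction, not the authors' original tooling; note that naive suffixation ignores spelling changes (e.g., easy/easily), which is one reason the WordNet check matters:

```python
import random
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def sample_pairs_by_interval(scored_pairs, k=61, seed=0):
    """Draw k pairs from each of the similarity intervals
    [0,1.5), [1.5,3), [3,4.5), [4.5,6] used for source 4.

    scored_pairs: list of ((word1, word2), score) tuples.
    """
    rng = random.Random(seed)
    bins = [(0.0, 1.5), (1.5, 3.0), (3.0, 4.5), (4.5, 6.0)]
    sample = []
    for i, (lo, hi) in enumerate(bins):
        closed = i == len(bins) - 1  # the last interval is closed: [4.5, 6]
        bucket = [pair for pair, s in scored_pairs
                  if lo <= s < hi or (closed and s == hi)]
        sample.extend(rng.sample(bucket, k))
    return sample

def derive_adverbs(adjectives):
    """Keep adjective + '-ly' derivations that WordNet attests as adverbs,
    mirroring the semi-automatic step for source 5."""
    return [adj + "ly" for adj in adjectives
            if wn.synsets(adj + "ly", pos=wn.ADV)]
```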
Fulfillment of Construction Criteria. The final eng Multi-SimLex dataset spans
1,051 noun pairs, 469 verb pairs, 245 adjective pairs, and 123 adverb
pairs.222There is a very small number of adjective and verb pairs extracted
from CARD-660 and SEMEVAL-500 as well. For instance, the total number of verbs
is 469 since we augment the original 222 SimLex-999 verb pairs with 244
SimVerb-3500 pairs and 3 SEMEVAL-500 pairs; and similarly for adjectives. As
mentioned above, the criterion C1 has been fulfilled by relying only on word
pairs that already underwent meticulous sampling processes in prior work,
integrating them into a single resource. As a consequence, Multi-SimLex allows
for fine-grained analyses over different POS classes, concreteness levels,
similarity spectra, frequency intervals, relation types, morphology, lexical
fields, and it also includes some challenging orthographically similar
examples (e.g., infection – inflection).333Unlike SEMEVAL-500 and CARD-660, we
do not explicitly control for the equal representation of concept pairs across
each similarity interval for several reasons: a) Multi-SimLex contains a
substantially larger number of concept pairs, so it is possible to extract
balanced samples from the full data; b) such balance, even if imposed on the
English dataset, would be distorted in all other monolingual and cross-lingual
datasets; c) balancing over similarity intervals arguably does not reflect a
true distribution “in the wild” where most concepts are only loosely related
or completely unrelated. We ensure that the criteria C2 and C3 are satisfied
by using similar annotation guidelines as SimLex-999, SimVerb-3500, and
SEMEVAL-500 that explicitly target semantic similarity. In what follows, we
outline the carefully tailored process of translating and annotating Multi-
SimLex datasets in all target languages.
## 5 Multi-SimLex: Translation and Annotation
We now detail the development of the final Multi-SimLex resource, describing
our language selection process, as well as translation and annotation of the
resource, including the steps taken to ensure and measure the quality of this
resource. We also provide key data statistics and preliminary cross-lingual
comparative analyses.
Language Selection. Multi-SimLex comprises eleven languages in addition to
English. The main objective for our inclusion criteria has been to balance
language prominence (by number of speakers of the language) for maximum impact
of the resource, while simultaneously having a diverse suite of languages
based on their typological features (such as morphological type and language
family). Table 1 summarizes key information about the languages currently
included in Multi-SimLex. We have included a mixture of fusional,
agglutinative, isolating, and introflexive languages that come from eight
different language families. This includes languages that are very widely used
such as Chinese Mandarin and Spanish, and low-resource languages such as Welsh
and Kiswahili. We hope to further include additional languages and inspire
other researchers to contribute to the effort over the lifetime of this
project.
The work on data collection can be divided into two crucial phases: 1) a
translation phase where the extended English language dataset with 1,888 pairs
(described in §4) is translated into eleven target languages, and 2) an
annotation phase where human raters scored each pair in the translated set as
well as the English set. Detailed guidelines for both phases are available
online at: https://multisimlex.com.
Language | ISO 639-3 | Family | Type | # Speakers
---|---|---|---|---
Chinese Mandarin | cmn | Sino-Tibetan | Isolating | 1.116 B
Welsh | cym | IE: Celtic | Fusional | 0.7 M
English | eng | IE: Germanic | Fusional | 1.132 B
Estonian | est | Uralic | Agglutinative | 1.1 M
Finnish | fin | Uralic | Agglutinative | 5.4 M
French | fra | IE: Romance | Fusional | 280 M
Hebrew | heb | Afro-Asiatic | Introflexive | 9 M
Polish | pol | IE: Slavic | Fusional | 50 M
Russian | rus | IE: Slavic | Fusional | 260 M
Spanish | spa | IE: Romance | Fusional | 534.3 M
Kiswahili | swa | Niger-Congo | Agglutinative | 98 M
Yue Chinese | yue | Sino-Tibetan | Isolating | 73.5 M
Table 1: The list of 12 languages in the Multi-SimLex multilingual suite along
with their corresponding language family (IE = Indo-European), broad
morphological type, and their ISO 639-3 code. The number of speakers is based
on the total count of L1 and L2 speakers, according to ethnologue.com.
### 5.1 Word Pair Translation
Translators for each target language were instructed to find direct or
approximate translations for the 1,888 word pairs that satisfy the following
rules. (1) All pairs in the translated set must be unique (i.e., no duplicate
pairs); (2) Translating two words from the same English pair into the same
word in the target language is not allowed (e.g., it is not allowed to
translate car and automobile to the same Spanish word coche). (3) The
translated pairs must preserve the semantic relations between the two words
when possible. This means that, when multiple translations are possible, the
translation that best conveys the semantic relation between the two words
found in the original English pair is selected. (4) If it is not possible to
use a single-word translation in the target language, then a multi-word
expression (MWE) can be used to convey the nearest possible semantics given
the above points (e.g., the English word homework is translated into the
Polish MWE praca domowa).
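Rules (1) and (2) are mechanical enough to verify automatically. A minimal sketch (the helper name is hypothetical, not part of the released guidelines):

```python
def check_translation_rules(pairs):
    """Flag violations of rule (1) (no duplicate pairs) and rule (2)
    (the two words of a pair must not share a translation).

    pairs: list of (word1, word2) translated pairs.
    Returns a list of human-readable violation messages.
    """
    violations, seen = [], set()
    for w1, w2 in pairs:
        if w1 == w2:
            violations.append(f"rule (2) violated: '{w1}' used twice in one pair")
        key = frozenset((w1, w2))
        if key in seen:
            violations.append(f"rule (1) violated: duplicate pair '{w1} - {w2}'")
        seen.add(key)
    return violations
```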
Satisfying the above rules when finding appropriate translations for each
pair–while keeping to the spirit of the intended semantic relation in the
English version–is not always straightforward. For instance, kinship
terminology in Sinitic languages (Mandarin and Yue) uses different terms
depending on whether the family member is older or younger, and whether the
family member comes from the mother’s side or the father’s side. In Mandarin,
_brother_ has no direct translation and can be translated as either: 哥哥
(_older brother_) or 弟弟 (_younger brother_). Therefore, in such cases, the
translators are asked to choose the best option given the semantic context
(relation) expressed by the pair in English, otherwise select one of the
translations arbitrarily. This is also used to remove duplicate pairs in the
translated set, by differentiating the duplicates using a variant at each
instance. Further, many translation instances were resolved using near-
synonymous terms in the translation. For example, the words in the pair: _wood
– timber_ can only be directly translated in Estonian to _puit_ , and are not
distinguishable. Therefore, the translators approximated the translation for
timber to the compound noun _puitmaterjal_ (literally: _wood material_) in
order to produce a valid pair in the target language. In some cases, a direct
transliteration from English is used. For example, the pair: _physician_ and
_doctor_ both translate to the same word in Estonian (arst); the less formal
word _doktor_ is used as a translation of _doctor_ to generate a valid pair.
Languages: | cmn | cym | est | fin | fra | heb | pol | rus | spa | swa | yue | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 84.5 | 80.0 | 90.0 | 87.3 | 78.2 | 98.2 | 90.0 | 95.5 | 85.5 | 80.0 | 77.3 | 86.0
Adjectives | 88.5 | 88.5 | 61.5 | 73.1 | 69.2 | 100.0 | 84.6 | 100.0 | 69.2 | 88.5 | 84.6 | 82.5
Verbs | 88.0 | 74.0 | 82.0 | 76.0 | 78.0 | 100.0 | 74.0 | 100.0 | 74.0 | 76.0 | 86.0 | 82.5
Adverbs | 92.9 | 100.0 | 57.1 | 78.6 | 92.9 | 100.0 | 85.7 | 100.0 | 85.7 | 85.7 | 78.6 | 87.0
Overall | 86.5 | 81.0 | 82.0 | 82.0 | 78.0 | 99.0 | 85.0 | 97.5 | 80.5 | 81.0 | 80.5 | 84.8
Table 2: Inter-translator agreement (% of matched translated words) by
independent translators using a randomly selected 100-pair English sample from
the Multi-SimLex dataset, and the corresponding 100-pair samples from the
other datasets.
We measure the quality of the translated pairs by using a random sample set of
100 pairs (from the 1,888 pairs) to be translated by an independent translator
for each target language. The sample is proportionally stratified according to
the part-of-speech categories. The independent translator is given identical
instructions to the main translator; we then measure the percentage of matched
translated words between the two translations of the sample set. Table 2
summarizes the inter-translator agreement results for all languages and by
part-of-speech subsets. Overall across all languages, the agreement is 84.8%,
which is similar to prior work Camacho-Collados et al. (2017); Vulić,
Ponzetto, and Glavaš (2019).
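The inter-translator agreement reported in Table 2 boils down to counting matching word slots between the two translations of the sample. A sketch, assuming exact string matching over aligned (word1, word2) lists (any normalization details are simplified away):

```python
def inter_translator_agreement(main, independent):
    """Percentage of translated word slots on which two translators agree.

    main, independent: aligned lists of (word1, word2) pairs produced by
    the main and the independent translator for the same sample.
    """
    matches = sum(int(a1 == b1) + int(a2 == b2)
                  for (a1, a2), (b1, b2) in zip(main, independent))
    return 100.0 * matches / (2 * len(main))
```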
### 5.2 Guidelines and Word Pair Scoring
Across all languages, 145 human annotators were asked to score all 1,888 pairs
(in their given language). We finally collect at least ten valid annotations
for each word pair in each language. All annotators were required to abide by
the following instructions:
1. Each annotator must assign an integer score between 0 and 6 (inclusive)
indicating how semantically similar the two words in a given pair are. A score
of 6 indicates very high similarity (i.e., perfect synonymy), while zero
indicates no similarity.
2. Each annotator must score the entire set of 1,888 pairs in the dataset.
The pairs must not be shared between different annotators.
3. Annotators are able to break the workload over a period of approximately
2-3 weeks, and are able to use external sources (e.g. dictionaries, thesauri,
WordNet) if required.
4. Annotators are kept anonymous, and are not able to communicate with each
other during the annotation process.
The selection criteria required that all annotators be
native speakers of the target language. Preference was given to annotators
with a university education, but this was not required. Annotators were asked to
complete a spreadsheet containing the translated pairs of words, as well as
the part-of-speech, and a column to enter the score. The annotators did not
have access to the original pairs in English.
To ensure the quality of the collected ratings, we have employed an
adjudication protocol similar to the one proposed and validated by
Pilehvar et al. (2018). It consists of the following three rounds:
Round 1: All annotators are asked to follow the instructions outlined above,
and to rate all 1,888 pairs with integer scores between 0 and 6.
Round 2: We compare the scores of all annotators and identify the pairs for
each annotator that have shown the most disagreement. We ask the annotators to
reconsider the assigned scores for those pairs only. The annotators may choose
to either change or keep the scores. As in the case with Round 1, the
annotators have no access to the scores of the other annotators, and the
process is anonymous. This process gives a chance for annotators to correct
errors or reconsider their judgments, and has been shown to be very effective
in reaching consensus, as reported by Pilehvar et al. (2018). We used a very
similar procedure to that of Pilehvar et al. (2018) to identify the pairs with the
most disagreement; for each annotator, we marked the $i$th pair if the rated
score $s_{i}$ falls within: $s_{i}\geq\mu_{i}+1.5$ or $s_{i}\leq\mu_{i}-1.5$,
where $\mu_{i}$ is the mean of the other annotators’ scores (a matrix-form
sketch of this criterion is given after Round 3 below).
Round 3: We compute the average agreement for each annotator (with the other
annotators), by measuring the average Spearman’s correlation against all other
annotators. We discard the scores of annotators that have shown the least
average agreement with all other annotators, while we maintain at least ten
annotators per language by the end of this round. The actual process is done
in multiple iterations: (S1) we measure the average agreement for each
annotator with every other annotator (this corresponds to the APIAA measure,
see later); (S2) if we still have more than 10 valid annotators and the lowest
average score is higher than in the previous iteration, we remove the lowest
one, and rerun S1. Table 3 shows the number of annotators at both the start
(Round 1) and end (Round 3) of our process for each language.
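As referenced in Round 2 above, the disagreement criterion amounts to flagging every rating that deviates from the mean of the other annotators' ratings by at least 1.5. A sketch, assuming an $N \times 1{,}888$ score matrix (the function name is illustrative):

```python
import numpy as np

def flag_disagreements(scores, margin=1.5):
    """Boolean mask of the ratings to revisit in Round 2.

    scores: (N, P) array of N annotators' ratings over P pairs.
    A rating s_i is flagged when |s_i - mu_i| >= margin, where mu_i is
    the mean of the other annotators' ratings for that pair.
    """
    n = scores.shape[0]
    mu_others = (scores.sum(axis=0, keepdims=True) - scores) / (n - 1)
    return np.abs(scores - mu_others) >= margin
```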
We measure the agreement between annotators using two metrics, average
pairwise inter-annotator agreement (APIAA), and average mean inter-annotator
agreement (AMIAA). Both of these use Spearman’s correlation ($\rho$) between
annotators’ scores; the only difference is how they are averaged. They are
computed as follows:
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
R1: Start | 13 | 12 | 14 | 12 | 13 | 10 | 11 | 12 | 12 | 12 | 11 | 13
R3: End | 11 | 10 | 13 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 11
Table 3: Number of human annotators. R1 = Annotation Round 1, R3 = Round 3.
$\textsc{apiaa}=\frac{2\sum_{i,j}\rho(s_{i},s_{j})}{N(N-1)}\,,\qquad\textsc{amiaa}=\frac{\sum_{i}\rho(s_{i},\mu_{i})}{N}\,,\quad\text{where }\mu_{i}=\frac{\sum_{j,j\neq i}s_{j}}{N-1}$ (1)
where $\rho(s_{i},s_{j})$ is the Spearman’s correlation between annotators $i$
and $j$’s scores ($s_{i}$,$s_{j}$) for all pairs in the dataset, and $N$ is
the number of annotators. APIAA has been used widely as the standard measure
for inter-annotator agreement, including in the original SimLex paper Hill,
Reichart, and Korhonen (2015). It simply averages the pairwise Spearman’s
correlation between all annotators. On the other hand, AMIAA compares the
average Spearman’s correlation of one held-out annotator with the average of
all the other $N-1$ annotators, and then averages across all $N$ ‘held-out’
annotators. It smooths individual annotator effects and arguably serves as a
better upper bound than APIAA (Gerz et al., 2016; Vulić et al., 2017a;
Pilehvar et al., 2018, inter alia).
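Both agreement measures in Equation (1) are straightforward to compute with SciPy. A minimal sketch, again assuming an $N \times P$ score matrix (this layout is an assumption, not prescribed by the dataset release):

```python
import numpy as np
from scipy.stats import spearmanr

def apiaa(scores):
    """Average pairwise inter-annotator agreement (Eq. 1, left):
    mean Spearman correlation over all annotator pairs."""
    n = scores.shape[0]
    rhos = [spearmanr(scores[i], scores[j]).correlation
            for i in range(n) for j in range(i + 1, n)]
    return 2.0 * sum(rhos) / (n * (n - 1))

def amiaa(scores):
    """Average mean inter-annotator agreement (Eq. 1, right): each
    held-out annotator against the mean of the other N - 1 annotators."""
    n = scores.shape[0]
    totals = scores.sum(axis=0)
    rhos = [spearmanr(scores[i], (totals - scores[i]) / (n - 1)).correlation
            for i in range(n)]
    return sum(rhos) / n
```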
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 0.661 | 0.622 | 0.659 | 0.558 | 0.647 | 0.698 | 0.538 | 0.606 | 0.524 | 0.582 | 0.626 | 0.727
Adjectives | 0.757 | 0.698 | 0.823 | 0.695 | 0.721 | 0.741 | 0.683 | 0.699 | 0.625 | 0.64 | 0.658 | 0.785
Verbs | 0.694 | 0.604 | 0.707 | 0.58 | 0.644 | 0.691 | 0.615 | 0.593 | 0.555 | 0.588 | 0.631 | 0.76
Adverbs | 0.699 | 0.593 | 0.695 | 0.579 | 0.646 | 0.595 | 0.561 | 0.543 | 0.535 | 0.563 | 0.562 | 0.716
Overall | 0.68 | 0.619 | 0.698 | 0.583 | 0.646 | 0.697 | 0.572 | 0.609 | 0.53 | 0.576 | 0.623 | 0.733
Table 4: Average pairwise inter-annotator agreement (APIAA). A score of $0.6$ and above indicates strong agreement.

Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
Nouns | 0.757 | 0.747 | 0.766 | 0.696 | 0.766 | 0.809 | 0.68 | 0.717 | 0.657 | 0.71 | 0.725 | 0.804
Adjectives | 0.800 | 0.789 | 0.865 | 0.79 | 0.792 | 0.831 | 0.754 | 0.792 | 0.737 | 0.743 | 0.686 | 0.811
Verbs | 0.774 | 0.733 | 0.811 | 0.715 | 0.757 | 0.808 | 0.72 | 0.722 | 0.69 | 0.71 | 0.702 | 0.784
Adverbs | 0.749 | 0.693 | 0.777 | 0.697 | 0.748 | 0.729 | 0.645 | 0.655 | 0.608 | 0.671 | 0.623 | 0.716
Overall | 0.764 | 0.742 | 0.794 | 0.715 | 0.76 | 0.812 | 0.699 | 0.723 | 0.667 | 0.703 | 0.71 | 0.792
Table 5: Average mean inter-annotator agreement (AMIAA). A score of $0.6$ and
above indicates strong agreement.
We present the respective APIAA and AMIAA scores in Table 4 and Table 5 for
all part-of-speech subsets, as well as the agreement for the full datasets. As
reported in prior work Gerz et al. (2016); Vulić et al. (2017a), AMIAA scores
are typically higher than APIAA scores. Crucially, the results indicate
‘strong agreement’ (across all languages) using both measurements. The
languages with the highest annotator agreement were French (fra) and Yue
Chinese (yue), while Russian (rus) had the lowest overall IAA scores. These
scores, however, are still considered to be ‘moderately strong agreement’.
### 5.3 Data Analysis
Similarity Score Distributions. Across all languages, the average score (mean
$=1.61$, median $=1.1$) is on the lower side of the similarity scale. However,
looking closer at the scores of each language in Table 6, we observe notable
differences in both the averages and the spread of scores. Notably, French has
the highest average of similarity scores (mean$=2.61$, median$=2.5$), while
Kiswahili has the lowest average (mean$=1.28$, median$=0.5$). Russian has the
lowest spread ($\sigma=1.37$), while Polish has the largest ($\sigma=1.62$).
All of the languages are strongly correlated with each other, as shown in
Figure 1, where all of the Spearman’s correlation coefficients are greater
than 0.6 for all language pairs. Languages that share the same language family
are highly correlated (e.g., cmn-yue, rus-pol, est-fin). In addition, we
observe high correlations between English and most other languages, as
expected. This is due to the effect of using English as the base/anchor
language to create the dataset. Put simply, if one translates the same set of
English pairs into two languages $L_{1}$ and $L_{2}$, it is highly likely that
$L_{1}$ and $L_{2}$ will diverge from English in
different ways. Therefore, the similarity between $L_{1}$-eng and $L_{2}$-eng
is expected to be higher than between $L_{1}$-$L_{2}$, especially if $L_{1}$
and $L_{2}$ are typologically dissimilar languages (e.g., heb-cmn, see Figure
1). This phenomenon is well documented in related prior work (Leviant and
Reichart, 2015; Camacho-Collados et al., 2017; Mrkšić et al., 2017; Vulić,
Ponzetto, and Glavaš, 2019). While we acknowledge this as a slight artifact of
the dataset design, it would otherwise be impossible to construct a
semantically aligned and comprehensive dataset across a large number of
languages.
Interval | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
$[0,1)$ | 56.99 | 52.01 | 50.95 | 35.01 | 47.83 | 17.69 | 28.07 | 49.36 | 50.21 | 43.96 | 61.39 | 57.89
$[1,2)$ | 8.74 | 19.54 | 17.06 | 30.67 | 21.35 | 20.39 | 35.86 | 17.32 | 22.40 | 22.35 | 11.86 | 7.84
$[2,3)$ | 13.72 | 11.97 | 12.66 | 16.21 | 12.02 | 22.03 | 16.74 | 11.86 | 11.81 | 14.83 | 9.11 | 11.76
$[3,4)$ | 11.60 | 8.32 | 8.16 | 10.22 | 10.17 | 17.64 | 8.47 | 8.95 | 8.10 | 9.38 | 7.10 | 12.98
$[4,5)$ | 6.41 | 5.83 | 6.89 | 6.25 | 5.61 | 12.55 | 6.62 | 7.57 | 5.88 | 6.78 | 6.30 | 6.89
$[5,6]$ | 2.54 | 2.33 | 4.29 | 1.64 | 2.97 | 9.64 | 4.24 | 4.93 | 1.59 | 2.70 | 4.24 | 2.65
Table 6: Fine-grained distribution of concept pairs over different rating intervals in each Multi-SimLex language, reported as percentages. The total number of concept pairs in each dataset is 1,888.

Figure 1: Spearman’s correlation coefficient ($\rho$) of the similarity scores for all languages in Multi-SimLex.
We also report differences in the distribution of the frequency of words among
the languages in Multi-SimLex. Figure 2 shows six example languages, where
each bar segment shows the proportion of words in each language that occur in
the given frequency range. For example, the 10K-20K segment of the bars
represents the proportion of words in the dataset that occur in the list of
most frequent words between the frequency rank of 10,000 and 20,000 in that
language; likewise with other intervals. Frequency lists for the presented
languages are derived from Wikipedia and Common Crawl corpora.444Frequency
lists were obtained from fastText word vectors which are sorted by frequency:
https://fasttext.cc/docs/en/crawl-vectors.html While many concept pairs are
direct or approximate translations of English pairs, we can see that the
frequency distribution does vary across different languages, and is also
related to inherent language properties. For instance, in Finnish and Russian,
while we use infinitive forms of all verbs, conjugated verb inflections are
often more frequent in raw corpora than the corresponding infinitive forms.
The variance can also be partially explained by the difference in monolingual
corpora size used to derive the frequency rankings in the first place:
absolute vocabulary sizes are expected to fluctuate across different
languages. However, it is also important to note that the datasets also
contain subsets of lower-frequency and rare words, which can be used for rare
word evaluations in multiple languages, in the spirit of the English rare word
dataset of Pilehvar et al. (2018).
Figure 2: A distribution over different frequency ranges for words from Multi-
SimLex datasets for selected languages. Multi-word expressions are excluded
from the analysis.
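The frequency-range analysis itself is straightforward to reproduce; the sketch below is a minimal illustration in which the `ranked_vocab` list (a frequency-sorted vocabulary, e.g., the word order of a fastText .vec file) and the bucket `boundaries` are assumptions rather than the exact settings used for Figure 2.

```python
from collections import Counter

def frequency_buckets(dataset_words, ranked_vocab,
                      boundaries=(10_000, 20_000, 50_000, 100_000)):
    """Proportion of dataset words per frequency-rank interval (plus OOV)."""
    rank = {w: i for i, w in enumerate(ranked_vocab)}
    counts = Counter()
    for w in dataset_words:
        r = rank.get(w)
        if r is None:
            counts["OOV"] += 1          # word not in the frequency list
            continue
        for j, b in enumerate(boundaries):
            if r < b:
                lo = boundaries[j - 1] if j else 0
                counts[f"[{lo}, {b})"] += 1
                break
        else:
            counts[f">={boundaries[-1]}"] += 1
    total = len(dataset_words)
    return {label: c / total for label, c in counts.items()}
```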
Cross-Linguistic Differences. Table 7 shows some examples of average
similarity scores of English, Spanish, Kiswahili and Welsh concept pairs.
Remember that the scores range from 0 to 6: the higher the score, the more
similar the participants found the concepts in the pair. The examples from
Table 7 show evidence of both the stability of average similarity scores
across languages (_unlikely – friendly_ , _book – literature_ , and _vanish –
disappear_), as well as language-specific differences (_care – caution_). Some
differences in similarity scores seem to group languages into clusters. For
example, the word pair _regular – average_ has an average similarity score of
4.0 and 4.1 in English and Spanish, respectively, whereas in Kiswahili and
Welsh the average similarity score of this pair is 0.5 and 0.8. We analyze
this phenomenon in more detail in §5.4.
Word Pair | POS | eng | spa | swa | cym
---|---|---|---|---|---
Similar average rating | | | | |
unlikely – friendly | ADV | 0 | 0 | 0 | 0
book – literature | N | 2.5 | 2.3 | 2.1 | 2.3
vanish – disappear | V | 5.2 | 5.3 | 5.5 | 5.3
Different average rating | | | | |
regular – average | ADJ | 4 | 4.1 | 0.5 | 0.8
care – caution | N | 4.1 | 5.7 | 0.2 | 3.1
One language higher | | | | |
large – big | ADJ | 5.9 | 2.7 | 3.8 | 3.8
bank – seat | N | 0 | 5.1 | 0 | 0.1
sunset - evening | N | 1.6 | 1.5 | 5.5 | 2.8
purely – completely | ADV | 2.3 | 2.3 | 1.1 | 5.4
One language lower | | | | |
woman – wife | N | 0.9 | 2.9 | 4.1 | 4.8
amazingly – fantastically | ADV | 5.1 | 0.4 | 4.1 | 4.1
wonderful – terrific | ADJ | 5.3 | 5.4 | 0.9 | 5.7
promise – swear | V | 4.8 | 5.3 | 4.3 | 0
Table 7: Examples of concept pairs with their similarity scores from four
languages. For brevity, only the original English concept pair is included,
but note that the pair is translated to all target languages, see §5.1.
There are also examples for each of the four languages having a notably higher
or lower similarity score for the same concept pair than the three other
languages. For example, _large – big_ in English has an average similarity
score of 5.9, whereas Spanish, Kiswahili and Welsh speakers rate the closest
concept pair in their native language to have a similarity score between 2.7
and 3.8. What is more, _woman – wife_ receives an average similarity of 0.9 in
English, 2.9 in Spanish, and greater than 4.0 in Kiswahili and Welsh. The
examples from Spanish include _banco – asiento_ (_bank – seat_) which receives
an average similarity score 5.1, while in the other three languages the
similarity score for this word pair does not exceed 0.1. At the same time, the
average similarity score of _espantosamente – fantásticamente_ (_amazingly –
fantastically_) is much lower in Spanish (0.4) than in other languages (4.1 –
5.1). In Kiswahili, an example of a word pair with a higher similarity score
than the rest would be _machweo – jioni_ (_sunset – evening_), having an
average score of 5.5, while the other languages receive 2.8 or less, and a
notably lower similarity score is given to _wa ajabu - mkubwa sana_
(_wonderful – terrific_), getting 0.9, while the other languages receive 5.3
or more. Welsh examples include _yn llwyr - yn gyfan gwbl_ (_purely –
completely_), which scores 5.4 among Welsh speakers but 2.3 or less in other
languages, while _addo – tyngu_ (_promise – swear_) is rated as 0 by all Welsh
annotators, but in the other three languages 4.3 or more on average.
There can be several explanations for the differences in similarity scores
across languages, including but not limited to cultural context, polysemy,
metonymy, translation, regional and generational differences, and most
commonly, the fact that words and meanings do not exactly map onto each other
across languages. For example, it is likely that the other three languages do
not have two separate words for describing the concepts in the concept pair:
_big – large_ , and the translators had to opt for similar lexical items that
were more distant in meaning, explaining why in English the concept pair
received a much higher average similarity score than in other languages. A
similar issue related to the mapping problem across languages arose in the
Welsh concept pair _yn llwyr – yn gyfan gwbl_ , where Welsh speakers agreed
that the two concepts are very similar. When asked, bilingual speakers
considered the two Welsh concepts more similar than English equivalents
_purely – completely_ , potentially explaining why a higher average similarity
score was reached in Welsh. The example of _woman – wife_ can illustrate
cultural differences or another translation-related issue where the word
‘wife’ did not exist in some languages (for example, Estonian), and therefore
had to be described using other words, affecting the comparability of the
similarity scores. This was also the case with the _football – soccer_ concept
pair. The pair _bank – seat_ demonstrates the effect of the polysemy mismatch
across languages: while ‘bank’ has two different meanings in English, neither
of them is similar to the word ‘seat’, but in Spanish, ‘ _banco_ ’ can mean
‘bank’, but it can also mean ‘bench’. Quite naturally, Spanish speakers gave
the pair _banco – asiento_ a higher similarity score than the speakers of
languages where this polysemy did not occur.
An example of metonymy affecting the average similarity score can be seen in
the Kiswahili version of the word pair: _sunset – evening_ (_machweo –
jioni_). The average similarity score for this pair is much higher in
Kiswahili, likely because the word ‘sunset’ can act as a metonym of ‘evening’.
The low similarity score of _wonderful – terrific_ in Kiswahili (_wa ajabu -
mkubwa sana_) can be explained by the fact that while ‘ _mkubwa sana_ ’ can be
used as ‘terrific’ in Kiswahili, it technically means ‘very big’, adding to
the examples of translation- and mapping-related effects. The word pair
_amazingly – fantastically_ (_espantosamente – fantásticamente_) brings out
another translation-related problem: the accuracy of the translation. While ‘
_espantosamente_ ’ could arguably be translated to ‘amazingly’, more common
meanings include: ‘frightfully’, ‘terrifyingly’, and ‘shockingly’, explaining
why the average similarity score differs from the rest of the languages.
Another problem was brought out by _addo – tyngu_ (_promise – swear_) in
Welsh, where ‘ _tyngu_ ’ may not have been a commonly used or even a known
word choice for annotators, pointing out potential regional or generational
differences in language use.
Language | Word Pair | POS | Rating all participants agree with
---|---|---|---
eng | trial – test | N | 4-5
swa | archbishop – bishop | N | 4-5
spa, cym | start – begin | V | 5-6
eng | smart – intelligent | ADJ | 5-6
eng, spa | quick – rapid | ADJ | 5-6
spa | circumstance – situation | N | 5-6
cym | football – soccer | N | 5-6
swa | football – soccer | N | 6
swa | pause – wait | V | 6
swa | money – cash | N | 6
cym | friend – buddy | N | 6
Table 8: Examples of concept pairs with their similarity scores from four
languages where all participants show strong agreement in their rating.
Table 8 presents examples of concept pairs from English, Spanish, Kiswahili,
and Welsh on which the participants agreed the most. For example, in English
all participants rated the similarity of _trial – test_ to be 4 or 5. In
Spanish and Welsh, all participants rated _start – begin_ to correspond to a
score of 5 or 6. In Kiswahili, _money – cash_ received a similarity rating of
6 from every participant. While there are numerous examples of concept pairs
in these languages where the participants agreed on a similarity score of 4 or
higher, it is worth noting that none of these languages had a single pair
where all participants agreed on either 1-2, 2-3, or 3-4 similarity rating.
Interestingly, in English all pairs where all the participants agreed on a 5-6
similarity score were adjectives.
### 5.4 Effect of Language Affinity on Similarity Scores
Based on the analysis in Figure 1 and inspecting the anecdotal examples in the
previous section, it is evident that the correlation between similarity scores
across languages is not random. To corroborate this intuition, we visualize
the vectors of similarity scores for each single language by reducing their
dimensionality to 2 via Principal Component Analysis (Pearson, 1901). The
resulting scatter plot in Figure 3 reveals that languages from the same family
or branch have similar patterns in the scores. In particular, Russian and
Polish (both Slavic), Finnish and Estonian (both Uralic), Cantonese and
Mandarin Chinese (both Sinitic), and Spanish and French (both Romance) are all
neighbors.
Figure 3: PCA of the language vectors resulting from the concatenation of
similarity judgments for all pairs.
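A minimal sketch of this visualization is given below, assuming a hypothetical `lang_scores` dict that maps each language code to its vector of 1,888 similarity scores.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_language_pca(lang_scores: dict) -> None:
    """2-D PCA of per-language similarity-score vectors (cf. Figure 3)."""
    langs = sorted(lang_scores)                      # e.g., ["cmn", ..., "yue"]
    X = np.vstack([lang_scores[l] for l in langs])   # shape: (12, 1888)
    xy = PCA(n_components=2).fit_transform(X)
    for (x, y), lang in zip(xy, langs):
        plt.scatter(x, y)
        plt.annotate(lang, (x, y))
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.show()
```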
In order to quantify exactly the effect of language affinity on the similarity
scores, we run correlation analyses between these and language features. In
particular, we extract feature vectors from URIEL (Littell et al., 2017), a
massively multilingual typological database that collects and normalizes
information compiled by grammarians and field linguists about the world’s
languages. In particular, we focus on information about geography (the areas
where the language speakers are concentrated), family (the phylogenetic tree
each language belongs to), and typology (including syntax, phonological
inventory, and phonology).555For the extraction of these features, we employed
lang2vec: github.com/antonisa/lang2vec Moreover, we consider typological
representations of languages that are not manually crafted by experts, but
rather learned from texts. Malaviya, Neubig, and Littell (2017) proposed to
construct such representations by training language-identifying vectors end-
to-end as part of neural machine translation models.
The vector for similarity judgments and the vector of linguistic features for
a given language have different dimensionality. Hence, we first construct a
distance matrix for each vector space, such that both columns and rows are
language indices, and each cell value is the cosine distance between the
vectors of the corresponding language pair. Given a set of languages $L$, each
resulting matrix $S$ has dimensionality $\mathbb{R}^{|L|\times|L|}$ and is
symmetrical. To estimate the correlation between the matrix for similarity
judgments and each of the matrices for linguistic features, we run a Mantel
test (Mantel, 1967), a non-parametric statistical test based on matrix
permutations that takes into account inter-dependencies among pairwise
distances.
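The two steps can be illustrated with the following self-contained sketch; it is an illustrative reimplementation (Pearson correlation over upper triangles, significance via row/column permutations), not the exact code used in our analysis.

```python
import numpy as np

def cosine_distance_matrix(V: np.ndarray) -> np.ndarray:
    """|L| x |L| cosine-distance matrix from row-wise feature vectors."""
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return 1.0 - V @ V.T

def mantel(A: np.ndarray, B: np.ndarray, permutations: int = 9999, seed: int = 0):
    """Mantel test between two symmetric distance matrices: returns (r, p, z)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(A, k=1)                  # upper triangle, no diagonal
    corr = lambda M, N: np.corrcoef(M[iu], N[iu])[0, 1]
    r_obs = corr(A, B)
    n = A.shape[0]
    perm_rs = np.empty(permutations)
    for k in range(permutations):
        p = rng.permutation(n)                         # relabel one matrix's rows/cols
        perm_rs[k] = corr(A[np.ix_(p, p)], B)
    p_val = (np.sum(np.abs(perm_rs) >= abs(r_obs)) + 1) / (permutations + 1)
    z = (r_obs - perm_rs.mean()) / perm_rs.std()       # z > 1.96 ~ significance
    return r_obs, p_val, z
```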
The results of the Mantel test reported in Table 9 show that there exist
statistically significant correlations between similarity judgments and
geography, family, and syntax, given that $p<0.05$ and $z>1.96$. The
correlation coefficient is particularly strong for geography ($r=0.647$) and
syntax ($r=0.649$). The former result is intuitive, because languages in
contact easily borrow and loan lexical units, and cultural interactions may
result in similar cognitive categorizations. The result for syntax, instead,
cannot be explained so easily, as formal properties of language do not affect
lexical semantics. Instead, we conjecture that, while no causal relation is
present, both syntactic features and similarity judgments might be linked to a
common explanatory variable (such as geography). In fact, several syntactic
properties are not uniformly spread across the globe. For instance, languages
with Verb–Object–Subject word order are mostly concentrated in Oceania (Dryer,
2013). In turn, geographical proximity leads to similar judgment patterns, as
mentioned above. On the other hand, we find no correlation with phonology and
inventory, as expected, nor with the bottom-up typological features from
Malaviya, Neubig, and Littell (2017).
Features | Dimension | Mantel r | Mantel p | Mantel z
---|---|---|---|---
geography | 299 | 0.647 | 0.007* | 3.443
family | 3718 | 0.329 | 0.023* | 2.711
syntax | 103 | 0.649 | 0.007* | 3.787
inventory | 158 | 0.155 | 0.459 | 0.782
phonology | 28 | 0.397 | 0.046 | 1.943
Malaviya, Neubig, and Littell (2017) | 512 | -0.431 | 0.264 | -1.235
Table 9: Mantel test on the correlation between similarity judgments from
Multi-SimLex and linguistic features from typological databases.
## 6 Cross-Lingual Multi-SimLex Datasets
A crucial advantage of having semantically aligned monolingual datasets across
different languages is the potential to create cross-lingual semantic
similarity datasets. Such datasets allow for probing the quality of cross-
lingual representation learning algorithms Camacho-Collados et al. (2017);
Conneau et al. (2018a); Chen and Cardie (2018); Doval et al. (2018); Ruder,
Vulić, and Søgaard (2019); Conneau and Lample (2019); Ruder, Søgaard, and
Vulić (2019) as an intrinsic evaluation task. However, the cross-lingual
datasets previous work relied upon Camacho-Collados et al. (2017) were limited
to a homogeneous set of high-resource languages (e.g., English, German,
Italian, Spanish) and a small number of concept pairs (all less than 1K
pairs). We address both problems by 1) using a typologically more diverse
language sample, and 2) relying on a substantially larger English dataset as a
source for the cross-lingual datasets: 1,888 pairs in this work versus 500
pairs in the work of Camacho-Collados et al. (2017). As a result, each of our
cross-lingual datasets contains a substantially larger number of concept
pairs, as shown in Table 11. The cross-lingual Multi-SimLex datasets are constructed
automatically, leveraging word pair translations and annotations collected in
all 12 languages. This yields a total of 66 cross-lingual datasets, one for
each possible combination of languages. Table 11 provides the final number of
concept pairs, which lie between 2,031 and 3,480 pairs for each cross-lingual
dataset, whereas Table 10 shows some sample pairs with their corresponding
similarity scores.
The automatic creation and verification of cross-lingual datasets closely
follows the procedure first outlined by Camacho-Collados, Pilehvar, and
Navigli (2015) and later adopted by Camacho-Collados et al. (2017) (for
semantic similarity) and Vulić et al. (2019) (for graded
lexical entailment). First, given two languages, we intersect their aligned
concept pairs obtained through translation. For instance, starting from the
aligned pairs attroupement – foule in French and rahvasumm – rahvahulk in
Estonian, we construct two cross-lingual pairs attroupement – rahvahulk and
rahvasumm – foule. The scores of cross-lingual pairs are then computed as
averages of the two corresponding monolingual scores. Finally, in order to
filter out concept pairs whose semantic meaning was not preserved during this
operation, we retain only cross-lingual pairs for which the corresponding
monolingual scores $(s_{s},s_{t})$ differ at most by one fifth of the full
scale (i.e., $\mid s_{s}-s_{t}\mid\leq 1.2$). This heuristic mitigates the
noise due to cross-lingual semantic shifts Camacho-Collados et al. (2017);
Vulić, Ponzetto, and Glavaš (2019). We refer the reader to the work of
Camacho-Collados, Pilehvar, and Navigli (2015) for a detailed technical description of the procedure.
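The construction heuristic reduces to a few lines; in the sketch below, representing each monolingual dataset as a dict keyed by the aligned (English) pair ID is an assumption made purely for illustration.

```python
def build_crosslingual(src: dict, tgt: dict, max_diff: float = 1.2) -> dict:
    """Combine two monolingual datasets into one cross-lingual dataset.

    `src` and `tgt` map an aligned (English) pair ID to ((word1, word2), score);
    this keying scheme is an assumption made for illustration.
    """
    pairs = {}
    for pair_id in src.keys() & tgt.keys():        # intersect aligned concept pairs
        (s1, s2), score_s = src[pair_id]
        (t1, t2), score_t = tgt[pair_id]
        if abs(score_s - score_t) > max_diff:      # filter cross-lingual shifts
            continue
        avg = (score_s + score_t) / 2.0            # average the monolingual scores
        pairs[(s1, t2)] = avg                      # e.g., attroupement - rahvahulk
        pairs[(t1, s2)] = avg                      # e.g., rahvasumm - foule
    return pairs
```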
Pair | Concept-1 | Concept-2 | Score | Pair | Concept-1 | Concept-2 | Score
---|---|---|---|---|---|---|---
cym-eng | rhyddid | liberty | 5.37 | cmn-est | 可能 | optimistlikult | 0.83
cym-pol | plentynaidd | niemądry | 2.15 | fin-swa | psykologia | sayansi | 2.20
swa-eng | kutimiza | accomplish | 5.24 | eng-fra | normally | quotidiennement | 2.41
cmn-fra | 有弹性 | flexible | 4.08 | fin-spa | auto | bicicleta | 0.85
fin-spa | tietämättömyys | inteligencia | 0.55 | cmn-yue | 使灰心 | 使气馁 | 4.78
spa-fra | ganador | candidat | 2.15 | cym-swa | sefyllfa | mazingira | 1.90
est-yue | takso | 巴士 | 2.08 | est-spa | armee | legión | 3.25
eng-fin | orange | sitrushedelmä | 3.43 | fin-est | halveksuva | põlglik | 5.55
spa-pol | palabra | wskazówka | 0.55 | cmn-cym | 学生 | disgybl | 4.45
pol-swa | prawdopodobnie | uwezekano | 4.05 | pol-eng | grawitacja | meteor | 0.27
Table 10: Example concept pairs with their scores from a selection of cross-
lingual Multi-SimLex datasets.
To assess the quality of the resulting cross-lingual datasets, we have
conducted a verification experiment similar to that of Vulić et al. (2019). We randomly
sampled 300 concept pairs in the English-Spanish, English-French, and English-
Mandarin cross-lingual datasets. Subsequently, we asked bilingual native
speakers to provide similarity judgments of each pair. The Spearman’s
correlation score $\rho$ between automatically induced and manually collected
ratings achieves $\rho\geq 0.90$ on all samples, which confirms the viability
of the automatic construction procedure.
| cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
cmn | 1,888 | – | – | – | – | – | – | – | – | – | – | –
cym | 3,085 | 1,888 | – | – | – | – | – | – | – | – | – | –
eng | 3,151 | 3,380 | 1,888 | – | – | – | – | – | – | – | – | –
est | 3,188 | 3,305 | 3,364 | 1,888 | – | – | – | – | – | – | – | –
fin | 3,137 | 3,274 | 3,352 | 3,386 | 1,888 | – | – | – | – | – | – | –
fra | 2,243 | 2,301 | 2,284 | 2,787 | 2,682 | 1,888 | – | – | – | – | – | –
heb | 3,056 | 3,209 | 3,274 | 3,358 | 3,243 | 2,903 | 1,888 | – | – | – | – | –
pol | 3,009 | 3,175 | 3,274 | 3,310 | 3,294 | 2,379 | 3,201 | 1,888 | – | – | – | –
rus | 3,032 | 3,196 | 3,222 | 3,339 | 3,257 | 2,219 | 3,226 | 3,209 | 1,888 | – | – | –
spa | 3,116 | 3,205 | 3,318 | 3,312 | 3,256 | 2,645 | 3,256 | 3,250 | 3,189 | 1,888 | – | –
swa | 2,807 | 2,926 | 2,828 | 2,845 | 2,900 | 2,031 | 2,775 | 2,819 | 2,855 | 2,811 | 1,888 | –
yue | 3,480 | 3,062 | 3,099 | 3,080 | 3,063 | 2,313 | 3,005 | 2,950 | 2,966 | 3,053 | 2,821 | 1,888
Table 11: The sizes of all monolingual (main diagonal) and cross-lingual
datasets.
(a) Rating distribution
(b) Distribution over POS classes
Figure 4: (a) Rating distribution and (b) distribution of pairs over the four
POS classes in cross-lingual Multi-SimLex datasets averaged across each of the
66 language pairs ($y$ axes plot percentages as the total number of concept
pairs varies across different cross-lingual datasets). Minimum and maximum
percentages for each rating interval and POS class are also plotted.
Score and Class Distributions. The summary of score and class distributions
across all 66 cross-lingual datasets are provided in Figure 4(a) and Figure
4(b), respectively. First, it is obvious that the distribution over the four
POS classes largely adheres to that of the original monolingual Multi-SimLex
datasets, and that the variance is quite low: e.g., the eng-fra dataset
contains the lowest proportion of nouns (49.21%) and the highest proportion of
verbs (27.1%), adjectives (15.28%), and adverbs (8.41%). On the other hand,
the distribution over similarity intervals in Figure 4(a) shows a much greater
variance. This is again expected as this pattern resurfaces in monolingual
datasets (see Table 6). It is also evident that the data are skewed towards
lower-similarity concept pairs. However, due to the joint size of all cross-
lingual datasets (see Table 11), even the least represented intervals contain
a substantial number of concept pairs. For instance, the rus-yue dataset
contains the least highly similar concept pairs (in the interval $[4,6]$) of
all 66 cross-lingual datasets. Nonetheless, the absolute number of pairs (138)
in that interval for rus-yue is still substantial. If needed, this makes it
possible to create smaller datasets which are balanced across the similarity
spectra through sub-sampling.
## 7 Monolingual Evaluation of Representation Learning Models
After the numerical and qualitative analyses of the Multi-SimLex datasets
provided in §§ 5.3–5.4, we now benchmark a series of representation learning
models on the new evaluation data. We evaluate standard static word embedding
algorithms such as fastText Bojanowski et al. (2017), as well as a range of
more recent text encoders pretrained on language modeling such as multilingual
BERT (Devlin et al., 2019). These experiments provide strong baseline scores
on the new Multi-SimLex datasets and offer a first large-scale analysis of
pretrained encoders on word-level semantic similarity across diverse
languages. In addition, the experiments now enabled by Multi-SimLex aim to
answer several important questions. (Q1) Is it viable to extract high-quality
word-level representations from pretrained encoders receiving subword-level
tokens as input? Are such representations competitive with standard static
word-level embeddings? (Q2) What are the implications of monolingual
pretraining versus (massively) multilingual pretraining for performance? (Q3)
Do lightweight unsupervised post-processing techniques improve word
representations consistently across different languages? (Q4) Can we
effectively transfer available external lexical knowledge from resource-rich
languages to resource-lean languages in order to learn word representations
that distinguish between true similarity and conceptual relatedness (see the
discussion in §2.3)?
### 7.1 Models in Comparison
Static Word Embeddings in Different Languages. First, we evaluate a standard
method for inducing non-contextualized (i.e., static) word embeddings across a
plethora of different languages: fastText (ft) vectors Bojanowski et al.
(2017) are currently the most popular and robust choice given 1) the
availability of pretrained vectors in a large number of languages Grave et al.
(2018) trained on large Common Crawl (CC) plus Wikipedia (Wiki) data, and 2)
their superior performance across a range of NLP tasks Mikolov et al. (2018).
In fact, fastText is an extension of the standard word-level CBOW and skip-
gram word2vec models Mikolov et al. (2013) that takes into account subword-
level information, i.e. the contituent character n-grams of each word Zhu,
Vulić, and Korhonen (2019). For this reason, fastText is also more suited for
modeling rare words and morphologically rich languages.666We have also trained
standard word-level CBOW and skip-gram with negative sampling (SGNS) on full
Wikipedia dumps for several languages, but our preliminary experiments have
verified that they under-perform compared to fastText. This finding is
consistent with other recent studies demonstrating the usefulness of subword-
level information Vania and Lopez (2017); Mikolov et al. (2018); Zhu, Vulić,
and Korhonen (2019); Zhu et al. (2019). Therefore, we do not report the
results with CBOW and SGNS for brevity.
We rely on $300$-dimensional ft word vectors trained on CC+Wiki and available
online for 157 languages.777https://fasttext.cc/docs/en/crawl-vectors.html The
word vectors for all languages are obtained by CBOW with position-weights,
with character n-grams of length 5, a window of size 5, 10 negative examples,
and 10 training epochs. We also probe another (older) collection of ft
vectors, pretrained on full Wikipedia dumps of each
language.888https://fasttext.cc/docs/en/pretrained-vectors.html. The vectors
are 300-dimensional, trained with the skip-gram objective for 5 epochs, with 5
negative examples, a window size set to 5, and relying on all character
n-grams from length 3 to 6. Following prior work, we trim the vocabularies for
all languages to the 200K most frequent words and compute representations for
multi-word expressions by averaging the vectors of their constituent words.
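As an illustration, this preparation step can be sketched as follows; the plain-text `.vec` parsing and the `embed` helper are simplifications, not the exact pipeline used in our experiments.

```python
import numpy as np

def load_vectors(path: str, limit: int = 200_000) -> dict:
    """Read a fastText .vec file (frequency-sorted), keeping the top `limit` words."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)                                    # skip the "n_words dim" header
        for i, line in enumerate(f):
            if i >= limit:
                break
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def embed(expression: str, vectors: dict):
    """Average constituent word vectors; None if every token is out of vocabulary."""
    vecs = [vectors[w] for w in expression.split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else None
```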
Unsupervised Post-Processing. Further, we consider a variety of unsupervised
post-processing steps that can be applied post-training on top of any
pretrained input word embedding space without any external lexical semantic
resource. So far, the usefulness of such methods has been verified only on the
English language through benchmarks for lexical semantics and sentence-level
tasks Mu, Bhat, and Viswanath (2018). In this paper, we assess if unsupervised
post-processing is beneficial also in other languages. To this end, we apply
the following post-hoc transformations on the initial word embeddings:
1) Mean centering (mc) is applied after unit length normalization to ensure
that all vectors have a zero mean, and is commonly applied in data mining and
analysis Bro and Smilde (2003); van den Berg et al. (2006).
2) All-but-the-top (abtt) Mu, Bhat, and Viswanath (2018); Tang, Mousavi, and
de Sa (2019) eliminates the common mean vector and a few top dominating
directions (according to PCA) from the input distributional word vectors,
since they do not contribute towards distinguishing the actual semantic
meaning of different words. The method contains a single (tunable) hyper-
parameter $d_{A}$ which denotes the number of the dominating directions to
remove from the initial representations. Previous work has verified the
usefulness of abtt in several English lexical semantic tasks such as semantic
similarity, word analogies, and concept categorization, as well as in
sentence-level text classification tasks Mu, Bhat, and Viswanath (2018).
3) uncovec Artetxe et al. (2018) adjusts the similarity order of an arbitrary
input word embedding space, and can emphasize either syntactic or semantic
information in the transformed vectors. In short, it transforms the input
space $\bm{X}$ into an adjusted space $\bm{X}\bm{W}_{\alpha}$ through a linear
map $\bm{W}_{\alpha}$ controlled by a single hyper-parameter $\alpha$. The
$n^{\text{th}}$-order similarity transformation of the input word vector space
$\bm{X}$ (for which $n=1$) can be obtained as
$\bm{M}_{n}(\bm{X})=\bm{M}_{1}(\bm{X}\bm{W}_{(n-1)/2})$, with
$\bm{W}_{\alpha}=\bm{Q}\bm{\Gamma}^{\alpha}$, where $\bm{Q}$ and $\bm{\Gamma}$
are the matrices obtained via eigendecomposition of
$\bm{X}^{T}\bm{X}=\bm{Q}\bm{\Gamma}\bm{Q}^{T}$. $\bm{\Gamma}$ is a diagonal
matrix containing eigenvalues of $\bm{X}^{T}\bm{X}$; $\bm{Q}$ is an orthogonal
matrix with eigenvectors of $\bm{X}^{T}\bm{X}$ as columns. While the
motivation for the uncovec method does originate from adjusting discrete
similarity orders, note that $\alpha$ is in fact a continuous real-valued
hyper-parameter which can be carefully tuned. For more technical details we
refer the reader to the original work of Artetxe et al. (2018).
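For concreteness, minimal NumPy sketches of the three transforms are given below; they follow the formulations above but are illustrative rather than the exact implementations evaluated here.

```python
import numpy as np

def mean_center(X: np.ndarray) -> np.ndarray:
    """mc: unit-length normalization followed by dimension-wise mean centering."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X - X.mean(axis=0)

def abtt(X: np.ndarray, d_A: int = 3) -> np.ndarray:
    """all-but-the-top: remove the mean vector and the d_A dominant directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    U = Vt[:d_A]                                       # top d_A PCA components
    return Xc - (Xc @ U.T) @ U                         # subtract their projections

def uncovec(X: np.ndarray, alpha: float = -0.3) -> np.ndarray:
    """uncovec: map X to X @ W_alpha, with W_alpha = Q Gamma^alpha (see above)."""
    eigvals, Q = np.linalg.eigh(X.T @ X)               # X^T X = Q Gamma Q^T
    eigvals = np.clip(eigvals, 1e-12, None)            # guard tiny negatives
    W_alpha = Q * eigvals**alpha                       # Q @ diag(Gamma^alpha)
    return X @ W_alpha
```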
As mentioned, all post-processing methods can be seen as unsupervised
retrofitting methods that, given an arbitrary input vector space $\bm{X}$,
produce a perturbed/transformed output vector space $\bm{X}^{\prime}$, but
unlike common retrofitting methods Faruqui et al. (2015); Mrkšić et al.
(2017), the perturbation is completely unsupervised (i.e., self-contained) and
does not inject any external (semantic similarity oriented) knowledge into the
vector space. Note that different perturbations can also be stacked: e.g., we
can apply uncovec and then use abtt on top the output uncovec vectors. When
using uncovec and abtt we always length-normalize and mean-center the data
first (i.e., we apply the simple mc normalization). Finally, we tune the two
hyper-parameters $d_{A}$ (for abtt) and $\alpha$ (uncovec) on the English
Multi-SimLex and use the same values on the datasets of all other languages;
we report results with $d_{A}=3$ or $d_{A}=10$, and $\alpha=-0.3$.
Contextualized Word Embeddings. We also evaluate the capacity of unsupervised
pretraining architectures based on language modeling objectives to reason over
lexical semantic similarity. To the best of our knowledge, our article is the
first study performing such analyses. State-of-the-art models such as bert
Devlin et al. (2019), xlm Conneau and Lample (2019), or roberta Liu et al.
(2019b) are typically very deep neural networks based on the Transformer
architecture Vaswani et al. (2017). They receive subword-level tokens as
inputs (such as WordPieces Schuster and Nakajima (2012)) to tackle data
sparsity. As output, they return contextualized embeddings, i.e., dynamic
representations for words in context.
To represent words or multi-word expressions through a pretrained model, we
follow prior work Liu et al. (2019a) and compute an input item’s
representation by 1) feeding it to a pretrained model in isolation; then 2)
averaging the $H$ last hidden representations for each of the item’s
constituent subwords; and then finally 3) averaging the resulting subword
representations to produce the final $d$-dimensional representation, where $d$
is the embedding and hidden-layer dimensionality (e.g., $d=768$ with bert). We
opt for this approach due to its proven viability and simplicity Liu et al.
(2019a), as it does not require any additional corpora to condition the
induction of contextualized embeddings.999We also tested another encoding
method where we fed pairs instead of single words/concepts into the pretrained
encoder. The rationale is that the other concept in the pair can be used as
disambiguation signal. However, this method consistently led to sub-par
performance across all experimental runs. Other ways to extract the
representations from pretrained models Aldarmaki and Diab (2019); Wu et al.
(2019); Cao, Kitaev, and Klein (2020) are beyond the scope of this work, and
we will experiment with them in the future.
In other words, we treat each pretrained encoder enc as a black-box function
to encode a single word or a multi-word expression $x$ in each language into a
$d$-dimensional contextualized representation
$\mathbf{x}_{\textsc{enc}}\in\mathbb{R}^{d}=\textsc{enc}(x)$ (e.g., $d=768$
with bert). As multilingual pretrained encoders, we experiment with the
multilingual bert model (m-bert) Devlin et al. (2019) and xlm (Conneau and
Lample, 2019). m-bert is pretrained on monolingual Wikipedia corpora of 102
languages (comprising all Multi-SimLex languages) with a 12-layer Transformer
network, and yields $768$-dimensional representations. Since the concept pairs
in Multi-SimLex are lowercased, we use the uncased version of
m-bert.101010https://github.com/google-
research/bert/blob/master/multilingual.md m-bert comprises all Multi-SimLex
languages, and its evident ability to perform cross-lingual transfer Pires,
Schlinger, and Garrette (2019); Wu and Dredze (2019); Wang et al. (2020) also
makes it a convenient baseline model for cross-lingual experiments later in
§8. The second multilingual model we consider,
xlm-100,111111https://github.com/facebookresearch/XLM is pretrained on
Wikipedia dumps of 100 languages, and encodes each concept into a
$1,280$-dimensional representation. In contrast to m-bert, xlm-100 drops the
next-sentence prediction objective and adds a cross-lingual masked language
modeling objective. For both encoders, the representations of each concept are
computed as averages over the last $H=4$ hidden layers in all experiments, as
suggested by Wu et al. (2019).121212In our preliminary experiments on several
language pairs, we have also verified that this choice is superior to: a)
using the output of only the last hidden layer (i.e., $H=1$) and b) averaging
over all hidden layers (i.e., $H=12$ for the bert-base architecture).
Likewise, using the special prepended ‘[CLS]’ token rather than the
constituent sub-words to encode a concept also led to much worse performance
across the board.
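A sketch of this encoding strategy with the Transformers library is shown below; it is a simplified rendering of the three-step procedure (encoding in isolation, averaging the last $H=4$ hidden layers, averaging the constituent subwords) rather than the exact experimental code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def encode(expression: str, H: int = 4) -> torch.Tensor:
    """Encode a word/multi-word expression in isolation (see steps 1-3 above)."""
    enc = tokenizer(expression, return_tensors="pt", return_special_tokens_mask=True)
    special = enc.pop("special_tokens_mask").bool()[0]     # marks [CLS] and [SEP]
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states         # tuple of (1, seq, d)
    layers = torch.stack(hidden_states[-H:]).mean(dim=0)   # average last H layers
    subwords = layers[0][~special]                         # drop special tokens
    return subwords.mean(dim=0)                            # average the subwords
```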
Besides m-bert and xlm, covering multiple languages, we also analyze the
performance of “language-specific” bert and xlm models for the languages where
they are available: Finnish, Spanish, English, Mandarin Chinese, and French.
The main goal of this comparison is to study the differences in performance
between multilingual “one-size-fits-all” encoders and language-specific
encoders. For all experiments, we rely on the pretrained models released in
the Transformers repository Wolf et al.
(2019).131313github.com/huggingface/transformers. The full list of currently
supported pretrained encoders is available here: huggingface.co/models.
Unsupervised post-processing steps devised for static word embeddings (i.e.,
mean-centering, abtt, uncovec) can also be applied on top of contextualized
embeddings if we predefine a vocabulary of word types $V$ that will be
represented in a word vector space $\mathbf{X}$. We construct such $V$ for
each language as the intersection of word types covered by the corresponding
CC+Wiki fastText vectors and the (single-word or multi-word) expressions
appearing in the corresponding Multi-SimLex dataset.
Finally, note that it is not feasible to evaluate a full range of available
pretrained encoders within the scope of this work. Our main intention is to
provide the first set of baseline results on Multi-SimLex by benchmarking a
sample of most popular encoders, at the same time also investigating other
important questions such as performance of static versus contextualized word
embeddings, or multilingual versus language-specific pretraining. Another
purpose of the experiments is to outline the wide potential and applicability
of the Multi-SimLex datasets for multilingual and cross-lingual representation
learning evaluation.
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
fastText (CC+Wiki) | (272) | (151) | (12) | (319) | (347) | (43) | (66) | (326) | (291) | (46) | (222) | (–)
(1) ft:init | .534 | .363 | .528 | .469 | .607 | .578 | .450 | .405 | .422 | .511 | .439 | –
(2) ft:+mc | .539 | .393 | .535 | .473 | .621 | .584 | .480 | .412 | .424 | .516 | .469 | –
(3) ft:+abtt (-3) | .557 | .389 | .536 | .495 | .642 | .610 | .501 | .427 | .459 | .523 | .473 | –
(4) ft:+abtt (-10) | .583 | .384 | .551 | .476 | .651 | .623 | .503 | .455 | .500 | .542 | .462 | –
(5) ft:+uncovec | .572 | .387 | .550 | .465 | .642 | .595 | .501 | .435 | .437 | .525 | .437 | –
(1)+(2)+(5)+(3) | .574 | .386 | .549 | .476 | .655 | .604 | .503 | .442 | .452 | .528 | .432 | –
(1)+(2)+(5)+(4) | .577 | .376 | .542 | .455 | .652 | .613 | .510 | .466 | .491 | .540 | .424 | –
fastText (Wiki) | (429) | (282) | (6) | (343) | (345) | (73) | (62) | (354) | (343) | (57) | (379) | (677)
(1) ft:init | .315 | .318 | .436 | .400 | .575 | .444 | .428 | .370 | .359 | .432 | .332 | .376
(2) ft:+mc | .373 | .337 | .445 | .404 | .583 | .463 | .447 | .383 | .378 | .447 | .373 | .427
(3) ft:+abtt (-3) | .459 | .343 | .453 | .404 | .584 | .487 | .447 | .387 | .394 | .456 | .423 | .429
(4) ft:+abtt (-10) | .496 | .323 | .460 | .385 | .581 | .494 | .460 | .401 | .400 | .477 | .406 | .399
(5) ft:+uncovec | .518 | .328 | .469 | .375 | .568 | .483 | .449 | .389 | .387 | .469 | .386 | .394
(1)+(2)+(5)+(3) | .526 | .323 | .470 | .369 | .564 | .495 | .448 | .392 | .392 | .473 | .388 | .388
(1)+(2)+(5)+(4) | .526 | .307 | .471 | .355 | .548 | .495 | .450 | .394 | .394 | .476 | .382 | .396
m-bert | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0) | (0)
(1) m-bert:init | .408 | .033 | .138 | .085 | .162 | .115 | .104 | .069 | .085 | .145 | .125 | .404
(2) m-bert:+mc | .458 | .044 | .256 | .122 | .173 | .183 | .128 | .097 | .123 | .203 | .128 | .469
(3) m-bert:+abtt (-3) | .487 | .056 | .321 | .137 | .200 | .287 | .144 | .126 | .197 | .299 | .135 | .492
(4) m-bert:+abtt (-10) | .456 | .056 | .329 | .122 | .164 | .306 | .121 | .126 | .183 | .315 | .136 | .467
(5) m-bert:+uncovec | .464 | .063 | .317 | .144 | .213 | .288 | .164 | .144 | .198 | .287 | .143 | .464
(1)+(2)+(5)+(3) | .464 | .083 | .326 | .130 | .201 | .304 | .149 | .122 | .199 | .295 | .148 | .456
(1)+(2)+(5)+(4) | .444 | .086 | .326 | .112 | .179 | .305 | .135 | .127 | .187 | .285 | .119 | .447
Table 12: A summary of results (Spearman’s $\rho$ correlation scores) on the
full monolingual Multi-SimLex datasets for 12 languages. We benchmark fastText
word embeddings trained on two different corpora (CC+Wiki and only Wiki) as
well the multilingual m-bert model (see §7.1). Results with the initial word
vectors are reported (i.e., without any unsupervised post-processing), as well
as with different unsupervised post-processing methods, described in §7.1. The
language codes are provided in Table 1. The numbers in the parentheses (gray
rows) refer to the number of OOV concepts excluded from the computation. The
highest scores for each language and per model are in bold.
### 7.2 Results and Discussion
The results we report are Spearman’s $\rho$ coefficients of the correlation
between the ranks derived from the scores of the evaluated models and the
human scores provided in each Multi-SimLex dataset. The main results with
static and contextualized word vectors for all test languages are summarized
in Table 12. The scores reveal several interesting patterns, and also pinpoint
the main challenges for future work.
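The protocol itself reduces to a few lines. In the sketch below, `embed` stands for any of the representation functions discussed in §7.1, model scores are computed (as is standard) as the cosine similarity between the two concept vectors, and OOV pairs are skipped, mirroring the coverage note in Table 12.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate(pairs, human_scores, embed) -> float:
    """Spearman's rho between cosine-similarity scores and human ratings."""
    model_scores, gold = [], []
    for (w1, w2), s in zip(pairs, human_scores):
        v1, v2 = embed(w1), embed(w2)
        if v1 is None or v2 is None:               # skip OOV concepts (cf. Table 12)
            continue
        cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        model_scores.append(cos)
        gold.append(s)
    return spearmanr(model_scores, gold).correlation
```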
State-of-the-Art Representation Models. The absolute scores of CC+Wiki ft,
Wiki ft, and m-bert are not directly comparable, because these models have
different coverage. In particular, Multi-SimLex contains some out-of-
vocabulary (OOV) words whose static ft embeddings are not available.141414We
acknowledge that it is possible to approximate word-level representations of
OOVs with ft by summing the constituent n-gram embeddings as proposed by
Bojanowski:2017tacl. However, we do not perform this step as the resulting
embeddings are typically of much lower quality than non-OOV embeddings Zhu,
Vulić, and Korhonen (2019). On the other hand, m-bert has perfect coverage. A
general comparison between CC+Wiki and Wiki ft vectors, however, supports the
intuition that larger corpora (such as CC+Wiki) yield higher correlations.
Another finding is that a single massively multilingual model such as m-bert
cannot produce semantically rich word-level representations. Whether this
actually happens because the training objective is different—or because the
need to represent 100+ languages reduces its language-specific capacity—is
investigated further below.
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
fastText (CC+Wiki), ft:init | | | | | | | | | | | |
nouns (1,051) | .561 | .497 | .592 | .627 | .709 | .641 | .560 | .538 | .526 | .583 | .544 | .426
verbs (469) | .511 | .265 | .408 | .379 | .527 | .551 | .458 | .384 | .464 | .499 | .391 | .252
adj (245) | .448 | .338 | .564 | .401 | .546 | .616 | .467 | .284 | .349 | .401 | .344 | .288
adv (123) | .622 | .187 | .482 | .378 | .547 | .648 | .491 | .266 | .514 | .423 | .172 | .103
fastText (CC+Wiki), ft:+abtt (-3) | | | | | | | | | | | |
nouns | .601 | .512 | .599 | .621 | .730 | .653 | .592 | .585 | .578 | .605 | .553 | .431
verbs | .583 | .305 | .454 | .379 | .575 | .602 | .520 | .390 | .475 | .526 | .381 | .314
adj | .526 | .372 | .601 | .427 | .592 | .646 | .483 | .316 | .409 | .411 | .402 | .312
adv | .675 | .150 | .504 | .397 | .546 | .695 | .491 | .230 | .495 | .416 | .223 | .081
m-bert, m-bert:+abtt (-3) | | | | | | | | | | | |
nouns | .517 | .091 | .446 | .191 | .210 | .364 | .191 | .188 | .266 | .418 | .142 | .539
verbs | .511 | .005 | .200 | .039 | .077 | .248 | .038 | .107 | .181 | .266 | .091 | .503
adj | .227 | .050 | .226 | .028 | .128 | .193 | .044 | .046 | .002 | .099 | .192 | .267
adv | .282 | .012 | .343 | .112 | .173 | .390 | .326 | .036 | .046 | .207 | .161 | .049
xlm-100, xlm:+abtt (-3) | | | | | | | | | | | |
all | .498 | .096 | .270 | .118 | .203 | .234 | .195 | .106 | .170 | .289 | .130 | .506
nouns | .551 | .132 | .381 | .193 | .238 | .234 | .242 | .184 | .292 | .378 | .165 | .559
verbs | .544 | .038 | .169 | .006 | .190 | .132 | .136 | .073 | .095 | .243 | .047 | .570
adj | .356 | .140 | .256 | .081 | .179 | .185 | .150 | .046 | .022 | .100 | .220 | .291
adv | .284 | .017 | .040 | .086 | .043 | .027 | .221 | .014 | .022 | .315 | .095 | .156
Table 13: Spearman’s $\rho$ correlation scores over the four POS classes
represented in Multi-SimLex datasets. In addition to the word vectors
considered earlier in Table 12, we also report scores for another
contextualized model, xlm-100. The numbers in parentheses refer to the total
number of POS-class pairs in the original eng dataset and, consequently, in
all other monolingual datasets.
The overall results also clearly indicate that (i) there are differences in
performance across different monolingual Multi-SimLex datasets, and (ii)
unsupervised post-processing is universally useful, and can lead to huge
improvements in correlation scores for many languages. In what follows, we
also delve deeper into these analyses.
Impact of Unsupervised Post-Processing. First, the results in Table 12 suggest
that applying dimension-wise mean centering to the initial vector spaces has
positive impact on word similarity scores in all test languages and for all
models, both static and contextualized (see the +mc rows in Table 12). Mimno
and Thompson (2017) show that distributional word vectors have a tendency
towards narrow clusters in the vector space (i.e., they occupy a narrow cone
in the vector space and are therefore anisotropic Mu, Bhat, and Viswanath
(2018); Ethayarajh (2019)), and are prone to the undesired effect of hubness
Radovanović, Nanopoulos, and Ivanović (2010); Lazaridou, Dinu, and Baroni
(2015).151515Hubness can be defined as the tendency of some points/vectors
(i.e., “hubs”) to be nearest neighbors of many points in a high-dimensional
(vector) space Radovanović, Nanopoulos, and Ivanović (2010); Lazaridou, Dinu,
and Baroni (2015); Conneau et al. (2018a) Applying dimension-wise mean
centering has the effect of spreading the vectors across the hyper-plane and
mitigating the hubness issue, which consequently improves word-level
similarity, as it emerges from the reported results. Previous work has already
validated the importance of mean centering for clustering-based tasks Suzuki
et al. (2013), bilingual lexicon induction with cross-lingual word embeddings
Artetxe, Labaka, and Agirre (2018a); Zhang et al. (2019); Vulić et al. (2019),
and for modeling lexical semantic change Schlechtweg et al. (2019). However,
to the best of our knowledge, the results summarized in Table 12 are the first
evidence that also confirms its importance for semantic similarity in a wide
array of languages. In sum, as a general rule of thumb, we suggest to always
mean-center representations for semantic tasks.
The results further indicate that additional post-processing methods such as
abtt and uncovec on top of mean-centered vector spaces can lead to further
gains in most languages. The gains are even visible for languages which start
from high correlation scores: for instance, cmn with CC+Wiki ft increases
from 0.534 to 0.583, from 0.315 to 0.526 with Wiki ft, and from 0.408 to 0.487
with m-bert. Similarly, for rus with CC+Wiki ft we can improve from 0.422 to
0.500, and for fra the scores improve from 0.578 to 0.613. There are
additional similar cases reported in Table 12.
Overall, the unsupervised post-processing techniques seem universally useful
across languages, but their efficacy and relative performance does vary across
different languages. Note that we have not carefully fine-tuned the hyper-
parameters of the evaluated post-processing methods, so additional small
improvements can be expected for some languages. The main finding, however, is
that these post-processing techniques are robust to semantic similarity
computations beyond English, and are truly language independent. For instance,
removing dominant latent (PCA-based) components from word vectors emphasizes
semantic differences between different concepts, as only shared non-
informative latent semantic knowledge is removed from the representations.
In summary, pretrained word embeddings do contain more information pertaining
to semantic similarity than revealed in the initial vectors. This way, we have
corroborated the hypotheses from prior work Mu, Bhat, and Viswanath (2018);
Artetxe et al. (2018) which were not previously empirically verified on other
languages due to a shortage of evaluation data; this gap has now been filled
with the introduction of the Multi-SimLex datasets. In all follow-up
experiments, we always explicitly denote which post-processing configuration
is used in evaluation.
POS-Specific Subsets. We present the results for subsets of word pairs grouped
by POS class in Table 13. Prior work based on English data showed that
representations for nouns are typically of higher quality than those for the
other POS classes Schwartz, Reichart, and Rappoport (2015, 2016); Vulić et al.
(2017b). We observe a similar trend in other languages as well. This pattern
is consistent across different representation models and can be attributed to
several reasons. First, verb representations need to express a rich range of
syntactic and semantic behaviors rather than purely referential features
Gruber (1976); Levin (1993); Kipper et al. (2008). Second, low correlation
scores on the adjective and adverb subsets in some languages (e.g., pol, cym,
swa) might be due to their low frequency in monolingual texts, which yields
unreliable representations. In general, the variance in performance across
different word classes warrants further research in class-specific
representation learning Baker, Reichart, and Korhonen (2014); Vulić et al.
(2017b). The scores further attest the usefulness of unsupervised post-
processing as almost all class-specific correlation scores are improved by
applying mean-centering and abtt. Finally, the results for m-bert and xlm-100
in Table 13 further confirm that massively multilingual pretraining cannot
yield reasonable semantic representations for many languages: in fact, for
some classes they display no correlation with human ratings at all.
Differences across Languages. Naturally, the results from Tables 12 and 13
also reveal that there is variation in performance of both static word
embeddings and pretrained encoders across different languages. Among other
causes, the lowest absolute scores with ft are reported for languages with
least resources available to train monolingual word embeddings, such as
Kiswahili, Welsh, and Estonian. The low performance on Welsh is especially
indicative: Figure 1 shows that the ratings in the Welsh dataset match up very
well with the English ratings, but we cannot achieve the same level of
correlation in Welsh with Welsh ft word embeddings. The difference in performance
between two closely related languages, est (low-resource) and fin (high-
resource), provides additional evidence in this respect.
The highest reported scores with m-bert and xlm-100 are obtained for Mandarin
Chinese and Yue Chinese: this effectively points to the weaknesses of
massively multilingual training with a joint subword vocabulary spanning 102
and 100 languages. Due to the difference in scripts, “language-specific”
subwords for yue and cmn do not need to be shared across a vast amount of
languages and the quality of their representation remains unscathed. This
effectively means that m-bert’s subword vocabulary contains plenty of cmn-
specific and yue-specific subwords which are exploited by the encoder when
producing m-bert-based representations. Simultaneously, higher scores with
m-bert (and xlm in Table 13) are reported for resource-rich languages such as
French, Spanish, and English, which are better represented in m-bert’s
training data. We also observe lower absolute scores (and a larger number of
OOVs) for languages with very rich and productive morphological systems such
as the two Slavic languages (Polish and Russian) and Finnish. Since Polish and
Russian are known to have large Wikipedias and Common Crawl data Conneau et
al. (2019) (e.g., their Wikipedias are in the top 10 largest Wikipedias
worldwide), the problem with coverage can be attributed exactly to the
proliferation of morphological forms in those languages.
Finally, while Table 12 does reveal that unsupervised post-processing is
useful for all languages, it also demonstrates that peak scores are achieved
with different post-processing configurations. This finding suggests that a
more careful language-specific fine-tuning is indeed needed to refine word
embeddings towards semantic similarity. We plan to inspect the relationship
between post-processing techniques and linguistic properties in more depth in
future work.
Multilingual vs. Language-Specific Contextualized Embeddings. Recent work has
shown that—despite the usefulness of massively multilingual models such as
m-bert and xlm-100 for zero-shot cross-lingual transfer Pires, Schlinger, and
Garrette (2019); Wu and Dredze (2019)—stronger results in downstream tasks for
a particular language can be achieved by pretraining language-specific models
on language-specific data.
In this experiment, motivated by the low results of m-bert and xlm-100 (see
again Table 13), we assess if monolingual pretrained encoders can produce
higher-quality word-level representations than multilingual models. Therefore,
we evaluate language-specific bert and xlm models for a subset of the Multi-
SimLex languages for which such models are currently available: Finnish
Virtanen et al. (2019) (bert-base architecture, uncased), French Le et al.
(2019) (the FlauBERT model based on xlm), English (bert-base, uncased),
Mandarin Chinese (bert-base) Devlin et al. (2019) and Spanish (bert-base,
uncased). In addition, we also evaluate a series of pretrained encoders
available for English: (i) bert-base, bert-large, and bert-large with whole
word masking (wwm) from the original work on BERT Devlin et al. (2019), (ii)
monolingual “English-specific” xlm Conneau and Lample (2019), and (iii) two
models which employ parameter reduction techniques to build more compact
encoders: albert-b uses a configuration similar to bert-base, while albert-l
is similar to bert-large, but with an $18\times$ reduction in the number of
parameters Lan et al. (2020).161616All models and their further specifications
are available at the following link: https://huggingface.co/models.
From the results in Figure 5(a), it is clear that monolingual pretrained encoders
yield much more reliable word-level representations. The gains are visible
even for languages such as cmn which showed reasonable performance with m-bert
and are substantial on all test languages. This further confirms the validity
of language-specific pretraining in lieu of multilingual training, if
sufficient monolingual data are available. Moreover, a comparison of
pretrained English encoders in Figure 5(b) largely follows the intuition: the
larger bert-large model yields slight improvements over bert-base, and we can
improve a bit more by relying on word-level (i.e., lexical-level)
masking. Finally, light-weight albert model variants are quite competitive with
the original bert models, with only modest drops reported, and albert-l again
outperforms albert-b. Overall, it is interesting to note that the scores
obtained with monolingual pretrained encoders are on a par with or even
outperform static ft word embeddings: this is a very intriguing finding per se
as it shows that such subword-level models trained on large corpora can
implicitly capture rich lexical semantic knowledge.
(a) Monolingual vs multilingual
(b) Pretrained eng encoders
Figure 5: (a) A performance comparison between monolingual pretrained language
encoders and massively multilingual encoders. For four languages (cmn, eng,
fin, spa), we report the scores with monolingual uncased bert-base
architectures and multilingual uncased m-bert model, while for fra we report
the results of the multilingual xlm-100 architecture and a monolingual French
FlauBERT model Le et al. (2019), which is based on the same architecture as
xlm-100. (b) A comparison of various pretrained encoders available for
English. All these models are post-processed via abtt (-3).
Similarity-Specialized Word Embeddings. Conflating distinct lexico-semantic
relations is a well-known property of distributional representations Turney
and Pantel (2010); Melamud et al. (2016). Semantic specialization fine-tunes
distributional spaces to emphasize a particular lexico-semantic relation in
the transformed space by injecting external lexical knowledge Glavaš, Ponti,
and Vulić (2019). Explicitly discerning between true semantic similarity (as
captured in Multi-SimLex) and broad conceptual relatedness benefits a number
of tasks, as discussed in §2.1.171717For an overview of specialization methods
for semantic similarity, we refer the interested reader to the recent tutorial
Glavaš, Ponti, and Vulić (2019). Since most languages lack dedicated lexical
resources, however, one viable strategy to steer monolingual word vector
spaces to emphasize semantic similarity is through cross-lingual transfer of
lexical knowledge, usually through a shared cross-lingual word vector space
Ruder, Vulić, and Søgaard (2019). Therefore, we evaluate the effectiveness of
specialization transfer methods using Multi-SimLex as our multilingual test
bed.
We evaluate a current state-of-the-art cross-lingual specialization transfer
method with minimal requirements, put forth recently by
Ponti:2019emnlp.181818We have also evaluated other specialization transfer
methods, e.g., Glavaš and Vulić (2018); Ponti et al. (2018b), but they are
consistently outperformed by the method of Ponti:2019emnlp. In a nutshell,
their li-postspec method is a multi-step procedure that operates as follows.
First, the knowledge about semantic similarity is extracted from WordNet in
the form of triplets, that is, linguistic constraints $(w_{1},w_{2},r)$, where
$w_{1}$ and $w_{2}$ are two concepts, and $r$ is a relation between them
obtained from WordNet (e.g., synonymy or antonymy). The goal is to “attract”
synonyms closer to each other in the transformed vector space as they reflect
true semantic similarity, and “repel” antonyms further apart. In the second
step, the linguistic constraints are translated from English to the target
language via a shared cross-lingual word vector space. To this end, following
Ponti:2019emnlp we rely on cross-lingual word embeddings (CLWEs) Joulin et al.
(2018) available online, which are based on Wiki ft
vectors.191919https://fasttext.cc/docs/en/aligned-vectors.html; for target
languages for which there are no pretrained CLWEs, we induce them following
the same procedure of Joulin:2018emnlp. Following that, a constraint
refinement step is applied in the target language which aims to eliminate the
noise inserted during the translation process. This is done by training a
relation classification tool: it is trained again on the English linguistic
constraints and then used on the translated target language constraints, where
the transfer is again enabled via a shared cross-lingual word vector
space.202020We again follow Ponti:2019emnlp and use a state-of-the-art
relation classifier Glavaš and Vulić (2018). We refer the reader to the
original work for additional technical details related to the classifier
design. Finally, a state-of-the-art monolingual specialization procedure from
Ponti:2018emnlp injects the (now target language) linguistic constraints into
the target language distributional space.
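To make the pipeline above more concrete, the following is a minimal, illustrative sketch of its first and last steps: harvesting synonymy/antonymy constraints from the English WordNet (via NLTK) and one heavily simplified attract-repel-style update pass. This is not the implementation of Ponti et al. (2019c), which optimizes a full max-margin objective with mini-batches and negative sampling; all function names, margins, and the learning rate here are illustrative placeholders.

```python
# Illustrative sketch (not the li-postspec implementation): extract WordNet
# constraints and apply one simplified attract-repel update pass.
# Requires: pip install nltk numpy; then nltk.download("wordnet")
import numpy as np
from nltk.corpus import wordnet as wn

def extract_constraints():
    """Collect synonymy and antonymy pairs (w1, w2) from WordNet."""
    synonyms, antonyms = set(), set()
    for synset in wn.all_synsets():
        lemmas = synset.lemmas()
        names = [l.name() for l in lemmas if "_" not in l.name()]
        # lemmas sharing a synset yield synonymy constraints
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                synonyms.add(tuple(sorted((names[i], names[j]))))
        for lemma in lemmas:
            for ant in lemma.antonyms():
                antonyms.add(tuple(sorted((lemma.name(), ant.name()))))
    return synonyms, antonyms

def attract_repel_pass(vecs, vocab, synonyms, antonyms,
                       att_margin=0.6, rep_margin=0.0, lr=0.05):
    """One pass of hinge-style updates over unit-normalized row vectors:
    pull synonym pairs together, push antonym pairs apart."""
    idx = {w: i for i, w in enumerate(vocab)}
    for w1, w2 in synonyms:
        if w1 in idx and w2 in idx:
            i1, i2 = idx[w1], idx[w2]
            v1, v2 = vecs[i1].copy(), vecs[i2].copy()
            if v1 @ v2 < att_margin:    # not similar enough yet: attract
                vecs[i1] += lr * (v2 - v1)
                vecs[i2] += lr * (v1 - v2)
    for w1, w2 in antonyms:
        if w1 in idx and w2 in idx:
            i1, i2 = idx[w1], idx[w2]
            v1, v2 = vecs[i1].copy(), vecs[i2].copy()
            if v1 @ v2 > rep_margin:    # too similar: repel
                vecs[i1] -= lr * v2
                vecs[i2] -= lr * v1
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
```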
The scores are summarized in Table 14. Semantic specialization with li-postspec leads to substantial improvements in correlation scores for the majority of the target languages, demonstrating the importance of external semantic similarity knowledge for semantic similarity reasoning. However, we also observe deteriorated performance for the three target languages which can be considered the lowest-resource ones in our set: cym, swa, yue. We hypothesize that this occurs due to the inferior quality of the underlying monolingual Wikipedia word embeddings, which triggers a chain of error accumulation: poor distributional word estimates compromise the alignment of the embedding spaces, which in turn results in increased translation noise and reduced refinement ability of the relation classifier. On a high level, this “poor get poorer” observation again points to the fact that one of the primary causes of low performance of low-resource languages in semantic tasks is the sheer lack of even unlabeled data for distributional training. On the other hand, as we see from Table 14, typological dissimilarity between the source and the target does not deteriorate the effectiveness of semantic specialization. In fact, li-postspec yields substantial gains even for typologically distant targets such as heb, cmn, and est. The critical problem indeed seems to be insufficient raw data for monolingual distributional training.
Languages: | cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---
fastText (Wiki) | (429) | (282) | (6) | (343) | (345) | (73) | (62) | (354) | (343) | (57) | (379) | (677)
ft:init | .315 | .318 | – | .400 | .575 | .444 | .428 | .370 | .359 | .432 | .332 | .376
li-postspec | .584 | .204 | – | .515 | .619 | .601 | .510 | .531 | .547 | .635 | .238 | .267
Table 14: The impact of vector space specialization for semantic similarity.
The scores are reported using the current state-of-the-art specialization
transfer li-postspec method of Ponti et al. (2019c), relying on English as a
resource-rich source language and the external lexical semantic knowledge from
the English WordNet.
## 8 Cross-Lingual Evaluation
Similar to monolingual evaluation in §7, we now evaluate several state-of-the-
art cross-lingual representation models on the suite of 66 automatically
constructed cross-lingual Multi-SimLex datasets. Again, note that evaluating a
full range of cross-lingual models available in the rich prior work on cross-
lingual representation learning is well beyond the scope of this article. We
therefore focus our cross-lingual analyses on several well-established and
indicative state-of-the-art cross-lingual models, again spanning both static
and contextualized cross-lingual word embeddings.
### 8.1 Models in Comparison
Static Word Embeddings. We rely on a state-of-the-art mapping-based method for
the induction of cross-lingual word embeddings (CLWEs): vecmap Artetxe,
Labaka, and Agirre (2018b). The core idea behind such mapping-based or
projection-based approaches is to learn a post-hoc alignment of independently
trained monolingual word embeddings Ruder, Vulić, and Søgaard (2019). Such
methods have gained popularity due to their conceptual simplicity and
competitive performance coupled with reduced bilingual supervision
requirements: they support CLWE induction with as little as a few thousand word translation pairs of bilingual supervision Mikolov, Le, and Sutskever
(2013); Xing et al. (2015); Upadhyay et al. (2016); Ruder, Søgaard, and Vulić
(2019). More recent work has shown that CLWEs can be induced with even weaker
supervision from small dictionaries spanning several hundred pairs Vulić and
Korhonen (2016); Vulić et al. (2019), identical strings Smith et al. (2017),
or even only shared numerals Artetxe, Labaka, and Agirre (2017). In the
extreme, fully unsupervised projection-based CLWEs extract such seed bilingual
lexicons from scratch on the basis of monolingual data only (Conneau et al.,
2018a; Artetxe, Labaka, and Agirre, 2018b; Hoshen and Wolf, 2018; Alvarez-
Melis and Jaakkola, 2018; Chen and Cardie, 2018; Mohiuddin and Joty, 2019,
inter alia).
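To make the “mapping-based” idea concrete, here is a minimal sketch of the orthogonal Procrustes step at the heart of supervised projection-based CLWE induction (in the spirit of Xing et al. (2015); Artetxe, Labaka, and Agirre (2018b)). The actual vecmap pipeline adds normalization, whitening, re-weighting, and dimensionality-reduction steps that are omitted here.

```python
# Minimal Procrustes sketch: omits vecmap's preprocessing/postprocessing.
import numpy as np

def procrustes(X, Z):
    """Solve W* = argmin over orthogonal W of ||XW - Z||_F via SVD.
    X, Z: (n_pairs, dim) source/target embeddings of seed dictionary pairs."""
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt  # map a source vector into the target space as x @ W
```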
Recent empirical studies Glavaš et al. (2019); Vulić et al. (2019); Doval et
al. (2019) have compared a variety of unsupervised and weakly supervised
mapping-based CLWE methods, and vecmap emerged as the most robust and very
competitive choice. Therefore, we focus on 1) its fully unsupervised variant
(unsuper) in our comparisons. For several language pairs, we also report
scores with two other vecmap model variants: 2) a supervised variant which
learns a mapping based on an available seed lexicon (super), and 3) a
supervised variant with self-learning (super+sl) which iteratively increases
the seed lexicon and improves the mapping gradually. For a detailed
description of these variants, we refer the reader to recent work Artetxe,
Labaka, and Agirre (2018b); Vulić et al. (2019). We again use CC+Wiki ft
vectors as initial monolingual word vectors, except for yue where Wiki ft is
used. The seed dictionaries of two different sizes (1k and 5k translation
pairs) are based on PanLex Kamholz, Pool, and Colowick (2014), and are taken
directly from prior work Vulić et al. (2019) (https://github.com/cambridgeltl/panlex-bli), or extracted from PanLex following the same procedure as in that prior work.
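The self-learning variant (+sl) can be caricatured as alternating between fitting the mapping and re-inducing a larger dictionary by nearest-neighbor retrieval, as in the sketch below. vecmap's actual self-learning uses stochastic dictionary induction, CSLS-style hubness correction, and frequency-based vocabulary cutoffs, none of which are shown; names and the iteration count are illustrative.

```python
# Simplified self-learning loop (naive nearest neighbors, no CSLS).
import numpy as np

def procrustes(X, Z):
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt

def self_learning(X, Z, seed_src, seed_trg, n_iter=5):
    """X, Z: unit-normalized vocabulary matrices; seed_*: index arrays of an
    initial (e.g., 1k-pair) seed dictionary. Returns the final mapping W."""
    src_idx, trg_idx = np.asarray(seed_src), np.asarray(seed_trg)
    for _ in range(n_iter):
        W = procrustes(X[src_idx], Z[trg_idx])
        sims = (X @ W) @ Z.T            # cosine similarities (can be large!)
        trg_idx = sims.argmax(axis=1)   # nearest target word per source word
        src_idx = np.arange(X.shape[0]) # next round: the whole source vocab
    return W
```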
Contextualized Cross-Lingual Word Embeddings. We again evaluate the capacity
of (massively) multilingual pretrained language models, m-bert and xlm-100, to
reason over cross-lingual lexical similarity. Implicitly, such an evaluation also probes “the intrinsic quality” of the shared cross-lingual word-level vector spaces induced by these methods, and their ability to boost cross-
lingual transfer between different language pairs. We rely on the same
procedure of aggregating the models’ subword-level parameters into word-level
representations, already described in §7.1.
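For illustration, one simple way to obtain such static word-level vectors from m-bert is to encode each word in isolation and mean-pool its subword states, e.g., via the HuggingFace transformers library. The configurations actually benchmarked in §7.1 (layer choice, handling of special tokens, use of context) may differ; the checkpoint name and pooling below are just one plausible instantiation.

```python
# One plausible subword-to-word aggregation for m-bert (illustrative only).
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")
bert = BertModel.from_pretrained("bert-base-multilingual-uncased")
bert.eval()

def word_vector(word):
    """Encode a word in isolation and mean-pool its subword states."""
    batch = tok(word, return_tensors="pt")
    with torch.no_grad():
        states = bert(**batch).last_hidden_state[0]  # (seq_len, 768)
    return states[1:-1].mean(dim=0)  # drop [CLS]/[SEP], average subwords
```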
As in monolingual settings, we can apply unsupervised post-processing steps
such as abtt to both static and contextualized cross-lingual word embeddings.
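For reference, the abtt post-processor of Mu, Bhat, and Viswanath (2018) admits a very compact implementation: mean-center the space, then project out its d dominant principal directions. A minimal numpy sketch follows; in this article's notation, abtt (-3) and abtt (-10) correspond to d = 3 and d = 10.

```python
# All-but-the-top (abtt) sketch: +mc followed by top-d component removal.
import numpy as np

def abtt(X, d=10):
    """X: (n_words, dim) embedding matrix. Returns the post-processed space."""
    X = X - X.mean(axis=0)            # +mc: mean centering
    U, _, _ = np.linalg.svd(X.T @ X)  # columns of U: principal directions
    D = U[:, :d]                      # (dim, d) dominant directions
    return X - (X @ D) @ D.T          # project them out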
| cmn | cym | eng | est | fin | fra | heb | pol | rus | spa | swa | yue
---|---|---|---|---|---|---|---|---|---|---|---|---
cmn | | .076 | .348 | .139 | .154 | .392 | .190 | .207 | .227 | .300 | .049 | .484
cym | .041 | | .087 | .017 | .049 | .095 | .033 | .072 | .085 | .089 | .002 | .083
eng | .565 | .004 | | .168 | .159 | .401 | .171 | .182 | .236 | .309 | .014 | .357
est | .014 | .097 | .335 | | .143 | .161 | .100 | .113 | .083 | .134 | .025 | .124
fin | .049 | .020 | .542 | .530 | | .195 | .077 | .110 | .111 | .157 | .029 | .167
fra | .224 | .015 | .662 | .559 | .533 | | .191 | .229 | .297 | .382 | .038 | .382
heb | .202 | .110 | .516 | .465 | .445 | .469 | | .095 | .154 | .181 | .038 | .185
pol | .121 | .028 | .464 | .415 | .465 | .534 | .412 | | .139 | .183 | .013 | .205
rus | .032 | .037 | .511 | .408 | .476 | .529 | .430 | .390 | | .248 | .037 | .226
spa | .546 | .048 | .498 | .450 | .490 | .600 | .462 | .398 | .419 | | .055 | .313
swa | -.01 | .116 | .029 | .006 | .013 | -.05 | .033 | .052 | .035 | .045 | | .043
yue | .004 | .047 | .059 | .004 | .002 | .059 | .001 | .074 | .032 | .089 | -.02 |
Table 15: Spearman’s $\rho$ correlation scores on all 66 cross-lingual
datasets. 1) The scores below the main diagonal are computed based on cross-
lingual word embeddings (CLWEs) induced by aligning CC+Wiki ft in all
languages (except for yue where we use Wiki ft) in a fully unsupervised way
(i.e., without any bilingual supervision). We rely on a standard CLWE mapping-
based (i.e., alignment) approach: vecmap Artetxe, Labaka, and Agirre (2018b).
2) The scores above the main diagonal are computed by obtaining
768-dimensional word-level vectors from pretrained multilingual BERT (m-bert)
following the procedure described in §7.1. For both fully unsupervised vecmap
and m-bert, we report the results with unsupervised post-processing enabled: all $2\times 66$ reported scores are obtained using the +abtt (-10) variant.
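The evaluation protocol behind all of these numbers is itself simple and can be sketched in a few lines: score each cross-lingual Multi-SimLex pair by cosine similarity in the shared space and report Spearman's $\rho$ against the human ratings. Variable names and the input format below are illustrative.

```python
# Sketch of the scoring protocol: cosine similarity vs. human judgments.
import numpy as np
from scipy.stats import spearmanr

def evaluate(pairs, gold_scores, src_emb, trg_emb):
    """pairs: list of (src_word, trg_word); *_emb: dict word -> unit vector.
    Returns Spearman's rho over pairs covered by both vocabularies."""
    model, gold = [], []
    for (w1, w2), g in zip(pairs, gold_scores):
        if w1 in src_emb and w2 in trg_emb:   # skip out-of-vocabulary pairs
            model.append(float(np.dot(src_emb[w1], trg_emb[w2])))
            gold.append(g)
    rho, _ = spearmanr(model, gold)
    return rho
```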
Figure 6: Further performance analyses of cross-lingual Multi-SimLex datasets. (a) Spearman’s $\rho$ correlation scores averaged over all 66 cross-lingual Multi-SimLex datasets for two pretrained multilingual encoders (m-bert and xlm). The scores are obtained with different configurations that exclude (init) or enable unsupervised post-processing. (b) A comparison of various pretrained encoders available for the English-French language pair; see the main text for a short description of each benchmarked pretrained encoder.
### 8.2 Results and Discussion
Main Results and Differences across Language Pairs. A summary of the results on the 66 cross-lingual Multi-SimLex datasets is provided in Table 15 and Figure 6(a). They confirm several interesting findings from our previous monolingual experiments (§7.2), and also corroborate several hypotheses and findings from prior work, now on a large sample of language pairs and for the task of cross-lingual semantic similarity.
First, we observe that the fully unsupervised vecmap model, despite being the most robust fully unsupervised method at present, fails to produce a meaningful cross-lingual word vector space for a large number of language pairs (see the bottom triangle of Table 15): many correlation scores are in fact no-correlation results, accentuating the problem of fully unsupervised cross-lingual learning for typologically diverse languages and for smaller amounts of monolingual data Vulić et al. (2019). The scores are particularly low across the board for lower-resource languages such as Welsh and Kiswahili. It also seems that the lack of monolingual data is a larger problem than typological dissimilarity between language pairs, as we do observe reasonably high correlation scores with vecmap for language pairs such as cmn-spa, heb-est, and rus-fin. However, typological differences (e.g., morphological richness) still play an important role, as we observe very low scores when pairing cmn with morphologically rich languages such as fin, est, pol, and rus. Similar to the prior work of Vulić et al. (2019) and Doval et al. (2019), and given the fact that unsupervised vecmap is the most robust unsupervised CLWE method at present Glavaš et al. (2019), our results again question the usefulness of fully unsupervised approaches for a large number of languages, and call for further developments in the area of unsupervised and weakly supervised cross-lingual representation learning.
The scores of m-bert and xlm-100 lead to similar conclusions as in the monolingual settings. (The xlm-100 scores are not reported for brevity; they largely follow the patterns observed with m-bert, and the aggregated scores of the two encoders are also very similar, as indicated by Figure 6(a).) Reasonable correlation scores are achieved only for a small subset of resource-rich language pairs (e.g., eng, fra, spa, cmn) which dominate the multilingual m-bert training data. Interestingly, the scores indicate much higher performance for language pairs involving yue when we use m-bert instead of vecmap. This again boils down to the fact that yue, due to its specific language script, has a good representation of its words and subwords in the shared m-bert vocabulary, whereas a reliable vecmap mapping between yue and other languages cannot be found due to the small monolingual yue corpus. In cases where vecmap does not yield a degenerate cross-lingual vector space starting from two monolingual ones, its final correlation scores are substantially higher than the ones obtained with the single massively multilingual m-bert model.
Finally, the results in Figure 6(a) again verify the usefulness of unsupervised post-processing also in cross-lingual settings. We observe improved performance with both m-bert and xlm-100 when mean centering (+mc) is applied, and further gains can be achieved by using abtt on the mean-centered vector spaces. A similar finding also holds for static cross-lingual word embeddings, where applying abtt (-10) yields higher scores on 61/66 language pairs. (Note that vecmap performs mean centering by default as one of its preprocessing steps prior to learning the mapping function Artetxe, Labaka, and Agirre (2018b); Vulić et al. (2019).)
Fully Unsupervised vs. Weakly Supervised Cross-Lingual Embeddings. The results
in Table 15 indicate that fully unsupervised cross-lingual learning fails for
a large number of language pairs. However, recent work Vulić et al. (2019) has
noted that these sub-optimal non-alignment solutions with the unsuper model
can be avoided by relying on (weak) cross-lingual supervision spanning only
several thousands or even hundreds of word translation pairs. Therefore, we
examine 1) if we can further improve the results on cross-lingual Multi-SimLex by resorting to (at least some) cross-lingual supervision for resource-rich
language pairs; and 2) if such available word-level supervision can also be
useful for a range of languages which displayed near-zero performance in Table
15. In other words, we test if recent “tricks of the trade” used in the rich
literature on CLWE learning reflect in gains on cross-lingual Multi-SimLex
datasets.
First, we reassess the findings established on the bilingual lexicon induction
task Søgaard, Ruder, and Vulić (2018); Vulić et al. (2019): using at least
some cross-lingual supervision is always beneficial compared to using no
supervision at all. We report improvements over the unsuper model for all 10
language pairs in Table 16, even though the unsuper method initially produced
strong correlation scores. The importance of self-learning increases with
decreasing available seed dictionary size, and the +sl model always
outperforms unsuper with 1k seed pairs; we observe the same patterns also with
even smaller dictionary sizes than reported in Table 16 (250 and 500 seed
pairs). Along the same lines, the results in Table 17 indicate that at least
some supervision is crucial for the success of static CLWEs on resource-leaner
language pairs. We note substantial improvements on all language pairs; in
fact, the vecmap model is able to learn a more reliable mapping starting from
clean supervision. We again note large gains with self-learning.
| cmn-eng | eng-fra | eng-spa | eng-rus | est-fin | est-heb | fin-heb | fra-spa | pol-rus | pol-spa
---|---|---|---|---|---|---|---|---|---|---
unsuper | .565 | .662 | .498 | .511 | .510 | .465 | .445 | .600 | .390 | .398
super (1k) | .575 | .602 | .453 | .376 | .378 | .363 | .442 | .588 | .399 | .406
+sl (1k) | .577 | .703 | .547 | .548 | .591 | .513 | .488 | .639 | .439 | .456
super (5k) | .587 | .704 | .542 | .535 | .518 | .473 | .585 | .631 | .455 | .463
+sl (5k) | .581 | .707 | .548 | .551 | .556 | .525 | .589 | .645 | .432 | .476
Table 16: Results on a selection of cross-lingual Multi-SimLex datasets where the fully unsupervised (unsuper) CLWE variant yields reasonable performance. We also show the results with supervised vecmap without self-learning (super) and with self-learning (+sl), with two seed dictionary sizes: 1k and 5k pairs; see §8.1 for more detail. Highest scores for each language pair are in bold.
 | cmn-fin | cmn-rus | cmn-yue | cym-fin | cym-fra | cym-pol | fin-swa
---|---|---|---|---|---|---|---
unsuper | .049 | .032 | .004 | .020 | .015 | .028 | .013
super (1k) | .410 | .388 | .372 | .384 | .475 | .326 | .206
+sl (1k) | .590 | .537 | .458 | .471 | .578 | .380 | .264
Table 17: Results on a selection of cross-lingual Multi-SimLex datasets where
the fully unsupervised (unsuper) CLWE variant fails to learn a coherent shared
cross-lingual space. See also the caption of Table 16.
Multilingual vs. Bilingual Contextualized Embeddings. Similar to the
monolingual settings, we also inspect if massively multilingual training in
fact dilutes the knowledge necessary for cross-lingual reasoning on a
particular language pair. Therefore, we compare the 100-language xlm-100 model
with i) a variant of the same model trained on a smaller set of 17 languages
(xlm-17); ii) a variant of the same model trained specifically for the
particular language pair (xlm-2); and iii) a variant of the bilingual xlm-2
model that also leverages bilingual knowledge from parallel data during joint
training (xlm-2++). We again use the pretrained models made available by Conneau and Lample (2019), and we refer to the original work for further technical details.
The results are summarized in Figure 6(b), and they confirm the intuition that
massively multilingual pretraining can damage performance even on resource-
rich languages and language pairs. We observe a steep rise in performance when
the multilingual model is trained on a much smaller set of languages (17
versus 100), and further improvements can be achieved by training a dedicated
bilingual model. Finally, leveraging bilingual parallel data seems to offer
additional slight gains, but a tiny difference between xlm-2 and xlm-2++ also
suggests that this rich bilingual information is not used in the optimal way
within the xlm architecture for semantic similarity.
In summary, these results indicate that, in order to improve performance in
cross-lingual transfer tasks, more work should be invested into 1) pretraining
dedicated language pair-specific models, and 2) creative ways of leveraging
available cross-lingual supervision (e.g., word translation pairs, parallel or
comparable corpora) Liu et al. (2019a); Wu et al. (2019); Cao, Kitaev, and
Klein (2020) with pretraining paradigms such as bert and xlm. Using such
cross-lingual supervision could lead to similar benefits as indicated by the
results obtained with static cross-lingual word embeddings (see Table 16 and
Table 17). We believe that Multi-SimLex can serve as a valuable means to track
and guide future progress in this research area.
## 9 Conclusion and Future Work
We have presented Multi-SimLex, a resource containing human judgments on the semantic similarity of word pairs for 12 monolingual and 66 cross-lingual datasets. The languages covered are typologically diverse and also include under-resourced ones, such as Welsh and Kiswahili. The resource covers an unprecedented number of 1,888 word pairs, carefully balanced according to their similarity score, frequency, concreteness, part-of-speech class, and lexical field. In addition to Multi-SimLex, we release the detailed protocol we followed to create this resource. We hope that our consistent guidelines will encourage researchers to translate and annotate Multi-SimLex-style datasets for additional languages. This can help create a hugely valuable, large-scale semantic resource for multilingual NLP research.
The core Multi-SimLex we release with this paper already enables researchers to carry out novel linguistic analyses and establishes a benchmark for evaluating representation learning models. Based on our preliminary analyses,
we found that speakers of closely related languages tend to express equivalent
similarity judgments. In particular, geographical proximity seems to play a
greater role than family membership in determining the similarity of judgments
across languages. Moreover, we tested several state-of-the-art word embedding
models, both static and contextualized representations, as well as several
(supervised and unsupervised) post-processing techniques, on the newly
released Multi-SimLex. This enables future endeavors to improve multilingual
representation learning with challenging baselines. In addition, our results
provide several important insights for research on both monolingual and cross-
lingual word representations:
1) Unsupervised post-processing techniques (mean centering, elimination of top
principal components, adjusting similarity orders) are always beneficial
independently of the language, although the combination leading to the best
scores is language-specific and hence needs to be tuned.
2) Similarity rankings obtained from word embeddings for nouns are better
aligned with human judgments than all the other part-of-speech classes
considered here (verbs, adjectives, and, for the first time, adverbs). This
confirms previous generalizations based on experiments on English.
3) The factor having the greatest impact on the quality of word
representations is the availability of raw texts to train them in the first
place, rather than language properties (such as family, geographical area,
typological features).
4) Massively multilingual pretrained encoders such as m-bert (Devlin et al.,
2019) and xlm-100 (Conneau and Lample, 2019) fare quite poorly on our
benchmark, whereas pretrained encoders dedicated to a single language are more
competitive with static word embeddings such as fastText (Bojanowski et al.,
2017). Moreover, for language-specific encoders, parameter reduction
techniques reduce performance only marginally.
5) Techniques to inject clean lexical semantic knowledge from external resources into distributional word representations proved effective in emphasizing the relation of semantic similarity. In particular, methods capable of transferring such knowledge from resource-rich to resource-lean languages (Ponti et al., 2019c) increased the correlation with human judgments for most languages, except for those with limited unlabeled data.
Future work can expand our preliminary, yet large-scale study on the ability
of pretrained encoders to reason over word-level semantic similarity in
different languages. For instance, we have highlighted how sharing the same
encoder parameters across multiple languages may harm performance. However, it
remains unclear if, and to what extent, the input language embeddings present
in xlm-100 but absent in m-bert help mitigate this issue. In addition,
pretrained language embeddings can be obtained both from typological databases
(Littell et al., 2017) and from neural architectures (Malaviya, Neubig, and
Littell, 2017). Plugging these embeddings into the encoders in lieu of
embeddings trained end-to-end as suggested by prior work (Tsvetkov et al.,
2016; Ammar et al., 2016; Ponti et al., 2019b) might extend the coverage to
more resource-lean languages.
Another important follow-up analysis might involve the comparison of the
performance of representation learning models on multilingual datasets for
both word-level semantic similarity and sentence-level Natural Language
Understanding. In particular, Multi-SimLex fills a gap in available resources
for multilingual NLP and might help understand how lexical and compositional
semantics interact if put alongside existing resources such as XNLI Conneau et
al. (2018b) for natural language inference or PAWS-X Yang et al. (2019) for
cross-lingual paraphrase identification. Finally, the Multi-SimLex annotation
could turn out to be a unique source of evidence to study the effects of
polysemy in human judgments on semantic similarity: for equivalent word pairs
in multiple languages, are the similarity scores affected by how many senses
the two words (or multi-word expressions) incorporate?
In light of the success of initiatives like Universal Dependencies for
multilingual treebanks, we hope that making Multi-SimLex and its guidelines
available will encourage other researchers to expand our current sample of
languages. We particularly encourage the creation and submission of comparable Multi-SimLex datasets for under-resourced and typologically diverse languages in future work. To this end, we have made a Multi-SimLex community website available to facilitate the easy creation, gathering, dissemination, and use of annotated datasets: https://multisimlex.com/.
###### Acknowledgements.
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical
Acquisition Across Languages (no 648909). Thierry Poibeau is partly supported
by a PRAIRIE 3IA Institute fellowship ("Investissements d’avenir" program,
reference ANR-19-P3IA-0001).
## References
* Adams et al. (2017) Adams, Oliver, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In _Proceedings of EACL_ , pages 937–947.
* Agirre et al. (2009) Agirre, Eneko, Enrique Alfonseca, Keith Hall, Jana Kravalová, Marius Pasca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In _Proceedings of NAACL-HLT_ , pages 19–27.
* Aldarmaki and Diab (2019) Aldarmaki, Hanan and Mona Diab. 2019. Context-aware cross-lingual mapping. In _Proceedings of NAACL-HLT_ , pages 3906–3911.
* Alvarez-Melis and Jaakkola (2018) Alvarez-Melis, David and Tommi Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In _Proceedings of EMNLP_ , pages 1881–1890.
* Ammar et al. (2016) Ammar, Waleed, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. _Transactions of the ACL_ , 4:431–444.
* Artetxe, Labaka, and Agirre (2017) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In _Proceedings of ACL_ , pages 451–462.
* Artetxe, Labaka, and Agirre (2018a) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In _Proceedings of AAAI_ , pages 5012–5019.
* Artetxe, Labaka, and Agirre (2018b) Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In _Proceedings of ACL_ , pages 789–798.
* Artetxe et al. (2018) Artetxe, Mikel, Gorka Labaka, Iñigo Lopez-Gazpio, and Eneko Agirre. 2018. Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation. In _Proceedings of CoNLL_ , pages 282–291.
* Artetxe, Ruder, and Yogatama (2019) Artetxe, Mikel, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. _CoRR_ , abs/1910.11856.
* Baker, Fillmore, and Lowe (1998) Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In _Proceedings of ACL_ , pages 86–90.
* Baker, Reichart, and Korhonen (2014) Baker, Simon, Roi Reichart, and Anna Korhonen. 2014. An unsupervised model for instance level subcategorization acquisition. In _Proceedings of EMNLP_ , pages 278–289.
* Bapna and Firat (2019) Bapna, Ankur and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In _Proceedings of EMNLP_ , pages 1538–1548.
* Baroni et al. (2009) Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. _Language Resources and Evaluation_ , 43(3):209–226.
* Baroni and Lenci (2010) Baroni, Marco and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. _Computational Linguistics_ , 36(4):673–721.
* Barzegar et al. (2018) Barzegar, Siamak, Brian Davis, Manel Zarrouk, Siegfried Handschuh, and André Freitas. 2018. SemR-11: A multi-lingual gold-standard for semantic similarity and relatedness for eleven languages. In _Proceedings of LREC_ , pages 3912–3916.
* van den Berg et al. (2006) van den Berg, Robert A., Huub C.J. Hoefsloot, Johan A. Westerhuis, Age K. Smilde, and Mariët J. van der Werf. 2006. Centering, scaling, and transformations: Improving the biological information content of metabolomics data. _BMC Genomics_ , 7(1):142.
* Bjerva and Augenstein (2018) Bjerva, Johannes and Isabelle Augenstein. 2018. From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings. In _Proceedings of NAACL-HLT_ , pages 907–916.
* Bojanowski et al. (2017) Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. _Transactions of the ACL_ , 5:135–146.
* Bro and Smilde (2003) Bro, Rasmus and Age K. Smilde. 2003. Centering and scaling in component analysis. _Journal of Chemometrics_ , 17(1):16–33.
* Bruni, Tran, and Baroni (2014) Bruni, Elia, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. _Journal of Artificial Intelligence Research_ , 49:1–47.
* Budanitsky and Hirst (2006) Budanitsky, Alexander and Graeme Hirst. 2006. Evaluating WordNet-based measures of lexical semantic relatedness. _Computational Linguistics_ , 32(1):13–47.
* Camacho-Collados and Navigli (2017) Camacho-Collados, Jose and Roberto Navigli. 2017. BabelDomains: Large-scale domain labeling of lexical resources. In _Proceedings of EACL_ , pages 223–228.
* Camacho-Collados et al. (2017) Camacho-Collados, Jose, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. Semeval-2017 task 2: Multilingual and cross-lingual semantic word similarity. In _Proceedings of SEMEVAL_ , pages 15–26.
* Camacho-Collados, Pilehvar, and Navigli (2015) Camacho-Collados, José, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. A framework for the construction of monolingual and cross-lingual word similarity datasets. In _Proceedings of ACL_ , pages 1–7.
* Cao, Kitaev, and Klein (2020) Cao, Steven, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In _Proceedings of ICLR_.
* Chen and Manning (2014) Chen, Danqi and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In _Proceedings of EMNLP_ , pages 740–750.
* Chen and Cardie (2018) Chen, Xilun and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In _Proceedings of EMNLP_ , pages 261–270.
* Cimiano, Hotho, and Staab (2005) Cimiano, Philipp, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text corpora using formal concept analysis. _Journal of Artificial Intelligence Research_ , 24:305–339.
* Clark et al. (2020) Clark, Jonathan H., Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. _Transactions of the ACL_.
* Collobert and Weston (2008) Collobert, Ronan and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In _Proceedings of ICML_ , pages 160–167.
* Collobert et al. (2011) Collobert, Ronan, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. _Journal of Machine Learning Research_ , 12:2493–2537.
* Conneau et al. (2019) Conneau, Alexis, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. _CoRR_ , abs/1911.02116.
* Conneau and Lample (2019) Conneau, Alexis and Guillaume Lample. 2019. Cross-lingual language model pretraining. In _Proceedings of NeurIPS_ , pages 7057–7067.
* Conneau et al. (2018a) Conneau, Alexis, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. In _Proceedings of ICLR_.
* Conneau et al. (2018b) Conneau, Alexis, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In _Proceedings of EMNLP_ , pages 2475–2485.
* Coseriu (1967) Coseriu, Eugenio. 1967. Lexikalische solidaritäten. _Poetica_ , 1:293–303.
* Cruse (1986) Cruse, David Alan. 1986. _Lexical Semantics_. Cambridge University Press.
* De Deyne and Storms (2008) De Deyne, Simon and Gert Storms. 2008. Word associations: Network and semantic properties. _Behavior Research Methods_ , 40(1):213–231.
* Devlin et al. (2019) Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of NAACL-HLT_ , pages 4171–4186.
* Doitch et al. (2019) Doitch, Amichay, Ram Yazdi, Tamir Hazan, and Roi Reichart. 2019. Perturbation based learning for structured NLP tasks with application to dependency parsing. _Transactions of the ACL_ , 7:643–659.
* Doval et al. (2018) Doval, Yerai, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2018. Improving cross-lingual word embeddings by meeting in the middle. In _Proceedings of EMNLP_ , pages 294–304.
* Doval et al. (2019) Doval, Yerai, Jose Camacho-Collados, Luis Espinosa-Anke, and Steven Schockaert. 2019. On the robustness of unsupervised and semi-supervised cross-lingual word embedding learning. _CoRR_ , abs/1908.07742.
* Dryer (2013) Dryer, Matthew S. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, _The World Atlas of Language Structures Online_. Max Planck Institute for Evolutionary Anthropology, Leipzig.
* Ercan and Yıldız (2018) Ercan, Gökhan and Olcay Taner Yıldız. 2018. AnlamVer: Semantic model evaluation dataset for Turkish - Word similarity and relatedness. In _Proceedings of COLING_ , pages 3819–3836.
* Ethayarajh (2019) Ethayarajh, Kawin. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In _Proceedings of EMNLP_ , pages 55–65.
* Evans (2011) Evans, Nicholas. 2011. Semantic Typology. In _The Oxford Handbook of Linguistic Typology_. Oxford University Press, pages 504–533.
* Faruqui et al. (2015) Faruqui, Manaal, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In _Proceedings of NAACL-HLT_ , pages 1606–1615.
* Fellbaum (1998) Fellbaum, Christiane. 1998. _WordNet_. MIT Press.
* Finkelstein et al. (2002) Finkelstein, Lev, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. _ACM Transactions on Information Systems_ , 20(1):116–131.
* Firth (1957) Firth, John R. 1957. A synopsis of linguistic theory, 1930-1955. _Studies in linguistic analysis_.
* François (2008) François, Alexandre. 2008. Semantic maps and the typology of colexification. _From polysemy to semantic change: Towards a typology of lexical semantic associations_ , 106:163.
* Gerz et al. (2016) Gerz, Daniela, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016\. SimVerb-3500: A large-scale evaluation set of verb similarity. In _Proceedings of EMNLP_ , pages 2173–2182.
* Glavaš, Ponti, and Vulić (2019) Glavaš, Goran, Edoardo Maria Ponti, and Ivan Vulić. 2019. Semantic specialization of distributional word vectors. In _Proceedings of EMNLP: Tutorial Abstracts_.
* Glavaš and Vulić (2018) Glavaš, Goran and Ivan Vulić. 2018. Discriminating between lexico-semantic relations with the specialization tensor model. In _Proceedings of NAACL-HLT_ , pages 181–187.
* Glavaš and Vulić (2018) Glavaš, Goran and Ivan Vulić. 2018. Explicit retrofitting of distributional word vectors. In _Proceedings of ACL_ , pages 34–45.
* Glavaš et al. (2019) Glavaš, Goran, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In _Proceedings of ACL_ , pages 710–721.
* Grave et al. (2018) Grave, Edouard, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In _Proceedings of LREC_ , pages 3483–3487.
* Gruber (1976) Gruber, Jeffrey. 1976. _Lexical Structures in Syntax and Semantics_ , volume 25. North-Holland.
* Harris (1951) Harris, Zellig S. 1951. _Methods in Structural Linguistics_. University of Chicago Press.
* Hill et al. (2016) Hill, Felix, KyungHyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. _Transactions of the ACL_ , 4:17–30.
* Hill, Reichart, and Korhonen (2015) Hill, Felix, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. _Computational Linguistics_ , 41(4):665–695.
* Hoshen and Wolf (2018) Hoshen, Yedid and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In _Proceedings of EMNLP_ , pages 469–478.
* Huang et al. (2019) Huang, Junjie, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, and Maosong Sun. 2019. COS960: A Chinese word similarity dataset of 960 word pairs. _CoRR_ , abs/1906.00247.
* Joulin et al. (2018) Joulin, Armand, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In _Proceedings of EMNLP_ , pages 2979–2984.
* Kamath et al. (2019) Kamath, Aishwarya, Jonas Pfeiffer, Edoardo Maria Ponti, Goran Glavaš, and Ivan Vulić. 2019. Specializing distributional vectors of all words for lexical entailment. In _Proceedings of the 4th Workshop on Representation Learning for NLP_ , pages 72–83.
* Kamholz, Pool, and Colowick (2014) Kamholz, David, Jonathan Pool, and Susan M. Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In _Proceedings of LREC_ , pages 3145–3150.
* Kay and Maffi (2013) Kay, Paul and Luisa Maffi. 2013. Green and blue. In Matthew S. Dryer and Martin Haspelmath, editors, _The World Atlas of Language Structures Online_. Max Planck Institute for Evolutionary Anthropology, Leipzig.
* Kiela and Clark (2014) Kiela, Douwe and Stephen Clark. 2014. A systematic study of semantic vector space model parameters. In _Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)_ , pages 21–30.
* Kiela, Hill, and Clark (2015) Kiela, Douwe, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In _Proceedings of EMNLP_ , pages 2044–2048.
* Kim, de Marneffe, and Fosler-Lussier (2016) Kim, Joo-Kyung, Marie-Catherine de Marneffe, and Eric Fosler-Lussier. 2016. Adjusting word embeddings with semantic intensity orders. In _Proceedings of the 1st Workshop on Representation Learning for NLP_ , pages 62–69.
* Kim et al. (2016) Kim, Joo-Kyung, Gokhan Tur, Asli Celikyilmaz, Bin Cao, and Ye-Yi Wang. 2016. Intent detection using semantically enriched word embeddings. In _SLT_.
* Kipper et al. (2008) Kipper, Karin, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of English verbs. _Language Resources and Evaluation_ , 42(1):21–40.
* Kipper, Snyder, and Palmer (2004) Kipper, Karin, Benjamin Snyder, and Martha Palmer. 2004. Extending a verb-lexicon using a semantically annotated corpus. In _Proceedings of LREC_.
* Kipper Schuler (2005) Kipper Schuler, Karin. 2005. _VerbNet: A broad-coverage, comprehensive verb lexicon_. Ph.D. thesis, University of Pennsylvania.
* Kondratyuk and Straka (2019) Kondratyuk, Dan and Milan Straka. 2019. 75 languages, 1 model: Parsing Universal Dependencies universally. In _Proceedings of EMNLP-IJCNLP_ , pages 2779–2795.
* Lan et al. (2020) Lan, Zhenzhong, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In _Proceedings of ICLR_.
* Lauscher et al. (2019) Lauscher, Anne, Ivan Vulić, Edoardo Maria Ponti, Anna Korhonen, and Goran Glavaš. 2019. Informing unsupervised pretraining with external linguistic knowledge. _arXiv preprint arXiv:1909.02339_.
* Lazaridou, Dinu, and Baroni (2015) Lazaridou, Angeliki, Georgiana Dinu, and Marco Baroni. 2015. Hubness and pollution: Delving into cross-space mapping for zero-shot learning. In _Proceedings of ACL_ , pages 270–280.
* Le et al. (2019) Le, Hang, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2019. FlauBERT: Unsupervised language model pre-training for French. _CoRR_ , abs/1912.05372.
* Leviant and Reichart (2015) Leviant, Ira and Roi Reichart. 2015. Separated by an un-common language: Towards judgment language informed vector space modeling. _CoRR_ , abs/1508.00106.
* Levin (1993) Levin, Beth. 1993. _English verb classes and alternation, A preliminary investigation_. The University of Chicago Press.
* Levy and Goldberg (2014) Levy, Omer and Yoav Goldberg. 2014. Dependency-based word embeddings. In _Proceedings of ACL_ , pages 302–308.
* Lewis et al. (2019) Lewis, Patrick S. H., Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating cross-lingual extractive question answering. _CoRR_ , abs/1910.07475.
* Lin et al. (2019) Lin, Yu-Hsiang, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In _Proceedings of ACL_ , pages 3125–3135.
* Littell et al. (2017) Littell, Patrick, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In _Proceedings of EACL_ , pages 8–14.
* Liu et al. (2019a) Liu, Qianchu, Diana McCarthy, Ivan Vulić, and Anna Korhonen. 2019a. Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation. In _Proceedings of CoNLL_ , pages 33–43.
* Liu et al. (2019b) Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. _CoRR_ , abs/1907.11692.
* Lucas (2000) Lucas, Margery. 2000. Semantic priming without association: A meta-analytic review. _Psychonomic Bulletin & Review_, 7(4):618–630.
* Luong, Socher, and Manning (2013) Luong, Thang, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In _Proceedings of CoNLL_ , pages 104–113.
* Lyons (1977) Lyons, John. 1977. _Semantics_ , volume 2. Cambridge University Press.
* Majid et al. (2007) Majid, Asifa, Melissa Bowerman, Miriam van Staden, and James S Boster. 2007. The semantic categories of cutting and breaking events: A crosslinguistic perspective. _Cognitive Linguistics_ , 18(2):133–152.
* Malaviya, Neubig, and Littell (2017) Malaviya, Chaitanya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In _Proceedings of EMNLP_ , pages 2529–2535.
* Mantel (1967) Mantel, Nathan. 1967. The detection of disease clustering and a generalized regression approach. _Cancer Research_ , 27(2 Part 1):209–220.
* McKeown et al. (2002) McKeown, Kathleen R., Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with Columbia’s newsblaster. In _Proceedings of HLT_ , page 280–285.
* Melamud et al. (2016) Melamud, Oren, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In _Proceedings of NAACL-HLT_ , pages 1030–1040.
* Mikolov et al. (2018) Mikolov, Tomas, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In _Proceedings of LREC_.
* Mikolov, Le, and Sutskever (2013) Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. _arXiv preprint, CoRR_ , abs/1309.4168.
* Mikolov et al. (2013) Mikolov, Tomas, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In _Proceedings of NeurIPS_ , pages 3111–3119.
* Miller (1995) Miller, George A. 1995. WordNet: A lexical database for English. _Communications of the ACM_ , pages 39–41.
* Mimno and Thompson (2017) Mimno, David and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In _Proceedings of EMNLP_ , pages 2873–2878.
* Mohiuddin and Joty (2019) Mohiuddin, Tasnim and Shafiq Joty. 2019. Revisiting adversarial autoencoder for unsupervised word translation with cycle consistency and improved training. In _Proceedings of NAACL-HLT_ , pages 3857–3867.
* Mrkšić et al. (2016) Mrkšić, Nikola, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In _Proceedings of NAACL-HLT_ , pages 142–148.
* Mrkšić et al. (2017) Mrkšić, Nikola, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. _Transactions of the ACL_ , 5:309–324.
* Mu, Bhat, and Viswanath (2018) Mu, Jiaqi, Suma Bhat, and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In _Proceedings of ICLR_.
* Mykowiecka, Marciniak, and Rychlik (2018) Mykowiecka, Agnieszka, Małgorzata Marciniak, and Piotr Rychlik. 2018. SimLex-999 for Polish. In _Proceedings of LREC_.
* Nelson, McEvoy, and Schreiber (2004) Nelson, Douglas L., Cathy L. McEvoy, and Thomas A. Schreiber. 2004. The University of South Florida free association, rhyme, and word fragment norms. _Behavior Research Methods_ , 36(3):402–407.
* Netisopakul, Wohlgenannt, and Pulich (2019) Netisopakul, Ponrudee, Gerhard Wohlgenannt, and Aleksei Pulich. 2019. Word similarity datasets for Thai: Construction and evaluation. _CoRR_ , abs/1904.04307.
* Nivre et al. (2019) Nivre, Joakim, Mitchell Abrams, Željko Agić, Lars Ahrenberg, Gabrielė Aleksandravičiūtė, Lene Antonsen, Katya Aplonova, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, et al. 2019. Universal Dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
* Pearson (1901) Pearson, Karl. 1901. On lines and planes of closest fit to systems of points in space. _The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science_ , 2(11):559–572.
* Peters et al. (2018) Peters, Matthew, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of NAACL-HLT_ , pages 2227–2237.
* Pilehvar et al. (2018) Pilehvar, Mohammad Taher, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. 2018. Card-660: Cambridge rare word dataset - a reliable benchmark for infrequent word representation models. In _Proceedings of EMNLP_ , pages 1391–1401.
* Pires, Schlinger, and Garrette (2019) Pires, Telmo, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In _Proceedings of ACL_ , pages 4996–5001.
* Ponti et al. (2019a) Ponti, Edoardo Maria, Helen O’Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, and Anna Korhonen. 2019a. Modeling language variation and universals: A survey on typological linguistics for natural language processing. _Computational Linguistics_ , 45(3):559–601.
* Ponti et al. (2018a) Ponti, Edoardo Maria, Roi Reichart, Anna Korhonen, and Ivan Vulić. 2018a. Isomorphic transfer of syntactic structures in cross-lingual nlp. In _Proceedings of ACL_ , pages 1531–1542.
* Ponti et al. (2019b) Ponti, Edoardo Maria, Ivan Vulić, Ryan Cotterell, Roi Reichart, and Anna Korhonen. 2019b. Towards zero-shot language modeling. In _Proceedings of EMNLP-IJCNLP_ , pages 2893–2903.
* Ponti et al. (2018b) Ponti, Edoardo Maria, Ivan Vulić, Goran Glavaš, Nikola Mrkšić, and Anna Korhonen. 2018b. Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization. In _Proceedings of EMNLP_ , pages 282–293.
* Ponti et al. (2019c) Ponti, Edoardo Maria, Ivan Vulić, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019c. Cross-lingual semantic specialization via lexical relation induction. In _Proceedings of EMNLP_ , pages 2206–2217.
* Radovanović, Nanopoulos, and Ivanović (2010) Radovanović, Miloš, Alexandros Nanopoulos, and Mirjana Ivanović. 2010. Hubs in space: Popular nearest neighbors in high-dimensional data. _Journal of Machine Learning Research_ , 11:2487–2531.
* Rasooli and Collins (2017) Rasooli, Mohammad Sadegh and Michael Collins. 2017. Cross-lingual syntactic transfer with limited resources. _Transactions of the ACL_ , 5:279–293.
* Ren et al. (2018) Ren, Liliang, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In _Proceedings of EMNLP_ , pages 2780–2786.
* Rotman and Reichart (2019) Rotman, Guy and Roi Reichart. 2019. Deep contextualized self-training for low resource dependency parsing. _Transactions of the ACL_ , 7:695–713.
* Ruder, Søgaard, and Vulić (2019) Ruder, Sebastian, Anders Søgaard, and Ivan Vulić. 2019. Unsupervised cross-lingual representation learning. In _Proceedings of ACL: Tutorial Abstracts_ , pages 31–38.
* Ruder, Vulić, and Søgaard (2019) Ruder, Sebastian, Ivan Vulić, and Anders Søgaard. 2019. A survey of cross-lingual embedding models. _Journal of Artificial Intelligence Research_ , 65:569–631.
* Rzymski et al. (2020) Rzymski, Christoph, Tiago Tresoldi, Simon J Greenhill, Mei-Shin Wu, Nathanael E Schweikhard, Maria Koptjevskaja-Tamm, Volker Gast, Timotheus A Bodt, Abbie Hantgan, Gereon A Kaiping, et al. 2020. The database of cross-linguistic colexifications, reproducible analysis of cross-linguistic polysemies. _Scientific Data_ , 7(1):1–12.
* Sakaizawa and Komachi (2018) Sakaizawa, Yuya and Mamoru Komachi. 2018. Construction of a Japanese word similarity dataset. In _Proceedings of LREC_ , pages 948–951.
* Schlechtweg et al. (2019) Schlechtweg, Dominik, Anna Hätty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. A wind of change: Detecting and evaluating lexical semantic change across times and domains. In _Proceedings of ACL_ , pages 732–746.
* Schuster and Nakajima (2012) Schuster, Mike and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In _International Conference on Acoustics, Speech and Signal Processing_ , pages 5149–5152.
* Schwartz, Reichart, and Rappoport (2015) Schwartz, Roy, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In _Proceedings of CoNLL_ , pages 258–267.
* Schwartz, Reichart, and Rappoport (2016) Schwartz, Roy, Roi Reichart, and Ari Rappoport. 2016. Symmetric patterns and coordinations: Fast and enhanced representations of verbs and adjectives. In _Proceedings of NAACL-HLT_ , pages 499–505.
* Smith et al. (2017) Smith, Samuel L., David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017\. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In _Proceedings of ICLR (Conference Track)_.
* Snyder and Barzilay (2010) Snyder, Benjamin and Regina Barzilay. 2010. Climbing the tower of Babel: Unsupervised multilingual learning. In _Proceedings of ICML_ , pages 29–36.
* Søgaard, Ruder, and Vulić (2018) Søgaard, Anders, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In _Proceedings of ACL_ , pages 778–788.
* Suzuki et al. (2013) Suzuki, Ikumi, Kazuo Hara, Masashi Shimbo, Marco Saerens, and Kenji Fukumizu. 2013. Centering similarity measures to reduce hubs. In _Proceedings of EMNLP_ , pages 613–623.
* Tang, Mousavi, and de Sa (2019) Tang, Shuai, Mahta Mousavi, and Virginia R. de Sa. 2019. An empirical study on post-processing methods for word embeddings. _CoRR_ , abs/1905.10971.
|
2024-09-04T02:54:58.838230 | 2020-03-10T17:42:28 | 2003.04875 | {
"authors": "Manuel Schilling, \\'Etienne Wodey, Ludger Timmen, Dorothee Tell, Klaus\n H. Zipfel, Dennis Schlippert, Christian Schubert, Ernst M. Rasel, J\\\"urgen\n M\\\"uller",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26142",
"submitter": "Manuel Schilling",
"url": "https://arxiv.org/abs/2003.04875"
} | arxiv-papers | # Gravity field modelling for the Hannover $10\text{\,}\mathrm{m}$ atom
interferometer
Manuel Schilling¹·², Étienne Wodey³, Ludger Timmen², Dorothee Tell³, Klaus H. Zipfel³, Dennis Schlippert³, Christian Schubert¹·³, Ernst M. Rasel³, Jürgen Müller²

¹ German Aerospace Center (DLR), Institute for Satellite Geodesy and Inertial Sensing, c/o Leibniz Universität Hannover, DLR-Institut, Welfengarten 1, 30167 Hannover, Germany
² Leibniz Universität Hannover, Institut für Erdmessung, Schneiderberg 50, 30167 Hannover, Germany
³ Leibniz Universität Hannover, Institut für Quantenoptik, Welfengarten 1, 30167 Hannover, Germany
(This is a post-peer-review, pre-copyedit version of an article published in
Journal of Geodesy 94:122. The final authenticated version is available online
at: https://dx.doi.org/10.1007/s00190-020-01451-y)
## Abstract

Absolute gravimeters are used in geodesy, geophysics, and physics for a wide
spectrum of applications. Stable gravimetric measurements over timescales from
several days to decades are required to provide relevant insight into
geophysical processes. Users of absolute gravimeters participate in
comparisons with a metrological reference in order to monitor the temporal
stability of the instruments and determine the bias to that reference.
However, since no measurement standard of higher-order accuracy currently
exists, users of absolute gravimeters participate in key comparisons led by
the International Committee for Weights and Measures. These comparisons
provide the reference values of highest accuracy compared to the calibration
against a single gravimeter operated at a metrological institute. The
construction of stationary, large scale atom interferometers paves the way
towards a new measurement standard in absolute gravimetry used as a reference
with a potential stability up to
$1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $1\text{\,}\mathrm{s}$
integration time. At the Leibniz University Hannover, we are currently
building such a very long baseline atom interferometer with a
$10\text{\,}\mathrm{m}$ long interaction zone. The knowledge of local gravity
and its gradient along and around the baseline is required to establish the
instrument’s uncertainty budget and enable transfers of gravimetric
measurements to nearby devices for comparison and calibration purposes. We
therefore established a control network for relative gravimeters and
repeatedly measured its connections during the construction of the atom
interferometer. We additionally developed a 3D model of the host building to
investigate the self-attraction effect and studied the impact of mass changes
due to groundwater hydrology on the gravity field around the reference
instrument. The gravitational effect from the building 3D model is in
excellent agreement with the latest gravimetric measurement campaign which
opens the possibility to transfer gravity values with an uncertainty below the
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level.
Keywords: atom interferometry, gravity acceleration, absolute gravimetry,
gravimeter reference
## Introduction
A variety of applications in geodesy, geophysics and physics require the
knowledge of local gravity g [67]. These applications include observing
temporal variations of the mass distribution in the hydrosphere, atmosphere
and cryosphere and furthermore the establishment and monitoring of height and
gravity reference frames, the determination of glacial isostatic adjustment,
and the realisation of SI111Système International d’unités units, e. g., of
force and mass [32, 28, 54]. The absolute value of gravity g is usually
measured by tracking the free-fall of a test mass using a laser interferometer
[33]. The operation of an absolute gravimeter (AG), especially the combination
of several instruments in a project, requires special consideration of the
offset to _true g_ and the change thereof. In addition, the long-term
stability of absolute gravimeters is of particular relevance when measuring
small gravity trends. For example, the determination of the glacial isostatic
adjustment (GIA) on regional scales of around $1000\text{\,}\mathrm{km}$ [63]
requires an instrument stable to the
$20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level over several years.
Extending this effort by deploying several AGs also requires the knowledge of
the biases of all the instruments involved [36]. The lack of a calibration
service with a $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ uncertainty requires the participation in key comparisons (KC) [e. g. 9]
where the reference values are determined with an uncertainty of approximately
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. This uncertainty level
requires the participation of multiple gravimeters and cannot be achieved by
comparison against a single gravimeter operated at a metrological institute.
However, the development of stationary atom interferometers, which can be
operated as gravimeters, so-called quantum gravimeters (QG), may provide such a superior reference in the future, available for regular comparisons or on demand by the user.
systematic effects like wavefront aberration or the Coriolis effect. In this
paper, we focus on the modelling and measurement of the local gravity field.
We start by discussing the typical approaches for monitoring the long-term
stability of an AG and tracing the measurements back to the SI (section 2).
Then, after briefly describing the working principle of atomic gravimeters and
the case for very long baseline atom interferometry (section 3), we present a
gravity model for the Hannover Very Long Baseline Atom Interferometry
(Hannover-VLBAI) facility, a new $10\text{\,}\mathrm{m}$-scale baseline atom
interferometer in commissioning at the Leibniz University Hannover (section
4). Finally, we present the micro-gravimetric surveys performed at the
instrument’s site (section 5) to assess the accuracy of the gravity model
(section 6). This paves the way towards control of the systematics in the atom
interferometer and accurate transfers of measured g values between the VLBAI
operating as a gravimeter and transportable AGs in a nearby laboratory.
## Gravimeter bias and SI traceability
Micro-g LaCoste FG5(X) [34] instruments represent the current state of the art
in absolute gravimetry. They track the trajectories of a free-falling test
mass with corner cubes by means of laser interferometry to determine the local
acceleration of gravity g. These types of absolute gravimeters are referred to
as _classical absolute gravimeters_ in the following text.
As described by the 2015 CCM-IAG (Consultative Committee for Mass and related quantities – International Association of Geodesy) Strategy for Metrology in Absolute Gravimetry [5], there are two complementary paths for the
traceability of absolute gravity measurements: a) calibration of incorporated
frequency generators and b) additional gravimeter comparisons against a
reference. The direct way of tracing absolute gravity measurements back to the
SI goes through the calibration of their incorporated laser and oscillator to
standards of length and time [68]. In high-accuracy instruments, the laser
frequency is typically locked to a standard transition of molecular iodine [6,
44]. The time reference is usually given by a rubidium oscillator which needs
to be regularly compared with a reference oscillator to ensure its accuracy as
external higher-accuracy time sources are typically not available at
measurement sites. In most cases, the oscillator’s frequency drift is linear
$($<0.5\text{\,}\mathrm{mHz}\text{/}\mathrm{month}$\text{ or
}$<1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$/$\mathrm{month}$)$ and a
few calibrations per year are sufficient. However, [30] and [53] report on
sudden jumps in frequency333Current publications refer to the Microsemi
(formerly Symmetricon) SA.22c rubidium oscillator equivalent to several tens
of $\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ due to increased concentrations of
gaseous helium [43] when measuring near superconducting gravimeters. Such
higher concentrations might occur after installation, maintenance, or repair
of a superconducting gravimeter and are unlikely during normal operation. The
frequency drift changes to an exponential decrease after the helium event and
may remain this way for years [51].
Figure 1: Degree of Equivalence (DoE) of joint participants of EURAMET.M.G-K1 [10], CCM.G-K2 [11], EURAMET.M.G-K2 [37] and EURAMET.M.G-K3 [9]. The participants are sorted by DoE of the first KC. The expanded uncertainty is given only for the last KC. Pilot Study (PS) indicates instruments of non-NMI/DI institutions. All AGs shown are laser interferometers, of which eight are FG5(X)-type instruments.
The equivalence of gravity measurement standards and the definition of the
gravity reference are established by international comparisons in the
framework of the CIPM MRA444Mutual Recognition Agreement of the Comité
International des Poids et Mesures. Since no higher-order reference instrument
is available, key comparisons are held in an approximately two-year interval,
alternating between CIPM key comparisons and regional comparisons. There, the
instruments operated by National Metrology Institutes (NMI) and Designated
Institutes (DI) are used to determine the Key Comparison Reference Value
(KCRV). The bias to the KCRV, or Degree of Equivalence (DoE) is then
calculated for all individual instruments, including those without NMI/DI
status participating in the so-called pilot study (PS), and serves as
validation for their uncertainty.
Figure 1 shows the common participants, out of a total number of 35
gravimeters participating in the comparisons, to the last four KC held in
Europe [10, 11, 37, 9]. One observes that the spread of DoE over all
instruments is around $\pm 75\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$,
and at a similar level for the most extreme cases of individual instruments.
Even though the DoEs of the instruments in these comparisons are typically
within the uncertainties declared by the participants, figure 1 also shows the
necessity of determining these biases of gravimeters, classical and quantum
alike, to monitor an instrument’s stability in time. Biases can then be taken
into account in gravimetric projects. The variation of the bias of an
instrument can be explained by a variety of factors. For example, [35] show
that a permanent change in the bias of a classical AG can occur during
manufacturer service or unusual transport conditions (e. g. aviation
transport). Also, [25, 26] identified, characterised and partially removed
biases originating in the signal processing chain of FG5 gravimeters, e. g.
due to cable length and fringe signal amplitude.
Regional KCs are linked to a CIPM KC by a small number of common NMI/DI
participants applying the so-called linking converter [22], typically around $\pm 10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The underlying
assumption is that instrumental biases of the NMI/DI instruments remain stable
[8]. Otherwise, this would introduce an additional shift in the bias of all
participating instruments of the regional KC and PS.
Quantum gravimeters, based on matter wave interferometry with cold atoms,
offer a fully independent design. They have demonstrated stabilities and
accuracies at levels comparable to those from state of the art classical AGs
by participating in KCs [16, 23] or common surveys with other instruments at
various locations [13, 51]. The availability of improved QGs as gravity
references provides an opportunity to enhance the stability of reference
values obtained during key comparisons and therefore lead to an international
gravity datum of better stability in time. This alone could make QGs a serious alternative to classical absolute gravimeters.
## Very long baseline atomic gravimetry
### Atom-interferometric gravimetry
Most atomic gravimeters use cold matter waves as free-falling test masses to
measure absolute gravity. They exploit the coherent manipulation of the
external degrees of freedom of these atomic test masses with light pulses to
realise interferometers sensitive to inertial quantities and other forces.
These techniques are for example used to perform precision measurements of
fundamental constants [49, 3, 39], test fundamental physics [55, 48, 21],
sense small forces [2] and perform gravimetry, gravity-gradiometry, and
measure rotations with record instabilities and inaccuracies [31, 12, 15, 72,
50, 57].
Figure 2: Mach–Zehnder light-pulse atom interferometer geometry in a uniform
acceleration field $\mathbf{a}$. At time $t_{0}$, the atomic matter wave is put in a superposition of momenta $p$ and $p+\hbar k_{\mathrm{eff}}$. The
momenta are reversed at time $t_{0}+T$ to recombine the wave packets with a
last light pulse at time $t_{0}+2T$. The populations in the two momentum
classes after the last light pulse allow extracting the interferometric phase
$\Delta\phi$.
Atomic gravimeters typically realise the Mach–Zehnder light-pulse atom
interferometer geometry [24] depicted in figure 2. In this analogon to the
eponymous configuration for optical interferometers, the leading-order
interferometric phase $\Delta\phi$ scales with the space-time area enclosed by
the interferometer:
$\Delta\phi=\mathbf{k}_{\mathrm{eff}}\cdot\mathbf{a}T^{2}$ (1)
where $\hbar\mathbf{k}_{\mathrm{eff}}$ is the recoil transfered to the atomic
wave packets by the atom-light interaction processes (cf. figure 2, $\hbar$ is
the reduced Planck constant and $\mathbf{k}_{\mathrm{eff}}$ the effective
optical wave vector), $\mathbf{a}$ the uniform acceleration experienced by the
atoms during the interferometric sequence, and $T$ the pulse separation time.
The full interferometer has a duration of $2T$. The knowledge of the
instrument’s scale factor $k_{\mathrm{eff}}T^{2}$ and the measurement of the
phase $\Delta\phi$ allow determining the projection of the acceleration
$\mathbf{a}$ along $\mathbf{k}_{\mathrm{eff}}$. When
$\mathbf{k}_{\mathrm{eff}}$ is parallel to $\mathbf{g}$, such an instrument
can therefore be used as a gravimeter, measuring the total vertical
acceleration of the matter waves used as test masses.
The Mach–Zehnder light-pulse atom interferometer works as follows. For each
interferometric sequence, a sample of cold atoms is prepared in a time
$T_{p}$. Then, at time $t=t_{0}$, the first atom-light interaction pulse puts
the matter wave in a superposition of quantum states with different momenta
$\mathbf{p}$ and $\mathbf{p}+\hbar\mathbf{k}_{\mathrm{eff}}$, thus effectively
creating two distinct semi-classical trajectories. At time $t=t_{0}+T$, a
second atom-light interaction process redirects the two atomic trajectories to
allow closing the interferometer at time $t=t_{0}+2T$ with a third light
pulse. Counting the population of atoms in the two momentum states provides an
estimation of the interferometric phase $\Delta\phi$. Finally, the cycle of
preparation of the cold atoms, coherent manipulation of the matter waves, and
detection is repeated. Since the atom-light interaction imprints the local
phase of the light on the matter waves, the above measurement principle can be
interpreted as measuring the successive positions of a free-falling matter
wave at known times $t_{0}$, $t_{0}+T$, and $t_{0}+2T$ with respect to the
light field. The inertial reference frame for the measurement system, similar
to the superspring in FG5(X) gravimeters, is usually realised by a mirror
retro-reflecting the light pulses, creating well-defined equiphase fronts.
Practically, the interferometric phase $\Delta\phi$ is scanned by accelerating
the optical wave fronts at a constant rate $\alpha$, effectively continuously
tuning the differential velocity between the matter waves and the optical
equiphase fronts. Assuming that $\mathbf{k}_{\mathrm{eff}}$ and $\mathbf{a}$
are parallel, the interferometric phase reads:
$\Delta\phi=k_{\mathrm{eff}}\left(a-\frac{\alpha}{k_{\mathrm{eff}}}\right)T^{2}\ .$ (2)
When $\alpha=k_{\mathrm{eff}}a$, the interferometric phase vanishes
independently of the interferometer’s duration $2T$, allowing to unambiguously
identify this operation point. Physically, $\alpha=k_{\mathrm{eff}}a$ exactly
compensates the Doppler effect experienced by the atomic matter waves due to
the acceleration $a$. Therefore, the measurement of the acceleration $a$
amounts to a measurement of the acceleration rate $\alpha$ which can be traced
back to the SI since it corresponds to frequency generation in the radio-
frequency domain.
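
The chirp-rate scan lends itself to a compact numerical illustration. The following minimal Python sketch uses assumed parameters (a ⁸⁷Rb Raman transition at 780 nm and two drop-style pulse separation times; none of these numbers are taken from the apparatus described here) and recovers the acceleration from the fringe that stays dark for every $T$:

```python
import numpy as np

# Assumed illustrative parameters: 87Rb two-photon Raman transition.
k_eff = 4 * np.pi / 780e-9          # effective wave number, m^-1
a_true = 9.81                       # vertical acceleration, m/s^2

def fringe(alpha, T):
    """Normalised transition probability for chirp rate alpha, cf. equation (2)."""
    dphi = k_eff * (a_true - alpha / k_eff) * T**2
    return (1 - np.cos(dphi)) / 2

# Only the central fringe (alpha = k_eff * a) is dark for *every* pulse
# separation time T, which identifies the operation point unambiguously.
alphas = np.linspace(0.9999, 1.0001, 20001) * k_eff * a_true
signal = fringe(alphas, 0.2) + fringe(alphas, 0.4)
alpha_0 = alphas[np.argmin(signal)]
print(f"recovered a = {alpha_0 / k_eff:.6f} m/s^2")    # -> 9.810000
```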
Assuming white noise at a level $\delta\phi$ for the detection of the
interferometric phase, the instrument’s instability is given by:
$\delta a(\tau)=\sqrt{2T+T_{p}}\cdot\frac{\delta\phi}{k_{\mathrm{eff}}T^{2}}\cdot\frac{1}{\sqrt{\tau}}\ .$ (3)
where $\tau$ is the measurement’s integration time. This expression reveals
the three levers for reducing the measurement instability: decreasing the
single shot noise level $\delta\phi$, increasing the scale factor
$k_{\mathrm{eff}}T^{2}$, and minimising the sample preparation time $T_{p}$,
as it contributes to the total cycle time without providing phase information.
In transportable devices, record instabilities have been achieved by [12] with
$\delta a=96\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $\tau=1\text{\,}\mathrm{s}$. Commercial instruments like the Muquans AQG [31] reached instabilities of $500\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at $\tau=1\text{\,}\mathrm{s}$ with sample rates up to $2\text{\,}\mathrm{Hz}$.
The dominant noise source is vibrations of the mirror realising the reference
frame for the measurements.
The accuracy of such quantum gravimeters stems from the well-controlled
interaction between the test masses and their environment during the
measurement sequence. The main sources of inaccuracy in such instruments
originate from uncertainties in the atom-light interaction parameters (e. g.
imperfections of the equiphase fronts of the light wave), stray
electromagnetic field gradients creating spurious forces, thus breaking the
free-fall assumption, and knowledge of the inhomogeneous gravity field along
the trajectories. Extensive characterisation of these effects led to
uncertainties in QGs below $40\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$,
consistent with the results from CIPM key comparisons [15] or common surveys
with classical AGs [12].
### Very Long Baseline Atom Interferometry
Very Long Baseline Atom Interferometry (VLBAI) represents a new class of
ground-based atom interferometric platforms which extends the length of the
interferometer's baseline from tens of centimetres like in typical
transportable instruments [12, 15] to multiple meters. According to equation
(1), the vertical acceleration sensitivity of a Mach–Zehnder type atom
interferometer scales linearly with the length of the baseline ($\sim
aT^{2}$). Therefore, an increase in the length of the baseline potentially
enables a finer sensitivity for the atomic gravimeter through an increased
scale factor $k_{\mathrm{eff}}T^{2}$. A $10\text{\,}\mathrm{m}$-long baseline
instrument can for example extend the interferometric time $2T$ to around
$1\text{\,}\mathrm{s}$ if the atoms are simply dropped along the baseline or
up to $2.4\text{\,}\mathrm{s}$ if they are launched upwards in a fountain-like
fashion. In the simple drop case, the velocity acquired by the atoms between
their release from the source and the start of the interferometer leads to an
interferometer duration shorter than half of the one for the launch case. For
our apparatus, the distance between the top source chamber and the region of
interest is around $2\text{\,}\mathrm{m}$ (see figure 3), constraining
$T<400\text{\,}\mathrm{ms}$ for simple drops.
Using realistic parameters ($T_{p}=3\text{\,}\mathrm{s}$, $\delta\phi=10\text{\,}\mathrm{mrad}$), equation (3) yields potential short-term instabilities for VLBAIs at $\tau=1\text{\,}\mathrm{s}$ integration time:

$\begin{array}{ll}T=400\text{\,}\mathrm{ms}\text{: }&\delta a=8\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\\ T=1.2\text{\,}\mathrm{s}\text{: }&\delta a=1\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\end{array}$ (4)
competing with the noise level of superconducting gravimeters [45, 46] while
providing absolute values of the gravity acceleration g.
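
As a quick plausibility check of equation (4), the sketch below evaluates equation (3) with the parameters quoted above; the effective wave number for a ⁸⁷Rb Raman transition is an assumption on our part:

```python
import numpy as np

k_eff = 4 * np.pi / 780e-9    # m^-1, assumed 87Rb Raman effective wave number
T_p, dphi = 3.0, 10e-3        # preparation time (s) and phase noise (rad) from the text

def instability(T, tau=1.0):
    """Short-term instability from equation (3), returned in nm/s^2."""
    return np.sqrt(2 * T + T_p) * dphi / (k_eff * T**2) / np.sqrt(tau) * 1e9

for T in (0.4, 1.2):
    print(f"T = {T} s -> delta a = {instability(T):.1f} nm/s^2")
# -> about 8 nm/s^2 and 1 nm/s^2, reproducing equation (4)
```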
Nevertheless, the increased scale factor $k_{\mathrm{eff}}T^{2}$ gained by the
expanded baseline comes at the price of a stationary device with added
complexity due to its size, and a vibration noise sensitivity magnified by the
same scale factor as the gravitational acceleration for frequencies below
$1/(2T)$. Hence, the use of VLBAIs as ultra-stable gravimeters
requires new developments in the control of environmental vibrations [19].
Also, time- and space-varying electromagnetic and gravity fields along the
free-fall trajectories of the matter waves have a direct impact on the
accuracy and stability of the instrument, as the corresponding spurious forces
depart from the assumptions of equation (1), therefore leading to biases [7]
and impacting the instrument’s effective height [60].
### Effective height
In order to compare measurements of a VLBAI gravimeter with other instruments,
it is crucial to determine the effective height $z_{\mathrm{eff}}$ defined by:
$g_{0}-\gamma z_{\mathrm{eff}}=\frac{\Delta\phi_{\mathrm{tot}}}{k_{\mathrm{eff}}T^{2}}$ (5)
where $g_{0}\approx9.81\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-2}$ is the value of gravity at $z=0$, $\gamma\approx3\text{\,}\mathrm{µm}\text{\,}{\mathrm{s}}^{-2}\text{\,}{\mathrm{m}}^{-1}$ the magnitude of the
linear gravity gradient, and $\Delta\phi_{\mathrm{tot}}$ the phase shift
measured by the interferometer. The right-hand side is the value of gravity
measured by the atom interferometer, including all bias sources. Restricting
to first order in the gravity-gradient $\gamma$, and applying a path-integral
formalism, one gets [40]:
$z_{\mathrm{eff}}=z_{0}-\dfrac{\Delta g}{\gamma}\quad\text{with}\quad\Delta g=\frac{7}{12}\gamma g_{0}T^{2}-\gamma\bar{v}_{0}T$ (6)
where $z_{0}$ is the height of the start of the interferometer and $\bar{v}_{0}=v_{0}+\hbar k_{\mathrm{eff}}/(2m)$ the mean
atomic velocity just after the interferometer opens ($v_{0}$ is the atomic
velocity before the first beamsplitter, and $m$ is the atomic mass). This
expression for $z_{\mathrm{eff}}$ is compatible with the one given for FG5
gravimeters by [38]. In particular, it only depends on the value of the
gradient $\gamma$ through $v_{0}$ and $z_{0}$. Indeed, the interferometer is
controlled in time and the initial position and velocity $z_{0}$ and $v_{0}$
are therefore given by the free-fall motion of the atoms between the source
chamber and the region of interest. In general, $z_{\mathrm{eff}}$ depends on
the geometry of the atom interferometer. For the simple drop case in the
Hannover VLBAI facility (see section 3.4),
$z_{\mathrm{eff}}\approx9.2\text{\,}\mathrm{m}$.
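
A hedged numerical sketch of equation (6) illustrates the size of the effective-height offset. The ~2 m of free fall before the first pulse comes from section 3.2; the gradient magnitude, the atomic species (⁸⁷Rb), and the sign conventions ($z$ and $v$ positive upwards) are assumptions for this example, so the resulting offset is indicative only. The ~3.4 m offset would place $z_{\mathrm{eff}}\approx9.2\text{\,}\mathrm{m}$ for an interferometer opening near $z_{0}\approx12.6\text{\,}\mathrm{m}$.

```python
import numpy as np

gamma, g0 = 3.0e-6, 9.81                 # gradient (1/s^2) and gravity (m/s^2)
hbar, m_rb = 1.0546e-34, 1.443e-25       # J s and kg (assumed species: 87Rb)
k_eff = 4 * np.pi / 780e-9               # m^-1

def z_eff_offset(T, v0):
    """Offset z0 - z_eff from equation (6); v0 is the velocity at the first pulse."""
    v0_bar = v0 + hbar * k_eff / (2 * m_rb)   # mean velocity after the beamsplitter
    delta_g = (7 / 12) * gamma * g0 * T**2 - gamma * v0_bar * T
    return delta_g / gamma

v0 = -np.sqrt(2 * g0 * 2.0)   # ~2 m of free fall before the interferometer opens
print(f"z_eff lies {z_eff_offset(0.4, v0):.2f} m below z0")   # ~3.4 m
```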
Corrections to equation (6) must be taken into account to constrain the
uncertainty on gravity at $z_{\mathrm{eff}}$ below
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. On the one hand, terms of
order $\gamma^{2}$ and higher in $\Delta\phi_{\mathrm{tot}}$ contribute at the
sub-$\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level. On the other hand, one can
use perturbation theory [66] to estimate the effect of the non-homogeneous
gravity gradient along the interferometer’s baseline. Using the data discussed
here, we evaluate this effect below
$5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, therefore lying within the
model’s uncertainty (see section 6) and similar to the known contribution for
FG5(X) gravimeters [60].
Finally, when using multiple concurrent interferometers at different heights,
the effect of a homogeneous gravity gradient can be mitigated by measuring it
simultaneously with the acceleration value [4]. In this case, the effective
height corresponds to the position of the mirror giving the inertial
reference. Detailed modelling is however still necessary to push the uncertainty budget into the sub-$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ range and calibrate the instrument to the level of its instability.
### The Hannover VLBAI facility
We introduce the Hannover Very Long Baseline Atom Interferometry facility, an
instrument developed at the newly founded Hannover Institute of Technology
(HITec) of the Leibniz University Hannover, Germany. It builds on the concepts
outlined in section 3.2 to provide a platform to tackle challenges in extended
baseline atom interferometry. In the long term, it aims at tests of our
physical laws and postulates like for example the universality of free fall
[20], searches for new forces or phenomena, and the development of new methods
for absolute gravimetry and gravity gradiometry [56].
[Figure 3 schematic: height scale $0$–$15\text{\,}\mathrm{m}$, showing the upper and lower atomic sources, the baseline (ultra-high vacuum chamber and magnetic shield), the vibration-isolated inertial reference mirror, and the region of interest for precision atom interferometry.]

Figure 3: The Hannover Very Long Baseline Atom Interferometry (VLBAI) facility and its three main elements: source chambers, baseline, and inertial reference system with its vacuum vessel (VTS). The baseline and upper source chambers are supported by an aluminium structure (VSS, dark blue). The region of interest for atom interferometry is shaded in light blue.
The Hannover VLBAI facility is built around three main elements shown in
figure 3:
1. Ultra-cold samples of rubidium and ytterbium atoms are prepared in the two _source chambers_, allowing for both drop (max $T=400\text{\,}\mathrm{ms}$) and launch (max $T=1.2\text{\,}\mathrm{s}$) modes of operation. Advanced atom optics promise enhanced free-fall times by relaunching the wave packets during the interferometric sequence [1];
2. The reference frame for the inertial measurements is realised by a _seismically isolated mirror_ at the bottom of the apparatus. The seismic attenuation system (SAS) uses geometric anti-spring filters [69] to achieve vibration isolation above its natural resonance frequency of $320\text{\,}\mathrm{mHz}$. The isolation platform is operated under high vacuum conditions to reduce acoustic and thermal coupling. The vacuum vessel containing the SAS is denoted VTS in sections 4–6;
3. The $10.5\text{\,}\mathrm{m}$-long _baseline_ consists of a $20\text{\,}\mathrm{cm}$ diameter cylindrical aluminium vacuum chamber and a high-performance magnetic shield [71]. The interferometric sequences take place along this baseline, in the $8\text{\,}\mathrm{m}$-long central _region of interest_ where the longitudinal magnetic field gradients fall below $2.5\text{\,}\mathrm{nT}\text{/}\mathrm{m}$.
In order to decouple the instrument from oscillations of the walls of the
building, the apparatus is only rigidly connected to the foundations of the
building. The VTS (and SAS) and lower source chamber are mounted on a
baseplate directly connected to the foundation. The baseline and upper source
chamber are supported by a $10\text{\,}\mathrm{m}$ high aluminium tower,
denoted as VLBAI support structure (VSS) in the following sections. The
footprint of the device on the floor is
$2.5\text{\,}\mathrm{m}\times 2.5\text{\,}\mathrm{m}$. Traceability to the
SI is ensured by locking the instrument’s frequency references to standards at
the German NMI (PTB Braunschweig) via an optical link [42]. All heights are
measured from the instrument’s baseplate. The altitude of this reference point
in the German height datum is $50.545\text{\,}\mathrm{m}$.
## Environmental model
(a) HITec cross-section (not to scale)
(b) HITec top view
Figure 4: Views of HITec: cross-section (4(a)) of the VLBAI laboratories with
the gravimetric network of 2019 along two vertical profiles and region of
interest (blue). The indicated groundwater variation (thick bar) refers to an
average annual amplitude of $0.3\text{\,}\mathrm{m}$. The thin bar indicates
extreme low and high levels. The height $z=0\text{\,}\mathrm{m}$ refers to
the top of the baseplate. The top view of HITec (4(b)) shows the orientation
of our coordinate system, the location of the VLBAI facility (blue) and the
gravimetry lab including piers for gravimeters (light grey).
The VLBAI facility is implemented in the laboratory building of the Hannover
Institute of Technology. The building consists of three floors (one basement
level, two above street level) and is divided into a technical part mainly
containing the climate control systems, and a section with the laboratories
(see figure 4). In the laboratory part, a so-called backbone gives
laboratories access to the technical infrastructure and divides the building
in two parts along its long axis. The backbone and southern row of
laboratories have a footprint of
$13.4\text{\,}\mathrm{m}\times 55.4\text{\,}\mathrm{m}$ and extend
approximately $5\text{\,}\mathrm{m}$ below surface level. The northern row of
laboratories is fully above ground except for the gravimetry laboratory which
is on an intermediate level, around $1.5\text{\,}\mathrm{m}$ below street
level and $3.4\text{\,}\mathrm{m}$ above basement level (see figure 4(a)). The
foundation of the building is $0.5\text{\,}\mathrm{m}$ thick except beneath
the gravimetry laboratory, which has a separate and $0.8\text{\,}\mathrm{m}$
thick one. Figure 4(a) also shows the measurement points for the relative
gravimeters along the VLBAI main axis and a second validation profile,
occupied using tripods, next to the VLBAI which were used for the measurements
presented in section 5.
### Physical model
Following the methods described by [27], we discretise the HITec building into
a model of rectangular prisms that accounts for more than $500$ elements. The
geometry is extracted from the construction plans, and we verified all the
heights by levelling, also including a benchmark with a known elevation in the
German height datum. The building is embedded in a sedimentary ground of sand,
clay, and marl ($2050\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). For the
edifice itself, we include all walls and floors made of reinforced concrete
($2500\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$), the
$7\text{\,}\mathrm{cm}13\text{\,}\mathrm{cm}$ thick liquid flow screed
covering the concrete floors in the labs
($2100\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$), and the gypsum drywalls
($800\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). We also incorporate the
insulation material ($150\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$) and
gravel on the roof ($1350\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$). We
use a simplified geometry to model the large research facilities in the
surroundings. This is for example the case for the Einstein-Elevator [29], a
free-fall simulator with a weight of $165\text{\,}\mathrm{t}$ and horizontal
distances of $32\text{\,}\mathrm{m}$ and $16\text{\,}\mathrm{m}$ to the VLBAI
facility and gravimetry laboratory, respectively. Finally, we account for
laboratory equipment, e. g. optical tables ($550\text{\,}\mathrm{kg}$ each)
according to the configuration at the time of the gravimetric measurement
campaigns.
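
The basic building block of such a model is the closed-form vertical attraction of a homogeneous rectangular prism (the classical expression often attributed to Nagy). The sketch below is a minimal Python implementation for a single prism with a Bouguer-plate sanity check; it illustrates the technique and is not the authors' MATLAB code:

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x, y, z, rho):
    """Vertical attraction (m/s^2, positive downward) at the origin of a homogeneous
    prism spanning x=(x1,x2), y=(y1,y2), z=(z1,z2); z axis pointing downward."""
    total = 0.0
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            for k, zk in enumerate(z):
                r = np.sqrt(xi * xi + yj * yj + zk * zk)
                mu = (-1) ** (i + j + k + 1)          # alternating corner sign
                total += mu * (zk * np.arctan2(xi * yj, zk * r)
                               - xi * np.log(r + yj) - yj * np.log(r + xi))
    return G * rho * total

# Sanity check: a 40 m x 40 m x 0.5 m concrete slab (2500 kg/m^3) seen from 1 m
# above its centre; the result (~495 nm/s^2) approaches the infinite Bouguer
# plate value 2*pi*G*rho*t ~ 524 nm/s^2.
print(f"{prism_gz((-20, 20), (-20, 20), (1.0, 1.5), 2500.0) * 1e9:.0f} nm/s^2")
```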
During the first measurements (2017), the interior construction was still in
progress, and the laboratories were empty. By the time of the second campaign
(2019), the building was fully equipped. The VLBAI support structure (VSS) and
the vacuum tank (VTS) for the seismic attenuation system were in place. The
VLBAI instrument (atomic sources, magnetic shield, $10\text{\,}\mathrm{m}$
vacuum tube) and seismic attenuation system were completed after the second
campaign.
Due to their inclined or rounded surfaces, the VLBAI experimental apparatus
and its support structure require a more flexible method than rectangular
prisms to model their geometry. We apply the method described by [41] and
divide the surface of the bodies to be modelled into polygonal faces to
calculate the gravitational attraction from surface integrals. Contrary to the
rectangular prisms method, there are only few restrictions on the underlying
geometry. Most notably, all vertices of a face must lie in one plane and the
normal vectors of all surfaces must point outward of the mass. For example,
normal vectors of faces describing the outside surface of a hollow sphere must
point away from the sphere and normal vectors on the inside surface must point
towards the centre, away from the mass of the wall of the sphere. We extract
the geometry of the VLBAI facility components from their tridimensional CAD
model through an export in STL (stereolithography or standard triangulation language) format [47]. This divides the surface of the bodies into triangular
faces, therefore ensuring planar faces by default. Moreover, the STL format
encodes normal vectors pointing away from the object. Both prerequisites for
the polygonal method by [41] are thus met. Using this method, the VSS
(aluminium, $2650\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$, total weight
$5825\text{\,}\mathrm{kg}$) consists of roughly $86000$ faces and the VTS and
corresponding baseplates (stainless steel,
$8000\text{\,}\mathrm{kg}\text{/}{\mathrm{m}}^{3}$, total weight
$2810\text{\,}\mathrm{kg}$) contain $187000$ faces, mostly due to the round
shape and fixtures of the VTS. As the overall computation time to extract the
attraction of these components with a $\mathrm{c}\mathrm{m}$-resolution on
both vertical profiles remains in the range of minutes on a desktop PC, we do
not need to simplify the models. The Monte Carlo simulations described in
section 6 nevertheless require the computing cluster of the Leibniz University
Hannover (LUH).
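
As a sketch of how such an STL export can be checked before use, the snippet below loads a hypothetical file `vss.stl` with the numpy-stl package and compares the enclosed volume times the aluminium density against the known total weight; the file name and the assumption that the export is in metres are ours:

```python
from stl import mesh   # numpy-stl package

RHO_ALU = 2650.0       # kg/m^3, aluminium density used in the model

# 'vss.stl' is a hypothetical export of the VSS CAD model, assumed to be in metres
# (CAD tools often export STL in millimetres, which changes the volume by 1e9).
vss = mesh.Mesh.from_file('vss.stl')
volume, cog, inertia = vss.get_mass_properties()   # needs a closed, consistently oriented surface

print(f"mesh volume : {volume:.3f} m^3")
print(f"model mass  : {volume * RHO_ALU:.0f} kg (expected ~5825 kg)")
# The outward-pointing facet normals required by the polygonal method can be
# inspected via vss.normals (or vss.units for the normalised vectors).
```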
We use MATLAB (version 9.4.0.813654, R2018a) to perform the numerical
calculations. As a cross-check, we implemented both the rectangular prisms and
polyhedral bodies methods for the calculation of the attraction effect of the
main frame of the HITec building. Both approaches agree within floating point
numerical accuracy.
### Time variable gravity changes
Mostly for the benefit of the future operations of the VLBAI, we include the
effects of groundwater level changes, atmospheric mass change, and Earth’s
body and ocean tides in our modelling. This is necessary for the individual
gravimetry experiment (and other physics experiments as well) in the VLBAI on
one hand, and for comparing measurements from different epochs, e. g. with
different groundwater levels, on the other hand. Previous investigations in
the gravimetry lab of a neighbouring building showed a linear coefficient of
$170\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ per meter change in the
local groundwater table [64]. This corresponds to a porosity of
$>30\text{\,}\mathrm{\char 37\relax}$ of the soil [17]. For our model, we
adapt a porevolume of $30\text{\,}\mathrm{\char 37\relax}$, which has to be
verified by gravimetric measurements and correlation with local groundwater
measurements. Two automatic groundwater gauges are available around the
building: one installed during the construction work and a second with records
dating back several decades also used by [64]. The effect of atmospheric mass
changes is calculated using the ERA5 atmospheric model provided by the
European Centre for Medium-Range Weather Forecasts (https://www.ecmwf.int) and
the methods described by [51]. Tidal parameters are extracted from
observational time series [58, 52]. Other temporal gravity changes are not in
the scope of this work.
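
For a rough cross-check of the adopted pore volume, a first-order infinite-plate (Bouguer) approximation of a groundwater table change can be computed in a few lines; this simplification ignores the building geometry discussed below and is only meant to set the scale:

```python
import numpy as np

G = 6.674e-11         # m^3 kg^-1 s^-2
rho_water = 1000.0    # kg/m^3
porosity = 0.30       # adopted pore volume

admittance = 2 * np.pi * G * rho_water * porosity   # (m/s^2) per metre of table change
print(f"{admittance * 1e9:.0f} nm/s^2 per m")       # -> ~126 nm/s^2 per m
# The empirically observed 170 nm/s^2 per m in the neighbouring building [64]
# additionally reflects the local geometry and soil structure.
```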
Currently, time variable gravity is also monitored with the gPhone-98
gravimeter of the Institute of Geodesy (IfE) at the LUH. In the long term, we
consider the addition of a superconducting gravimeter for this purpose when
the VLBAI facility is fully implemented and the experimental work is
beginning. The support of a superconducting gravimeter is also vital in the
characterisation of new gravimeters [12].
### Self-attraction results
Figure 5 shows the vertical component of the gravitational acceleration
generated by the building, equipment, VSS and VTS. The VLBAI main axis is in
the centre of the left plot ($x=0\text{\,}\mathrm{m}$). The large structures
around $5\text{\,}\mathrm{m}$ and $10\text{\,}\mathrm{m}$ correspond to the
floor levels. Smaller structures are associated to, for example, optical
tables or the VSS. The right panel of figure 5 highlights the attraction
calculated for the main axis ($x=0\text{\,}\mathrm{m}$) and for a second profile along $x=-1.8\text{\,}\mathrm{m}$ and $y=0\text{\,}\mathrm{m}$.
The first profile shows a smooth curve except for the bottom
$2\text{\,}\mathrm{m}$, which are affected by the VTS. In this model, the part
above $2\text{\,}\mathrm{m}$ on the main axis is empty space. The second
profile, chosen as a sample from the xz-plane, passes through the floors,
hence the zig-zag features around $5\text{\,}\mathrm{m}$ and
$10\text{\,}\mathrm{m}$. While the main axis will later be occupied by the
instrument’s baseline, this second profile, similar to the validation profile,
represents a location that will always remain accessible to gravimeters.
Figure 5: Calculated gravitational attraction from the building, large
laboratory equipment, VSS and VTS in the xz-plane (left) and on two example profiles (right).
### Effect of groundwater level changes
Based on the extensive groundwater level recordings from the gauge nearby the
HITec building, we study the impact of groundwater level changes [see also 67] on gravitational attraction inside the building, specifically along the
VLBAI main and validation profiles, as well as in the gravimetry laboratory.
Due to the layout of the different basement levels in the building (see figure
4(a)), a change of the groundwater table affects gravity in the VLBAI
laboratories differently than in the gravimetry lab. Depending on the
groundwater level, the foundation beneath the VLBAI laboratories can be
partially within the groundwater table, whereas this is never the case for the
gravimetry laboratory. As shown on figure 4(a), the mean groundwater table is
nevertheless below the level of the foundation below the VLBAI laboratories.
Therefore, at certain points of the average annual cycle of amplitude
$0.3\text{\,}\mathrm{m}$, the groundwater table will rise only around the
foundation of the VLBAI laboratories, whereas its level will still increase
below the gravimetry laboratory. This effect is even more pronounced in years where the average cycle amplitude is exceeded (around one year in four).
Figure 6: Effect of groundwater variations (all heights in the height system of the model, cf. figure 4(a)) on gravity in the gravimetry lab (left) and along the VLBAI axis (right) with respect to the mean groundwater level (dotted line). The dashed line indicates the bottom of the foundation below the VLBAI. The coloured lines indicate the change of gravity $\delta g_{\mathrm{gw}}$ at various heights in the gravimetry and VLBAI laboratories. The height of the gravimetry piers in the height system of the model is $3.35\text{\,}\mathrm{m}$.
Figure 6 illustrates the different influence of the groundwater table level on
gravity in the VLBAI and gravimetry laboratories. The estimated change of
gravity $\delta g_{\mathrm{gw}}$ due to the attraction corresponding to
groundwater level variations is presented for different heights above the
gravimetry pier and along the VLBAI main axis. As the groundwater level is
always changing directly beneath the instrument piers in the gravimetry
laboratory, we expect an almost linear change of gravity with changing
groundwater level. The change of gravity is also almost independent of the
height above the pier, as shown by the almost identical lines for
$z=3.35\text{\,}\mathrm{m}$ directly on the pier and
$1.4\text{\,}\mathrm{m}$ above the pier, covering the instrumental heights of
transportable gravimeters. Therefore, AGs with various sensor heights, e. g.,
A-10 and FG5X, are affected in the same manner. The increase of $\delta
g_{\mathrm{gw}}$ is $32\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ in an
average year. This behaviour is different in the VLBAI laboratories. In
current records, the groundwater level never fell below the foundation of the
backbone (cf. figure 4(a)). This effect is seen in the small divergence (up to
$3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$) for groundwater levels
below the foundation of the VLBAI (dashed line). Once the groundwater level
reaches the lower edge of the VLBAI foundation, gravity will not increase
linearly along the VLBAI main axis as the groundwater rises further. Moreover,
in this situation, the effect has a different magnitude depending on the
height in the room. In a year with the average amplitude of groundwater level
variation, ca. $\pm 0.15\text{\,}\mathrm{m}$ around the line indicating the
mean groundwater level, $\delta g_{\mathrm{gw}}$ will differ by
$5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between basement and the top
floor. In years exceeding the average groundwater variation, the difference
between the basement and upper levels increases further. This effect is within
$\pm 2\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on the validation
profile in the average groundwater cycle.
These observations will be crucial when comparing AGs in the gravimetry
laboratory to the VLBAI facility operated as a quantum gravimeter. Depending
on the geometry of a specific atom interferometer realisation, the
instrumental height of the VLBAI gravimeter changes and can introduce changes
in the measured value of g of more than
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ as a result of the
groundwater effect in years with a higher than usual groundwater level. The
magnitude of $10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ is larger than
the targeted accuracy of the VLBAI and also a relevant size for classical AGs
in comparisons. It should also be noted, that the model only calculates the
gravitational attraction of the groundwater variation. A potential vertical
displacement of the ground itself is currently not taken into account, leading
to a possible underestimation of the effect.
In order to track the effect of groundwater level changes more accurately, we
plan to extend the findings of [64] by correlating periodic gravimetric
measurements on the validation profile in the VLBAI laboratories with the
recordings of the two groundwater level gauges around the building. This
should in particular allow us to take into account that, due to capillarity
effects, the groundwater level will probably not sink uniformly below the
foundation beneath the VLBAI laboratories once it reaches that level.
## Gravimetric measurements
In June 2017 and August 2019, we performed surveys using relative gravimeters
to verify our model from section 4 along the VLBAI main and validation
profiles. This approach was already demonstrated in [54], in which the gravity
field impact of a $200\text{\,}\mathrm{kN}$ force standard machine at the
Physikalisch-Technische Bundesanstalt in Braunschweig was modelled. That model
was verified with gravimetric measurements prior and after the installation of
the force machine. The difference between the modelled impact and the
measurement was within the uncertainty of the gravimeters used. For each
measurement point, we measured its connection to at least another point and
applied the step method with ten connections [65]. A connection corresponds to
one gravity difference observation between two points. Ten connections require
five occupations of a measurement point with a gravimeter. We measured most
connections with at least two different instruments, reducing the outcomes to
a mean instrumental height of $0.22\text{\,}\mathrm{m}$ above ground or
platform. We then performed a global least-squares adjustment using the
Gravimetry Net Least Squares Adjustment (GNLSA) software from IfE [70]. The
measurements are also calibrated in this process. We determined the individual
calibration factors of the gravimeters on the Vertical Gravimeter Calibration
Line in Hannover [61, 59] at least once in the week prior to the measurement
campaigns. The software also corrects Earth tides, applying our observed
parameters, and atmospheric mass changes by means of the linear factor of
$3\text{\,}\mathrm{nm}\text{/}{\mathrm{s}}^{2}\text{/}\mathrm{hPa}$ with
respect to normal air pressure at station elevation. In order to account for
instrumental drift in the global adjustment, we treat each day and each
instrument independently and use a variance component estimation to weight the
measurements in the global network adjustment. The specific groundwater effect
discussed in section 4.4, considering different magnitudes depending on
height, does not apply for either 2017 or 2019 because the groundwater levels
were below the foundation of the VLBAI in both years.
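
The structure of such an adjustment can be sketched in a few lines: the unknowns are the point gravity values (one point fixed as datum) plus a linear drift per instrument and day, and the observations are the measured ties. The toy network below is entirely made up and solved with an ordinary least-squares call; GNLSA additionally applies weighting, calibration, and variance component estimation:

```python
import numpy as np

# Toy observations: (point i, point j, time at i, time at j, observed g_j - g_i in nm/s^2)
obs = [(0, 1, 0.0, 0.5, -5030.0),
       (1, 0, 0.7, 1.0,  5026.0),
       (1, 2, 1.2, 1.6, -3012.0),
       (2, 1, 1.8, 2.1,  3008.0),
       (0, 2, 2.3, 2.8, -8041.0)]

n_pts = 3                                  # P0 is held fixed as the datum
A = np.zeros((len(obs), n_pts - 1 + 1))    # unknowns: g1, g2 and one drift rate
l = np.zeros(len(obs))
for row, (i, j, ti, tj, dg) in enumerate(obs):
    for p, sign in ((i, -1.0), (j, +1.0)):
        if p > 0:
            A[row, p - 1] = sign           # partial w.r.t. the point gravity value
    A[row, -1] = tj - ti                   # drift accumulated between the two readings
    l[row] = dg

x, *_ = np.linalg.lstsq(A, l, rcond=None)
print(f"g1 = {x[0]:.1f}, g2 = {x[1]:.1f} nm/s^2, drift = {x[2]:.2f} nm/s^2 per hour")
```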
### 2017 Gravimetry campaign
We first mapped the gravity profile along the VLBAI profiles in June 2017,
when the HITec building was still under construction and the VLBAI
experimental apparatus not yet installed. Using the Scintrex CG3M-4492 (short
CG3M) and ZLS Burris B-114 (B-114) spring gravimeters [62, 52], we measured a
total of $147$ connections between seven positions spaced by ca.
$2\text{\,}\mathrm{m}$ along the VLBAI main axis, nine positions on the
validation profile, and two points outside of the building. We used a
scaffolding to access the measurement points on the main axis. However,
although the scaffold was anchored against the walls, the uppermost platforms
were too unstable to ensure reliable measurements. The B-114 was only able to
measure on the bottom three positions, because the feedback system was not
powerful enough to null the oscillating beam on the upper levels. The four
upper levels were only occupied by the CG3M. We connected each point on the
scaffold to another one on the same structure and to the closest fixed floor
level, at a point part of the validation profile. As shown in figure 4(a), the
validation profile included measurements on the floor and on different sized
tripods to determine the gradients.
The variance component estimation gives a posteriori standard deviations for a
single gravity tie observation of
$50\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the B-114 and
$100\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the CG3M. The standard
deviations for the adjusted gravity values range from
$15\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $42\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$
with a mean value of $28\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The
standard deviations of the adjusted gravity differences vary from
$21\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between fixed floor levels
to $59\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ between consecutive
levels on the scaffold. The transfer of height from the upper floor to the
basement through the intermediate levels on the scaffold showed a
$2\text{\,}\mathrm{mm}$ discrepancy compared to the heights from levelling. We
included the corresponding $2\text{\,}\mathrm{mm}\cdot 3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}\text{/}\mathrm{mm}=6\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ as a systematic uncertainty for the adjusted gravity values measured on the scaffold. We also account for a $1\text{\,}\mathrm{mm}$
uncertainty on the determination of the relative gravimeter sensor height.
### 2019 Gravimetry campaign
Figure 7: Measurement at the VSS in 2019 with the B-64 (foreground) on the
validation profile and the CG6 (background) inside the VSS on a platform with
an operator wearing a security harness. The B-64 is operated on a small tripod
to raise the sensor height closer to the CG6 sensor height.
We mapped the gravity profile along the VLBAI axes in a more extensive manner
in summer and fall 2019. Most measurements were performed in one week of
August 2019, adding two days in October and November 2019. We used moveable
platforms inside the VSS, installed in June 2019, and could measure on $16$
levels on the main axis, spaced by
$0.45\text{\,}\mathrm{m}$ to $0.95\text{\,}\mathrm{m}$. The scheme for the
validation profile did not change. The layout of the network is depicted in
figure 4(a). For this campaign, we used the CG3M, the Scintrex CG6-0171 (CG6),
and ZLS Burris B-64 (B-64) spring gravimeters [62, 59, 52]. Owing to the high
mechanical stability of the VSS, measurements along the main axis were
unproblematic for all instruments and the measurement noise was at a similar
level on the moveable platforms and on the fixed floors (see figure 7). All
but one position were occupied with at least two gravimeters, amounting to
$439$ connections in the network adjustment.
The a posteriori standard deviations (single gravity tie measurement) of the
observations range from
$15\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $60\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ with more than $50\,\%$ below
$30\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The higher standard
deviations are a result of two days of measurements with the CG3M and
connections to two particular positions outside of the region of interest of
the VLBAI. The standard deviations of adjusted gravity values in the network
range from
$7\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $19\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$
with a mean of $9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. This
improvement, compared to the previous campaign, can be attributed to the
stability of the VSS, the addition of the CG6 and the total number of
measurements performed. The height of the moveable platforms inside the VSS
was determined by a combination of levelling and laser distance
measurements888Leica Disto D210 to two fixed platforms and the ceiling. For
the height determination of the platforms, the uncertainty is
$1\text{\,}\mathrm{mm}$ due to the laser distance measurement. We also account
for an $1\text{\,}\mathrm{mm}$ uncertainty in the determination of the
instrumental height above the platforms.
## Combination of model and measurement
The measurement and model results along the VLBAI main and validation profiles
are presented in figure 8. Figure 8(a) shows the total variation of gravity
along the main axis. The plot is dominated by the normal decrease of gravity
with height. The effect of the building can be better seen when removing the
change of gravity with height and visualising only the attraction effect of
the building and laboratory equipment, as on figure 8(b). There, the model
corresponds to the configuration for the 2019 campaign and is identical to the
$x=0\text{\,}\mathrm{m}$, $y=0\text{\,}\mathrm{m}$ line in figure 5.
Figure 8(d) shows the model and measurements along the validation profile.
The models presented in figure 8 use the nominal values for the densities of
building elements (concrete floors and walls, drywalls, etc.). Since these can
have variations over the building, we performed a Monte Carlo simulation
($50000$ runs) varying the densities of the corresponding model elements by
$\pm 5\,\%$ according to a normal distribution. This leads to a variation of attraction of $\pm 27\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ to $\pm 37\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for heights between
$4\text{\,}\mathrm{m}$ and $13\text{\,}\mathrm{m}$, as shown by the thin blue
lines on figures 88(b)–8(d). Using a uniform distribution of the density
parameters increases the variability by around
$20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$. The VSS and VTS are not
part of the Monte Carlo simulation since their geometry and materials are well
known.
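
Since the attraction is strictly linear in the element densities, such a simulation only requires the unit-density response of each element at the evaluation heights; the Monte Carlo then reduces to a matrix product. The sketch below uses a hypothetical three-element model with invented responses to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unit-density responses (nm/s^2 per kg/m^3) of three model
# elements, evaluated at two heights; the real model has hundreds of elements.
unit_response = np.array([[0.020, 0.015, -0.004],
                          [0.012, 0.022, -0.006]])
rho_nominal = np.array([2500.0, 2100.0, 800.0])   # concrete, screed, drywall

n_runs = 50_000
rho = rho_nominal * (1 + 0.05 * rng.standard_normal((n_runs, 3)))   # +/-5 % (1 sigma assumed)
attraction = rho @ unit_response.T                                  # (n_runs, n_heights)

print("std per height:", attraction.std(axis=0).round(1), "nm/s^2")
```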
(a) central axis: gravity variation
(b) central axis: model
(c) central axis: residuals
(d) validation profile: model
Figure 8: Measurement and model results on the VLBAI central axis (8(a)–8(c))
and the validation profile (8(d)). The shaded area in (8(a)–8(c)) indicates
the region of interest. The total variation of gravity along the central axis
is shown in (8(a)). The modelled and measured attraction by the environment
(with the change of gravity with height removed) on the central and validation
profile is shown in (8(b)) and (8(d)). The errorbars indicate the standard
deviations from the network adjustment and the model simulations according to
equation (7). The maximum and minimum results of the $\pm 5\,\%$ density variations from Monte Carlo (MC)
simulation of model parameters are indicated by the thin blue lines. The
residuals of observations minus model $\delta g_{\mathrm{omc}}$ are given in
(8(c)) along with the standard deviation of the model $\sigma_{\text{mod}}$
according to equation (8).
The final location of the VLBAI facility and its main axis could only be
approximated to the $\mathrm{cm}$-level during the measurement campaigns
because of necessary installation tolerances. We estimated the effect of a
horizontal variation of $\pm 3\text{\,}\mathrm{cm}$ and a vertical variation
of $\pm 2\text{\,}\mathrm{mm}$ in a Monte Carlo simulation. The total
amplitude of the variations at the locations of the gravimetric measurements
is within $\pm 2\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ with a mean
standard deviation of $0.3\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for
the horizontal and $0.4\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ for the
vertical component along the main axis.
The measurements, i.e. the markers in figure 8, are the result of the gravity
network adjustment. Additionally, we removed the effect of the change of
gravity with height for figures 8(b)–8(d). For this, the free air gradient is
modified with a model of the soil surrounding HITec. As the density is only
known to a certain degree, the Monte Carlo simulation also included the ground
around HITec. The standard deviation of the simulation results for each
gravimeter position is added to the measurements’ standard deviation by error
propagation. The simulations’ standard deviations range from
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at the height of
$4\text{\,}\mathrm{m}$ to $35\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$
at the topmost position. This is also reflected in the increase in the
standard deviations indicated by the errorbars in figure 8(c).
The uncertainty of the measurements now consists of the following components:
$\sigma_{\rm
obs}=\sqrt{\sigma_{g}^{2}+\sigma_{h,\mathrm{geo}}^{2}+\sigma_{z,\mathrm{mod}}^{2}+\sigma_{\mathrm{grad}}^{2}}\
.$ (7)
Here, the standard deviation of the network adjustment is $\sigma_{g}$. The
contribution of the determination of the height of the gravimeter is
$\sigma_{h,\mathrm{geo}}$. The result of the Monte Carlo simulations of the
vertical component of geometric position of the central axis
$\sigma_{z,\mathrm{mod}}$, and the modelling of the gravity gradient
$\sigma_{\mathrm{grad}}$ are also attributed to the measurements.
The standard deviation of the model consists of the following components:
$\sigma_{\mathrm{mod}}=\sqrt{\sigma_{\mathrm{MC}}^{2}+\sigma_{hz,\mathrm{mod}}^{2}}\
,$ (8)
where $\sigma_{\mathrm{MC}}$ is the standard deviation of the Monte Carlo
simulations of the model density, calculated in the heights of the gravimetric
measurements, and $\sigma_{hz,\mathrm{mod}}$ is the standard deviation of the
Monte Carlo simulations for the horizontal component of the geometric
positions along the VLBAI main axis. $\sigma_{\mathrm{mod}}$ is shown in
figure 8(c) with a range of $6\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$
to $11\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ in the region of
interest and about $8\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ at
$z_{\mathrm{eff}}=9.2\text{\,}\mathrm{m}$ (see section 3.3).
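A minimal sketch of the uncertainty combinations in equations (7) and (8), with argument names of our own choosing:

```python
import numpy as np

def sigma_obs(sigma_g, sigma_h_geo, sigma_z_mod, sigma_grad):
    # Equation (7): network adjustment, gravimeter height, vertical
    # axis-position MC, and gradient-model terms added in quadrature
    return np.sqrt(sigma_g**2 + sigma_h_geo**2 + sigma_z_mod**2 + sigma_grad**2)

def sigma_mod(sigma_mc, sigma_hz_mod):
    # Equation (8): density MC and horizontal axis-position MC terms
    return np.sqrt(sigma_mc**2 + sigma_hz_mod**2)
```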
Furthermore, a single parameter is estimated to reduce the gravity values from
the magnitude of $9.81\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-2}$ to the
order of magnitude of the model values for the attraction. This parameter is
the mean difference of observed minus computed results at the location of the
observation in the region of interest. The measurements of 2017 are also
corrected for the changes within the building with respect to 2019. No
additional parameters were estimated to fit the measurements to the model or
vice versa. The remaining signal should now contain the effect of the HITec
building on gravity.
In general, the 2017 measurements and the main axis model do not show a good
agreement (see also [51]) due to the instability of the scaffolding used as a
platform (see also [18]). The agreement on the validation profile is better,
and only the two topmost points do not agree with the model and simulation.
These earlier measurements serve as a proof of concept and are given for the
sake of completeness. The following discussion concerns only the 2019
measurements.
The 2019 campaign provides a clear improvement considering the number of
stations along the VLBAI main axis, the stability of the platforms in the VSS
and therefore data quality. Consequently, the agreement between measurement
and model is significantly improved. The measurement scheme on the validation
profile remained unchanged compared to the 2017 campaign. Figure 8(c) shows
the difference between the measurements and the model on the central axis. The
region of interest for experiments in the VLBAI is approximately between
$4\text{\,}\mathrm{m}$ and $13\text{\,}\mathrm{m}$ (see figure 3). Within this
region, only the second-highest point is not within the simulation’s $\pm
5\text{\,}\%$ density variations. The two-tailed
statistical test ($\alpha=0.05$) on the equality of model $\delta
g_{\mathrm{mod},i}$ and measurement $\delta g_{\mathrm{obs},i}$ at point $i$,
according to
Null hypothesis: $\delta g_{\mathrm{omc},i}=\delta g_{\mathrm{obs},i}-\delta
g_{\mathrm{mod},i}=0$
Alternative hypothesis: $\delta g_{\mathrm{omc},i}\neq 0$
Test statistic:
$t_{i}=\left|\delta
g_{\mathrm{omc},i}\right|/\sqrt{\sigma_{\mathrm{obs},i}^{2}+\sigma_{\mathrm{mod},i}^{2}}$
passes for all but three points. The null hypothesis, considering the symmetry
of the normal distribution, is rejected if
$t_{i}>N_{(0,1,1-\nicefrac{{\alpha}}{{2}})}$. The test fails for the points at
$z=1.72\text{\,}\mathrm{m}$, $5.55\text{\,}\mathrm{m}$, and
$12.99\text{\,}\mathrm{m}$.
The lowest point at $z=1.72\text{\,}\mathrm{m}$, directly on the VTS, was
challenging to measure, as the pump of the vacuum tank was active during the
measurements causing high-frequency vibrations. As this position is outside of
the experimental region of interest, no additional measurements were taken.
The cause for the significant deviation from the model at
$z=12.99\text{\,}\mathrm{m}$, which was measured with only one gravimeter,
is unknown. The height difference to the point above is only
$0.16\text{\,}\mathrm{m}$ of free space, so a real gravity variation appears
unlikely. Treating this point as an outlier, and repeating the test after
calculating the offset between adjusted gravity values and model without this
measurement, the test also passes for the point at
$z=5.55\text{\,}\mathrm{m}$. All points on the validation profile pass the
statistical test. The standard deviation of observations minus model is
$20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$
($31\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ if the second-highest
point is included) for the central axis in the region of interest and
$34\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on the validation profile.
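For illustration, the point-wise test can be written compactly; this sketch assumes SciPy and uses our own variable names:

```python
from scipy.stats import norm

def test_passes(dg_omc, sig_obs, sig_mod, alpha=0.05):
    # Two-tailed test of delta g_omc = 0; reject if t exceeds the
    # (1 - alpha/2) quantile of the standard normal distribution
    t = abs(dg_omc) / (sig_obs**2 + sig_mod**2)**0.5
    return t <= norm.ppf(1.0 - alpha / 2.0)
```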
The densities of the different model components, chosen initially from
technical documentation, are sufficient to generate a model which agrees with
in situ measurements at a $95\text{\,}\%$ confidence level.
Modelling a $5\text{\,}\%$ normally distributed variation
of these densities results in a narrow range of possible model variations,
which covers almost all measurements used to verify the model. We expect that
using individual densities for each floor instead of one common density value
for all concrete components in the building would improve the agreement
between model and observations on the validation profile. Such an extra
modelling step should, however, be constrained so as not to deteriorate the
model accuracy in the experimental region of interest.
As a final step, the VLBAI magnetic shield and vacuum system [71] will be
added to the model. Similarly to the VSS and VTS, this component was designed
using CAD, built with known materials, and can be exported into the required
format for our model. While the assembly is significantly more complex, we
expect the octagonal symmetry of the magnetic shield to simplify the numerical
calculations and allow us to reach the same level of accuracy in the gravity
model as for the VSS and VTS. It will however only be possible to check the
quality of the extended model with measurements on the validation profile, as
the main axis is obstructed by the instrument’s vacuum chamber. Nevertheless,
the understanding of environmental variations (mostly hydrology) outlined in
section 4.4 will render this possible with good accuracy. Due to the work
associated with the installation of the VLBAI baseline components, this last
model extension and its corresponding validation have not been done yet.
Extending our model with the VLBAI baseline components will allow us to
connect gravimetric measurements along the validation profile and future data
acquired by a VLBAI quantum gravimeter along its main axis in our adjusted
gravimetric network. Since the measurement positions along the validation
profile will remain free during operation of the VLBAI facility, this will for
example enable comparisons of the VLBAI QG with FG5(X)-type classical AGs
positioned in the VLBAI laboratories. In this specific setup, contributions of
time variable gravity to the measurements are minimal for the VLBAI and
instrument under test. To further minimize the height dependency due to the
groundwater effect, the atom interferometer could be realised with an
effective height close to the instrumental height of the classical AG, e.g.
with the AG on the ground floor. Taking into consideration the mean standard
deviation of the relative gravimeter network of
$9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$, we expect to be able to
transfer g with an uncertainty of
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ and possibly below from the
VLBAI baseline. Furthermore, creating a similar network including stations
along the validation profile and in the HITec gravimetry laboratory would
permit gravimetric comparisons between the VLBAI QG and instruments operated
on the gravimetric piers. The estimates so far exclude the inevitable
contribution of the VLBAI gravity measurement. The determination and
validation of the VLBAI uncertainty budget will be published in a separate
study.
## Conclusions
We established a gravimetric control network for the Hannover VLBAI facility,
a novel $10\text{\,}\mathrm{m}$-scale atom interferometer. The network
consists of $439$ connections measured by relative gravimeters. A least
squares adjustment of the network results in a mean standard deviation of the
adjusted gravity values of $9\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$.
In addition, we developed a structural model of the building hosting the VLBAI
facility and its surroundings. When compared, the model and the measurements
agree with $95\text{\,}\%$ confidence, with standard
deviations of the residuals of
$20\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ along the atom
interferometer’s baseline, and
$34\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ on a second, parallel
profile. Moreover, we gained insight on some dynamical aspects of the gravity
field around the instrument, namely the effect of groundwater level
variations.
We anticipate this gravimetric network to contribute to the assessment of the
quantum gravimeter’s uncertainty budget, which is currently not included in
our study. The current work is also essential to help determining the
effective instrumental height (g-value reference position) and enable
transfers of g values from the atom interferometer’s baseline to the
validation profile, accessible to mobile gravimeters for comparison and
possibly calibration purposes, at the
$10\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ repeatability level
(relative to the VLBAI deduced g-values). Completing the model by including
the VLBAI baseline, refining the description of the soil surrounding the host
building, and including better estimates for the building material densities,
we expect to shift the possibility for gravity field measurement transfers and
mobile instrument calibration towards the
$5\text{\,}\mathrm{nm}\text{\,}{\mathrm{s}}^{-2}$ level, improving the
temporal stability of the current state of the art, which is still largely
based on gravimeter comparisons. This paves the way for the realisation of a
new gravity standard based on atom interferometry. Finally, the knowledge of
the dynamical gravity field and its gradients is key to reaching new frontiers
in fundamental physics tests with very long baseline atom interferometry.
###### Acknowledgements.
The Hannover Very Long Baseline Atom Interferometry facility is a major
research equipment funded by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation). This work was supported by the DFG Collaborative
Research Center 1128 “geo-Q” (project A02, Contract Number 239994235) and is
supported by the CRC 1227 “DQ-mat” (project B07, Contract Number 274200144),
Germany's Excellence Strategy – EXC-2123 “QuantumFrontiers” – 390837967, and
the computing cluster of the Leibniz University Hannover under patronage of
the Lower Saxony Ministry of Science and Culture (MWK) and the DFG.
M. S., É. W., and C. S. acknowledge support from “Niedersächsisches Vorab”
through the “Quantum- and Nano-Metrology (QUANOMET)” initiative (project QT3),
and for initial funding of research in the DLR-SI institute. D. S.
acknowledges funding from the German Federal Ministry of Education and
Research (BMBF) through the funding program Photonics Research Germany
(contract number 13N14875).
The VLBAI support structure was conceived by the engineering office Heinz
Berlin (Wennigsen, Germany) in collaboration with the VLBAI science team, and
produced by Aljo Aluminium-Bau Jonuscheit GmbH (Berne, Germany).
We thank W. Ertmer for his vision and long lasting support on very long
baseline atom interferometry and the acquisition of funding for the Hannover
Institute of Technology. We are grateful to T. Froböse and A. Wanner for their
assistance during the installation of the vacuum tank and support structure.
We thank the three reviewers for their valuable input to improve this article.
###### Author contributions.
M.S., É.W., L.T. planned geometric and gravimetric measurements, evaluated the
data and prepared the initial draft. É.W., D.T., D.S., C.S., E.M.R.
conceptualised VSS, VTS. É.W., D.T., K.H.Z. designed and built measurement
platforms for VSS. M.S., É.W., L.T., D.T., K.H.Z. carried out the
measurements. M.S. developed and implemented the gravity model. D.T., K.H.Z.,
D.S., C.S., E.M.R., J.M. provided critical input to the manuscript and
approved the final version.
###### Data availability statement.
Data of absolute gravimeter key comparisons is available in the Key Comparison
Database (https://www.bipm.org/kcdb) and cited literature. Gravimetric
measurements in instrument specific ascii data formats and datasets generated
in this study are available from the corresponding author on reasonable
request.
## References
* [1] S. Abend, M. Gebbe, M. Gersemann, H. Ahlers, H. Müntinga, E. Giese, N. Gaaloul, C. Schubert, C. Lämmerzahl, W. Ertmer, W.. Schleich and E.. Rasel “Atom-chip fountain gravimeter” In _Phys. Rev. Lett._ 117.20 American Physical Society, 2016 DOI: 10.1103/PhysRevLett.117.203003
* [2] Xavier Alauze, Alexis Bonnin, Cyrille Solaro and F Pereira Dos Santos “A trapped ultracold atom force sensor with a $\mu$m-scale spatial resolution” In _New Journal of Physics_ 20.8 IOP Publishing, 2018 DOI: 10.1088/1367-2630/aad716
* [3] Rym Bouchendira, Pierre Cladé, Saïda Guellati-Khélifa, François Nez and François Biraben “New determination of the fine structure constant and test of the quantum electrodynamics” In _Phys. Rev. Lett._ 106.8 American Physical Society, 2011 DOI: 10.1103/PhysRevLett.106.080801
* [4] R. Caldani, K.. Weng, S. Merlet and F. Pereira Dos Santos “Simultaneous accurate determination of both gravity and its vertical gradient” In _Phys. Rev. A_ 99 American Physical Society, 2019 DOI: 10.1103/PhysRevA.99.033601
* [5] CCM-IAG “CCM - IAG strategy for metrology in absolute gravimetry - role of CCM and IAG” Last Update 2015-01-28, 2015 URL: http://www.bipm.org/wg/AllowedDocuments.jsp?wg=CCM-WGG
* [6] J.-M. Chartier, J. Labot, G. Sasagawa, T.. Niebauer and W. Hollander “A portable iodine stabilized He-Ne laser and its use in an absolute gravimeter” In _IEEE Transactions on Instrumentation and Measurement_ 42.2 Institute of Electrical and Electronics Engineers (IEEE), 1993, pp. 420–422 DOI: 10.1109/19.278595
* [7] Giancarlo D’Agostino, S Merlet, A Landragin and F Pereira Dos Santos “Perturbations of the local gravity field due to mass distribution on precise measuring instruments: a numerical method applied to a cold atom gravimeter” In _Metrologia_ 48.5 IOP Publishing, 2011, pp. 299–305 DOI: 10.1088/0026-1394/48/5/009
* [8] F. Delahaye and T.. Witt “Linking the results of key comparison CCEM-K4 with the 10 pF results of EUROMET.EM-K4” In _Metrologia_ 39.1A IOP Publishing, 2002 DOI: 10.1088/0026-1394/39/1a/5
* [9] R. Falk, V. Pálinkáš, H. Wziontek, A. Rülke, M. Val’ko, Ch. Ullrich, H. Butta, J. Kostelecký, M. Bilker-Koivula, J. Näränen, A. Prato, F. Mazzoleni, C. Kirbaş, İ Coşkun, M. Van Camp, S. Castelein, J.. Bernard, A. Lothhammer, M. Schilling, L. Timmen, D. Iacovone, G. Nettis, F. Greco, A.. Messina, R. Reudink, M. Petrini, P. Dykowski, M. Sękowski, J. Janák, J. Papčo, A. Engfeldt and H. Steffen “Final report of EURAMET.M.G-K3 regional comparison of absolute gravimeters” In _Metrologia_ 57.1A, 2020 DOI: 10.1088/0026-1394/57/1A/07019
* [10] Olivier Francis, Henri Baumann, Tomas Volarik, Christian Rothleitner, Gilbert Klein, Marc Seil, Nicolas Dando, Ray Tracey, Christian Ullrich, Stefaan Castelein, Hu Hua, Wu Kang, Shen Chongyang, Xuan Songbo, Tan Hongbo, Li Zhengyuan, Vojtech Pálinkáš, Jakub Kostelecký, Jaakko Mäkinen, Jyri Näränen, Sébastien Merlet, Tristan Farah, Christine Guerlin, Franck Pereira Dos Santos, Nicolas Le Moigne, Cédric Champollion, Sabrina Deville, Ludger Timmen, Reinhard Falk, Herbert Wilmes, Domenico Iacovone, Francesco Baccaro, Alessandro Germak, Emanuele Biolcati, Jan Krynski, Marcin Sękowski, Tomasz Olszak, Andrzej Pachuta, Jonas Ågren, Andreas Engfeldt, René Reudink, Pedro Inacio, Daniel McLaughlin, Geoff Shannon, Marc Eckl, Tim Wilkins, Derek Westrum and Ryan Billson “The European Comparison of Absolute Gravimeters 2011 (ECAG-2011) in Walferdange, Luxembourg: results and recommendations” In _Metrologia_ 50.3 IOP Publishing, 2013, pp. 257–268 DOI: 10.1088/0026-1394/50/3/257
* [11] Olivier Francis, Henri Baumann, Christian Ullrich, Stefaan Castelein, Michel Van Camp, M. Andrade de Sousa, R.. Melhorato, C. Li, J. Xu, D. Su, S. Wu, H. Hu, K. Wu, G. Li, Z. Li, W.-C. Hsieh, V. Pálinkáš, J. Kostelecký, J. Mäkinen, J. Näränen, S. Merlet, F. Pereira Dos Santos, P. Gillot, J. Hinderer, J.-D. Bernard, N. Le Moigne, B. Fores, O. Gitlein, M. Schilling, R. Falk, H. Wilmes, A. Germak, E. Biolcati, C. Origlia, D. Iacovone, F. Baccaro, S. Mizushima, R. De Plaen, G. Klein, M. Seil, R. Radinovic, M. Sękowski, P. Dykowski, I.-M. Choi, M.-S. Kim, A. Borreguero, S. Sainz-Maza, M. Calvo, A. Engfeldt, J. Ågren, R. Reudink, M. Eckl, D. Westrum, R. Billson and B. Ellis “CCM.G-K2 key comparison” In _Metrologia_ 52.1A IOP Publishing, 2015 DOI: 10.1088/0026-1394/52/1a/07009
* [12] C Freier, M Hauth, V Schkolnik, B Leykauf, M Schilling, H Wziontek, H-G Scherneck, J Müller and A Peters “Mobile quantum gravity sensor with unprecedented stability” In _8th symposium on frequency standards and metrology 2015_ 723, Journal of Physics: Conference Series IOP Publishing, Bristol, 2016, pp. 012050 DOI: 10.1088/1742-6596/723/1/012050
* [13] C. Freier “Atom interferometry at geodetic observatories”, 2017 DOI: 10.18452/17795
* [14] “International symposium on Earth and environmental sciences for future generations” 147, International Association of Geodesy Symposia Springer, Cham, 2016 DOI: 10.1007/978-3-319-69170-1
* [15] P Gillot, O Francis, A Landragin, F. Pereira Dos Santos and S Merlet “Stability comparison of two absolute gravimeters: optical versus atomic interferometers” In _Metrologia_ 51.5 IOP Publishing, 2014, pp. L15–L17 DOI: 10.1088/0026-1394/51/5/l15
* [16] P. Gillot, B. Cheng, A. Imanaliev, S. Merlet and F. Pereira Dos Santos “The LNE-SYRTE cold atom gravimeter” In _Proceedings of the European Frequency and Time Forum (EFTF)_ IEEE, 2016 DOI: 10.1109/eftf.2016.7477832
* [17] Olga Gitlein “Absolutgravimetrische Bestimmung der Fennoskandischen Landhebung mit dem FG5-220”, 2009
* [18] Filippo Greco, Valerio Iafolla, Antonio Pistorio, Emiliano Fiorenza, Gilda Currenti, Rosalba Napoli, Alessandro Bonaccorso and Ciro Del Negro “Characterization of the response of spring-based relative gravimeters during paroxysmal eruptions at Etna volcano” In _Earth, Planets and Space_ 66.1, 2014 DOI: 10.1186/1880-5981-66-44
* [19] Kyle S Hardman “A BEC Based Precision Gravimeter and Magnetic Gradiometer: Design and Implementation” https://doi.org/10.25911/5d723b873573a, 2016
* [20] J. Hartwig, S. Abend, C. Schubert, D. Schlippert, H. Ahlers, K. Posso-Trujillo, N. Gaaloul, W. Ertmer and E.. Rasel “Testing the universality of free fall with rubidium and ytterbium in a very large baseline atom interferometer” In _New Journal of Physics_ 17.3 IOP Publishing, 2015 DOI: 10.1088/1367-2630/17/3/035011
* [21] Matt Jaffe, Philipp Haslinger, Victoria Xu, Paul Hamilton, Amol Upadhye, Benjamin Elder, Justin Khoury and Holger Müller “Testing sub-gravitational forces on atoms from a miniature in-vacuum source mass” In _Nature Physics_ 13.10 Nature Publishing Group, 2017, pp. 938–942 DOI: 10.1038/nphys4189
* [22] Z. Jiang, V. Pálinkáš, O. Francis, H. Baumann, J. Mäkinen, L. Vitushkin, S. Merlet, L. Tisserand, P. Jousset, C. Rothleitner, M. Becker, L. Robertsson and E.. Arias “On the gravimetric contribution to watt balance experiments” In _Metrologia_ 50.5 IOP Publishing, 2013, pp. 452–471 DOI: 10.1088/0026-1394/50/5/452
* [23] R. Karcher, A. Imanaliev, S. Merlet and F. Pereira Dos Santos “Improving the accuracy of atom interferometers with ultracold sources” In _New Journal of Physics_ 20.11 IOP Publishing, 2018 DOI: 10.1088/1367-2630/aaf07d
* [24] Mark A. Kasevich and Steven Chu “Atomic interferometry using stimulated Raman transitions” In _Phys. Rev. Lett._ 67.2 American Physical Society (APS), 1991, pp. 181–184 DOI: 10.1103/physrevlett.67.181
* [25] Petr Křen, Vojtech Pálinkáš, Pavel Mašika and Miloš Vaľko “Effects of impedance mismatch and coaxial cable length on absolute gravimeters” In _Metrologia_ 54.2 IOP Publishing, 2017, pp. 161–170 DOI: 10.1088/1681-7575/aa5ba1
* [26] Petr Křen, Vojtech Pálinkáš, Pavel Mašika and Miloš Val’ko “FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters” In _Journal of Geodesy_ 93.2, 2019, pp. 219–227 DOI: 10.1007/s00190-018-1154-y
* [27] Xiong Li and Michel Chouteau “Three-dimensional gravity modeling in all space” In _Surveys in Geophysics_ 19.4 Springer, 1998, pp. 339–368 DOI: 10.1023/A:1006554408567
* [28] J.. Liard, C.. Sanchez, B.. Wood, A.. Inglis and R.. Silliker “Gravimetry for watt balance measurements” In _Metrologia_ 51.2 IOP Publishing, 2014, pp. S32–S41 DOI: 10.1088/0026-1394/51/2/S32
* [29] Christoph Lotz, Yvonne Wessarges, Jörg Hermsdorf, Wolfgang Ertmer and Ludger Overmeyer “Novel active driven drop tower facility for microgravity experiments investigating production technologies on the example of substrate-free additive manufacturing” In _Advances in Space Research_ 61.8, 2018, pp. 1967–1974 DOI: 10.1016/j.asr.2018.01.010
* [30] Jaakko Mäkinen, Heikki Virtanen, Mirjam Bilker-Koivula, Hannu Ruotsalainen, Jyri Näränen and Arttu Raja-Halli “The effect of helium emissions by a superconducting gravimeter on the rubidium frequency standards of absolute gravimeters” In _Proceedings of the 3rd International Gravity Field Service (IGFS)_ 144, International Association of Geodesy Symposia Springer, Cham, 2015, pp. 45–51 DOI: 10.1007/1345_2015_205
* [31] Vincent Ménoret, Pierre Vermeulen, Nicolas Le Moigne, Sylvain Bonvalot, Philippe Bouyer, Arnaud Landragin and Bruno Desruelle “Gravity measurements below $10^{-9}$ g with a transportable absolute quantum gravimeter” In _Scientific Reports_ 8.1 Nature Publishing Group, 2018 DOI: 10.1038/s41598-018-30608-1
* [32] Sébastien Merlet, Alexander Kopaev, Michel Diament, Gérard Geneves, Arnaud Landragin and Franck Pereira Dos Santos “Micro-gravity investigations for the LNE watt balance project” In _Metrologia_ 45.3 IOP Publishing, 2008, pp. 265–274 DOI: 10.1088/0026-1394/45/3/002
* [33] T.. Niebauer, G.. Sasagawa, J.. Faller, R. Hilt and F. Klopping “A new generation of absolute gravimeters” In _Metrologia_ 32.3, 1995, pp. 159–180 DOI: 10.1088/0026-1394/32/3/004
* [34] T.. Niebauer, R. Billson, A. Schiel, D. Westrum and F. Klopping “The self-attraction correction for the FG5X absolute gravity meter” In _Metrologia_ 50.1 IOP Publishing, 2013, pp. 1–8 DOI: 10.1088/0026-1394/50/1/1
* [35] Per-Anders Olsson, Andreas Engfeldt and Jonas Ågren “Investigations of a suspected jump in Swedish repeated absolute gravity time series” In _International symposium on Earth and environmental sciences for future generations_ 147, International Association of Geodesy Symposia Springer, Cham, 2016, pp. 137–143 DOI: 10.1007/1345_2016_250
* [36] Per-Anders Olsson, Kristian Breili, Vegard Ophaug, Holger Steffen, Mirjam Bilker-Koivula, Emil Nielsen, Tõnis Oja and Ludger Timmen “Postglacial gravity change in Fennoscandia – three decades of repeated absolute gravity observations” In _Geophysical Journal International_ 217.2 Oxford University Press (OUP), 2019, pp. 1141–1156 DOI: 10.1093/gji/ggz054
* [37] V. Pálinkáš, O. Francis, M. Val’ko, J. Kostelecký, M. Van Camp, S. Castelein, M. Bilker-Koivula, J. Näränen, A. Lothhammer, R. Falk, M. Schilling, L. Timmen, D. Iacovone, F. Baccaro, A. Germak, E. Biolcati, C. Origlia, F. Greco, A. Pistorio, R. Plaen, G. Klein, M. Seil, R. Radinovic, R. Reudink, P. Dykowski, M. Sękowski, D. Próchniewicz, R. Szpunar, M. Mojzeš, J. Janák, J. Papčo, A. Engfeldt, P.. Olsson, V. Smith, D. Westrum, B. Ellis and B. Lucero “Regional comparison of absolute gravimeters, EURAMET.M.G-K2 key comparison” In _Metrologia_ 54.1A IOP Publishing, 2017 DOI: 10.1088/0026-1394/54/1a/07012
* [38] V. Pálinkáš, J. Liard and Z. Jiang “On the effective position of the free-fall solution and the self-attraction effect of the FG5 gravimeters” In _Metrologia_ 49.4 IOP Publishing, 2012, pp. 552–559 DOI: 10.1088/0026-1394/49/4/552
* [39] Richard H Parker, Chenghui Yu, Weicheng Zhong, Brian Estey and Holger Müller “Measurement of the fine-structure constant as a test of the Standard Model” In _Science_ 360.6385 American Association for the Advancement of Science, 2018, pp. 191–195 DOI: 10.1126/science.aap7706
* [40] A. Peters, K.Y. Chung and S. Chu “High-precision gravity measurements using atom interferometry” In _Metrologia_ 38 IOP Publishing, 2001, pp. 25–61 DOI: 10.1088/0026-1394/38/1/4
* [41] V. Pohánka “Optimum expression for computation of the gravity field of a homogeneous polyhedral body” In _Geophysical Prospecting_ 36.7 Wiley-Blackwell, 1988, pp. 733–751 DOI: 10.1111/j.1365-2478.1988.tb02190.x
* [42] Sebastian M.. Raupach, Andreas Koczwara and Gesine Grosche “Brillouin amplification supports $1\times{}{10}^{-20}$ uncertainty in optical frequency transfer over 1400 km of underground fiber” In _Phys. Rev. A_ 92.2 American Physical Society, 2015 DOI: 10.1103/PhysRevA.92.021801
* [43] Fritz Riehle “Frequency standards” Wiley-VCH, Weinheim, 2004
* [44] Fritz Riehle, Patrick Gill, Felicitas Arias and Lennart Robertsson “The CIPM list of recommended frequency standard values: guidelines and procedures” In _Metrologia_ 55.2 IOP Publishing, 2018, pp. 188–200 DOI: 10.1088/1681-7575/aaa302
* [45] S. Rosat and J. Hinderer “Noise levels of superconducting gravimeters: updated comparison and time stability” In _Bulletin of the Seismological Society of America_ 101.3 Seismological Society of America (SSA), 2011, pp. 1233–1241 DOI: 10.1785/0120100217
* [46] S. Rosat, J. Hinderer, J.-P. Boy, F. Littel, Jean-Daniel Bernard, D. Boyer, A. Mémin, Y. Rogister and S. Gaffet “A two-year analysis of the iOSG-24 superconducting gravimeter at the low noise underground laboratory (LSBB URL) of Rustrel, France: Environmental noise estimate” In _Journal of Geodynamics_ 119 Elsevier BV, 2018, pp. 1–8 DOI: 10.1016/j.jog.2018.05.009
* [47] L. Roscoe “Stereolithography interface specification”, 1988
* [48] G Rosi, G D’Amico, L Cacciapuoti, F Sorrentino, M Prevedelli, M Zych, Č Brukner and GM Tino “Quantum test of the equivalence principle for atoms in coherent superposition of internal energy states” In _Nature Communications_ 8 Nature Publishing Group, 2017 DOI: 10.1038/ncomms15529
* [49] G Rosi, F Sorrentino, L Cacciapuoti, M Prevedelli and GM Tino “Precision measurement of the Newtonian gravitational constant using cold atoms” In _Nature_ 510.7506 Nature Publishing Group, 2014, pp. 518–521 DOI: 10.1038/nature13433
* [50] D. Savoie, M. Altorio, B. Fang, L.. Sidorenkov, R. Geiger and A. Landragin “Interleaved atom interferometry for high-sensitivity inertial measurements” In _Science Advances_ 4.12, 2018 DOI: 10.1126/sciadv.aau7948
* [51] Manuel Schilling “Kombination von klassischen Gravimetern mit Quantensensoren”, 2019 DOI: 10.15488/4710
* [52] Manuel Schilling and Olga Gitlein “Accuracy estimation of the IfE gravimeters Micro-g LaCoste gPhone-98 and ZLS Burris Gravity Meter B-64” In _IAG 150 years_ 143, International Association of Geodesy Symposia Springer, Cham, 2015, pp. 249–256 DOI: 10.1007/1345_2015_29
* [53] Manuel Schilling and Ludger Timmen “Traceability of the Hannover FG5X-220 to the SI units” In _International symposium on Earth and environmental sciences for future generations_ 147, International Association of Geodesy Symposia Springer, Cham, 2016, pp. 69–75 DOI: 10.1007/1345_2016_226
* [54] Manuel Schilling, Ludger Timmen and R. Kumme “The gravity field in force standard machines” In _Proceedings of the IMEKO TC3, TC5, TC22 Joint Conference_ , 2017 DOI: 10.15488/3073
* [55] D. Schlippert, J. Hartwig, H. Albers, L.. Richardson, C. Schubert, A. Roura, W.. Schleich, W. Ertmer and E.. Rasel “Quantum test of the universality of free fall” In _Phys. Rev. Lett._ 112.20 American Physical Society, 2014 DOI: 10.1103/PhysRevLett.112.203002
* [56] D. Schlippert, C. Meiners, R.. Rengelink, C. Schubert, D. Tell, É. Wodey, K.. Zipfel, W. Ertmer and E.. Rasel “Matter wave interferometry for inertial sensing and tests of fundamental physics” In _Proceedings of the 8th meeting on CPT and Lorentz symmetry_ , 2020, pp. 37–40 DOI: 10.1142/9789811213984_0010
* [57] F. Sorrentino, Q. Bodart, L. Cacciapuoti, Y.-H. Lien, M. Prevedelli, G. Rosi, L. Salvi and G.. Tino “Sensitivity limits of a Raman atom interferometer as a gravity gradiometer” In _Phys. Rev. A_ 89.2 American Physical Society, 2014 DOI: 10.1103/PhysRevA.89.023607
* [58] L Timmen and H-G Wenzel “Improved gravimetric Earth tide parameters for station Hannover” In _Bulletin d’Information des Marées Terrestres_ 119, 1994, pp. 8834–8846
* [59] L. Timmen, Ch. Rothleitner, M. Reich, S. Schröder and M. Cieslack “Investigation of Scintrex CG-6 gravimeters in the Gravity Meter Calibration System Hannover” In _avn - Allgemeine Vermessungs Nachrichten_ 127.4, 2020, pp. 155–162
* [60] Ludger Timmen “Precise definition of the effective measurement height of free-fall absolute gravimeters” In _Metrologia_ 40.2 IOP Publishing, 2003, pp. 62–65 DOI: 10.1088/0026-1394/40/2/310
* [61] Ludger Timmen, Reinhard Falk, Gerald Gabriel, Alexander Lothhammer, Manuel Schilling and Detlef Vogel “Das Relativgravimeter-Kalibriersystem Hannover für $10^{-4}$-Maßstabsbestimmungen (The Relative Gravimeter Calibration System Hannover for $10^{-4}$ Scale Determination)” In _avn - Allgemeine Vermessungs Nachrichten_ 125.5, 2018, pp. 140–150
* [62] Ludger Timmen and Olga Gitlein “The capacity of the Scintrex Autograv CG-3M no. 4492 gravimeter for “absolute-scale” surveys” In _Revista Brasileira de Cartografia_ 2.56, 2004, pp. 89–95
* [63] Ludger Timmen, Olga Gitlein, Volker Klemann and Detlef Wolf “Observing gravity change in the Fennoscandian uplift area with the Hanover absolute gravimeter” In _Pure and Applied Geophysics_ 169.8 Springer Science+Business Media LLC, 2011, pp. 1331–1342 DOI: 10.1007/s00024-011-0397-9
* [64] Ludger Timmen, Olga Gitlein, J. Müller, G. Strykowski and R. Forsberg “Absolute gravimetry with the Hannover meters JILAg-3 and FG5-220, and their deployment in a Danish-German cooperation” In _zfv – Zeitschrift für Geodäsie, Geoinformation und Landmanagement_ 133.3, 2008, pp. 149–163
* [65] W. Torge and Jürgen Müller “Geodesy” Walter de Gruyter, Berlin/Boston, 2012
* [66] C. Ufrecht and E. Giese “Perturbative operator approach to high-precision light-pulse atom interferometry” In _Phys. Rev. A_ 101 American Physical Society, 2020 DOI: 10.1103/PhysRevA.101.053615
* [67] M. Van Camp, O. Viron, A. Watlet, B. Meurers, O. Francis and C. Caudron “Geophysics from terrestrial time-variable gravity measurements” In _Reviews of Geophysics_ 55.4, 2017, pp. 938–992 DOI: 10.1002/2017rg000566
* [68] L.. Vitushkin “Measurement standards in gravimetry” In _Gyroscopy and Navigation_ 2.3 Pleiades Publishing Ltd, 2011, pp. 184–191 DOI: 10.1134/s2075108711030126
* [69] A. Wanner, G. Bergmann, A. Bertolini, T. Fricke, H. Lück, C.. Mow-Lowry, K.. Strain, S. Gossler and K. Danzmann “Seismic attenuation system for the AEI 10 meter Prototype” In _Classical and Quantum Gravity_ 29.24 IOP Publishing, 2012 DOI: 10.1088/0264-9381/29/24/245007
* [70] Hans-Georg Wenzel “Schwerenetze” In _Geodätische Netze in der Landes-und Ingenieurvermessung II: Vorträge des Kontaktstudiums Februar 1985 in Hannover_ Konrad Wittwer, Stuttgart, 1985, pp. 457–486
* [71] É. Wodey, D. Tell, E.. Rasel, D. Schlippert, R. Baur, U. Kissling, B. Kölliker, M. Lorenz, M. Marrer, U. Schläpfer, M. Widmer, C. Ufrecht, S. Stuiber and P. Fierlinger “A scalable high-performance magnetic shield for Very Long Baseline Atom Interferometry” In _Review of Scientific Instruments_ 91, 2020 DOI: 10.1063/1.5141340
* [72] M.-K. Zhou, Z.-K. Hu, X.-C. Duan, B.-L. Sun, L.-L. Chen, Q.-Z. Zhang and J. Luo “Performance of a cold-atom gravimeter with an active vibration isolator” In _Phys. Rev. A_ 86.4 American Physical Society, 2012 DOI: 10.1103/PhysRevA.86.043630
# Calculating Fluctuations and Self-Correlations Numerically for Causal Charge
Diffusion in Relativistic Heavy-Ion Collisions
Aritra De School of Physics & Astronomy, University of Minnesota,
Minneapolis, MN 55455, USA Christopher Plumberg Department of Astronomy and
Theoretical Physics, Lund University, Sölvegatan 14A, SE-223 62 Lund, Sweden
Joseph I. Kapusta School of Physics & Astronomy, University of Minnesota,
Minneapolis, MN 55455, USA
###### Abstract
We study the propagation and diffusion of electric charge fluctuations in the
Bjorken hydrodynamic model with both white and Cattaneo noise using purely
numerical methods. We show that a global lattice of noise fluctuations is
required to fully calculate the two-point correlators of charge. We solve the
stochastic differential equations that arise from the charge conservation
equation on the lattice. We explicitly identify the self-correlation term in
the case of Cattaneo noise and provide a physical interpretation. We provide a
numerical recipe to remove this contribution from the full two-point
correlators. Finally, we calculate the balance functions for charged hadrons.
By limiting the speed of signal propagation, we observe the expected narrowing
of the balance functions after removing the self-correlations.
## I Introduction
Relativistic hydrodynamics is used to study not only the equation of state but
also dynamical quantities, such as the transport coefficients, of the quark-
gluon plasma. The applicability of hydrodynamics is justified if the mean free
paths of the particles are small compared to the distances over which
thermodynamic quantities vary. It turns out that hydrodynamics is very
successful in modeling high energy nuclear collisions. There are experimental
facilities which produce and study quark-gluon plasma: the Relativistic Heavy
Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron
Collider (LHC) at CERN. Fluid equations describe conservation of energy,
momentum, baryon number, electric charge, and strangeness. Anisotropic
particle production, such as elliptic flow, in heavy ion collisions gives
credence to the use of hydrodynamics in simulating these collisions. It has
been successful in describing various properties like particle spectra,
particle correlations, and in obtaining values of transport quantities like
the ratio of shear viscosity to entropy density $\eta/s$. By comparing
particle spectra with experimental data, hydrodynamical simulations also help
in understanding the initial state, its fluctuations, and hence properties of
strongly interacting matter in general. Initially, the assumption of ideal
hydrodynamics worked very well in describing the data which indicated that the
system was strongly interacting. Later, important aspects like the lattice-
computed QCD equation of state and viscous properties were taken into account
to study transport properties of quark-gluon plasma with more precision.
The fluctuation-dissipation theorem relates the dissipative properties of a
system to its hydrodynamical fluctuations. In particular, it allows us to
infer quantities like shear and bulk viscosity, and electrical conductivity,
from the magnitude of fluctuations. Hydrodynamical fluctuations have also been
used to study static critical phenomena torres near a possible critical
point. More recently, there have been studies of dynamic critical phenomena
near a QCD critical point teaney ; yiyin ; nahrgang ; Bluhm . Critical points
are characterized by large fluctuations. This led to the suggestion to study
fluctuations in conserved quantities, such as electric charge, baryon number,
and strangeness on an event-by-event basis stephanov_shuryak . It has also
been suggested to study non-gaussianities (higher order cumulants) of the
fluctuations near critical points as they are more sensitive to large
correlation lengths stephanovprl . Correlation lengths theoretically diverge
near a critical point, but in heavy-ion collisions they are limited by the
finite system size stephanov_shuryak as well as by the finite lifetime of the
system berdnikov .
Thus it becomes imperative to study the hydrodynamics of fluctuations in the
context of heavy ion collisions. The relativistic theory of hydrodynamic
fluctuations in the context of heavy-ion collisions was introduced in Ref.
kapusta2012 . In the current work, we focus on calculating the two-point
correlations of charge fluctuations and the resulting balance functions of
pions. The balance function measures the difference between the probability of
finding a particle of opposite charge and that of finding a particle of the
same charge in another fluid cell, given a charged particle in a given fluid
cell spratt . This problem
was studied analytically in the 1+1 dimensional Bjorken model in Ref.
plumbergkapusta . Analytic calculations are not possible for state of the art
3+1 dimensional, non-boost invariant, hydrodynamics. In preparation for
extensions to modern hydrodynamic models, we develop methods to
solve the relevant stochastic differential equations numerically. Particular
attention is paid to the physical interpretation of self-correlations and how
they can be subtracted to make comparison to experimental data.
The outline of the paper is as follows. In Sec. II we review the normal
diffusion, the Cattaneo, and the Gurtin-Pipkin equations, and discuss how
self-correlations arise. In Sec. III we outline the application of the
relevant stochastic differential equations in the context of the Bjorken
hydrodynamic model for heavy-ion collisions. In Sec. IV we present solutions
to those equations. In Sec. V we show how self-correlations can be clearly
identified. In Sec. VI we calculate the balance functions that relate theory
to experiment. Our conclusions are presented in Sec. VII. Details of how the
stochastic differential equations are solved are presented in the Appendices.
The numerical method is readily transferable to heavy-ion collisions which
have no spatial symmetries and is thus useful for future calculations.
## II Noise, Fluctuations and Self-Correlations
Since the usual diffusion equation leads to instantaneous signal propagation,
which is inconsistent with special relativity, one needs a diffusion equation
which is the same order in spatial and temporal derivatives with
characteristic relaxation times and lengths. In this paper we solve the
simplest diffusion equation satisfying this condition, called the Cattaneo
equation catteneo , numerically. The resulting differential equation is a
stochastic differential equation (SDE) because it contains random noise terms.
The way to solve SDEs is to solve the differential equation for a large number
of events (here on the order of 1 million or more) and study the correlation
functions. A finite difference method is used to solve the SDE.
Consider the ordinary diffusion equation with white noise. In the context of
the Bjorken model, which has boost invariance and no dependence on transverse
coordinates, the two variables are proper time $\tau=\sqrt{t^{2}-z^{2}}$ and
spatial rapidity $\xi={\textstyle{\frac{1}{2}}}\ln[(t+z)/(t-z)]$, where the
beam axis is along the $z$ direction. The noise $f$, appropriately defined
(see below), is a dimensionless random variable with correlator
$\langle
f(\tau_{1},\xi_{1})f(\tau_{2},\xi_{2})\rangle=\frac{N(\tau_{2})}{2\pi}\delta(\tau_{1}-\tau_{2})\delta(\xi_{1}-\xi_{2})\,,$
(1)
which is a product of Dirac $\delta$-functions in time and space with
normalization determined by the fluctuation-dissipation theorem
$N(\tau)=\frac{4\pi\sigma_{Q}(\tau)T(\tau)}{A\tau s^{2}(\tau)}\,.$ (2)
Here $\sigma_{Q}$ is the charge conductivity, $T$ is the temperature, $s$ is
the entropy density, and $A$ is the transverse area. To generate this
numerically on a discrete lattice with spacings $\Delta\xi$ and $\Delta\tau$,
we sample $f$ from a normal distribution with zero mean and variance
$N(\tau)/(2\pi\Delta\xi\Delta\tau)$. The analysis of how finite difference
methods work computationally for solving SDEs is discussed in Appendix A.
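A minimal sketch of this sampling step, assuming NumPy and with function and argument names of our own choosing (the physical inputs must be supplied in consistent units):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_white_noise(tau, d_tau, d_xi, n_cells, sigma_Q, T, s, A):
    # Fluctuation-dissipation normalization, Eq. (2)
    N = 4.0 * np.pi * sigma_Q * T / (A * tau * s**2)
    # Discretized delta functions give variance N/(2 pi d_xi d_tau), Eq. (1)
    return rng.normal(0.0, np.sqrt(N / (2.0 * np.pi * d_xi * d_tau)), size=n_cells)
```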
Consider the difference between white and colored noise. The standard two-
point function for white noise in frequency and momentum space is
$\displaystyle\langle\tilde{f}(\omega_{1},k_{1})\tilde{f}(\omega_{2},k_{2})\rangle$
$\displaystyle=$ $\displaystyle\int
d\tau_{1}\,d\tau_{2}\,d\xi_{1}\,d\xi_{2}\,e^{-i(k_{1}\xi_{1}+k_{2}\xi_{2})}\,e^{-i(\omega_{1}\tau_{1}+\omega_{2}\tau_{2})}\langle
f(\tau_{1},\xi_{1})f(\tau_{2},\xi_{2})\rangle$ (3) $\displaystyle=$
$\displaystyle\delta(k_{1}+k_{2})\tilde{N}(\omega_{1}+\omega_{2})$
where $\tilde{N}$ is the Fourier transform of $N$. Generalizing this to
Cattaneo noise (which is an example of colored noise), we recall that the
two-point function for the noise obeys kapustayoung .
$\langle(\tau_{Q}\,\partial\tau_{1}+1)\tilde{f}(k_{1},\tau_{1})(\tau_{Q}\,\partial\tau_{2}+1)\tilde{f}(k_{2},\tau_{2})\rangle=N(\tau_{1})\delta(\tau_{1}-\tau_{2})\delta(k_{1}+k_{2})\,$
(4)
where $\tau_{Q}$ is a relaxation time. In frequency and momentum space this
becomes
$\langle\tilde{f}(\omega_{1},k_{1})\tilde{f}(\omega_{2},k_{2})\rangle=\frac{\delta(k_{1}+k_{2})\tilde{N}(\omega_{1}+\omega_{2})}{(i\tau_{Q}\omega_{1}+1)(i\tau_{Q}\omega_{2}+1)}\,.$
(5)
The noise correlator is no longer a Dirac $\delta$-function in time;
instead, it is smeared out, hence the name colored noise.
The following three figures will help illustrate some of the physics to come.
Figure 1 shows a fluctuation, represented by a star, in a particular spacetime
cell. The signal, represented by bursts, is transmitted to the two adjacent
spatial cells in the next time step. Hence those two cells have correlated
fluctuations. This type of correlation can arise from either white or colored
noise. Figure 2 shows a fluctuation in one spacetime cell with its signal
transmitted to two spacetime cells two time steps later. This type of
correlation can also happen with either white or colored noise. Figure 3 shows
a situation that only happens with colored noise. The two stars are
correlated, and their signals lead to correlations between the same two cells
as shown in the previous figures.
Self-correlations arise from correlations in the same spatial cell. For white
noise this means the star and the burst are in the same spacetime cell. In
discretized spacetime this leads to a Kronecker $\delta$-function in $\xi$,
while in the continuum limit this leads to a Dirac $\delta$-function. The
latter is somewhat unphysical, since all correlations have some finite extent.
For colored noise, the self-correlation begins in the cell hosting the
original fluctuation, and then continues in subsequent time steps but always
in the same spatial cell due to the time-correlated nature of colored noise.
Noise generated at a previous time in the same spatial cell will
hydrodynamically evolve to a correlated charge fluctuation in a different
spatial fluid cell. Hence the self-correlation will be non-trivial for colored
noise.
Figure 1: An example of either white or colored noise. A fluctuation in one
cell, represented by a star, causes a correlation between two cells in the
next time step, represented by bursts, separated in space from each other and
from the original fluctuation. (color online) Figure 2: An example of either
white or colored noise. A fluctuation in one cell, represented by a star,
causes a correlation between two cells two time steps later, represented by
bursts, but only one is separated in space from the original fluctuation.
(color online) Figure 3: An example of colored noise. Fluctuations at the same
point in space but at different times are correlated, as represented by the
stars. This results in a correlation between the two cells, represented by
bursts. (color online)
Figure 4 shows another way to visualize the colored Cattaneo noise. At a fixed
spatial cell, correlations arise at different times due to $\tau_{Q}>0$.
Correlations also propagate to other spatial cells with increasing time via a
Green’s function. The mathematical formalism and details of how it is
implemented numerically will be presented in the following sections.
Figure 4: Schematic of the lattice setup for Cattaneo noise.
The final charge correlations are determined at some $\tau_{f}$. One must
integrate over all prior times $\tau_{i}\leq\tau\leq\tau_{f}$ to obtain the
final-time charge correlators. Thus one can define self-correlations as the
correlation of a charge fluctuation generated in $\xi_{1}$ at final time
$\tau_{f}$ with another charge fluctuation generated at the same $\xi_{1}$ but
at a previous time, which has therefore had time to travel to a different
$\xi_{2}$ by $\tau_{f}$. This is non-trivial for colored noise because colored
noise generated at the same $\xi$ is correlated in time.
One can go further and consider the Gurtin-Pipkin noise gurtin which
introduces a noise correlation in spatial rapidity in addition to the
correlation in proper time. Gurtin-Pipkin noise has been dealt with
analytically in Ref. kapustayoung . In Cartesian coordinates Gurtin-Pipkin
noise results in the following diffusion equation
$\left[\frac{\partial}{\partial
t}-D_{Q}\nabla^{2}+\tau_{Q}\frac{\partial^{2}}{\partial
t^{2}}+\tau_{2}^{2}\frac{\partial^{3}}{\partial
t^{3}}-\tau_{3}D_{Q}\frac{\partial}{\partial t}\nabla^{2}\right]n_{Q}=0\,.$
(6)
Numerical simulation of Gurtin-Pipkin noise is deferred to a future work.
## III Diffusion in Boost Invariant 1+1 Hydrodynamics
This section is a mini-review of the problem addressed previously in Ref.
plumbergkapusta to help set up the use of numerical methods for solving the
resulting SDE. We will work in 1+1 dimensional boost-invariant Bjorken
hydrodynamics. The longitudinal boost-invariance implies that the initial
conditions for local variables are only functions of the proper time $\tau$.
We neglect the bulk and shear viscosities in order to focus on charge
transport.
The energy-momentum tensor for an ideal fluid is
$T^{\mu\nu}=wu^{\mu}u^{\nu}-pg^{\mu\nu}\,.$ (7)
We take the Landau-Lifshitz approach where $u^{\mu}$ is the velocity of energy
transport. The electric current takes the form
$J_{Q}^{\mu}=n_{Q}u^{\mu}+\Delta J^{\mu}$ (8)
where $n_{Q}$ is the proper charge density and $\Delta J^{\mu}$ is the
dissipative part. In first-order viscous fluid dynamics $\Delta J^{\mu}$ takes
the form
$\Delta J^{\mu}=D_{Q}\Delta^{\mu}n_{Q}=\sigma_{Q}\Delta^{\mu}\mu_{Q}$ (9)
where $\mu_{Q}$ is the charge chemical potential, $\sigma_{Q}$ is the charge
conductivity and $\Delta^{\mu}$ is the transverse derivative
$\Delta^{\mu}=\partial^{\mu}-u^{\mu}(u\cdot\partial)\,.$ (10)
Conventional charge diffusion follows the usual diffusion equation
$\left(\frac{\partial}{\partial t}-D_{Q}\nabla^{2}\right)n_{Q}=0\,.$ (11)
The diffusion constant $D_{Q}$ and charge conductivity are related by the
Einstein relation $D_{Q}=\sigma_{Q}/\chi_{Q}$, where $\chi_{Q}$ is the
electric charge susceptibility defined by
$\chi_{Q}=\frac{\partial n_{Q}(T,\mu_{Q})}{\partial\mu_{Q}}\,.$ (12)
The diffusion equation leads to an infinite speed of propagation, which is
unphysical and not suitable for hydrodynamic simulations of heavy-ion
collisions. Therefore the usual diffusion equation is replaced by one with a
second derivative in time and a relaxation time factor $\tau_{Q}$.
$\left(\frac{\partial}{\partial
t}-D_{Q}\nabla^{2}+\tau_{Q}\frac{\partial^{2}}{\partial t^{2}}\right)n_{Q}=0$
(13)
This equation is called the Cattaneo equation catteneo . It is a combination
of the diffusion equation with the wave equation. The dissipative current gets
modified to
$\Delta
J^{\mu}=D_{Q}\Delta^{\mu}\left[\frac{1}{1+\tau_{Q}(u\cdot\partial)}\right]n_{Q}$
(14)
One can show that high frequency waves travel at a speed of
$v_{Q}=\sqrt{D_{Q}/\tau_{Q}}$ kapustayoung . The fluctuation-dissipation
theorem relates the two-point function, which provides a measure of the
variance of fluctuations, to the dissipation from diffusion. A stochastic
noise term $I^{\mu}$ is therefore added to the charge current.
$J^{\mu}=n_{Q}u^{\mu}+\Delta J^{\mu}+I^{\mu}$ (15)
One-point functions vanish and the two-point functions are determined by the
fluctuation-dissipation theorem. For the usual diffusion equation
$\langle I^{\mu}(x)\rangle=0\qquad\langle
I^{\mu}(x_{1})I^{\nu}(x_{2})\rangle=2\sigma_{Q}T\,h^{\mu\nu}\delta(x_{1}-x_{2})$
(16)
where $h^{\mu\nu}=u^{\mu}u^{\nu}-g^{\mu\nu}$ is the transverse projector. This
is white noise. In the Cattaneo equation the fluctuations are
$\langle
I^{i}(x_{1})I^{j}(x_{2})\rangle=\frac{\sigma_{Q}T}{\tau_{Q}}\delta(\mbox{\boldmath$x$}_{1}-\mbox{\boldmath$x$}_{2})\,e^{-|t_{1}-t_{2}|/\tau_{Q}}\,\delta_{ij}$
(17)
The delta function in time is replaced by an exponential decay function. In
the limit $\tau_{Q}\rightarrow 0$ this two-point function becomes the Dirac
$\delta$-function for white noise.
The following are the relations between the Cartesian coordinates and the
proper time and spatial rapidity appropriate for the Bjorken model.
$\displaystyle\begin{aligned} t&=&\tau\cosh\xi\qquad z&=\tau\sinh\xi\\\
\tau&=&\sqrt{t^{2}-z^{2}}\qquad\xi&=\tanh^{-1}\left(\frac{z}{t}\right)\\\
\end{aligned}$ (18)
The flow velocity is
$u^{0}=\cosh\xi\quad u^{z}=\sinh\xi\,.$ (19)
The transverse derivatives are
$\displaystyle\Delta^{0}=-\frac{\sinh\xi}{\tau}\frac{\partial}{\partial\xi}\qquad\Delta^{3}=-\frac{\cosh\xi}{\tau}\frac{\partial}{\partial\xi}\quad\text{with}\quad
u\cdot\partial=\frac{\partial}{\partial\tau}\,.$ (20)
The fluctuating contribution to the current is written as
$\displaystyle I^{0}$ $\displaystyle=$ $\displaystyle
s(\tau)f(\xi,\tau)\sinh\xi$ (21) $\displaystyle I^{3}$ $\displaystyle=$
$\displaystyle s(\tau)f(\xi,\tau)\cosh\xi\,.$ (22)
The entropy density $s$ is factored out to make $f$ dimensionless. The
background fluid equations for the proper charge density and entropy density
are
$\displaystyle\frac{ds}{d\tau}+\frac{s}{\tau}=0\;\;$
$\displaystyle\Rightarrow$
$\displaystyle\;\;s(\tau)=\frac{s_{i}\tau_{i}}{\tau}$ (23)
$\displaystyle\frac{dn_{Q}}{d\tau}+\frac{n_{Q}}{\tau}=0\;\;$
$\displaystyle\Rightarrow$
$\displaystyle\;\;n_{Q}(\tau)=\frac{n_{i}\tau_{i}}{\tau}\,.$ (24)
These are a manifestation of the conservation of entropy and charge,
respectively. The $s_{i}$ and $n_{i}$ are the densities at some initial time
$\tau_{i}$. We take the initial proper charge density $n_{i}$ to be zero,
hence the average charge density for subsequent times is zero as well.
Now let us look at the charge current conservation equation
$\partial_{\mu}J^{\mu}=0$. It is convenient to define the variable
$X=\tau\delta n$ because, in the absence of fluctuations, this quantity is
conserved during the hydrodynamic evolution. After a few steps of algebra the
full charge conservation equation becomes
$\left[\frac{\tau}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial
X}{\partial\tau}+\frac{\tau_{Q}\tau}{D_{Q}\chi_{Q}T}\frac{\partial^{2}X}{\partial\tau^{2}}-\frac{1}{\tau\chi_{Q}T}\frac{\partial^{2}X}{\partial\xi^{2}}$
$+\left[\frac{\tau
s}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau
s}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial
f}{\partial\xi}+\frac{\tau_{Q}\tau
s}{D_{Q}\chi_{Q}T}\frac{\partial^{2}f}{\partial\xi\partial\tau}=0\,.$ (25)
For the case $\tau_{Q}=0$ (usual diffusion equation) this simplifies to
$\frac{\partial
X}{\partial\tau}-\frac{D_{Q}}{\tau^{2}}\frac{\partial^{2}X}{\partial\xi^{2}}+s\frac{\partial
f}{\partial\xi}=0\,.$ (26)
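For orientation, one possible explicit update of Eq. (26) on the rapidity lattice is sketched below. This is illustrative only, not the scheme of Appendix A, and it assumes periodic boundaries purely for brevity; all names are ours.

```python
import numpy as np

def step_eq26(X, f, tau, d_tau, d_xi, D_Q, s):
    # Forward-Euler step of dX/dtau = (D_Q/tau^2) d^2X/dxi^2 - s df/dxi
    lap = (np.roll(X, -1) - 2.0 * X + np.roll(X, 1)) / d_xi**2
    dfdxi = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * d_xi)
    return X + d_tau * (D_Q / tau**2 * lap - s * dfdxi)
```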
Due to boost invariance it is useful to use the Fourier transform
$X(\xi,\tau)=\int_{-\infty}^{\infty}\frac{dk}{2\pi}e^{ik\xi}\tilde{X}(k,\tau)\,,$
(27)
and similarly for $f$. Then the SDE for white noise is
$\frac{\partial}{\partial\tau}\tilde{X}+\frac{D_{Q}k^{2}}{\tau^{2}}\tilde{X}=-iks\tilde{f}$
(28)
and for colored Cattaneo noise
$\left[\frac{\tau}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau}{D_{Q}\chi_{Q}T}\right)\right]\frac{\partial\tilde{X}}{\partial\tau}+\frac{\tau_{Q}\tau}{D_{Q}\chi_{Q}T}\frac{\partial^{2}\tilde{X}}{\partial\tau^{2}}+\frac{k^{2}}{\tau\chi_{Q}T}\tilde{X}$
$=-ik\left[\frac{\tau
s}{D_{Q}\chi_{Q}T}+\tau_{Q}\frac{\partial}{\partial\tau}\left(\frac{\tau
s}{D_{Q}\chi_{Q}T}\right)\right]\tilde{f}-i\frac{k\tau_{Q}\tau
s}{D_{Q}\chi_{Q}T}\frac{\partial\tilde{f}}{\partial\tau}\,.$ (29)
For the sake of comparison and for definiteness, we follow Ref.
plumbergkapusta and assume both $D_{Q}$ and $\tau_{Q}$ are constant within
the range of temperature to be considered. This means that high frequency
waves propagate with a constant value of $v_{Q}$. For the same reasons we
assume that $s\sim T^{3}$ and $\chi\sim T^{2}$. Hence $T\sim\tau^{-1/3}$.
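These assumptions give the background evolution in closed form. A minimal sketch, using the initial values quoted in Sec. IV below as defaults and the equation of state $\chi_{Q}=\frac{2}{3}T^{2}$ used there:

```python
def temperature(tau, T_i=350.0, tau_i=0.5):
    # s ~ T^3 together with s = s_i tau_i / tau implies T ~ tau^(-1/3)
    return T_i * (tau_i / tau)**(1.0 / 3.0)

def entropy_density(tau, s_i, tau_i=0.5):
    # Entropy conservation in Bjorken flow, Eq. (23)
    return s_i * tau_i / tau

def susceptibility(tau):
    # chi ~ T^2; the prefactor 2/3 is the equation of state of Sec. IV
    return (2.0 / 3.0) * temperature(tau)**2
```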
## IV Solving the Stochastic Differential Equations
We start by solving the stochastic differential equation for white noise. As
explained earlier, we will solve it on a spacetime lattice and choose spacings
$\Delta\xi=0.09$ and $\Delta\tau=10^{-4}$ fm/c. We set the parameters such
that $(\chi(\tau_{f})T_{f})/(\tau_{f}\Delta\xi)=0.5122$ MeV$^{3}$ fm$^{-3}$. We
source the noise function $f$ from a normal distribution with mean $0$ and
variance $N(\tau)/(2\pi\Delta\tau\Delta\xi)$, as described in Sec. II. The
density-density correlator arising from the noise fluctuation which solves the
SDE in our discretized system, evaluated at the final time $\tau_{f}$, has the
analytical form
$\langle\delta n(\xi_{1},\tau_{f})\delta
n(\xi_{2},\tau_{f})\rangle=\frac{\chi_{Q}(\tau_{f})T_{f}}{A\tau_{f}}\left[\frac{\delta_{\xi_{1},\xi_{2}}}{\Delta\xi}-\frac{1}{\sqrt{\pi
w^{2}}}e^{-(\xi_{1}-\xi_{2})^{2}/w^{2}}\right]$ (30)
where
$w^{2}=8D_{Q}\left(\frac{1}{\tau_{i}}-\frac{1}{\tau_{f}}\right)\,.$ (31)
In the continuum limit
$\delta_{\xi_{1},\xi_{2}}/\Delta\xi\rightarrow\delta(\xi_{1}-\xi_{2})$. The
parameters chosen for this work are the same as in Ref. plumbergkapusta ,
namely $\tau_{i}=0.5$ fm/c, $\tau_{f}=6.352$ fm/c, $T_{i}=350$ MeV, and
$T_{f}=150$ MeV. We use diffusion constant $D_{Q}=0.162\>\text{fm}$ which is
an average over the temperature interval from 150 to 350 MeV taken from Ref.
Aarts . The equation of state used is the same as in Ref. torres , which is
$\chi_{Q}=\frac{2}{3}T^{2}$ (including up, down and strange quarks).
The details of how we solve an SDE are discussed in Appendix A. The solution
is presented in Fig. 5. The dots represent the result of the SDE simulation
for ten million random events. The solid curve is the Gaussian from Eq. (30);
it overlays the dots within the width of the line. The Kronecker
$\delta$-function at $\xi=0$ is clearly evident.
Figure 5: White noise density-density correlation function for 10 million
events. The solid curve is the Gaussian from Eq. 30. (color online)
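As a concrete illustration of the discretization just described, the following minimal Python sketch (our illustration, not the authors' code) performs one explicit-Euler update of Eq. (26) for $X=\tau\,\delta n$. Periodic boundaries in $\xi$, the unit noise normalization, and the helper `s_of_tau` are assumptions of the sketch.

```python
import numpy as np

def euler_step_white_noise(X, tau, dtau, dxi, D_Q, s_of_tau, rng):
    """One explicit-Euler step of Eq. (26) for X = tau * delta-n.

    The white noise f is drawn with standard deviation 1/sqrt(dtau*dxi)
    so that <f f> approximates a Dirac delta on the lattice (Appendix A);
    any overall noise normalization N(tau) is omitted for simplicity.
    """
    f = rng.normal(0.0, 1.0 / np.sqrt(dtau * dxi), size=X.shape)
    d2X = (np.roll(X, -1) - 2.0 * X + np.roll(X, 1)) / dxi**2  # d^2 X / d xi^2
    dfdxi = (np.roll(f, -1) - f) / dxi                         # forward d f / d xi
    return X + dtau * (D_Q / tau**2 * d2X - s_of_tau(tau) * dfdxi)
```

Averaging the product $\delta n(\xi_{1})\,\delta n(\xi_{2})$ at $\tau_{f}$ over many independent events then reproduces the correlator of Eq. (30).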
Next we turn to colored noise. We have to generate a noise that has the
desired correlation in proper time but is uncorrelated in rapidity. The way we
do that is by solving another SDE which is called the Langevin equation.
$f+\tau_{Q}\frac{\partial f}{\partial\tau}=\zeta$ (32)
Here $\zeta$ is the regular white noise. The relaxation time $\tau_{Q}$
smooths the Dirac $\delta$-correlation in proper time. The $\tau_{Q}$ also
sets a maximum mode velocity via $v_{Q}^{2}=D_{Q}/\tau_{Q}$, thereby
removing instantaneous signal propagation.
Langevin equation (with rapidity dependences suppressed) is
$\langle
f(\tau_{1})f(\tau_{2})\rangle=\frac{N(\tau_{2})}{4\pi\tau_{Q}}\left[e^{-|\tau_{1}-\tau_{2}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{1}-\tau_{2})/\tau_{Q}}\right]\equiv{\cal
N}(\tau_{1},\tau_{2})\,.$ (33)
The derivation is given in Appendix B. The numerically computed two-point
function is plotted in Fig. 6. The expected result (33) and the numerical
result are consistent for ten million simulated events.
Figure 6: Comparison of numerical and analytical results for $v_{Q}^{2}=0.16$
with rapidity dependences suppressed. (color online)
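The colored-noise generation can be sketched as a simple Euler-Maruyama integration of Eq. (32); the helper `N_of_tau`, which supplies the white-noise normalization of Eq. (70), is an assumption of this illustration.

```python
import numpy as np

def generate_cattaneo_noise(taus, tau_Q, N_of_tau, rng):
    """Euler-Maruyama integration of f + tau_Q df/dtau = zeta, f(tau_i) = 0.

    zeta is white noise with <zeta(t1) zeta(t2)> = N(t1) delta(t1 - t2),
    so on the lattice its standard deviation is sqrt(N / dtau).
    """
    f = np.zeros(len(taus))
    for n in range(len(taus) - 1):
        dtau = taus[n + 1] - taus[n]
        zeta = rng.normal(0.0, np.sqrt(N_of_tau(taus[n]) / dtau))
        f[n + 1] = f[n] + (dtau / tau_Q) * (zeta - f[n])
    return f
```

Averaging $f(\tau_{1})f(\tau_{2})$ over many such realizations reproduces the exponential correlator of Eq. (33).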
The grid sizes are chosen to obey the Courant-Friedrichs-Lewy (CFL)
condition CFL . This condition states that the numerical domain of dependence
of any point in space and time must include the analytical domain of
dependence. Physically, this condition amounts to a signal propagating no more
than one spatial cell away during one time step. For speed $v_{Q}$ being a
constant, this amounts to the condition $\Delta\tau/\tau<\Delta\xi/v_{Q}$.
Figure 7 shows the dependence of the two-point correlator for two very
different values of the propagation speed, or equivalently the relaxation time
$\tau_{Q}$.
Figure 7: Variation of the density-density correlator with the propagation
speed $v^{2}_{Q}$. (color online)
## V Characterizing Self-Correlations
The self-correlation is trivial for the case of white noise: it is a Dirac
$\delta$-function. Even the two-point correlation function of a free Boltzmann
gas has a $\delta$-function term landau1 ; landau2 .
$\langle\delta n(\textbf{x}_{1})\delta n(\textbf{x}_{2})\rangle=\chi
T\delta^{3}(\textbf{x}_{1}-\textbf{x}_{2})+\cdots$ (34)
This is explained in Ref. landau2 , where $\langle(\Delta
N)^{2}\rangle=\chi TV$. One can see this in Eq. (30), where the denominator
$\tau A$ is the Jacobian factor from the Bjorken expansion instead of
stationary Cartesian coordinates. Experiments measure just the two-particle
correlation and hence we have to subtract the self-correlation spratt . The
challenge is to characterize the self-correlations in the presence of colored
noise, since it is no longer a Dirac $\delta$-function.
Figure 8 shows the numerically computed self-correlation. If we subtract the
two-point correlation in Fig. 5 from that presented in this figure, we get the
expected Gaussian. This is shown in Fig. 9 where it is compared with the
analytical Gaussian function in Eq. (30) for 1 million events.
Figure 8: The self-correlation. Figure 9: The solid curve is the expected
Gaussian while the dots represent the result of the SDE simulation for 1
million random events. (color online)
Now we move on to the meaning of self-correlation for colored noise. Based on
the prescription of self-correlation that we discussed in the introduction, we
consider the schematic diagram in Fig. 10. We are interested in noise sources
at one particular $\xi$ because noise generated at any other $\xi$ would be
uncorrelated.
Figure 10: Schematic of the self-correlation. The star denotes a noise source
and the bursts are the charge fluctuations resulting from noise. (color
online)
Let us try to understand what the analytical formula for this would look like.
We start with the following expression for the charge fluctuation in
$k$-space.
$\delta\tilde{n}(k,\tau)=-\frac{1}{\tau}\int_{\tau_{i}}^{\tau}d\tau^{\prime}s(\tau^{\prime})\tilde{G}(k,\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})\,.$
(35)
Here $\tilde{G}$ is the Green’s function for the homogeneous part of the SDE
(29), which can be written down in terms of Kummer’s function for the
temperature dependences listed after that equation plumbergkapusta . This
gives the full form of the two-point correlation function as in Eq. (49) of
Ref. plumbergkapusta .
$\displaystyle\langle\delta n(\xi_{1},\tau_{f})\delta
n(\xi_{2},\tau_{f})\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\tau_{f}^{2}}\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\int
d\tau^{\prime}s(\tau^{\prime})\int d\tau^{\prime\prime}s(\tau^{\prime\prime})$
(36) $\displaystyle\times$
$\displaystyle\tilde{G}(k,\tau_{f},\tau^{\prime})\;\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})\,{\cal
N}(\tau^{\prime},\tau^{\prime\prime})\,.$
Following Eqs. (54) and (55) of Ref. plumbergkapusta , we can write the self-
correlation term as
$\displaystyle\langle\delta n(\xi_{1},\tau_{f})\,\delta n(\xi_{2},\tau_{f})\rangle_{\text{self}}$ (37)
$\displaystyle=\frac{\chi_{Q}(\tau_{f})T_{f}}{A\tau_{Q}}\int\frac{d\tau^{\prime\prime}}{\tau^{\prime\prime}}\left[e^{-(\tau_{f}-\tau^{\prime\prime})/\tau_{Q}}-e^{-(\tau_{f}+\tau^{\prime\prime}-2\tau_{i})/\tau_{Q}}\right]\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\frac{\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})}{ik}$
$\displaystyle=\frac{s(\tau_{f})}{D_{Q}}\int\frac{dk}{2\pi}e^{ik(\xi_{1}-\xi_{2})}\int d\tau^{\prime\prime}s(\tau^{\prime\prime})\;\frac{\tilde{G}(k,\tau_{f},\tau_{f})}{ik}\;\frac{\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})}{ik}\,{\cal N}(\tau_{f},\tau^{\prime\prime})\,.$
Recall from Ref. plumbergkapusta that $\tilde{G}(k,\tau_{f},\tau_{f})=ik$. It
denotes a noise fluctuation that was generated at the final time and didn’t
have to move anywhere. The $\tilde{G}(-k,\tau_{f},\tau^{\prime\prime})$ is a
noise fluctuation generated at a time $\tau^{\prime\prime}<\tau_{f}$ and then
moved to $\xi_{2}$ at $\tau_{f}$. For white noise, fluctuations generated at
two separate spacetime points can’t be correlated, so the fluctuation
generated at $\tau_{f}$ is only correlated to itself. Hence for white noise,
$\tau_{Q}\to 0$, and the self-correlation is just a Dirac $\delta$-function.
As $\tau_{Q}$ increases, noise sources further back in time remain correlated.
Once generated, the noise travels and gives rise to a correlated
electric charge fluctuation further away in spacetime rapidity. Hence we
expect the self-correlation term to be more spread out in spacetime rapidity.
One can use the same SDE solver for generating the self-correlation. The only
change is that the Green’s function $\tilde{G}$ is replaced by
$\tilde{G}/(ik)$ when solving for the charge density fluctuation.
$\displaystyle\langle\delta n(\xi_{2},\tau_{f})f(\xi_{1},\tau_{f})\rangle$ (38)
$\displaystyle=$
$\displaystyle\left\langle\int\frac{dk}{2\pi}e^{ik\xi_{2}}\delta\tilde{n}(k,\tau_{f})\int\frac{dk_{1}}{2\pi}e^{ik_{1}\xi_{1}}\tilde{f}(k_{1},\tau_{f})\right\rangle$
$\displaystyle=$
$\displaystyle\frac{1}{\tau_{f}}\int^{\tau_{f}}_{\tau_{i}}d\tau^{\prime}s(\tau^{\prime})\int\frac{dk}{2\pi}e^{ik\xi_{2}}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}\int\frac{dk_{1}}{2\pi}e^{ik_{1}\xi_{1}}\langle\tilde{f}(k,\tau^{\prime})\tilde{f}(k_{1},\tau_{f})\rangle$
$\displaystyle=$
$\displaystyle\frac{1}{\tau_{f}}\int^{\tau_{f}}_{\tau_{i}}d\tau^{\prime}s(\tau^{\prime})\int\frac{dk}{2\pi}e^{ik(\xi_{2}-\xi_{1})}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}\mathcal{N}(\tau^{\prime},\tau_{f})$
$\displaystyle=$
$\displaystyle\frac{D_{Q}}{s_{f}\tau_{f}}\,\frac{\chi_{f}T_{f}}{A\tau_{Q}}\int^{\tau_{f}}_{\tau_{i}}\frac{d\tau^{\prime}}{\tau^{\prime}}\left[e^{-|\tau_{f}-\tau^{\prime}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{f}-\tau^{\prime})/\tau_{Q}}\right]\int\frac{dk}{2\pi}e^{ik(\xi_{2}-\xi_{1})}\frac{\tilde{G}(k,\tau^{\prime},\tau_{f})}{ik}$
Note that in going to the second step from the first step, we have used
$\tilde{G}/(ik)$ and not $\tilde{G}$. The implication is that for generating
the self-correlations, we will be using the differential equation whose
Green’s function is going to be $\tilde{G}/(ik)$, instead of $\tilde{G}$.
Thus, we arrive at the following relation for self-correlations.
$\langle\delta n(\xi_{1},\tau_{f})\delta
n(\xi_{2},\tau_{f})\rangle_{\text{self}}=\frac{s(\tau_{f})\tau_{f}}{D_{Q}}\langle\delta
n(\xi_{1},\tau_{f})f(\xi_{2},\tau_{f})\rangle\,.$ (39)
If $\tilde{G}/(ik)$ is our desired Green’s function, then $Z\equiv(\tau\delta
n(\xi,\tau))/(\tau_{f}s(\tau_{f}))=\delta n(\xi,\tau)/s(\tau)$ satisfies the
following equation
$\left(z^{2}+2z\frac{\tau_{Q}}{\tau_{i}}\right)\frac{\partial Z}{\partial
z}+z^{2}\frac{\tau_{Q}}{\tau_{i}}\frac{\partial^{2}Z}{\partial
z^{2}}-v_{Q}^{2}\frac{\tau_{Q}}{\tau_{i}}\frac{\partial^{2}Z}{\partial\xi^{2}}+\left(z+\frac{\tau_{Q}}{\tau_{i}}\right)f+z\frac{\tau_{Q}}{\tau_{i}}\frac{\partial
f}{\partial z}=0\,,$ (40)
where $z=\tau/\tau_{i}$. This is the same as Eq. (25) except that $\partial
f/\partial\xi$ is replaced by $f$ and $\partial^{2}f/\partial\xi\partial\tau$
is replaced by $\partial f/\partial\tau$. The justification is discussed in
Appendix C.
In Fig. 11 we show the self-correlation at the final time $\tau_{f}$ for
various values of $\tau_{Q}$. As the speed of propagation decreases, the
height of the self-correlation decreases and widens. As a check, the limit
$\tau_{Q}\to 0$ is shown in Fig. 12.
Figure 11: Numerical results for self-correlations for colored noise. (color
online) Figure 12: Numerical results for self-correlation for white noise.
## VI Balance Functions
Balance functions are described in Ref. spratt . The width of a balance
function plotted against particle rapidity is a measure of the diffusion.
Balance functions have been experimentally studied by the ALICE and STAR
collaborations balance1 ; balance2 ; balance3 ; balance4 . Reference
ling_springer_stephanov studied the effect of white noise in balance
functions and compared their analytical results with experimental data.
Reference plumbergkapusta calculated balance functions for colored noise. We
will see how the widths of balance functions change if we vary the speed of
propagation of signals in the case of Cattaneo noise.
To see the effect of charge fluctuations in particle spectra we have to
calculate how the fluctuations freeze-out. The freeze-out happens when the
system has expanded and cooled to the extent that thermal equilibrium can no
longer be maintained. Then hadrons freeze-out and free-stream to the
detectors. The standard procedure to calculate freeze-out abundances of
particles is the Cooper-Frye prescription cooper_frye . This formula gives us
the distribution of emitted particles on a freeze-out hypersurface
$\Sigma_{f}$. This procedure has already been performed for this
hydrodynamical model in Refs. kapusta2012 ; plumbergkapusta ;
ling_springer_stephanov ; torres . We will just give the salient features of
that calculation here.
$E\frac{dN}{d^{3}p}=d\int_{\Sigma_{f}}\frac{d^{3}\sigma_{\mu}}{(2\pi)^{3}}p^{\mu}f({\mbox{\boldmath$x$},\mbox{\boldmath$p$}})$
(41)
Here $d$ is the degeneracy of the particle species under consideration. We
take the distribution function to be the relativistic Boltzmann
$f({\mbox{\boldmath$x$},\mbox{\boldmath$p$}})=e^{-(u\cdot p-\mu)/T}\,,$ (42)
where $\mu$ is the chemical potential for that particle. The energy flux
through an infinitesimal freeze-out fluid cell is given by
$d^{3}\sigma_{\mu}p^{\mu}=\tau_{f}\,d\xi\,d^{2}x_{\perp}m_{\perp}\cosh(y-\xi)\,.$
(43)
The variable $y$ represents the particle rapidity
$p^{\mu}=(m_{\perp}\cosh y,p_{x},p_{y},m_{\perp}\sinh y)$ (44)
with $m_{\perp}=\sqrt{m^{2}+p_{\perp}^{2}}$ the transverse mass.
The average number of particles per unit rapidity at the final freeze-out time
is
$\left\langle\frac{dN}{dy}\right\rangle=\frac{dA\tau_{f}T_{f}^{3}}{4\pi^{2}}\int^{\infty}_{-\infty}\frac{dx}{\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh
x\right)\,.$ (45)
Reference plumbergkapusta calculates the fluctuation in this quantity due to
a fluctuation in $\mu$ around the freeze-out value $\mu_{f}=0$. After a few more steps of algebra,
the fluctuation in $\frac{dN}{dy}$ reads
$\delta\left(\frac{dN}{dy}\right)=\frac{dA\tau_{f}T_{f}^{2}}{4\pi^{2}}\int
d\xi\,\delta n\,F_{n}(y-\xi)$ (46)
where $F_{n}$ is the smearing function
$F_{n}(x)=\frac{1}{\chi_{Q}\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh
x\right)\,.$ (47)
Using this in the definition of the charge balance function we arrive at
Eq. (74) of Ref. plumbergkapusta .
$B(\Delta
y)=\frac{\langle\delta\left(dN/dy_{1}\right)\delta\left(dN/dy_{2}\right)\rangle}{\langle
dN/dy\rangle}=\frac{dA\tau_{f}T_{f}}{4\pi^{2}}\frac{C(\Delta
y)}{Q(m/T_{f})}\,.$ (48)
Here
$C(\Delta y)=2\pi\int
d\xi_{1}d\xi_{2}\,F_{n}(y_{1}-\xi_{1})\,F_{n}(y_{2}-\xi_{2})\,C_{nn}(\xi_{1}-\xi_{2},\tau_{f})\,.$
(49)
The two-point correlator for the charge fluctuation is
$C_{nn}(\xi_{1}-\xi_{2},\tau_{f})$ which is obtained from the solution of the
SDE. The function $Q$ is given by
$Q\left(\frac{m}{T_{f}}\right)=\int^{\infty}_{-\infty}\frac{dx}{\cosh^{2}x}\Gamma\left(3,\frac{m}{T_{f}}\cosh
x\right)\,.$ (50)
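For reference, $Q(m/T_{f})$ reduces to a one-dimensional quadrature. A minimal SciPy sketch (our illustration) follows; it uses $\Gamma(3,x)=2\,\texttt{gammaincc}(3,x)$, since SciPy's `gammaincc` is the regularized upper incomplete gamma function and $\Gamma(3)=2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc

def Q_of(m_over_Tf):
    """Numerical evaluation of Eq. (50)."""
    integrand = lambda x: 2.0 * gammaincc(3, m_over_Tf * np.cosh(x)) / np.cosh(x) ** 2
    value, _ = quad(integrand, -np.inf, np.inf)
    return value
```

The function $C(\Delta y)$ of Eq. (49) is then the analogous double convolution of the smearing function $F_{n}$ with the simulated correlator $C_{nn}$.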
Let us first demonstrate the trivial self-correlation for white noise in terms
of the balance function for pions. Figure 13 shows the balance function for
the full unsubtracted correlation function for white noise. Notice the
positive and negative part of the curve; this is because the full two-point
correlation for white noise is composed of a positive self-correlation and a
negative piece which does not include any self-correlation. The balance
function for the self-correlation part only is shown in Fig. 14. When this is
subtracted from Fig. 13 one obtains the so-called subtracted balance function
shown in Fig. 15.
Figure 13: Balance function for the full white noise two-point function.
Figure 14: Balance function for the self-correlation of white noise. Figure
15: Balance function for the pure two-point function of white noise.
We follow the same procedure for carrying out cancellations of the
contributions arising from the self-correlations for colored noise to the
balance function. Figure 16 shows the full unsubtracted balance functions for
various values of $v_{Q}$ for colored noise. Figure 17 shows the self-
correlation part only, and Fig. 18 shows the subtracted balance functions. The
width of the subtracted balance function denotes the diffusion distance. That
width increases with increasing $v_{Q}$, as expected, since it represents the
rapidity interval over which the average charge pair has diffused by
freeze-out.
Figure 16: Balance function for the full unsubtracted two-point function.
(color online) Figure 17: Balance function for the self-correlation part of
two-point function. (color online) Figure 18: Balance function for the
subtracted two-point correlation function. (color online)
We estimated the error in our numerical simulations using the jackknife
method. This method estimates the error of statistics without making any
assumptions about the distribution that generated the data. It only uses the
sample provided. We create jackknife samples over the whole data set which are
“leave-one-out” data sets. In our case, we consider the two-point correlation
statistic $S$ on the original sample size of $10^{7}$ events. We leave out the
$i$-th event to create the $i$-th jackknife statistic $S_{i}$. The average
of the jackknife sample is $S_{\rm avg}=\sum_{i}S_{i}/n$. The jackknife error
is then estimated as
$\sigma_{\rm jack}=\sqrt{\frac{n-1}{n}\sum_{i}(S_{i}-S_{\rm avg})^{2}}\,.$ (51)
The error we observe on $\langle\delta n\delta n\rangle$ is of the order of
$10^{-2}\;\text{MeV}^{3}\>\text{fm}^{-3}$. This amounts to
$\sigma_{\langle\delta n\delta n\rangle}/\langle\delta n\delta n\rangle\approx
10^{-3}$. We give a representative plot of the error bounds for
$v_{Q}^{2}=1/10$ in Fig. 19. The bounds are visible only when zoomed in. This
shows that for $10^{7}$ events, the statistical error in our simulations turns
out to be negligible.
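The leave-one-out statistics $S_{i}$ need not be computed with an $O(n^{2})$ loop; a minimal sketch of Eq. (51) follows, assuming the statistic of interest is a per-event average such as a binned correlator.

```python
import numpy as np

def jackknife_error(per_event_stats):
    """Jackknife error of Eq. (51) for an event-averaged statistic.

    per_event_stats: array of shape (n_events, ...); S_i is the mean
    with the i-th event left out, built from the running total.
    """
    n = per_event_stats.shape[0]
    total = per_event_stats.sum(axis=0)
    S_i = (total - per_event_stats) / (n - 1)   # leave-one-out statistics
    S_avg = S_i.mean(axis=0)
    return np.sqrt((n - 1) / n * ((S_i - S_avg) ** 2).sum(axis=0))
```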
Figure 19: Jackknife error bounds for $v_{Q}^{2}=1/10$. (color online)
## VII Conclusions
State-of-the-art modeling of high energy nuclear collisions uses relativistic
2nd order viscous hydrodynamics. The fluctuation-dissipation theorem says that
viscosity and thermal fluctuations are intricately connected. Although
thousands of particles are produced in these collisions, that is still
immensely smaller than Avogadro’s number. Therefore it has become apparent
that thermal fluctuations really ought to be part of the standard model for
heavy ion collisions kapusta2012 . Fully 3+1 dimensional hydrodynamic
simulations are required which presents a major challenge for implementation
of thermal noise. The goal of this paper is to understand the numerical
methods necessary to do this and, furthermore, how to subtract self-
correlations from the numerically computed two-point correlators in order to
compare with experiment. We chose to study causal electric charge diffusion in
the boost-invariant 1+1 dimensional Bjorken model for two reasons. First, a
Cattaneo description of diffusion propagates signal at a finite speed which is
a necessity in heavy ion collisions. Second, this simple model was studied and
solved with essentially analytic methods plumbergkapusta against which we can
compare to verify the validity of the purely numerical approach.
We introduced the noise term in the dissipative charge conservation equation,
which in our case is Cattaneo noise. We simulated the stochastic
differential equations that arise from the electric charge conservation
equations. The way we solve the stochastic differential equations is by using
normal random number generators with a specific, well-defined variance and
then interpreting the derivatives of the noise in terms of what they mean when
integrating by parts. The whole machinery on how to handle the noise is
discussed in Appendix A. We used this methodology in simulating the white
noise charge conservation equation and obtained the expected result. Then we
generated colored Cattaneo noise using a Langevin equation. We solved the full
colored noise charge conservation equation and again obtained the expected
result. The two-point charge correlator consists of two pieces. The first is
the self-correlation, which is a manifestation of the stochastic nature of the
dynamics. Once this piece is subtracted off, we are left with the physically
relevant two-point correlation function. The self-correlation is a trivial
Dirac $\delta$-function in the case of white noise, but is more complicated
for colored noise. In this work, we gave a physically insightful
interpretation of the meaning of self-correlation in the case of colored
noise. This interpretation allows us to use the stochastic differential
equation solver we developed to generate the self-correlations.
In the case of white noise, we populated the whole spacetime lattice with
noise source terms uncorrelated with each other. Not all of the individual
noise terms are required to calculate the final two-point correlation
function, but more than a single noise term is necessary; hence Monte-Carlo
simulations will be insufficient to reproduce the results for colored noise.
One can, however, speed up the stochastic differential equation solving
procedure by removing noise terms that lie outside the causal past of the
spacetime points for which we want to calculate the two-point correlations.
We used the results obtained to compute the balance functions for pions within
the context of this model. As one would expect, reducing the speed of
propagation of signal leads to narrowing of the balance functions and to a
corresponding increase in their height at small rapidities. As done previously
in Ref. plumbergkapusta we neglect the contributions from resonance decays to
the measured particle spectra used in the balance functions. Our results are
in good quantitative agreement with that previous study, which verifies the
numerical method used in this paper.
Future work entails extending the current methodology to full 3+1
dimensional fluid dynamical models of heavy ion collisions such as MUSIC music
. Further, the prescription for self-correlations given in this paper for
Cattaneo noise can be straightforwardly extended to the case of shear and bulk
viscosity and thermal conductivity, the details of which are deferred to a
future work. Since baryon charge conductivity diverges near a critical point,
this study can be extended to study charge fluctuations near the purported QCD
critical point, which is also deferred to future work. Another possible
direction of future work would be to study the higher order cumulants in the
presence of colored noise. Since two-point correlations and higher order
cumulants are expected to diverge near a QCD critical point, the ultimate
culmination of the present work would be to characterize the noisy
hydrodynamics of near-critical point behavior of heavy ion collisions.
## VIII Acknowledgements
A. D. thanks Gaurav Nirala for enlightening discussions. We thank Chun Shen
for suggesting the jackknife method. This work was supported by the U.S. DOE
Grant No. DE-FG02-87ER40328. C. P. acknowledges support from the CLASH project
(KAW 2017-0036). The authors acknowledge the Minnesota Supercomputing
Institute (MSI) at the University of Minnesota for providing resources that
contributed to the research results reported within this paper.
## Appendix A Numerical Simulation of SDEs
In this appendix, we show our procedure for representing the Dirac delta
function and its derivatives on a discrete lattice.
White noise is defined as $\langle
f(x)f(x^{\prime})\rangle=\delta(x-x^{\prime})$ and $\langle f(x)\rangle=0$.
This implies that
$\int dx\>g(x)\langle f(x)f(x^{\prime})\rangle=g(x^{\prime})$ (52)
On a discrete lattice, the integral becomes a sum over lattice points and $dx$
becomes the lattice spacing $\Delta x$. Hence
$g(x_{i})=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle
f(x_{i})f(x_{i^{\prime}})\rangle\Delta
x=\sum_{i^{\prime}}g(x_{i^{\prime}})\left(\frac{\delta_{ii^{\prime}}}{\Delta
x}\right)\Delta x$ (53)
From the above, we can conclude
$\langle f(x_{i})f(x_{i^{\prime}})\rangle=\frac{\delta_{ii^{\prime}}}{\Delta
x}$ (54)
The $\delta_{ii^{\prime}}/\Delta x$ becomes a Dirac-delta function in the
limit $\Delta x\rightarrow 0$ which is the continuous case. Therefore we
sample the white noise function $f$ from a Normal distribution of mean $0$ and
standard deviation $1/\sqrt{\Delta x}$. We use a random number generator for a
large number of instances ($10^{6}$) and compute the two-point function. The
result is a Kronecker delta peaked at $x=0$ whose height is the variance
$1/\Delta x$. This is illustrated in the following figure. We used $\Delta x=0.09$.
Figure 20: Two-point function of $f$ for 1 million events.
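This check can be reproduced in a few lines; the lattice size and the reduced event count below are illustrative choices rather than the values used for the figures.

```python
import numpy as np

# Verify <f_i f_j> = delta_ij / dx for f ~ Normal(0, 1/sqrt(dx)).
rng = np.random.default_rng(0)
dx, n_sites, n_events = 0.09, 64, 10**5
f = rng.normal(0.0, 1.0 / np.sqrt(dx), size=(n_events, n_sites))
for lag in range(3):
    corr = (f * np.roll(f, lag, axis=1)).mean()
    print(lag, corr)  # ~1/dx = 11.1 at lag 0, ~0 otherwise
```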
Next, we investigate the correlation between white noise $f$ and its
derivative $df/dx$. The two-point function $\langle
f(x)f^{\prime}(x^{\prime})\rangle$ must then satisfy
$\int dx\,g(x)\langle f(x)f^{\prime}(x^{\prime})\rangle=\int
dx\,g(x)\frac{\partial}{\partial
x^{\prime}}\delta(x-x^{\prime})=\frac{\partial}{\partial x^{\prime}}\int
dx\,g(x)\delta(x-x^{\prime})=g^{\prime}(x^{\prime})$ (55)
The derivative is $g^{\prime}(x)=(g_{i+1}-g_{i})/\Delta x$ in a discrete
lattice. Replacing the integral by the sum, we get
$g^{\prime}(x_{i})=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle
f(x_{i})f^{\prime}(x_{i^{\prime}})\rangle\Delta x=\frac{g_{i+1}-g_{i}}{\Delta
x}=\sum_{i^{\prime}}g(x_{i^{\prime}})\left(\frac{\delta_{i+1,i^{\prime}}-\delta_{i,i^{\prime}}}{\Delta
x^{2}}\right)\Delta x$ (56)
Hence
$\langle
f(x_{i})f^{\prime}(x_{i^{\prime}})\rangle=\frac{\delta_{i+1,i^{\prime}}-\delta_{i,i^{\prime}}}{\Delta
x^{2}}$ (57)
If we again use the previous random number generator and calculate the two
point function we get the results shown in Fig. 21.
Figure 21: The correlation between $f$ and its derivative for 1 million
events.
Similarly, we can look into the correlation of the derivative of white noise
with itself.
$\langle
f^{\prime}(x)f^{\prime}(x^{\prime})\rangle=\frac{\partial^{2}}{\partial
x\partial x^{\prime}}\delta(x-x^{\prime})$ (58)
$\int dx\,g(x)\langle f^{\prime}(x)f^{\prime}(x^{\prime})\rangle=\int
dx\,g(x)\frac{\partial^{2}}{\partial x\partial
x^{\prime}}\delta(x-x^{\prime})=-\int dx\,g(x)\frac{\partial^{2}}{\partial
x^{2}}\delta(x-x^{\prime})=-g^{\prime\prime}(x^{\prime})$ (59)
In the second step we performed an integration by parts. The second derivative
is defined in the discrete case as
$g^{\prime\prime}(x)=(g_{i+1}+g_{i-1}-2g_{i})/\Delta x^{2}$. Substituting the
discrete sum in place of the integral, we get
$-g^{\prime\prime}(x^{\prime})=-\sum_{i}g_{i}\frac{\delta_{i,i^{\prime}+1}+\delta_{i,i^{\prime}-1}-2\delta_{i,i^{\prime}}}{\Delta
x^{3}}\Delta x=\sum_{i^{\prime}}g(x_{i^{\prime}})\langle
f^{\prime}(x_{i})f^{\prime}(x_{i^{\prime}})\rangle\Delta x$ (60) $\langle
f^{\prime}(x_{i})f^{\prime}(x_{i^{\prime}})\rangle=-\frac{\delta_{i,i^{\prime}+1}+\delta_{i,i^{\prime}-1}-2\delta_{i,i^{\prime}}}{\Delta
x^{3}}$ (61)
Figure 22 shows what we get numerically.
Figure 22: The two-point correlation of the derivative of $f$ for 1 million
events.
The integral of white noise is a random walk $W(z)$, a succession of random
steps as a function of $z$. It is defined by
$W(z)=\int_{z_{0}}^{z}f(z^{\prime})\,dz^{\prime}$ (62)
We can easily calculate the variance of $W$:
$\langle
W^{2}(z)\rangle=\int_{z_{0}}^{z}dz^{\prime}\int_{z_{0}}^{z}dz^{\prime\prime}\langle
f(z^{\prime})f(z^{\prime\prime})\rangle=\int_{z_{0}}^{z}dz^{\prime}\int_{z_{0}}^{z}dz^{\prime\prime}\delta(z^{\prime}-z^{\prime\prime})=\int_{z_{0}}^{z}dz^{\prime}=z-z_{0}$
(63)
On a discrete lattice, $W_{i+1}=W_{i}+f\Delta z$ where we source $f$ from a
normal distribution of mean $0$ and standard deviation $1/\sqrt{\Delta z}$.
$W_{n}=\sum_{i=1}^{n}f\Delta z$ (64)
Hence the variance is
$\displaystyle\langle W^{2}\rangle=\langle(\sum_{i=1}^{n}\Delta z\,f)^{2}\rangle=\langle\sum_{i=1}^{n}(\Delta z\,f)^{2}\rangle=\sum_{i=1}^{n}(\Delta z)^{2}\langle f^{2}\rangle=\sum_{i=1}^{n}(\Delta z)^{2}\frac{1}{\Delta z}=\sum_{i=1}^{n}\Delta z=z-z_{0}$ (65)
Figure 23 shows the numerical results.
Figure 23: Two-point correlation of $W$ for 1 million events with $z-z_{0}=5$.
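A quick numerical cross-check of this variance (with an illustrative, smaller event count than in the figure):

```python
import numpy as np

# Check <W^2> = z - z_0 for the walk W_{i+1} = W_i + f * dz,
# with f ~ Normal(0, 1/sqrt(dz)) and z - z_0 = 500 * 0.01 = 5.
rng = np.random.default_rng(1)
dz, n_steps, n_events = 0.01, 500, 10**4
steps = rng.normal(0.0, 1.0 / np.sqrt(dz), size=(n_events, n_steps)) * dz
W = steps.sum(axis=1)
print(W.var())  # ~ n_steps * dz = 5
```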
We are ready to take up a simple stochastic differential equation to solve.
Consider
$\frac{dX}{dz}=-\frac{\partial f}{\partial\xi}$ (66)
where $z$ has dimensions of time and $\xi$ is dimensionless. Let us define the
following two-point function
$\langle
f(z_{1},\xi_{1})f(z_{2},\xi_{2})\rangle=M\delta(z_{1}-z_{2})\delta(\xi_{1}-\xi_{2})$
Here $M$ has dimensions of time to make $f$ dimensionless. We calculate the
two-point function in $\xi$.
$\displaystyle\langle X(z_{f},\xi_{1})X(z_{f},\xi_{2})\rangle$
$\displaystyle=$ $\displaystyle\left\langle\int^{z_{f}}_{z_{i}}\frac{\partial
f}{\partial\xi}(\xi_{1})dz\int^{z_{f}}_{z_{i}}\frac{\partial
f}{\partial\xi}(\xi_{2})dz^{\prime}\right\rangle$ (67) $\displaystyle=$
$\displaystyle\int^{z_{f}}_{z_{i}}\int^{z_{f}}_{z_{i}}dzdz^{\prime}\left\langle\frac{\partial
f}{\partial\xi}(z,\xi_{1})\frac{\partial
f}{\partial\xi}(z^{\prime},\xi_{2})\right\rangle$ $\displaystyle=$
$\displaystyle-\int^{z_{f}}_{z_{i}}\int^{z_{f}}_{z_{i}}dzdz^{\prime}M\left(\frac{\delta_{i+1}+\delta_{i-1}-2\delta_{i}}{\Delta\xi^{3}}\right)\delta(z-z^{\prime})$
$\displaystyle=$
$\displaystyle-M(z_{f}-z_{i})\left(\frac{\delta_{i+1}+\delta_{i-1}-2\delta_{i}}{\Delta\xi^{3}}\right)$
We used Eq. (61) in the above calculation. The two-point function has
dimensions of time-squared, and so does the expression on the right. On a
discrete lattice,
$X(z+\Delta z)=X(z)-\Delta z\times\frac{\Delta f}{\Delta\xi}$ (68)
Figure 24 shows the numerical results using $z_{f}-z_{i}=10$.
Figure 24: Two point correlation of $X$ for a million events.
## Appendix B Analytical Solution of Langevin Equation
The Langevin equation can be written as
$\frac{df(\tau)}{d\tau}=-\frac{1}{\tau_{Q}}f(\tau)+\frac{1}{\tau_{Q}}\zeta(\tau)$
(69)
Here $\zeta$ is white noise and $f$ is the Cattaneo noise.
$\langle\zeta(\tau)\rangle=0\qquad\langle\zeta(\tau_{1})\zeta(\tau_{2})\rangle=N(\tau_{1})\delta(\tau_{1}-\tau_{2})$
(70)
It does not matter whether we use $N(\tau_{1})$ or $N(\tau_{2})$ because of
the Dirac-delta function. Let us multiply both sides by the factor
$e^{\tau/\tau_{Q}}$.
$\int_{\tau_{i}}^{\tau}\frac{d}{d\tau}(e^{\tau/\tau_{Q}}f)d\tau=\int_{\tau_{i}}^{\tau}\frac{e^{\tau/\tau_{Q}}}{\tau_{Q}}\zeta
d\tau$ (71)
$e^{\tau/\tau_{Q}}f(\tau)-e^{\tau_{i}/\tau_{Q}}f(\tau_{i})=\frac{1}{\tau_{Q}}\int^{\tau}_{\tau_{i}}\zeta(\tau)e^{(\tau^{\prime}-\tau)/\tau_{Q}}d\tau^{\prime}$
(72)
Let us set $f(\tau_{i})=0$. Another way to see this is that in an equilibrium
system there are no initial conditions to be sensitive to.
Any fluctuations in $f(\tau)$ will then be solely due to the action of
$\zeta(\tau)$. Now we consider two separate times $\tau_{1}$, $\tau_{2}$.
$\displaystyle\begin{aligned} \langle f(\tau_{1})f(\tau_{2})\rangle&=\frac{1}{\tau_{Q}^{2}}\int_{\tau_{i}}^{\tau_{1}}d\tau^{\prime\prime}\int_{\tau_{i}}^{\tau_{2}}d\tau^{\prime}\,e^{(\tau^{\prime\prime}-\tau_{1})/\tau_{Q}}\,e^{(\tau^{\prime}-\tau_{2})/\tau_{Q}}\,\langle\zeta(\tau^{\prime\prime})\zeta(\tau^{\prime})\rangle\\\ &=\frac{N}{\tau_{Q}^{2}}\int_{\tau_{i}}^{\min(\tau_{1},\tau_{2})}e^{(2\tau^{\prime\prime}-\tau_{1}-\tau_{2})/\tau_{Q}}d\tau^{\prime\prime}=\frac{N}{2\tau_{Q}}\left[e^{-|\tau_{1}-\tau_{2}|/\tau_{Q}}-e^{(2\tau_{i}-\tau_{1}-\tau_{2})/\tau_{Q}}\right]\end{aligned}$
(73)
## Appendix C Constructing the self-correlations
Self-correlations are defined by Eq. (37). As discussed above, their dynamics
can be modeled by an equation whose (Fourier transformed) Green’s function is
related to the original Green’s function by
$\tilde{G}_{\mathrm{self}}(k,\tau,\tau^{\prime})=\frac{\tilde{G}(k,\tau,\tau^{\prime})}{ik}$
(74)
The original Green’s function is defined schematically by the stochastic
differential equation
$D_{1}X(\tau,\xi)=D_{2}\frac{\partial f}{\partial\xi}(\tau,\xi),$ (75)
where $D_{1}$ and $D_{2}$ are differential operators which contain no explicit
$\xi$-dependence (other than $\xi$-derivatives) and $f$ is the noisy source.
Fourier transforming the $\xi$-dependence to $k$ as before, this equation
becomes
$\tilde{D}_{1}\tilde{X}(\tau,k)=ik\tilde{D}_{2}\tilde{f}(\tau,k)$ (76)
and its solution is written in terms of the original Green’s function as
$\tilde{X}(k,\tau)=-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}(k;\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})$
(77)
We therefore seek an ‘unphysical’ field $X_{\mathrm{self}}$ whose two-point
function corresponds to the self-correlations which need to be subtracted out.
This field solution will be generated by the expression
$\displaystyle\tilde{X}_{\mathrm{self}}(k,\tau)$ $\displaystyle=$
$\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}_{\mathrm{self}}(k;\tau,\tau^{\prime})\tilde{f}(k,\tau^{\prime})$
(78) $\displaystyle=$
$\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\frac{\tilde{G}(k;\tau,\tau^{\prime})}{ik}\tilde{f}(k,\tau^{\prime})$
$\displaystyle=$
$\displaystyle-\int_{\tau_{0}}^{\tau}d\tau^{\prime}\tilde{G}(k;\tau,\tau^{\prime})\left(\frac{\tilde{f}(k,\tau^{\prime})}{ik}\right)$
The self-correlations can be therefore straightforwardly generated by
replacing $\tilde{f}\to\tilde{f}/(ik)$ in $k$-space, which amounts to
discarding the $\xi$-derivative in Eq. (75). Thus, the unphysical self-
correlation field is generated by solving the modified equation
$D_{1}X_{\mathrm{self}}(\tau,\xi)=D_{2}f(\tau,\xi)$ (79)
## References
* (1) J. I. Kapusta and J. M. Torres-Rincon, Phys. Rev. C. 86, 054911 (2012).
* (2) Y. Akamatsu, D. Teaney, F. Yan, and Y. Yin, Phys. Rev. C 100, 044901 (2019).
* (3) M. Stephanov and Y. Yin, Phys. Rev. D 98, 036006 (2018).
* (4) M. Nahrgang, M. Bluhm, T. Schäfer, and S. A. Bass, Phys. Rev. D 99, 116015 (2019).
* (5) M. Bluhm et al., arXiv:2001.08831 [nucl-th].
* (6) M. Stephanov, K. Rajagopal, and E. Shuryak, Phys. Rev. D 60, 114028 (1999).
* (7) M. A. Stephanov, Phys. Rev. Lett. 102, 032301 (2009)
* (8) B. Berdnikov and K. Rajagopal, Phys. Rev. D 61, 105017 (2000).
* (9) J. I. Kapusta, B. Müller, and M. Stephanov, Phys. Rev. C. 85, 054906 (2012).
* (10) S. Pratt, Phys. Rev. Lett. 108, 212301 (2012).
* (11) J. I. Kapusta and C. Plumberg, Phys. Rev. C 97, 014906 (2018).
* (12) C. Cattaneo, Atti Del Semin. Mat. e Fis. Univ. Modena 3, 3 (1948); C. R. Acad. Sci. 247, 431 (1958).
* (13) J. I. Kapusta and C. Young, Phys. Rev. C 90, 044902 (2014).
* (14) M. E. Gurtin and A. C. Pipkin, Arch. Ration. Mech. Anal. 31, 113 (1968).
* (15) G. Aarts, C. Allton, A. Amato, P. Giudice, S. Hands, and J.-I. Skullerud, JHEP 02, 186 (2015).
* (16) R. Courant, K. Friedrichs, and H. Lewy, Mathematische Annalen, Vol. 100, pp. 32-74 (1928).
* (17) L. D. Landau and E. M. Lifshitz, Statistical Physics: Part 1, Vol. 5, Course of Theoretical Physics (Pergamon, 1980).
* (18) E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics: Part 2, Vol. 9, Course of Theoretical Physics (Pergamon, 1980).
* (19) B. Abelev, et al. (STAR Collaboration), Phys. Rev. Lett. 105, 022301 (2010).
* (20) A. R. Timmins (for the ALICE Collaboration), J. Phys. G: Nucl. Part. Phys. 38, 124093 (2011).
* (21) M. Aggarwal, et al. (STAR Collaboration), Phys. Rev. C 82, 024905 (2010).
* (22) B. Abelev, et al. (ALICE Collaboration), Phys. Lett. B 723, 267 (2013).
* (23) B. Ling, T. Springer, and M. Stephanov, Phys. Rev. C 89, 064901 (2014).
* (24) F. Cooper and G. Frye, Phys. Rev. D 10, 186 (1974).
* (25) B. Schenke, S. Jeon, and C. Gale, Phys. Rev. C 82, 014903 (2010).
* (26) https://github.com/audide12/HeavyIon
# SQUIRL: Robust and Efficient Learning from Video Demonstration of Long-Horizon Robotic Manipulation Tasks
Bohan Wu, Feng Xu, Zhanpeng He, Abhi Gupta, and Peter K. Allen
This work is supported by NSF Grant CMMI-1734557. The authors are with the Columbia University Robotics Group, New York, NY, 10027, USA.
###### Abstract
Recent advances in deep reinforcement learning (RL) have demonstrated its
potential to learn complex robotic manipulation tasks. However, RL still
requires the robot to collect a large amount of real-world experience. To
address this problem, recent works have proposed learning from expert
demonstrations (LfD), particularly via inverse reinforcement learning (IRL),
given its ability to achieve robust performance with only a small number of
expert demonstrations. Nevertheless, deploying IRL on real robots is still
challenging due to the large number of robot experiences it requires. This
paper aims to address this scalability challenge with a robust, sample-
efficient, and general meta-IRL algorithm, SQUIRL, that performs a new but
related long-horizon task robustly given only a single video demonstration.
First, this algorithm bootstraps the learning of a task encoder and a task-
conditioned policy using behavioral cloning (BC). It then collects real-robot
experiences and bypasses reward learning by directly recovering a Q-function
from the combined robot and expert trajectories. Next, this algorithm uses the
Q-function to re-evaluate all cumulative experiences collected by the robot to
improve the policy quickly. In the end, the policy performs more robustly
(90%+ success) than BC on new tasks while requiring no trial-and-errors at
test time. Finally, our real-robot and simulated experiments demonstrate our
algorithm’s generality across different state spaces, action spaces, and
vision-based manipulation tasks, e.g., pick-pour-place and pick-carry-drop.
## I Introduction
We aspire robots to become generalists who acquire new complex skills robustly
and quickly. The robotic system, whether planned or learned, needs to leverage
its existing knowledge to solve a new but related task in an efficient yet
high-performance manner. Thanks to recent advances in machine learning and
sim-to-real transfer mechanisms, short-horizon robotic manipulation such as
grasping has improved in performance. However, many real-world robotic
manipulation tasks are long-horizon, diverse, and abundant in volume. In the
absence of a scalable and systematic way to construct simulation environments
for a large number of tasks, the robot needs to learn a new task directly in
the physical world from only a handful of trials, due to the high cost of
collecting real-robot trial-and-errors and experiences.
Figure 1: Learning from a single video demonstration of a long-horizon
manipulation task via Soft Q-functioned Meta-IRL (SQUIRL). In the pick-pour-
place example above, the robot needs to approach, pick-up and carry the grey
bottle, pour the iron pebble inside the bottle into a specific container, and
finally place the bottle back on the table. During training, the robot is
given a single video demonstration for each of the 117 training tasks. After
learning from these 117 videos, the robot also practices 90 trial-and-errors
in total on these tasks. From such combined expert and robot trajectories, the
robot learns the general skills of pouring robustly. At test time, given a
single video demonstration of pouring into a new, unseen red container at a
new position, the robot successfully replicates this new task without the need
for any trial-and-errors.
We observe that real-world robotic skill acquisition can become more sample-
efficient in several important ways. First, we notice that humans learn tasks
quickly by watching others perform similar tasks. Among many forms of task
representations such as rewards, goal images, and language instructions, human
demonstrations guide exploration effectively and can lead to significant
sample efficiency gains. Furthermore, learning from video demonstrations
sidesteps hand-designing a proper reward function for every new task. In the
case of a vision-based task, video demonstrations also conveniently provide
the same pixel state space for the robot.
In learning from demonstrations (LfD), the robot should be sample-efficient in
two dimensions – it should use as few expert demonstrations (“demonstrations”
hereafter) as possible and take as few trial-and-errors (practices) as
possible on its own to learn a robust policy. Among LfD methods, behavioral
cloning (“BC” hereafter) is sample-efficient but susceptible to compounding
errors. Here, compounding errors refer to the problem in which every time a
behavioral-cloned robot makes a small error, it makes a larger error down the
road as it drifts away from the expert state distribution. In contrast, IRL
alleviates compounding errors by allowing the robot to try the tasks out in
the real world and measure its behavior against the expert. However, due to
the need to learn a reward function, IRL can require many trial-and-errors in
the real world, while BC does not require such robot experiences. We posit
that leveraging off-policy experiences of trial-and-errors is essential to
making IRL sample-efficient enough for real robots. Here, “off-policy
experiences” refer to the cumulative experiences that the robot has collected
thus far during training. In contrast, “on-policy experiences” are the most
recent experiences that the robot has collected using its current policy.
Humans leverage lifelong, cumulative experiences to learn quickly at present.
We envision robots to acquire new skills more quickly by learning from off-
policy (i.e., cumulative) experiences.
Finally, many real-world tasks are related and share structures and knowledge
that can be exploited to solve a new but similar task later. For example,
humans can quickly learn to pick and place a new object after learning to pick
and place many known objects. Meta-learning, explicitly utilizing this
property, aims to learn a new but related task quickly if it has already
learned several similar tasks in the past.
Figure 2: Pick-Pour-Place Robot Setup at Test Time. Given an RGB image
from the top-down camera (black) or the 45° camera (also black), the UR5-Seed
robot is tasked to approach and pick up the grey cylindrical bottle, pour the
iron pebble already inside the bottle into a specific container on the table,
and finally place the bottle back on the table.
With these motivations, we introduce SQUIRL, a meta-IRL algorithm that learns
long-horizon tasks quickly and robustly by learning from 1) video
demonstrations, 2) off-policy robot experiences, and 3) a set of related
tasks. Fig.1 explains this algorithm using the example of a set of long-
horizon pick-pour-place tasks, using the UR5-Seed robot setup (see
www.seedrobotics.com/rh8d-dexterous-hand.html) shown in Fig.2. In
this task, we have the containers (green, yellow, orange, and red), a
cylindrical bottle (grey), and an iron pebble inside the bottle. The robot
needs to first approach and pick-up the grey bottle, pour the iron pebble
inside the bottle into a specific container on the table, and then finally
place the bottle back on the table, as shown in each row of images in Fig.1.
At the beginning of each task, the bottle is not yet in hand, but the iron
pebble is already in the bottle. At training time, the robot is given a single
video demonstration for each of the 117 pick-pour-place tasks, as shown in the
first two rows of images in Fig.1. Every new combination of container
positions represents a different pick-pour-place task. Furthermore, the robot
only needs to pour into one of the containers in a single task. Therefore,
pouring into different containers also represents different tasks. After
learning from these 117 demonstrations, the robot also practices 90 trial-and-
errors on these tasks in total. From such a combination of expert and robot
trajectories, the robot learns the general skills of pick-pour-place robustly.
In all 117 training tasks, only two of the four containers appear on the
table: the green and yellow containers, as shown in the first two rows of
images in Fig.1. The orange and red containers are excluded during training
and only appear at test time, as shown in the last row of images in Fig.1. We
do so to evaluate our algorithm’s generalizability to unseen containers at
test time. As shown in the last row of images in Fig.1, the robot successfully
pours into a new container (red) at test time, at a new position never seen
before during training, without the need for any trials or practices.
To achieve such fast generalization to new tasks, our algorithm learns a task
encoder network and a task-conditioned policy. The task encoder generates a
32-dimensional task embedding vector that encodes task-specific information.
The policy network then learns to generalize to new tasks by accepting this
task embedding vector as input, thus becoming “task-conditioned”. During
training, our algorithm first bootstraps learning by training both the task
encoder and the policy jointly via the BC loss. The robot then collects 10
trials across 10 tasks using the warmed-up policy and the task encoder. Next,
using the combined experiences of the expert and the robot, our algorithm
bypasses reward learning by directly learning a task-conditioned Q-function.
Using this Q-function, our algorithm then reuses and re-evaluates all
cumulative experiences of the robot to improve the policy quickly. This cycle
repeats until the $90^{th}$ trial. Finally, at test time, the task encoder
generates a new task embedding from a single video demonstration of a new
task. This embedding is then inputted into the task-conditioned policy to
solve the new task without any trial-and-errors and yet in a high-performance
manner. In summary, our contributions are:
1. A robust meta-IRL algorithm that outperforms ($90\%$+ success) its behavioral cloning counterpart in real-robot and simulated vision-based long-horizon manipulation;
2. A novel Q-functioned IRL formulation that circumvents reward learning and improves IRL sample efficiency;
3. An efficient method that leverages off-policy robot experiences for training and requires no trials at test time;
4. A general approach that tackles various long-horizon robotic manipulation tasks and works with both vision and non-vision observations and different action spaces.
## II Related Work
### II-A Inverse Reinforcement Learning (IRL) and Meta-IRL
Inverse reinforcement learning (IRL) models another agent’s (typically the
expert’s) reward function, given its policy or observed behavior. Previous
works have approached the IRL problem with maximum margin methods [1][2] and
maximum entropy methods [3][4][5]. In particular, maximum entropy methods
recover a distribution of trajectories that have maximum entropy among all
distributions and match the demonstrated policy’s behaviors. While these
methods have shown promising results in continuous control problems, they
suffer from low sample efficiency due to the need for evaluating the robot’s
policy, which can be alleviated by meta-learning (i.e., meta-IRL). SMILe [6]
and PEMIRL [7] are two meta-IRL algorithms based on AIRL [8] that leverage a
distribution of tasks to learn a continuous task-embedding space to encode
task information and achieve fast adaptation to a new but similar task. Our
work differs from [6][7] in four crucial ways. First, our meta-IRL algorithm
works with real robots and image observations. Second, instead of a reward
function, we directly model a Q-function that the policy can optimize, in
order to increase IRL sample efficiency. Third, we train the task encoder with
the behavioral cloning (BC) gradient as opposed to the IRL gradient for
stabler and more efficient learning. Lastly, we bootstrap policy and task
encoder learning using BC before training via meta-IRL.
### II-B Real-robot Learning from Demonstrations (LfD)
Our work is related to real-robot LfD [9], such as [10][11][12]. In
particular, [13] developed IRL on real robots without learning from raw
pixels. Other works (e.g., [14][15][16][17]) used BC for real-robot LfD.
Another work [18] developed goal-conditioned BC on a simulated robot to learn
long-horizon tasks by playing with objects in the scene. While enjoying
efficient learning by casting imitation learning into a supervised learning
problem, BC suffers from the covariate shift between the train and test data.
In comparison, IRL achieves robust performance by modeling the state-action
joint distribution instead of the conditional action distribution in BC [19].
Different from previous works, our meta-IRL algorithm works on real-robot
vision-based tasks, and its Q-functioned IRL policy gradient can be directly
combined with the BC gradient signal to approach both the sample efficiency of
BC and the robustness of IRL.
### II-C One-shot Meta-imitation Learning on Real Robots
Our algorithm attempts to enable robots to quickly and robustly imitate a
single unseen video demonstration by learning from a distribution of tasks
with shared structure, i.e., one-shot robot meta-imitation learning. For
example, [20] combines gradient-based meta-learning and BC on a real robot to
learn quickly from video demonstrations. [21] then extends [20] to enable
robots to learn from human-arm demonstrations directly. [22] then improves
[21] to meta-imitation-learn multi-stage real-robot visuomotor tasks in a
hierarchical manner. However, constrained by the covariate shift problem of
BC, these works show limited task performance (e.g., around $50\%$ success
rate for the training tasks). In contrast, our algorithm learns a vision-based
manipulation task robustly ($90\%+$ success rates) and efficiently (117
videos, 90 trials) by utilizing the generalization ability of task embeddings
[23] and a novel Q-functioned IRL formulation.
## III Preliminaries
### III-A Off-policy Reinforcement Learning via Soft Actor-Critic
Standard RL models a task $\mathcal{M}$ as an MDP defined by a state space
$\mathcal{S}$, an initial state distribution $\rho_{0}\in\Pi(\mathcal{S})$, an
action space $\mathcal{A}$, a reward function
$\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$, a dynamics model
$\mathcal{T}:\mathcal{S}\times\mathcal{A}\to\Pi(\mathcal{S})$, a discount
factor $\gamma\in[0,1)$, and a horizon $H$. Here, $\Pi(\cdot)$ defines a
probability distribution over a set. The robot acts according to stochastic
policy $\pi:\mathcal{S}\to\Pi(\mathcal{A})$, which specifies action
probabilities for each $s$. Each policy $\pi$ has a corresponding
$Q^{\pi}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ function that defines the
expected discounted cumulative reward for taking an action $a$ from $s$ and
following $\pi$ onward.
Off-policy RL, particularly Soft Actor-Critic (SAC) [24], reuses historical
experiences to improve learning sample efficiency by recovering a “soft”
Q-function estimator $Q_{\theta}$. A policy can then be learned by minimizing
the KL divergence between the policy distribution and the exponential-Q
distribution: $\pi^{*}=\operatorname*{arg\,min}_{\pi\in\Pi}D_{KL}(\pi(a\mid
s)\;\|\;\frac{\exp(Q^{\pi_{old}}_{\theta}(s,a))}{Z(s)})$
### III-B Timestep-centric IRL as Adversarial Imitation Learning
The purpose of IRL is to learn the energy function $f_{\theta}$ implicit in
the provided expert demonstrations and use $f_{\theta}$ to learn a policy that
robustly matches the expert performance. In particular, timestep-centric IRL
aims to recover an energy function $f_{\theta}(s,a)$ to rationalize and match
the demonstrated expert’s action conditional distribution: $p_{\theta}(a\mid
s)=\frac{\exp(f_{\theta}(s,a))}{Z_{\theta}}\propto\exp(f_{\theta}(s,a))=\overline{p_{\theta}}(a\mid
s)$, where $Z_{\theta}$ is the partition function, an integral over all
possible actions given state $s$. In other words, IRL minimizes the KL
divergence between the actual and predicted expert conditional action
distributions: $\pi_{E}(a\mid s)$ and $p_{\theta}(a\mid s)$.
Adversarial IRL [8][25] provides a sampling-based approximation to MatEntIRL
[4] in an adversarial manner. Specifically, AIRL [8] learns a generative
policy $\pi_{\psi}$ and a binary discriminator $D_{\theta}$ derived from
energy function $f_{\theta}$:
$\displaystyle D_{\theta}(s,a)=P((s,a)\text{ is generated by expert})$
$\displaystyle=\frac{\overline{p_{\theta}}(a\mid
s)}{\overline{p_{\theta}}(a\mid s)+\pi_{\psi}(a\mid
s)}=\frac{\exp(f_{\theta}(s,a))}{\exp(f_{\theta}(s,a))+\pi_{\psi}(a\mid s)}$
(1)
and $\theta$ is trained to distinguish state-action pairs sampled from the
expert vs. the policy, using binary cross entropy loss:
$\mathcal{L}^{IRL}=-\mathbb{E}_{(s,a)\sim\pi_{\psi},\pi_{E}}\big[y(s,a)\log(D_{\theta}(s,a))+(1-y(s,a))\log(1-D_{\theta}(s,a))\big]$ (2)
where $y(s,a)=\mathds{1}\\{(s,a)\text{ is generated by expert }\pi_{E}\\}$.
Meanwhile, the policy is trained to maximize the MaxEntIRL Objective [4], or
equivalently, to match the expert’s state-action joint distribution via
reverse KL divergence [19].
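Numerically, Eq. (2) is conveniently implemented in logit space, since Eq. (1) implies $D_{\theta}=\sigma(f_{\theta}-\log\pi_{\psi})$ with $\sigma$ the logistic sigmoid. Below is a minimal PyTorch-style sketch (our illustration, not the paper's released code); `f_theta` and `log_pi` are assumed user-supplied callables.

```python
import torch.nn.functional as F

def discriminator_loss(f_theta, log_pi, states, actions, is_expert):
    """Eq. (2) as binary cross-entropy with logits.

    f_theta(s, a): learned energy (the soft Q-function in SQUIRL)
    log_pi(s, a):  log-probability of a under the current policy
    is_expert:     float tensor, 1.0 for expert pairs, 0.0 for policy pairs
    """
    # D = exp(f) / (exp(f) + pi) = sigmoid(f - log pi), a stable form
    logits = f_theta(states, actions) - log_pi(states, actions)
    return F.binary_cross_entropy_with_logits(logits, is_expert)
```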
### III-C One-shot Meta-imitation Learning from A Single Video
In one-shot meta-imitation learning, the robot is trained to solve a large
number of tasks drawn from a task distribution $p(\mathcal{M})$. The total
number of tasks in this task distribution can be finite or infinite. Each
imitation task $\mathcal{M}_{train}^{i}$ consists of a single video
demonstration $\mathcal{D}^{i}_{\pi_{E}}$. During training, the robot can also
generate limited practice trajectories (e.g., 90). For example, in the Pick-
Pour-Place experiment in Fig.1, the robot receives a single video
demonstration for each of the 117 tasks. Each task is characterized by a
different combination of container positions, or pouring into the green vs.
the yellow container. At test time, the robot receives a single video of a new
task $\mathcal{M}_{test}^{i}$ drawn from $p(\mathcal{M})$. For example, a new
Pick-Pour-Place test task can be a new combination of container positions or
pouring into a new container (e.g., the red or orange container). The robot
then needs to solve this task the first time without trial-and-error.
### III-D Embedding-based Meta-learning
Embedding-based meta-learning [7][23] learns a task-specific embedding vector
$z$ that contains task-level abstraction to adapt to a new but related task
quickly. This method aims to learn a task-conditioned policy $\pi(a|s,z)$ that
maximizes task-conditioned expected returns:
$\max_{\pi}\mathbb{E}_{(s_{t},a_{t})\sim\pi,\rho_{0}}[\sum_{t=1}^{T}r(s_{t},a_{t}|c)+\alpha\mathcal{H}(\pi(a_{t}|s_{t},c))]$,
by learning an embedding space $Z$ that maximizes the mutual information
between $z$ and task context $c$. The goal is to make this learned embedding
space generalizable to new tasks so that at test time, the policy can quickly
adapt to unseen tasks with no or few practices. A key advantage of embedding-
based meta-learning is the ability to learn from off-policy experiences.
However, current methods are mostly if not only demonstrated in non-vision
tasks in simulation.
## IV Mathematical Formulation for SQUIRL
### IV-A SQUIRL: Timestep-centric IRL as Soft Q-Learning
Previous works in timestep-centric IRL such as [6][7][8] have interpreted the
energy function $f_{\theta}$ in Eq.III-B as a reward function $r_{\theta}$ and
later recover a Q or advantage function based on reward $r_{\theta}$ for
policy improvement. To improve IRL sample efficiency, we propose to bypass
this reward learning and directly interpret $f_{\theta}(s,a)$ as the soft
Q-function [24] $Q^{\pi_{mix}}_{\theta}(s,a)$. This soft Q-function models the
expert’s behavior as maximizing both the Q-value and its entropy (i.e.,
randomness) simultaneously. It also encourages the robot to explore the real
world to imitate the expert more robustly. Under this formulation,
approximating the expert’s conditional action distribution is equivalent to
recovering a soft Q-function under which the expert is soft Q-optimal:
$\displaystyle\operatorname*{arg\,min}_{\theta}D_{KL}(\pi_{E}(a\mid
s)\;\|\;p_{\theta}(a\mid s))$ $\displaystyle=$
$\displaystyle\operatorname*{arg\,max}_{\theta}\mathbb{E}_{a\sim\pi_{E}(a\mid
s)}[Q^{\pi_{mix}}_{\theta}(s,a)]-\log Z_{\theta}$ (3)
Eq.3 rationalizes the expert behavior intuitively because the expert should be
optimal with respect to the cumulative reward [3], not the immediate reward.
Here, $Q^{\pi_{mix}}_{\theta}$ is under a mixture policy $\pi_{mix}$ between
the robot and expert’s policies.
### IV-B SQUIRL as Expert Imitation and Adversarial Learning
Under SQUIRL, the policy learning objective (Eq.4) is also equivalent
(derivations on website) to matching: 1) the exponential-Q distribution of the
discriminator $\theta$ (Eq.5), 2) the generator’s objective in Generative
Adversarial Networks (GANs) [26] (Eq.6), and 3) the joint state-action
distribution of expert [19] (Eq.7):
$\pi^{*}=\operatorname*{arg\,min}_{\pi\in\Pi}\mathcal{L}^{RL}(\pi)$, where
$\displaystyle\mathcal{L}^{RL}(\pi)=D_{KL}(\pi_{\psi}(a\mid
s)\;\|\;\frac{\exp{Q^{\pi_{mix}}_{\theta}(s,a)}}{Z(s)})$ (4)
$\displaystyle=D_{KL}(\pi_{\psi}(a\mid s)\;\|\;p_{\theta}(a\mid s))$ (5)
$\displaystyle=\mathbb{E}_{(s,a)\sim\pi_{mix}}[\log(1-D_{\theta}(s,a))-\log(D_{\theta}(s,a))]$
(6) $\displaystyle=D_{KL}(\rho_{\pi_{\psi}}(s,a)\;\|\;\rho_{\pi_{E}}(s,a))$
(7)
Meanwhile, the discriminator $\theta$ is matching its Q-function to the log-
distribution of the expert’s conditional action distribution (Section III-B).
Therefore, when this Q-function is optimal:
$Q^{\pi_{mix}}_{\theta}=Q^{\pi_{mix}}_{\theta^{*}}$, the robot’s policy
objective (Eq.4) is also matching the expert’s conditional action
distribution:
$\psi^{*}=\operatorname*{arg\,min}_{\psi}E_{\rho_{\pi_{mix}}(s)}[D_{KL}(\pi_{\psi}(a\mid
s)\;\|\;\pi_{E}(a\mid s))]$ (8)
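Minimizing the KL divergence of Eq. (4) is the familiar soft actor-critic policy update. A minimal sketch follows; `rsample_with_log_prob` is a hypothetical interface for a reparameterized stochastic policy, and the entropy temperature is absorbed into $Q$ here.

```python
def policy_loss(policy, q_theta, states, z):
    """Eq. (4): min_psi KL(pi_psi(.|s,z) || exp(Q)/Z), i.e. minimize
    E[log pi(a|s,z) - Q(s,a,z)] over reparameterized actions."""
    actions, log_pi = policy.rsample_with_log_prob(states, z)  # hypothetical API
    return (log_pi - q_theta(states, actions, z)).mean()
```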
### IV-C Comparison to the Behavioral Cloning (BC) Objective
While BC attempts to learn a policy that also matches the expert’s conditional
action distribution [19], the fundamental difference is that the KL-divergence
in BC’s case is computed under the expert’s narrow state distribution
$\rho_{\pi_{E}}(s)$:
$\psi_{BC}^{*}=\operatorname*{arg\,min}_{\psi}E_{\rho_{\pi_{E}}(s)}[D_{KL}(\pi_{E}(a\mid
s)\;\|\;\pi_{\psi}(a\mid s))]$. In contrast, ours (Eq.8) is computed under
$\rho_{\pi_{mix}}(s)$: the state distribution of the combined cumulative
experience of the robot and the expert, which is a much wider distribution
than the expert distribution. We hypothesize that this, along with matching
the joint state-action distribution of the expert (Eq.7), makes our algorithm
less susceptible to compounding errors than BC, as experimentally tested in
Section VI.
Figure 3: SQUIRL: Soft Q-functioned Meta-IRL. To begin, our algorithm
bootstraps learning for the policy (orange) and the task encoder (yellow) via
behavioral cloning (the left third of Fig.3). Next, our algorithm uses the
warmed-up policy and task encoder to generate 10 trials in the physical world
(not in simulation). Using the combined expert and robot trajectories, our
algorithm learns a task-conditioned soft Q-function (green) that rationalizes
the expert’s behaviors as maximizing both cumulative reward and entropy (i.e.,
randomness). Using this Q-function, our algorithm then quickly improves the
policy using all cumulative robot and expert timesteps. This cycle repeats
until convergence, totaling 90 trials (the middle third of Fig.3). Finally, at test time (the right third of Fig.3), our algorithm generates a new embedding $z$ for the new task, and inputs this embedding into the task-conditioned policy to solve the new task without any practice trials.
## V SQUIRL: Soft Q-functioned Meta-IRL
Shown in Fig.3, our algorithm learns three neural networks jointly – a task
encoder (yellow), a task-conditioned policy (orange), and a task-conditioned
soft Q-function (green):
1. 1.
$\Psi_{\phi}(c)$: a task encoder that encodes a sampled batch of $C=64$ expert
state-action pairs $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}$ from a task $i$ into a
single 32-dim embedding vector $z^{i}\in\mathbb{R}^{32}$ (by computing the
mean vector across 64 embeddings) that enables generalization to new tasks.
This batch of expert state-action pairs is randomly sampled and thus does not encode time information. Both the policy and the Q-function accept this embedding vector as input (a minimal sketch of this encoder follows the list).
2. 2.
$\pi_{\psi}(s,z^{i})$: a task-conditioned policy the robot uses to perform a
task $i$ given state $s$ and the task embedding vector
$z^{i}\in\mathbb{R}^{32}$ outputted by the task encoder $\Psi_{\phi}(c)$.
3. 3.
$Q_{\theta}(s,a,z^{i})$: a task-conditioned soft Q-function used to train the
policy $\pi_{\psi}(s,z^{i})$ to more robustly mimic the expert’s behavior for
the robotic manipulation task $i$.
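As referenced in item 1 above, here is a minimal sketch of such a task encoder (our illustration; class and layer names beyond the dimensions stated in the paper are assumptions):

```python
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    """Sketch: embed C expert (s, a) pairs and mean-pool into one z."""
    def __init__(self, state_dim, action_dim, z_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, states, actions):
        # states: (C, state_dim), actions: (C, action_dim); C = 64 in the paper
        per_step = self.net(torch.cat([states, actions], dim=-1))  # (C, z_dim)
        return per_step.mean(dim=0)                                # (z_dim,) task embedding
```

Mean-pooling makes the encoder permutation-invariant, consistent with the fact that the randomly sampled context pairs carry no time information.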
To begin, the robot is given an expert trajectory of state-action pairs
$\mathcal{D}_{\pi_{E}}$ for each of the 117 training tasks. The robot first
uses these expert trajectories to bootstrap training for both its policy
$\pi_{\psi}$, and the task encoder $\Psi_{\phi}$ via behavioral cloning
(Eq.9). This way, the robot can distinguish the training tasks better and learn
more quickly in the real world. Next, the robot generates 10 trials (state-
action pairs) $\overline{\mathcal{D}}_{\pi_{\psi}}$ in the physical world (not
simulation) using its warmed-up policy and task encoder. Then, the robot uses
both the expert’s and its state-action pairs to train a discriminator
$\theta$. This discriminator classifies which state-action pairs come from the
expert $\pi_{E}$ vs. the robot $\pi_{\psi}$. At first, the robot is
distinctly worse than the expert at performing the tasks, which makes the discriminator’s classification easy. In the process, the discriminator learns a Q-function $Q^{\pi_{mix}}_{\theta}$ via Eq.3.
Using the learned Q-function $Q^{\pi_{mix}}_{\theta}$, the robot trains its
policy $\pi_{\psi}$ via Eq.4. Meanwhile, the robot also has the option to
continue updating its task-conditioned policy and task encoder via behavioral
cloning (Eq.9). Since training the policy via Eq.4 is equivalent to indirectly
imitating the expert (Eq.7 and 8), as derived in Section IV-B, the
trajectories generated by the policy gradually become more similar to the
expert. This makes the state-action pairs more difficult for the discriminator
to classify. This difficulty, in turn, forces the discriminator to learn a
more precise Q-function, which then encourages the policy to mimic the expert
even more closely. This cycle repeats until convergence (90 trials in total),
at which point: 1) the policy matches the expert performance, 2) the task
encoder learns to generalize to new tasks, and 3) the discriminator continues
to struggle to distinguish state-action pairs correctly despite having learned
an accurate Q-function.
### V-A Rationale for Bypassing Reward Learning via SQUIRL
SQUIRL learns a Q-function without rewards because 1) the policy is ultimately trained by the Q-function, not by rewards, so bypassing reward learning improves IRL sample efficiency; and 2) circumventing reward learning avoids off-policy Q-learning from a constantly changing reward function, which empirically makes training easier and more stable.
### V-B Architectures for Policy, Task Encoder, and Q-function
For all non-vision tasks, we parameterize $\pi_{\psi},\Psi_{\phi},Q_{\theta}$
with five fully-connected (FC) layers. For vision tasks, we use a 5-layer CNN
followed by a spatial-softmax activation layer for the RGB image. This
activation vector is then concatenated with the non-vision input vector and
together passed through five FC layers. Our algorithm is general and works
with many other network architectures, state, and action spaces.
### V-C Incorporating BC to Bootstrap and Accelerate Learning
Since our algorithm’s IRL objective (Eq.8) is compatible with BC, as explained in Section IV-C, our algorithm can be trained jointly with BC to stabilize and accelerate learning without conflicting gradient issues (line 16 in Algorithm 1):
$\mathcal{L}^{BC}=\mathbb{E}_{(s,a)\sim\pi_{E}}[\left\lVert\pi_{\psi}(s,\Psi_{\phi}(c))-a\right\rVert^{2}]$
(9)
This, combined with the off-policy nature of our algorithm, also allows the
robot to bootstrap learning by first “pre-training” via BC (Eq.9) using the
expert demonstrations, before improving performance further via meta-IRL
training.
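A minimal sketch of the BC loss in Eq.9, assuming the hypothetical `TaskEncoder` above and a policy whose action distribution exposes a mean:

```python
def bc_loss(policy, encoder, expert_states, expert_actions, context_s, context_a):
    """Sketch of Eq.9: squared error between the task-conditioned policy's
    action and the expert's action, with z = Psi_phi(c) inferred from a
    separately sampled batch of expert context pairs."""
    z = encoder(context_s, context_a)        # task embedding Psi_phi(c)
    pred = policy(expert_states, z).mean     # mean action of pi_psi(. | s, z)
    return ((pred - expert_actions) ** 2).sum(-1).mean()
```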
Algorithm 1 SQUIRL: Soft Q-functioned Meta-IRL (Train)
Input: One expert video demonstration trajectory of state-action pairs
$\mathcal{D}^{i}_{\pi_{E}}=\\{s^{i}_{1:H},a^{i}_{1:H}\\}$ for each of the $n$
training tasks $i=1:n$, where $H$ is the horizon of the task (e.g.,
$n=117,H=100$)
1: Initialize soft Q-function $Q_{\theta}$, policy $\pi_{\psi}$, task encoder
$\Psi_{\phi}$, and an empty buffer of off-policy robot trajectories
$\mathcal{D}^{i}_{\pi_{\psi}}\leftarrow\\{\\}$ for each training task $i=1:n$
2: Warm-up policy and task encoder via $\mathcal{L}^{BC}$ (Eq.9)
3: while not converged do
4: Sample a batch of $m$ task indices $\\{i^{1:m}\\}$ from all training tasks
$i=1:n$, (e.g., $m=10$)
5: for $i=i^{1:m}$ do
6: Infer task embedding $z^{i}\in\mathbb{R}^{\mathcal{Z}}\leftarrow\Psi_{\phi}(c)$, where $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}\sim\mathcal{D}^{i}_{\pi_{E}}$ (e.g., $\mathcal{Z}=32,C=64$)
7: Generate a robot trajectory of state-action pairs
$\overline{\mathcal{D}}^{i}_{\pi_{\psi}}=\\{s^{i}_{1:H},a^{i}_{1:H}\\}$ from
task $i$ using $\pi_{\psi},z^{i}$
8:
$\mathcal{D}^{i}_{\pi_{\psi}}\leftarrow\mathcal{D}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{\psi}}$
9: end for
10: for $j=1:J$ (e.g., $J=400$) do
11: Sample another batch of $m$ task indices $\\{i^{1:m}\\}$
12: $\theta\leftarrow\theta-\nabla_{\theta}\mathcal{L}^{IRL}$ (Eq.III-B) using
a combined batch of $\mathcal{B}=128$ robot and expert timesteps:
$\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{E}}$
and $z^{i}$, where
$\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\sim\mathcal{D}^{i}_{\pi_{\psi}}$,
$\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}}$,
$i=\\{i^{1:m}\\}$
13: end for
14: for $k=1:K$ (e.g., $K=2000$) do
15: Sample another batch of $m$ task indices $\\{i^{1:m}\\}$
16: if necessary then
$\\{\psi,\phi\\}\leftarrow\\{\psi,\phi\\}-\nabla_{\psi,\phi}\mathcal{L}^{BC}$
(Eq.9) using a batch of $\mathcal{B}$ expert timesteps
$\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}},z^{i}$,
$i=\\{i^{1:m}\\}$ end if
17: $\psi\leftarrow\psi-\nabla_{\psi}\mathcal{L}^{RL}$ (Eq.4) using a combined
batch of $\mathcal{B}$ robot and expert timesteps:
$\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\cup\overline{\mathcal{D}}^{i}_{\pi_{E}}$
and $z^{i}$, where
$\overline{\mathcal{D}}^{i}_{\pi_{\psi}}\sim\mathcal{D}^{i}_{\pi_{\psi}}$,
$\overline{\mathcal{D}}^{i}_{\pi_{E}}\sim\mathcal{D}^{i}_{\pi_{E}}$,
$i=\\{i^{1:m}\\}$
18: end for
19: end while
20: return soft Q-function $Q_{\theta}$, policy $\pi_{\psi}$, task encoder
$\Psi_{\phi}$
Algorithm 2 SQUIRL: Soft Q-functioned Meta-IRL (Test)
Input: $\pi_{\psi}$, $\Psi_{\phi}$, $Q_{\theta}$, and a single expert video
demonstration of state-action pairs
$\mathcal{D}^{i}_{\pi_{E}}=\\{s^{i}_{1:H}$, $a^{i}_{1:H}\\}$ from a new task
$i$ unseen during training
1: Infer task embedding vector $z^{i}\in\mathbb{R}^{\mathcal{Z}}\leftarrow\Psi_{\phi}(c)$, where $c=\\{s^{i}_{1:C},a^{i}_{1:C}\\}\sim\mathcal{D}^{i}_{\pi_{E}}$ (e.g., $\mathcal{Z}=32,C=64$)
2: Rollout robot trajectory in the real world using $\pi_{\psi}$, $z^{i}$
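Algorithm 2 amounts to two steps; a minimal sketch assuming a gym-like environment API and the interfaces from the earlier sketches:

```python
def squirl_test(policy, encoder, demo_states, demo_actions, env, horizon=100):
    """Sketch of Algorithm 2: infer z once from a single expert demo of the
    new task, then roll out the task-conditioned policy with no practices."""
    z = encoder(demo_states, demo_actions)   # step 1: task embedding from expert pairs
    state = env.reset()
    for _ in range(horizon):                 # step 2: rollout in the real world
        action = policy(state, z).mean       # act with the policy's mean action
        state, reward, done, info = env.step(action)  # assumed gym-style API
        if done:
            break
```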
Figure 4: Pick-Carry-Drop Experiment (panels: Approach Box, Lower to Box, Grasp Box, Pick up Box, Carry Box, Drop Box). The robot needs to approach, lower to, grasp, pick up, carry, and drop the box to solve the task.
### V-D Using Expert Demonstration as Both the Input Task Context Variables
and Training Signal for the Task Encoder
Learning robust task embeddings enables robots to generalize to new tasks
quickly [23]. To this end, our algorithm proposes to use 64 expert timesteps
as the input task context variable $c$ into the task encoder, as opposed to 64
robot timesteps. This is because context variables should explore the task and
environment sufficiently well to expose the key information of the task, and
expert demonstration timesteps are an ideal candidate compared to the
timesteps from the robot’s suboptimal policy. As a result, the context
variable $c$ input into the task encoder only includes the states and actions
of the expert, but not the rewards or the next states.
In addition, we choose the BC loss $\mathcal{L}^{BC}$ in Eq.9 as the training
loss for learning the task encoder $\Psi_{\phi}$. This BC loss is stable since
the expert timesteps are fixed. In contrast, the IRL loss $\mathcal{L}^{IRL}$ (Eq.III-B) and the policy loss $\mathcal{L}^{RL}$ (Eq.4) are less stable because the training data distributions for both losses are non-stationary. This design choice also allows us to learn robust task embeddings first via BC pre-training before performing meta-IRL training via SQUIRL. We empirically
observe that such pre-training can improve the training stability and the
sample efficiency of SQUIRL, but the final policy performance is similar with
or without BC pre-training. In summary, our algorithm is detailed in Algorithm
1 (train) and Algorithm 2 (test), with hyperparameters detailed
here222Hyperparameters in Algorithm 1 and 2. Policy gradient batch size
$\mathcal{B}$: 1024 (non-vision), 128 (vision); task embedding batch size $C$:
64; all learning rates: $3e^{-4}$; starting SAC alpha: $1e^{-5}$; SAC target
entropy: $-300$; IRL updates per epoch $J$: $400$; policy updates per epoch
$K$: $2000$; task embedding size $\mathcal{Z}$: 32; meta-batch size $m$: 10;
discount rate $\gamma$: 0.99.
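For reference, the footnoted hyperparameters, collected into a single (purely illustrative) configuration dictionary:

```python
SQUIRL_CONFIG = {
    "policy_batch_size": {"non_vision": 1024, "vision": 128},  # B
    "task_embedding_batch_size": 64,    # C
    "learning_rate": 3e-4,              # all networks
    "sac_alpha_init": 1e-5,             # starting SAC alpha
    "sac_target_entropy": -300,
    "irl_updates_per_epoch": 400,       # J
    "policy_updates_per_epoch": 2000,   # K
    "task_embedding_size": 32,          # Z
    "meta_batch_size": 10,              # m
    "discount": 0.99,                   # gamma
}
```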
## VI Experiments and Results Analysis
We evaluate the generality and robustness of our algorithm across long-horizon
vision and non-vision tasks with continuous state and action spaces in both
simulation (Pick-Carry-Drop, a horizon of 1024 timesteps, 30 train tasks) and
real-world (Pick-Pour-Place, a horizon of 100 timesteps, 117 train tasks).
There is only a single expert demonstration for each of the train or test
tasks. We compare with the PEARL-BC baseline, which is the behavioral cloning
version of PEARL [23]. Evaluation: We evaluate real-robot and simulation experiments on 50 and 500 trials, respectively, across 50 seen and unseen tasks. We report mean and standard deviation (“stdev” hereafter). The performance difference between two experiments is considered statistically significant if the difference in means is at least either experiment’s standard deviation.
video is at http://crlab.cs.columbia.edu/squirl.
Figure 5: Pick-Pour-Place at Test Time (panels: Approach Bottle, Grasp Bottle, Carry Bottle, Pour Orange Cup, Carry Bottle, Place Bottle). To solve this task, the robot needs to first approach, grasp, and carry the grey bottle, pour the iron pebble inside the bottle into a specific container, and carry and place the bottle back on the table. At the beginning of each task, the bottle is not in hand, but the iron pebble is already in the bottle. Top row: top-down camera images. Bottom row: 45° camera images.
### VI-A Simulation Experiment: Pick-Carry-Drop
Description. We modify the planar Stacker task [27] to create “Pick-Carry-
Drop”. Shown in Fig.4, a robot is tasked to approach, pick, carry, and drop
the black box into the stack marked in green. The task is successful if the
box is dropped into the stack within 1024 timesteps, and failed otherwise.
State Space. We evaluate our algorithm on both the vision and the non-vision
version of the task, to demonstrate that SQUIRL is general across different
state space modalities. The state space for the vision version includes 1) the
joint angles and velocities for its 5 DOFs, 2) a one-hot vector indicating the
current stage of the task, and 3) an RGB image shown in Fig.4. The non-vision
version’s state space replaces the RGB image with the position of the black
box.
Action Space. The robot controls its 5-DOF joint torques.
Task Definition. There are a total of 30 training tasks in this experiment,
each corresponding to a different drop location:
$x\in\\{-0.15,-0.14,\ldots,0.14\\}$. During test time, we randomly sample a
new, real-valued drop location from the maximum valid range:
$x\in[-0.25,0.25]$. The green drop location is invisible in both the vision
and the non-vision version of the task. Therefore, the robot needs to infer
the green drop location (i.e., task information) solely from the provided
expert video demonstration. On the other hand, the starting pose of the robot
and the location of the black box are all initialized randomly at the
beginning of each task.
Robot Trials. The robot uses 150 training trials in total.
Expert Demonstration. We trained an expert policy from scratch via RL to
provide expert demonstrations. The reward function used to train the expert policy comprises six stages, each with a reward of 10. Designing this reward function took significant human effort, which demonstrates the value of learning directly from video demonstrations.
TABLE I: Pick-Carry-Drop Results (% Drop Success $\pm$ Stdev)

Tasks | Vision, Seen | Vision, Unseen | Non-Vision, Seen | Non-Vision, Unseen
---|---|---|---|---
SQUIRL (BC + IRL) | 95.8$\pm$1.7 | 95.0$\pm$1.5 | 97.3$\pm$3.0 | 96.9$\pm$2.0
Baseline (PEARL-BC) | 77.8$\pm$1.6 | 76.5$\pm$0.7 | 90.8$\pm$2.5 | 89.5$\pm$1.6
_Ablation: No BC Joint Training or BC Pre-training_ | | | |
SQUIRL (IRL Only) | 93.8$\pm$1.8 | 93.2$\pm$1.6 | 94.7$\pm$1.7 | 93.9$\pm$1.4
Simulation Results and Analysis. As shown in Table I, our algorithm, “SQUIRL
(BC + IRL)”, pre-trains via BC and then trains the policy using both the BC
loss (Eq.9) and the IRL policy gradient loss (Eq.4). It statistically
significantly outperforms the PEARL-BC baseline in both the vision
(95.8%$\pm$1.7 vs. 77.8%$\pm$1.6) and non-vision (97.3%$\pm$3.0 vs.
90.8%$\pm$2.5) version of the task for seen tasks. For unseen tasks, we
observed similar outperformance (95.0%$\pm$1.5 vs. 76.5%$\pm$0.7 in the vision
case and 96.9%$\pm$2.0 vs. 89.5%$\pm$1.6 in the non-vision case).
Qualitatively, in the PEARL-BC’s case, the robot sometimes misses the drop
location as it attempts to drop the box or fails to pick up the box when the
box gets stuck by the walls of the stack (kindly see website). The performance
drop of the baseline from the non-vision version (90.8%$\pm$2.5 and
89.5%$\pm$1.6 for seen and unseen tasks) to the vision version (77.8%$\pm$1.6
and 76.5%$\pm$0.7 for seen and unseen tasks) is mainly because vision-based
manipulation tends to suffer from larger compounding errors. Nevertheless, as
evident in the statistical similarities between seen and unseen tasks for
SQUIRL (95.8%$\pm$1.7 vs. 95.0%$\pm$1.5 for vision) and PEARL-BC
(77.8%$\pm$1.6 vs. 76.5%$\pm$0.7 for vision), both algorithms can generalize
to unseen tasks, due to the generalizability of task embeddings.
Ablation: IRL Gradient Only. To compare the performance contribution of
SQUIRL’s meta-IRL core training procedure directly against PEARL-BC, we
created “SQUIRL (IRL only)”, which trains the policy using only the policy
gradient loss in Eq.4 (no BC joint training or pre-training). This ablated
version still outperforms the PEARL-BC baseline (93.8%$\pm$1.8 vs.
77.8%$\pm$1.6 for seen vision tasks, 93.2%$\pm$1.6 vs. 76.5%$\pm$0.7 for
unseen vision tasks). Nevertheless, by combining BC and IRL gradients, “SQUIRL
(BC + IRL)” improves performance slightly further (95.8%$\pm$1.7 and
95.0%$\pm$1.5). Intuitively, while BC only matches the expert’s conditional action distribution under the expert’s state distribution, BC’s supervised learning signal is more stable than IRL’s. Joint training with BC and IRL gradients
can be interpreted as combining the stability of BC and the robustness of
Q-functioned IRL, by matching the conditional action distribution of the
expert under the broader state distribution of the expert-robot mixture
experience (Eq.8), in addition to matching the expert’s joint state-action
distribution (Eq.7).
### VI-B Real-Robot Experiment: Pick-Pour-Place
Description. We evaluated our algorithm on the UR5-Seed robot (Fig.2) to
perform a set of long-horizon pick-pour-place tasks. As shown in Fig.2, in
each task, there is a grey cylindrical bottle, an iron pebble that is already
in the bottle, and more than one container on the table. The robot is tasked
to approach and pick-up the grey bottle, pour the iron pebble into a specific
container, and place the bottle back on the table. The task is a success only
if the pebble is poured into the correct container and the bottle is placed
upright on the table within $H=100$ timesteps, and a failure otherwise.
State Space. The state space contains a top-down or 45° camera’s RGB image (Fig.5), and two binary indicators for whether the robot has poured or closed the hand, respectively.
Action Space. The action space includes the Cartesian unit directional vector
for the end-effector movement. During each timestep, the robot can adjust the
end-effector by 2cm along any 3D direction. The action space also includes a
binary indicator to control the arm vs. the hand and a trinary indicator to
close, open, or rotate the hand for pouring.
Orthogonality to State and Action Representations. While Pick-Pour-Place can be tackled by first localizing the correct container via object detection (alternative state space) and then executing motion-planning trajectories to pour (alternative action space), our algorithm is general across and orthogonal to alternative state and action spaces.
Task Definition. As shown in each row of images in Fig.1, each task is defined
by the positions and colors of the containers, and by the correct container to
pour into. There are always only the green and yellow containers in the 117
train tasks. 25 of the 50 test tasks have the green and yellow containers at
new positions. The remaining 25 test tasks add the unseen red and orange containers, or either one. Since there is always more than one container in the
RGB image, the robot will not know which container to pour into without the
expert demonstration. Therefore, the robot needs to depend solely on the task
encoder’s ability to extract the correct task information from the expert
demonstration.
Robot Trials. The robot collects 90 training trials in total.
Expert Demonstration. We collect demonstrations via teleoperation using a
Flock of Birds sensor333Flock of Birds is a 6D pose tracker from Ascension
Technologies Corp.. Using the human wrist pose detected by the sensor in real-
time, we move, open, close, or rotate the robot hand for pouring. We collected
$117$ video demonstrations across 117 tasks for training. It takes 1-2 minutes
to collect one demonstration.
TABLE II: Pick-Pour-Place Results (% Pour Success $\pm$ Stdev)

Tasks | RGB Image | Seen | Unseen
---|---|---|---
SQUIRL (BC + IRL) | Top-Down (90°) | 92.0$\pm$4.5 | 90.0$\pm$7.1
Baseline (PEARL-BC) | Top-Down (90°) | 70.0$\pm$7.1 | 68.0$\pm$11.0
Baseline (Standard-BC) | Top-Down (90°) | 60.0$\pm$10.0 | 56.0$\pm$11.4
SQUIRL (BC + IRL) | $45\degree$ (Ablation) | 90.0$\pm$7.1 | 88.0$\pm$8.4
Real-robot Results and Analysis. As shown in Table II, our algorithm
outperforms the PEARL-BC baseline statistically significantly in both seen
tasks (92.0%$\pm$4.5 vs. 70.0%$\pm$7.1) and unseen tasks (90.0%$\pm$7.1 vs.
68.0%$\pm$11.0). This observed outperformance mainly originates from our soft
Q-functioned IRL formulation, which forces the robot to imitate the expert
under a much wider state distribution provided by the expert-robot mixture
trajectories, instead of the narrow state distribution of the expert
demonstrations. This helps reduce compounding errors during task execution.
The low performance of the PEARL-BC baseline is mainly due to additional compounding errors induced by real-world sensory noise such as unstable lighting conditions and small perturbations to camera positions. Qualitatively,
the PEARL-BC baseline sometimes pours into the wrong container, misses the
target container by a few centimeters, or moves past the target container
while failing to pour in time (kindly see website for examples). Nevertheless,
from the statistical similarity between seen and unseen tasks for both our
algorithm (92.0%$\pm$4.5 vs. 90.0%$\pm$7.1) and PEARL-BC (70.0%$\pm$7.1 vs.
68.0%$\pm$11.0), we see that the learned task encoder is still effectively
generalizing to a new, related task.
Comparison to the “Standard-BC” Baseline. We also compared to “Standard-BC”
(60.0%$\pm$10.0 and 56.0%$\pm$11.4 for seen and unseen tasks), which performs
no meta-learning and learns every train or test task independently from
scratch via BC. As a result, the neural network overfits to the single demonstration and fails to generalize to real-world sensory (camera) noise at test time. Note that Standard-BC’s unseen-task performance is slightly lower
than seen tasks since the unseen tasks are more challenging with at most 4
containers on the table, compared to only 2 containers in seen tasks.
Ablation: Non-top-down Camera. We also tested our algorithm with a $45\degree$
RGB image (90.0%$\pm$7.1 and 88.0%$\pm$8.4 for seen and unseen tasks) against
a top-down RGB image (92.0%$\pm$4.5 and 90.0%$\pm$7.1 for seen and unseen
tasks). The statistical similarity between the two shows that SQUIRL is
general and can accept a non-top-down RGB input image.
## VII Conclusion
We introduced SQUIRL, a robust, efficient, and general Soft Q-functioned meta-
IRL algorithm, towards enabling robots to learn from limited expert (one per
task) and robot (90 in total) trajectories. This algorithm is statistically significantly more robust than behavioral cloning and requires no trial-and-error at test time. Finally, this general algorithm has been tested to work
with various long-horizon manipulation tasks, and across vision and non-vision
state and action spaces. In the future, we will extend this algorithm to learn
from direct human-arm demonstrations instead of teleoperation. This will lower
the cost of collecting real-world expert demonstrations further. We also aim
to incorporate hierarchical learning into SQUIRL to solve much longer horizon
manipulation tasks by reusing low-level subpolicies.
## References
* [1] P. Abbeel and A. Ng, “Apprenticeship learning via inverse reinforcement learning,” _International Conference on Machine Learning_ , 2004.
* [2] N. Ratliff, B. Andrew, and M. Zinkevich, “Maximum margin planning,” _International Conference on Machine Learning (ICML)_ , 2006.
* [3] B. Ziebart, “Modeling purposeful adaptive behavior with the principle of maximum causal entropy,” _PhD Thesis_ , 2010.
* [4] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey, “Maximum entropy inverse reinforcement learning,” in _Proc. AAAI_ , 2008.
* [5] A. Boularias, J. Kober, and J. Peters, “Relative entropy inverse reinforcement learning,” in _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_ , ser. Proceedings of Machine Learning Research, vol. 15. PMLR, 11–13 Apr 2011.
* [6] S. K. Seyed Ghasemipour, S. S. Gu, and R. Zemel, “Smile: Scalable meta inverse reinforcement learning through context-conditional policies,” in _Advances in Neural Information Processing Systems_ , 2019.
* [7] L. Yu, T. Yu, C. Finn, and S. Ermon, “Meta-inverse reinforcement learning with probabilistic context variables,” in _NeurIPS_ , 2019.
* [8] J. Fu, K. Luo, and S. Levine, “Learning robust rewards with adverserial inverse reinforcement learning,” in _ICLR_ , 2018.
* [9] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey of robot learning from demonstration,” _Robotics and Autonomous Systems_ , vol. 57, no. 5, pp. 469–483, 2009.
* [10] D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese, “Neural task programming: Learning to generalize across hierarchical tasks,” in _International Conference on Robotics and Automation_ , 2018.
* [11] D.-A. Huang, S. Nair, D. Xu, Y. Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles, “Neural task graphs: Generalizing to unseen tasks from a single video demonstration,” in _CVPR_ , 2019.
* [12] D.-A. Huang, Y.-W. Chao, C. Paxton, X. Deng, L. Fei-Fei, J. C. Niebles, A. Garg, and D. Fox, “Motion reasoning for goal-based imitation learning,” in _ICRA_ , 2020.
* [13] C. Finn, S. Levine, and P. Abbeel, “Guided cost learning: Deep inverse optimal control via policy optimization,” in _ICML_ , 2016.
* [14] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel, “Deep imitation learning for complex manipulation tasks from virtual reality teleoperation,” in _ICRA_ , 2018.
* [15] J. Kober and J. Peters, “Imitation and reinforcement learning - practical algorithms for motor primitive learning in robotics,” _IEEE Robotics and Automation Magazine_ , vol. 17, no. 2, pp. 55–62, 2010.
* [16] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, “Learning and generalization of motor skills by learning from demonstration,” in _International Conference on Robotics and Automation (ICRA)_ , 2009.
* [17] P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain, “Time-contrastive networks: Self-supervised learning from video,” in _ICRA_ , 2018.
* [18] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, and P. Sermanet, “Learning latent plans from play,” in _CoRL_ , 2019.
* [19] S. K. S. Ghasemipour, R. Zemel, and S. Gu, “A divergence minimization perspective on imitation learning methods,” in _CoRL_ , 2019.
* [20] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine, “One-shot visual imitation learning via meta-learning,” in _CoRL_ , 2017.
* [21] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine, “One-shot imitation from observing humans via domain-adaptive meta-learning,” in _Robotics: Science and Systems (RSS)_ , 2018.
* [22] T. Yu, P. Abbeel, S. Levine, and C. Finn, “One-shot hierarchical imitation learning of compound visuomotor tasks,” in _IROS_ , 2019.
* [23] K. Rakelly, A. Zhou, D. Quillen, C. Finn, and S. Levine, “Efficient off-policy meta-reinforcement learning via probabilistic context variables,” _International Conference on Machine Learning (ICML)_ , 2019.
* [24] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor,” _International Conference on Machine Learning (ICML)_ , 2018.
* [25] J. Ho and S. Ermon, “Generative adversarial imitation learning,” in _Advances in neural information processing systems_ , 2016.
* [26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in _Advances in neural information processing systems_ , 2014.
* [27] Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. de Las Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, T. Lillicrap, and M. Riedmiller, “DeepMind control suite,” Tech. Rep., Jan. 2018.
# Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey
Sanmit Narvekar <EMAIL_ADDRESS>, Department of Computer Science, University of Texas at Austin

Bei Peng <EMAIL_ADDRESS>, Department of Computer Science, University of Oxford

Matteo Leonetti <EMAIL_ADDRESS>, School of Computing, University of Leeds

Jivko Sinapov <EMAIL_ADDRESS>, Department of Computer Science, Tufts University

Matthew E. Taylor <EMAIL_ADDRESS>, Alberta Machine Intelligence Institute, Department of Computing Science, University of Alberta

Peter Stone <EMAIL_ADDRESS>, Department of Computer Science, University of Texas at Austin, and Sony AI
Reinforcement learning (RL) is a popular paradigm for addressing sequential
decision tasks in which the agent has only limited environmental feedback.
Despite many advances over the past three decades, learning in many domains
still requires a large amount of interaction with the environment, which can
be prohibitively expensive in realistic scenarios. To address this problem,
transfer learning has been applied to reinforcement learning such that
experience gained in one task can be leveraged when starting to learn the
next, harder task. More recently, several lines of research have explored how
tasks, or data samples themselves, can be sequenced into a _curriculum_ for
the purpose of learning a problem that may otherwise be too difficult to learn
from scratch. In this article, we present a framework for curriculum learning
(CL) in reinforcement learning, and use it to survey and classify existing CL
methods in terms of their assumptions, capabilities, and goals. Finally, we
use our framework to find open problems and suggest directions for future RL
curriculum learning research.
Keywords: curriculum learning, reinforcement learning, transfer learning
## 1 Introduction
Curricula are ubiquitous throughout early human development, formal education,
and life-long learning all the way to adulthood. Whether learning to play a
sport, or learning to become an expert in mathematics, the training process is
organized and structured so as to present new concepts and tasks in a sequence
that leverages what has previously been learned. In a variety of human
learning domains, the quality of the curricula has been shown to be crucial in
achieving success. Curricula are also present in animal training, where it is
commonly referred to as shaping (Skinner, 1958; Peterson, 2004).
As a motivating example, consider the game of Quick Chess (shown in Figure 1),
a game designed to introduce children to the full game of chess, by using a
sequence of progressively more difficult “subgames.” For example, the first
subgame is played on a 5x5 board with only pawns, where the player learns how
pawns move, get promoted, and take other pieces. Next, in the second subgame,
the king piece is added, which introduces a new objective: keeping the king
alive. In each successive subgame, new elements are introduced (such as new
pieces, a larger board, or different configurations) that require learning new
skills and building upon knowledge learned in previous games. The final game
is the full game of chess.
The idea of using such curricula to train artificial agents dates back to the
early 1990s, where the first known applications were to grammar learning
(Elman, 1993; Rohde and Plaut, 1999), robotics control problems (Sanger,
1994), and classification problems (Bengio et al., 2009). Results showed that
the order of training examples matters and that generally, incremental
learning algorithms can benefit when training examples are ordered in
increasing difficulty. The main conclusion from these and subsequent works in
curriculum learning is that starting small and simple and gradually increasing
the difficulty of the task can lead to faster convergence as well as increased
performance on a task.
Recently, research in reinforcement learning (RL) (Sutton and Barto, 1998) has
been exploring how agents can leverage transfer learning (Lazaric et al.,
2008; Taylor and Stone, 2009) to re-use knowledge learned from a source task
when attempting to learn a subsequent target task. As knowledge is transferred
from one task to the next, the sequence of tasks induces a curriculum, which
has been shown to improve performance on a difficult problem and/or reduce the
time it takes to converge to an optimal policy.
Figure 1: Different subgames in the game of Quick Chess, which are used to
form a curriculum for learning the full game of Chess.
Many groups have been studying how such a curriculum can be generated
automatically to train reinforcement learning agents, and many approaches to
do so now exist. However, what exactly constitutes a curriculum and what
precisely qualifies an approach as being an example of curriculum learning is
not clearly and consistently defined in the literature. There are many ways of
defining a curriculum: for example, the most common way is as an ordering of
tasks. At a more fundamental level, a curriculum can also be defined as an
ordering of individual experience samples. In addition, a curriculum does not
necessarily have to be a simple linear sequence. One task can build upon
knowledge gained from multiple source tasks, just as courses in human
education can build off of multiple prerequisites.
Methods for curriculum generation have separately been introduced for areas
such as robotics, multi-agent systems, human-computer and human-robot
interaction, and intrinsically motivated learning. This body of work, however,
is largely disconnected. In addition, many landmark results in reinforcement
learning, from TD-Gammon (Tesauro, 1995) to AlphaGo (Silver et al., 2016) have
implicitly used curricula to guide training. In some domains, researchers have
successfully used methodologies that align with our definition of curriculum
learning without explicitly describing it that way (e.g., self-play). Given
the many landmark results that have utilized ideas from curriculum learning,
we think it is very likely that future landmark results will also rely on
curricula, perhaps more so than researchers currently expect. Thus, having a
common basis for discussion of ideas in this area is likely to be useful for
future AI challenges.
### Overview
The goal of this article is to provide a systematic overview of curriculum
learning (CL) in RL settings and to provide an over-arching framework to
formalize this class of methods. We aim to define classification criteria for
computational models of curriculum learning for RL agents, that describe the
curriculum learning research landscape over a broad range of frameworks and
settings. The questions we address in this survey include:
* •
What is a _curriculum_ , and how can it be represented for reinforcement
learning tasks? At the most basic level, a curriculum can be thought of as an
ordering over experience samples. However, it can also be represented at the
task level, where a set of tasks can be organized into a sequence or a
directed acyclic graph that specifies the order in which they should be
learned. We address this question in detail in Section 3.1.
* •
What is the _curriculum learning_ method, and how can such methods be
evaluated? We formalize this class of methods in Section 3.2 as consisting of
three parts, and extend metrics commonly used in transfer learning (introduced
in Section 2) to the curriculum setting to facilitate evaluation in Section
3.3.
* •
How can tasks be constructed for use in a curriculum? The quality of a
curriculum is dependent on the quality of tasks available to select from.
Tasks can either be generated in advance, or dynamically and on-the-fly with
the curriculum. Section 4.1 surveys works that examine how to automatically
generate good intermediate tasks.
* •
How can tasks or experience samples be sequenced into a curriculum? In
practice, most curricula for RL agents have been manually generated for each
problem. However, in recent years, automated methods for generating curricula
have been proposed. Each makes different assumptions about the tasks and
transfer methodology used. In Section 4.2, we survey these different automated
approaches, as well as describe how humans have approached curriculum
generation for RL agents.
* •
How can an agent transfer knowledge between tasks as it learns through a
curriculum? Curriculum learning approaches make use of transfer learning
methods when moving from one task to another. Since the tasks in the
curriculum can vary in state/action space, transition function, or reward
function, it is important to transfer relevant and reusable information from
each task, and effectively combine information from multiple tasks. Methods to
do this are enumerated and discussed in Section 4.3.
The next section provides background in reinforcement learning and transfer
learning. In Section 3, we define the curriculum learning method, evaluation
metrics, and the dimensions along which we will classify curriculum learning
approaches. Section 4, which comprises the core of the survey, provides a
detailed overview of the existing state of the art in curriculum learning in
RL, with each subsection considering a different component of the overall
curriculum learning approach. Section 5 discusses paradigms related to
curriculum learning for RL, such as curriculum learning for supervised
learning and for human education. Finally, in Section 6, we identify gaps in
the existing literature, outline the limitations of existing CL methods and
frameworks, and provide a list of open problems.
## 2 Background
In this section, we provide background on Reinforcement Learning (RL) and
Transfer Learning (TL).
### 2.1 Reinforcement Learning
Reinforcement learning considers the problem of how an agent should act in its
environment over time, so as to maximize some scalar reward signal. We can
formalize the interaction of an agent with its environment (also called a
_task_) as a Markov Decision Process (MDP). In this article, we restrict our
attention to _episodic_ MDPs:111In continuing tasks, a discount factor
$\gamma$ is often included. For simplicity, and due to the fact that tasks
typically terminate in curriculum learning settings, we present the
undiscounted case. But unless otherwise noted, our definitions and discussions
can easily apply to the discounted case as well.
###### Definition 1
An episodic MDP $M$ is a 6-tuple $(\mathcal{S},\mathcal{A},p,r,\Delta
s_{0},\mathcal{S}_{f})$, where $\mathcal{S}$ is the set of states,
$\mathcal{A}$ is the set of actions, $p(s^{\prime}|s,a)$ is a transition
function that gives the probability of transitioning to state $s^{\prime}$
after taking action $a$ in state $s$, and $r(s,a,s^{\prime})$ is a reward
function that gives the immediate reward for taking action $a$ in state $s$
and transitioning to state $s^{\prime}$. In addition, we shall use $\Delta
s_{0}$ to denote the initial state distribution, and $\mathcal{S}_{f}$ to
denote the set of terminal states.
We model time in discrete steps. At each time step $t$, the agent
observes its state and chooses an action according to its _policy_ $\pi(a|s)$.
The goal of the agent is to learn an _optimal policy_ $\pi^{*}$, which
maximizes the expected _return_ $G_{t}$ (the cumulative sum of rewards $R$)
until the episode ends at timestep $T$:
$G_{t}=\sum_{i=1}^{T-t}R_{t+i}$
There are three main classes of methods to learn $\pi^{*}$: value function
approaches, policy search approaches, and actor-critic methods. In _value
function approaches_ , a value $v_{\pi}(s)$ is first learned for each state
$s$, representing the expected return achievable from $s$ by following policy
$\pi$. Through policy evaluation and policy improvement, this value function
is used to derive a policy better than $\pi$, until convergence towards an
optimal policy. Using a value function in this process requires a model of the
reward and transition functions of the environment. If the model is not known,
one option is to learn an action-value function instead, $q_{\pi}(s,a)$, which
gives the expected return for taking action $a$ in state $s$ and following
$\pi$ after:
$q_{\pi}(s,a)=\sum_{s^{\prime}}p(s^{\prime}|s,a)[r(s,a,s^{\prime})+q_{\pi}(s^{\prime},a^{\prime})]\textrm{
, where }a^{\prime}\sim\pi(\cdot|s^{\prime})$
The action-value function can be iteratively improved towards the optimal
action-value function $q_{*}$ with on-policy methods such as SARSA (Sutton and
Barto, 1998). The optimal action-value function can also be learned directly
with off-policy methods such as $Q$-learning (Watkins and Dayan, 1992). An
optimal policy can then be obtained by choosing action
$\text{argmax}_{a}q_{*}(s,a)$ in each state. If the state space is large or
continuous, the action-value function can instead be estimated using a
function approximator (such as a neural network), $q(s,a;\bm{w})\approx
q_{*}(s,a)$, where $\bm{w}$ are the weights of the network.
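As a concrete instance of the off-policy case described above, the following is a minimal tabular $Q$-learning sketch for the undiscounted episodic setting presented here (the three-tuple `env.step` interface is an assumption):

```python
import random
from collections import defaultdict

def q_learning_episode(env, Q, actions, alpha=0.1, epsilon=0.1):
    """One episode of tabular Q-learning: epsilon-greedy behavior with an
    off-policy (greedy) bootstrap target, undiscounted as in the text."""
    s = env.reset()
    done = False
    while not done:
        if random.random() < epsilon:                 # explore
            a = random.choice(actions)
        else:                                         # exploit: argmax_a Q(s, a)
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next, r, done = env.step(a)                 # assumed interface
        target = r if done else r + max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])     # TD update
        s = s_next
    return Q
```

Here `Q` can be a `defaultdict(float)`, so unvisited pairs default to zero; an optimal policy is then obtained by acting greedily with respect to the converged `Q`.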
In contrast, _policy search methods_ directly search for or learn a
parameterized policy $\pi_{\bm{\theta}}(a|s)$, without using an intermediary
value function. Typically, the parameter $\bm{\theta}$ is modified using
search or optimization techniques to maximize some performance measure
$J(\bm{\theta})$. For example, in the episodic case, $J(\bm{\theta})$ could
correspond to the expected value of the policy parameterized by $\bm{\theta}$
from the starting state $s_{0}\sim\Delta s_{0}$: $v_{\pi_{\theta}}(s_{0})$.
A third class of methods, _actor-critic methods_ , maintain a parameterized
representation of both the current policy and value function. The actor is a
parameterized policy that dictates how the agent selects actions. The critic
estimates the (action-)value function for the actor using a policy evaluation
method such as temporal-difference learning. The actor then updates the policy
parameter in the direction suggested by the critic. An example of an actor-critic method is Deterministic Policy Gradient (Silver et al., 2014).
### 2.2 Transfer Learning
In the standard reinforcement learning setting, an agent usually starts with a
random policy, and directly attempts to learn an optimal policy for the target
task. When the target task is difficult, for example due to adversarial
agents, poor state representation, or sparse reward signals, learning can be
very slow.
Transfer learning is one class of methods and area of research that seeks to
speed up training of RL agents. The idea behind transfer learning is that
instead of learning on the _target task_ tabula rasa, the agent can first
train on one or more _source task_ MDPs, and _transfer_ the knowledge acquired
to aid in solving the target. This knowledge can take the form of samples
(Lazaric et al., 2008; Lazaric and Restelli, 2011), options (Soni and Singh,
2006), policies (Fernández et al., 2010), models (Fachantidis et al., 2013),
or value functions (Taylor and Stone, 2005). As an example, in value function
transfer (Taylor et al., 2007), the parameters of an action-value function
$q_{source}(s,a)$ learned in a source task are used to initialize the action-
value function in the target task $q_{target}(s,a)$. This biases exploration
and action selection in the target task based on experience acquired in the
source task.
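A minimal sketch of this kind of value function transfer in the tabular case, where `map_state` and `map_action` stand in for the inter-task mappings discussed below (both names are our own):

```python
def value_function_transfer(Q_source, target_states, target_actions,
                            map_state, map_action):
    """Sketch: initialize the target task's action-value table from a
    source task's, via mappings from target states/actions to source ones."""
    Q_target = {}
    for s in target_states:
        for a in target_actions:
            # fall back to zero where the source table has no entry
            Q_target[(s, a)] = Q_source.get((map_state(s), map_action(a)), 0.0)
    return Q_target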
Some of these methods assume that the source and target MDPs either share
state and action spaces, or that a _task mapping_ (Taylor et al., 2007) is
available to map states and actions in the target task to known states and
actions in the source. Such mappings can be specified by hand, or learned
automatically (Taylor et al., 2008; Ammar et al., 2015). Other methods assume
the transition or reward functions do not change between tasks. The best
method to use varies by domain, and depends on the relationship between source
and target tasks. Finally, while most methods assume that knowledge is
transferred from one source task to one target task, some methods have been
proposed to transfer knowledge from several source tasks directly to a single
target (Svetlik et al., 2017). See Taylor and Stone (2009) or Lazaric (2012)
for a survey of transfer learning techniques.
### 2.3 Evaluation Metrics for Transfer Learning
There are several metrics to quantify the benefit of transferring from a
source task to a target task (Taylor and Stone, 2009). Typically, they compare
the learning trajectory on the target task for an agent after transfer, with
an agent that learns directly on the target task from scratch (see Figure 2a).
One metric is _time to threshold_ , which computes how much faster an agent
can learn a policy that achieves expected return $G_{0}\geq\delta$ on the
target task if it transfers knowledge, as opposed to learning the target from
scratch, where $\delta$ is some desired performance threshold. Time can be
measured in terms of CPU time, wall clock time, episodes, or number of actions
taken. Another metric is _asymptotic performance_ , which compares the final
performance after convergence in the target task of learners when using
transfer versus no transfer. The _jumpstart_ metric instead measures the
initial performance increase on the target task as a result of transfer.
Finally, the _total reward_ ratio compares the total reward accumulated by the
agent during training up to a fixed stopping point, using transfer versus not
using transfer.
Figure 2: Performance metrics for transfer learning using (a) weak transfer and (b) strong transfer with offset curves.
An important evaluation question is whether to include time spent _learning in source tasks_ in the cost of using transfer. The transfer curve in Figure 2a
shows performance on the target task, and starts at time 0, even though time
has already been spent learning one or more source tasks. Thus, it does not
reflect time spent training in source tasks before transferring to the target
task. This is known in transfer learning as the _weak transfer_ setting, where
time spent training in source tasks is treated as a sunk cost. On the other
hand, in the _strong transfer_ setting, the learning curves must account for
time spent in all source tasks. One way to do this is to offset the curves to
reflect time spent in source tasks, as shown in Figure 2b. Another option is
to freeze the policy while learning on source tasks, and plot that policy’s
performance on the target task.
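The sketch below computes three of these metrics from learning curves, represented as lists of per-episode returns on the target task; for the strong transfer setting, the caller would first offset the transfer curve by time spent in source tasks:

```python
def time_to_threshold(returns, delta):
    """Episodes needed before the return first reaches threshold delta
    (None if never reached)."""
    for episode, g in enumerate(returns):
        if g >= delta:
            return episode
    return None

def jumpstart(transfer_returns, scratch_returns, window=10):
    """Initial performance gain from transfer: difference of mean return
    over the first few episodes."""
    w = min(window, len(transfer_returns), len(scratch_returns))
    return (sum(transfer_returns[:w]) - sum(scratch_returns[:w])) / w

def asymptotic_gain(transfer_returns, scratch_returns, window=10):
    """Final performance difference near (approximate) convergence."""
    return (sum(transfer_returns[-window:]) - sum(scratch_returns[-window:])) / window
```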
## 3 The Curriculum Learning Method
A _curriculum_ serves to sort the experience an agent acquires over time, in
order to accelerate or improve learning. In the rest of this section we
formalize this concept and the methodology of _curriculum learning_ , and
describe how to evaluate the benefits and costs of using a curriculum.
Finally, we provide a list of attributes which we will use to categorize
curriculum learning approaches in the rest of this survey.
### 3.1 Curricula
A curriculum is a general concept that encompasses both schedules for
organizing past experiences, and schedules for acquiring experience by
training on tasks. As such, we first propose a fully general definition of
curriculum, and then follow it with refinements that apply to special cases
common in the literature.
We assume a _task_ is modeled as a Markov Decision Process, and define a
curriculum as follows:
###### Definition 2 (Curriculum)
Let $\mathcal{T}$ be a set of tasks, where
$m_{i}=(\mathcal{S}_{i},\mathcal{A}_{i},p_{i},r_{i})$ is a task in
$\mathcal{T}$. Let $\mathcal{D}^{\mathcal{T}}$ be the set of all possible
transition samples from tasks in $\mathcal{T}$:
$\mathcal{D}^{\mathcal{T}}=\\{(s,a,r,s^{\prime})\>|\>\exists\,m_{i}\in\mathcal{T}\;\mathrm{s.t.}\;s\in\mathcal{S}_{i},a\in\mathcal{A}_{i},s^{\prime}\sim
p_{i}(\cdot|s,a),r\leftarrow r_{i}(s,a,s^{\prime})\\}$. A _curriculum_
$C=(\mathcal{V},\mathcal{E},g,\mathcal{T})$ is a directed acyclic graph, where
$\mathcal{V}$ is the set of vertices,
$\mathcal{E}\subseteq\\{(x,y)\;|\;(x,y)\in\mathcal{V}\times\mathcal{V}\>\land
x\neq y\\}$ is the set of directed edges, and
$g:\mathcal{V}\to\mathcal{P}(\mathcal{D}^{\mathcal{T}})$ is a function that
associates vertices to subsets of samples in $\mathcal{D}^{\mathcal{T}}$,
where $\mathcal{P}(\mathcal{D}^{\mathcal{T}})$ is the power set of
$\mathcal{D}^{\mathcal{T}}$. A directed edge $\langle v_{j},v_{k}\rangle$ in
$C$ indicates that samples associated with $v_{j}\in\mathcal{V}$ should be
trained on before samples associated with $v_{k}\in\mathcal{V}$. All paths
terminate on a single sink node $v_{t}\in\mathcal{V}$.222In theory, a
curriculum could have multiple sink nodes corresponding to different target
tasks. For the purpose of exposition, we assume a separate curriculum is
created and used for each task.
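Definition 2 maps directly onto a simple data structure; the sketch below (names ours) stores the vertex-to-samples mapping $g$ and the edge set $\mathcal{E}$, and produces a valid training order via topological sort:

```python
from graphlib import TopologicalSorter  # Python 3.9+

class Curriculum:
    """Sketch of Definition 2: a DAG whose vertices map to sample sets."""
    def __init__(self):
        self.samples = {}   # vertex -> set of (s, a, r, s') transition samples
        self.edges = set()  # (v_j, v_k): train on v_j's samples before v_k's

    def add_vertex(self, v, sample_set):
        self.samples[v] = sample_set

    def add_edge(self, v_j, v_k):
        assert v_j != v_k   # Definition 2 requires x != y
        self.edges.add((v_j, v_k))

    def training_order(self):
        """Topological sort; raises CycleError if the graph is not acyclic."""
        ts = TopologicalSorter({v: set() for v in self.samples})
        for v_j, v_k in self.edges:
            ts.add(v_k, v_j)   # v_j is a predecessor of v_k
        return list(ts.static_order())
```

Because vertices, not sample sets, are the graph nodes, the same samples can be associated with multiple vertices and thus revisited during training.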
A curriculum can be created online, where edges are added dynamically based on
the learning progress of the agent on the samples at a given vertex. It can
also be designed completely offline, where the graph is generated before
training, and edges are selected based on properties of the samples associated
with different vertices.
Creating a curriculum graph at the sample level can be computationally
difficult for large tasks, or large sets of tasks. Therefore, in practice, a
simplified representation for a curriculum is often used. There are 3 common
dimensions along which this simplification can happen. The first is the
single-task curriculum, where all samples used in the curriculum come from a
single task:
###### Definition 3 (Single-task Curriculum)
A _single-task curriculum_ is a curriculum $C$ where the cardinality of the
set of tasks considered for extracting samples $|\mathcal{T}|=1$, and consists
of only the target task $m_{t}$.
A single-task curriculum essentially considers how best to organize and train
on experience acquired from a single task. This type of curriculum is common
in experience replay methods (Schaul et al., 2016).
A second common simplification is to learn a curriculum at the task level,
where each vertex in the graph is associated with samples from a single task.
At the task level, a curriculum can be defined as a directed acyclic graph of
_intermediate_ tasks:
###### Definition 4 (Task-level Curriculum)
For each task $m_{i}\in\mathcal{T}$, let $\mathcal{D}^{\mathcal{T}}_{i}$ be
the set of all samples associated with task $m_{i}$:
$\mathcal{D}^{\mathcal{T}}_{i}=\\{(s,a,r,s^{\prime})\>|\>s\in\mathcal{S}_{i},a\in\mathcal{A}_{i},s^{\prime}\sim
p_{i}(\cdot|s,a),r\leftarrow r_{i}(s,a,s^{\prime})\\}$. A _task-level
curriculum_ is a curriculum $C=(\mathcal{V},\mathcal{E},g,\mathcal{T})$ where
each vertex is associated with samples from a single task in $\mathcal{T}$.
Thus, the mapping function $g$ is defined as
$g:\mathcal{V}\to\\{\mathcal{D}^{\mathcal{T}}_{i}\;|\;m_{i}\in\mathcal{T}\\}$.
In reinforcement learning, the entire set of samples from a task (or multiple
tasks) is usually not available ahead of time. Instead, the samples
experienced in a task depend on the agent’s behavior policy, which can be
influenced by previous tasks learned. Therefore, while generating a task-level
curriculum, the main challenge is how to order tasks such that the behavior
policy learned is useful for acquiring good samples in future tasks. In other
words, selecting and training on a task $m$ induces a mapping function $g$,
and determines the set of samples $\mathcal{D}_{i}^{\mathcal{T}}$ that will be
available at the next vertex based on the agent’s behavior policy as a result
of learning $m$. The same task is allowed to appear at more than one vertex,
similar to how in Definition 2 the same set of samples can be associated with
more than one vertex. Therefore, tasks can be revisited when the agent’s
behavior policy has changed. Several works have considered learning task-level
curricula over a graph of tasks (Svetlik et al., 2017; MacAlpine and Stone,
2018). An example can be seen in Figure 3b.
Finally, another simplification of the curriculum is the linear _sequence_.
This is the simplest and most common structure for a curriculum in existing
work:
###### Definition 5 (Sequence Curriculum)
A _sequence curriculum_ is a curriculum $C$ where the indegree and outdegree
of each vertex $v$ in the graph $C$ is at most 1, and there is exactly one
source node and one sink node.
These simplifications can be combined to simplify a curriculum along multiple
dimensions. For example, the sequence simplification and task-level
simplification can be combined to produce a task-level sequence curriculum.
This type of curriculum can be represented as an ordered list of tasks
$[m_{1},m_{2},...m_{n}]$. An example can be seen in Figure 3a (Narvekar et
al., 2017).
A final important question when designing curricula is determining the
stopping criteria: that is, how to decide _when_ to stop training on samples
or tasks associated with a vertex, and move on to the next vertex. In
practice, typically training is stopped when performance on the task or set of
samples has converged. Training to convergence is not always necessary, so
another option is to train on each vertex for a fixed number of episodes or
epochs. Since more than one vertex can be associated with the same
samples/tasks, this experience can be revisited later on in the curriculum.
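Putting the task-level sequence curriculum (Definition 5) and the stopping criteria together, training through such a curriculum reduces to a loop like the following sketch, where `transfer` and `converged` are placeholder hooks for the transfer method and per-vertex stopping criterion:

```python
def train_through_curriculum(curriculum_tasks, make_agent, transfer,
                             converged, max_episodes=5000):
    """Sketch: train on each task of a task-level sequence curriculum
    [m_1, ..., m_n], transferring knowledge forward at each step and
    stopping on a vertex once the criterion fires."""
    agent = make_agent(curriculum_tasks[0])
    for task in curriculum_tasks:
        agent = transfer(agent, task)          # e.g., value function transfer
        returns = []
        for _ in range(max_episodes):
            returns.append(agent.run_episode(task))
            if converged(returns):             # stopping criterion per vertex
                break
    return agent                               # final policy for the target task
```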
### 3.2 Curriculum Learning
_Curriculum learning_ is a methodology to _optimize_ the order in which
experience is accumulated by the agent, so as to increase performance or
training speed on a set of final tasks. Through generalization, knowledge
acquired quickly in simple tasks can be leveraged to reduce the exploration of
more complex tasks. In the most general case, where the agent can acquire
experience from multiple intermediate tasks that differ from the final MDP,
there are 3 key elements to this method:
* •
Task Generation. The quality of a curriculum is dependent on the quality of
tasks available to choose from. Task generation is the process of creating a
good set of intermediate tasks from which to obtain experience samples. In a
task-level curriculum, these tasks form the nodes of the curriculum graph.
This set of intermediate tasks may either be pre-specified, or dynamically
generated during the curriculum construction by observing the agent.
* •
Sequencing. Sequencing examines how to create a partial ordering over the set
of experience samples $\mathcal{D}$: that is, how to generate the edges of the
curriculum graph. Most existing work has used manually defined curricula,
where a human selects the ordering of samples or tasks. However, recently
automated methods for curriculum sequencing have begun to be explored. Each of
these methods make different assumptions about the tasks and transfer
methodology used. These methods will be the primary focus of this survey.
* •
Transfer Learning. When creating a curriculum using multiple tasks, the
intermediate tasks may differ in state/action space, reward function, or
transition function from the final task. Therefore, transfer learning is
needed to extract and pass on reusable knowledge acquired in one task to the
next. Typically, work in transfer learning has examined how to transfer
knowledge from one or more source tasks directly to the target task.
Curriculum learning extends the transfer learning scenario to consider
training sessions in which the agent must repeatedly transfer knowledge from
one task to another, up to a set of final tasks.
Figure 3: Examples of structures of curricula from previous work. (a) Linear
sequences in a gridworld domain (Narvekar et al., 2017) (b) Directed acyclic
graphs in block dude (Svetlik et al., 2017).
### 3.3 Evaluating Curricula
Curricula can be evaluated using the same metrics as for transfer learning
(cf. Section 2.3), by comparing performance on the target task after following
the complete curriculum, versus performance following no curriculum (i.e.,
learning from scratch). If there are multiple final tasks, the metrics can
easily be extended: for example, by comparing the average asymptotic
performance over a set of tasks, or the average time to reach a threshold
performance level over a set of tasks.
Similarly, it is possible to distinguish between weak and strong transfer.
However, in curriculum learning, there is the additional expense required to
_build_ the curriculum itself, in addition to training on intermediate tasks
in the curriculum, which can also be factored in when evaluating the cost of
the curriculum. As in the transfer learning case, cost can be measured in
terms of wall clock time, or data/sample complexity.
Most existing applications of curricula in reinforcement learning have used
curricula created by humans. In these cases, it can be difficult to assess how
much time, effort, and prior knowledge was used to design the curriculum.
Automated approaches to generate a curriculum also typically require some
prior knowledge or experience in potential intermediate tasks, in order to
guide the sequencing of tasks. Due to these difficulties, such approaches
have usually treated curriculum generation as a sunk cost, focusing on
evaluating the performance of the curriculum itself, and comparing it versus
other curricula, including those designed by people.
The best set of evaluation criteria to use ultimately depends on the specific
problem and settings being considered. For example, how expensive is it to
collect data on the final task compared to intermediate tasks? If intermediate
tasks are relatively inexpensive, we can treat time spent in them as sunk
costs. Is it more critical to improve initial performance, final performance,
or reaching a desired performance threshold? If designing the curriculum will
require human interaction, how will this time be factored into the cost of
using a curriculum? Many of these questions depend on whether we wish to
evaluate the utility of a specific curriculum (compared to another
curriculum), or whether we wish to evaluate the utility of using a curriculum
design approach versus training without one.
### 3.4 Dimensions of Categorization
We categorize curriculum learning approaches along the following seven
dimensions, organized by attributes (in bold) and the values (in italics) they
can take. We use these dimensions to create a taxonomy of surveyed work in
Section 4.
1. **Intermediate task generation**: _target / automatic / domain experts / naive
users_. In curriculum learning, the primary challenge is how to sequence a set
of tasks to improve learning speed. However, finding a good curriculum depends
on first having useful source tasks to select from. Most methods assume the
set of possible source tasks is fixed and given ahead of time. In the simplest
case, only samples from the _target_ task are used. When more than one
intermediate task is used, typically they are manually designed by humans. We
distinguish such tasks as designed by either _domain experts_ , who have
knowledge of the agent and its learning algorithm, or _naive users_ , who do
not have this information. On the other hand, some works consider
_automatically_ creating tasks online using a set of rules or generative
process. These approaches may still rely on some human input to control/tune
hyper-parameters, such as the number of tasks generated, or to verify that
generated tasks are actually solvable.
2. **Curriculum representation**: _single / sequence / graph_. As we discussed
previously, the most general form of a curriculum is a directed acyclic graph
over subsets of samples. However, in practice, simplified versions of this
representation are often used. In the simplest case, a curriculum is an
ordering over samples from a _single_ task. When multiple tasks can be used in
a curriculum, curricula are often created at the task-level. These curricula
can be represented as a linear chain, or _sequence_. In this case, there is
exactly one source task for each intermediate task in the curriculum. It is up to
the transfer learning algorithm to appropriately retain and combine
information gathered from previous tasks in the chain. More generally, they
can be represented as a full directed acyclic _graph_ of tasks. This form
supports transfer learning methods that transfer from many-to-one, one-to-
many, and many-to-many tasks.
3. **Transfer method**: _policies / value function / task model / partial policies /
shaping reward / other / no transfer_. Curriculum learning leverages ideas
from transfer learning to transfer knowledge between tasks in the curriculum.
As such, the transfer learning algorithm used affects how the curriculum will
be produced. The type of knowledge transferred can be low-level knowledge,
such as an entire _policy_ , an _(action-)value function_ , or a full _task
model_ , which can be used to directly initialize the learner in the target
task. It can also be high-level knowledge, such as _partial policies_ (e.g.
options) or _shaping rewards_. This type of information may not fully
initialize the learner in the target task, but it could be used to guide the
agent’s learning process in the target task. We use partial policies as an
umbrella term to represent closely related ideas such as options, skills, and
macro-actions. When samples from a single task are sequenced, _no transfer_
learning algorithm is necessary. Finally, we use _other_ to refer to other
types of transfer learning methods. We categorize papers along this dimension
based on what is transferred between tasks in the curriculum in each paper’s
experimental results.
4. **Curriculum sequencer**: _automatic / domain experts / naive users_. Curriculum
learning is a three-part method, consisting of task generation, sequencing,
and transfer learning. While much of the attention of this survey is on
automated sequencing approaches, many works consider the other parts of this
method, and assume the sequencing is done by a human or oracle. Thus, we
identify and categorize the type of sequencing approach used in each work
similar to task generation: it can be done _automatically_ by a sequencing
algorithm, or manually by humans that are either _domain experts_ or _naive
users_.
5. **Curriculum adaptivity**: _static / adaptive_. Another design question when
creating a curriculum is whether it should be generated in its entirety before
training, or dynamically adapted during training. We refer to the former type
as _static_ and to the latter as _adaptive_. Static approaches use properties
of the domain and possibly of the learning agent, to generate a curriculum
before any task is learned. Adaptive methods, on the other hand, are
influenced by properties that can only be measured during learning, such as
the learning progress by the agent on the task it is currently facing. For
example, learning progress can be used to guide whether subsequent tasks
should be easier or harder, as well as how relevant a task is for the agent at
a particular point in the curriculum.
6. **Evaluation metric**: _time to threshold / asymptotic / jumpstart / total
reward_. We discussed four metrics to quantify the effectiveness of learned
curricula in Section 3.3. When calculating these metrics, one can choose
whether to treat time spent generating the curriculum and training on the
curriculum as a sunk cost, or whether to account for both of these for
performance. Specifically, there are three ways to measure the cost of
learning and training via a curriculum. 1) The cost of generating and using
the curriculum is treated as a sunk cost, and the designer is only concerned
with performance on the target task after learning. This case corresponds to
the weak transfer setting. 2) The cost of training on intermediate tasks in
the curriculum is accounted for, when comparing to training directly on the
target task. This case is most common when it is hard to evaluate the cost of
generating the curriculum itself, for example if it was hand-designed by a
human. 3) Lastly, the most comprehensive case accounts for the cost of
generating the curriculum as well as training via the curriculum. We will
refer to the last two as strong transfer, and indicate it by bolding the
corresponding metric. Note that achieving asymptotic performance improvements
implies strong transfer.
7. **Application area**: _toy / sim robotics / real robotics / video games / other_.
Curriculum learning methods have been tested in a wide variety of domains.
_Toy_ domains consist of environments such as grid worlds, cart-pole, and
other low dimensional environments. _Sim robotics_ environments simulate
robotic platforms, such as in MuJoCo. _Real robotics_ papers test their method
on physical robotic platforms. _Video games_ consist of game environments such
as Starcraft or the Arcade Learning Environment (Atari). Finally, _other_ is
used for custom domains that do not fit in these categories. We list these so
that readers can better understand the scalability and applicability of
different approaches, and use these to inform what methods would be suitable
for their own problems.
## 4 Curriculum Learning for Reinforcement Learning Agents
In this section, we systematically survey work on each of the three central
elements of curriculum learning: task generation (Section 4.1), sequencing
(Section 4.2), and transfer learning (Section 4.3). For each of these
subproblems, we provide a table that categorizes work surveyed according to
the dimensions outlined in Section 3. The bulk of our attention will be
devoted to the subproblem most commonly associated with curriculum learning:
sequencing.
### 4.1 Task Generation
Task generation is the problem of creating intermediate tasks specifically to
be part of a curriculum. In contrast to the life-long learning scenario, where
potentially unrelated tasks are constantly proposed to the agent (Thrun,
1998), the aim of task generation is to create a set of tasks such that
knowledge transfer through them is beneficial. Therefore, all the generated
tasks should be relevant to the final task(s) and avoid _negative transfer_ ,
where using a task for transfer hurts performance. The properties of the
research surveyed in this section are reported in Table 1.
Very limited work has been dedicated to formally studying this subproblem in
the context of reinforcement learning. All known methods assume the domain can
be parameterized using some kind of representation, where different
instantiations of these parameters create different tasks. For instance,
Narvekar et al. (2016) introduce a number of methods to create intermediate
tasks for a specific final task. The methods hinge on a definition of a domain
as a set of MDPs identified by a _task descriptor_ , which is a vector of
parameters specifying the _degrees of freedom_ in the domain. For example, in
the quick chess example (see Section 1), these parameters could be the size of
the board, number of pawns, etc. By varying the degrees of freedom and
applying task _restrictions_ , the methods define different types of tasks.
Methods introduced include: _task simplification_ , which directly changes the
degrees of freedom to reduce the task dimensions; _promising initialization_ ,
which modifies the set of initial states by adding states close to high
rewards; _mistake learning_ , which rewinds the domain to a state a few steps
before a mistake is detected and resumes learning from there; and several
other methods. The set of methods determine different kinds of possible tasks,
which form a space of tasks in which appropriate intermediate tasks can be
chosen.
Da Silva and Reali Costa (2018) propose a similar partially automated task
generation procedure in their curriculum learning framework, based on Object-
Oriented MDPs. Each task is assumed to have a class _environment_
parameterized by a number of attributes. A function, which must be provided by
the designer, creates simpler versions of the final task by instantiating the
attributes with values that make the tasks easier to solve. For example,
continuing the quick chess example, the attributes could be the types of
pieces, and the values could be the number of each type of piece. The presence of
different kinds and numbers of objects provide a range of tasks with different
levels of difficulty. However, since the generation is mostly random, the
designer has to make sure that the tasks are indeed solvable.
Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
Da Silva and Reali Costa (2018) | automatic | graph | value function | automatic | static | time to threshold, total reward | toy, video games
Narvekar et al. (2016) | automatic | sequence | value function | domain experts | adaptive | asymptotic | video games
Schmidhuber (2013) | automatic | sequence | partial policies | automatic | adaptive | asymptotic | other
Stone and Veloso (1994) | automatic | sequence | other | domain experts | adaptive | time to threshold | other
Table 1: The papers discussed in Section 4.1, categorized along the dimensions
presented in Section 3.4. Bolded values under evaluation metric indicate
strong transfer.
Generating auxiliary intermediate tasks is a problem that has been studied in
non-RL contexts as well. For instance, Stone and Veloso (1994) consider how to
semiautomatically create subproblems to aid in learning to solve difficult
_planning_ problems. Rather than using a static analysis of the domain’s
properties, they propose to use a partially completed search trajectory of the
target task to identify what makes a problem difficult, and suggest auxiliary
tasks. For example, if the task took too long and there are multiple goals in
the task, try changing the order of the goals. Other methods they propose
include reducing the number of goals, creating tasks to solve difficult
subgoals, and changing domain operators and objects available for binding.
Lastly, Schmidhuber (2013) introduced Powerplay, a framework that focuses on
inventing new problems to train a more and more general problem solver in an
unsupervised fashion. The system searches for both a new task and a
modification of the current problem solver, such that the modified solver can
solve all previous tasks, plus the new one. The search acts on a domain-
dependent encoding of the problem and the solver, and has been demonstrated on
pattern recognition and control tasks (Srivastava et al., 2013). The generator
of the task and new solver is given a limited computational budget, so that it
favors the generation of the simplest tasks that could not be solved before.
Furthermore, a possible task is to solve all previous tasks, but with a more
compact representation of the solver. The resulting iterative process makes
the system increasingly more competent at different tasks. The task generation
process effectively creates a curriculum, although in this context there are
no final tasks, and the system continues to generate pairs of problems and
solvers indefinitely, without any specific goal.
### 4.2 Sequencing
Given a set of tasks, or samples from them, the goal of sequencing is to order
them in a way that facilitates learning. Many different sequencing methods
exist, each with their own set of assumptions. One of the fundamental
assumptions of curriculum learning is that we can configure the environment to
create different tasks. For the practitioner attempting to use curriculum
learning, the amount of control one has to shape the environment affects the
type of sequencing methods that could be applicable. Therefore, we categorize
sequencing methods by the degree to which intermediate tasks may differ.
Specifically, they form a spectrum, ranging from methods that simply reorder
experience in the final task without modifying any property of the
corresponding MDP, to ones that define entirely new intermediate tasks, by
progressively adjusting some or all of the properties of the final task.
In this subsection, we discuss the different sequencing approaches. First, in
Section 4.2.1, we consider methods that reorder samples in the target task to
derive a curriculum. Experience replay methods are one such example. In
Section 4.2.2, we examine multi-agent approaches to curriculum generation,
where the cooperation or competition between two (typically evolving) agents
induces a sequence of progressively challenging tasks, like a curriculum.
Then, in Section 4.2.3, we begin describing methods that explicitly use
intermediate tasks, starting with ones that vary in limited ways from the
target task. In particular, these methods only change the reward function
and/or the initial and terminal state distributions to create a curriculum. In
Section 4.2.4, we discuss methods that relax this assumption, and allow
intermediate tasks that can vary in any way from the target task MDP. Finally,
in Section 4.2.5, we discuss work that explores how humans sequence tasks into
a curriculum.
#### 4.2.1 Sample Sequencing
First we consider methods that reorder samples from the final task, but do not
explicitly change the domain itself. These ideas are similar to curriculum
learning for supervised learning (Bengio et al., 2009), where training
examples are presented to a learner in a specific order, rather than
completely randomly. Bengio et al. (2009) showed that ordering these examples
from simple to complex can improve learning speed and generalization ability.
An analogous process can be used for reinforcement learning. For example, many
current reinforcement learning methods, such as Deep Q Networks (DQN) (Mnih et
al., 2015) use a replay buffer to store past state-action-reward experience
tuples. At each training step, experience tuples are sampled from the buffer
and used to train DQN in minibatches. The original formulation of DQN
performed this sampling uniformly randomly. However, as in the supervised
setting, samples can be reordered or “prioritized,” according to some measure
of usefulness or difficulty, to improve learning.
Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
Sample Sequencing (Section 4.2.1)
Andrychowicz et al. (2017) | target | single | no transfer | automatic | adaptive | asymptotic | sim robotics
Fang et al. (2019) | target | single | no transfer | automatic | adaptive | asymptotic | sim robotics
Kim and Choi (2018) | target | single | no transfer | automatic | adaptive | asymptotic | toy, other
Lee et al. (2019) | target | single | no transfer | automatic | adaptive | time to threshold | toy, video games
Ren et al. (2018) | target | single | no transfer | automatic | adaptive | asymptotic | video games
Schaul et al. (2016) | target | single | no transfer | automatic | adaptive | asymptotic | video games
Co-learning (Section 4.2.2)
Baker et al. (2020) | automatic | sequence | policies | automatic | adaptive | asymptotic, time to threshold | other
Bansal et al. (2018) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Pinto et al. (2017) | automatic | sequence | policies | automatic | adaptive | time to threshold | sim robotics
Sukhbaatar et al. (2018) | automatic | sequence | policies | automatic | adaptive | time to threshold, asymptotic | toy, video games
Vinyals et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | video games
Reward and Initial/Terminal State Distribution Changes (Section 4.2.3)
Asada et al. (1996) | domain experts | sequence | value function | automatic | adaptive | asymptotic | sim/real robotics
Baranes and Oudeyer (2013) | automatic | sequence | partial policies | automatic | adaptive | asymptotic | sim/real robotics
Florensa et al. (2017) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Florensa et al. (2018) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Ivanovic et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | sim robotics
Racaniere et al. (2019) | automatic | sequence | policies | automatic | adaptive | asymptotic | toy, video games
Riedmiller et al. (2018) | domain experts | sequence | policies | automatic | adaptive | time to threshold | sim/real robotics
Wu and Tian (2017) | domain experts | sequence | task model | automatic | both | asymptotic | video games
No Restrictions (Section 4.2.4)
Bassich et al. (2020) | domain experts | sequence | policies | automatic | adaptive | asymptotic, time to threshold | toy
Da Silva and Reali Costa (2018) | automatic | graph | value function | automatic | static | time to threshold, total reward | toy, video games
Foglino et al. (2019a) | domain experts | sequence | value function | automatic | static | time to threshold, asymptotic, total reward | toy
Foglino et al. (2019b) | domain experts | sequence | value function | automatic | static | total reward | toy
Foglino et al. (2019c) | domain experts | sequence | value function | automatic | static | total reward | toy
Jain and Tulabandhula (2017) | domain experts | sequence | value function | automatic | adaptive | time to threshold, total reward | toy
Matiisen et al. (2017) | domain experts | sequence | policies | automatic | adaptive | asymptotic | toy, video games
Narvekar et al. (2017) | automatic | sequence | value function | automatic | adaptive | time to threshold | toy
Narvekar and Stone (2019) | domain experts | sequence | value function, shaping reward | automatic | adaptive | time to threshold | toy, video games
Svetlik et al. (2017) | domain experts | graph | shaping reward | automatic | static | asymptotic, time to threshold | toy, video games
Human-in-the-Loop Curriculum Generation (Section 4.2.5)
Hosu and Rebedea (2016) | target | single | no transfer | automatic | adaptive | asymptotic | video games
Khan et al. (2011) | domain experts | sequence | no transfer | naive users | static | N/A | other
MacAlpine and Stone (2018) | domain experts | graph | policies | domain experts | static | asymptotic | sim robotics
Peng et al. (2018) | domain experts | sequence | task model | naive users | static | time to threshold | other
Stanley et al. (2005) | domain experts | sequence | partial policies | domain experts | adaptive | asymptotic | video games
Table 2: The papers discussed in Section 4.2, categorized along the dimensions
presented in Section 3.4. Bolded values under evaluation metric indicate
strong transfer.
The first to do this type of sample sequencing in the context of deep learning
were Schaul et al. (2016). They proposed Prioritized Experience Replay (PER),
which prioritizes and replays _important_ transitions more. Important
transitions are those with high expected learning progress, which is measured
by their temporal difference (TD) error. Intuitively, replaying samples with
larger TD errors allows the network to make stronger updates. As transitions
are learned, the distribution of important transitions changes, leading to an
implicit curriculum over the samples.
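To make the mechanics concrete, the following minimal Python sketch implements proportional prioritization in the spirit of PER. The flat priority list (the original uses a sum-tree for efficiency), the default hyperparameters, and the buffer interface are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """A minimal sketch of proportional prioritized experience replay."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha    # how strongly TD error shapes sampling
        self.eps = eps        # keeps every transition sampleable
        self.storage, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities)
        probs = p / p.sum()
        idx = np.random.choice(len(self.storage), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        weights = (len(self.storage) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.storage[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

After each learning step, the newly computed TD errors are fed back via `update_priorities`, so the sampling distribution, and hence the implicit curriculum, tracks the learner's progress.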
Alternative metrics for priority/importance have been explored as well. Ren et
al. (2018) propose to sort samples using a complexity index (CI) function,
which is a combination of a self-paced prioritized function and a coverage
penalty function. The self-paced prioritized function selects samples that
would be of appropriate difficulty, while the coverage function penalizes
transitions that are replayed frequently. They provide one specific
instantiation of these functions, which are used in experiments on the Arcade
Learning Environment (Bellemare et al., 2013), and show that it performs
better than PER in many cases. However, these functions must be designed
individually for each domain, and designing a broadly applicable domain-
independent priority function remains an open problem.
Kim and Choi (2018) consider another extension of prioritized experience
replay, where the weight/priority of a sample is jointly learned with the main
network via a secondary neural network. The secondary network, called
ScreenerNet, learns to predict weights according to the error of the sample by
the main network. Unlike PER, this approach is memoryless, which means it can
directly predict the significance of a training sample even if that particular
example was not seen. Thus, the approach could potentially be used to actively
request experience tuples that would provide the most information or utility,
creating an online curriculum.
Instead of using sample importance as a metric for sequencing, an alternative
idea is to restructure the training process based on trajectories of samples
experienced. For example, when learning, typically easy to reach states are
encountered first, whereas harder to reach states are encountered later on in
the learning cycle. However, in practical settings with sparse rewards, these
easy to reach states may not provide a reward signal. Hindsight Experience
Replay (HER) (Andrychowicz et al., 2017) is one method to make the most of
these early experiences. HER learns from “undesired
outcomes,” in addition to the desired outcome, by replaying each episode with
a goal that was actually achieved rather than the one the agent was trying to
achieve. The problem is set up as learning a Universal Value Function
Approximator (UVFA) (Schaul et al., 2015), which is a value function
$v_{\pi}(s,g)$ defined over states $s$ and goals $g$. The agent is given an
initial state $s_{1}$ and a desired goal state $g$. Upon executing its policy,
the agent may not reach the goal state $g$, and instead land on some other
terminal state $s_{T}$. While this trajectory does not help to learn to
achieve $g$, it does help to learn to achieve $s_{T}$. Thus, this trajectory
is added to the replay buffer with the goal state substituted with $s_{T}$,
and used with an off-policy RL algorithm. HER forms a curriculum by taking
advantage of the implicit curriculum present in exploration, where early
episodes are likely to terminate on easy to reach states, and more difficult
to reach states are found later in the training process.
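The relabeling step at the heart of HER fits in a few lines. In the sketch below, the transition format, the “final” goal-selection strategy, and the sparse `reward_fn` are simplifying assumptions rather than the paper's exact implementation.

```python
def her_relabel(episode, reward_fn):
    """Hindsight relabeling with the 'final' strategy.

    `episode` is a list of (state, action, next_state, goal) tuples and
    `reward_fn(next_state, goal)` is a hypothetical sparse reward, e.g.
    0 when the goal is achieved and -1 otherwise.
    """
    achieved = episode[-1][2]  # terminal state s_T, the goal actually reached
    relabeled = []
    for (s, a, s_next, g) in episode:
        relabeled.append((s, a, s_next, g, reward_fn(s_next, g)))
        # Replay the same transition as if s_T had been the goal all along.
        relabeled.append((s, a, s_next, achieved, reward_fn(s_next, achieved)))
    return relabeled
```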
One of the issues with vanilla HER is that all goals in seen trajectories are
replayed evenly, but some goals may be more useful at different points of
learning. Thus, Fang et al. (2019) later proposed Curriculum-guided HER (CHER)
to adaptively select goals based on two criteria: curiosity, which leads to
the selection of diverse goals, and proximity, which selects goals that are
closer to the true goal. Both of these criteria rely on a measure of distance
or similarity between goal states. At each minibatch optimization step, the
objective selects a subset of goals that maximizes the weighted sum of a
diversity and proximity score. They manually impose a curriculum that starts
biased towards diverse goals and gradually shifts towards proximity based
goals using a weighting factor that is exponentially scaled over time.
Other than PER and HER, there are other works that reorder/resample
experiences in a novel way to improve learning. One example is the episodic
backward update (EBU) method developed by Lee et al. (2019). In order to speed
up the propagation of delayed rewards (e.g., a reward might only be obtained
at the end of an episode), Lee et al. (2019) proposed to sample a whole
episode from the replay buffer and update the values of all transitions within
the sampled episode in a backward fashion. Starting from the end of the
sampled episode, the $\max$ Bellman operator is applied recursively to update
the target $Q$-values until the start of the sampled episode. This process
effectively reorders all the transitions within each sampled episode from the
last timestep of the episode to the first, leading to an implicit curriculum.
Updating highly correlated states in a sequence while using function
approximation is known to suffer from cumulative overestimation errors. To
overcome this issue, a diffusion factor $\beta\in(0,1)$ was introduced to
update the current $Q$-value using a weighted sum of the new bootstrapped
target value and the pre-existing $Q$-value estimate. Their experimental
results show that in 49 Atari games, EBU can achieve the same mean and median
human normalized performance of DQN by using significantly fewer samples.
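A minimal sketch of the backward target computation is given below. It assumes a `(T+1, |A|)` array of the network's current Q-estimates along one sampled episode; the zero terminal bootstrap and the default constants are illustrative assumptions.

```python
import numpy as np

def ebu_backward_update(rewards, actions, q_episode, beta=0.5, gamma=0.99):
    """Propagate targets backwards through one episode with diffusion.

    `q_episode` has shape (T+1, |A|); `actions[t]` and `rewards[t]` are the
    action taken and reward received at step t. The diffusion factor `beta`
    blends each new bootstrapped target with the pre-existing estimate to
    limit cumulative overestimation along highly correlated states.
    """
    q = np.array(q_episode, dtype=float)
    q[-1, :] = 0.0                        # terminal state bootstraps to zero
    for t in reversed(range(len(rewards))):
        target = rewards[t] + gamma * np.max(q[t + 1])
        q[t, actions[t]] = beta * target + (1 - beta) * q[t, actions[t]]
    return q[:len(rewards)]               # updated targets for steps 0..T-1
```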
Methods that sequence experience samples have wide applicability and found
broad success in many applications, since they can be applied directly on the
target task without needing to create intermediate tasks that alter the
environment. In the following sections, we consider sequencing approaches that
progressively alter how much intermediate tasks in the curriculum may differ.
#### 4.2.2 Co-learning
Co-learning is a multi-agent approach to curriculum learning, in which the
curriculum emerges from the interaction of several agents (or multiple
versions of the same agent) in the same environment. These agents may act
either cooperatively or adversarially to drive the acquisition of new
behaviors, leading to an implicit curriculum where both sets of agents improve
over time. Self-play is one methodology that fits into this paradigm, and many
landmark results such as TD-Gammon (Tesauro, 1995) and more recently AlphaGo
(Silver et al., 2016) and AlphaStar (Vinyals et al., 2019) fall into this
category. Rather than describing every work that uses self-play or co-
learning, we describe a few papers that focus on how the objectives of the
multiple agents can be set up to facilitate co-learning.
Sukhbaatar et al. (2018) proposed a novel method called asymmetric self-play
that allows an agent to learn about the environment without any external
reward in an unsupervised manner. This method considers two agents, a teacher
and a student, using the paradigm of “the teacher proposing a task, and the
student doing it.” The two agents learn their own policies simultaneously by
maximizing interdependent reward functions for goal-based tasks. The teacher’s
task is to navigate to an environment state that the student will use either
as 1) a goal, if the environment is resettable, or 2) as a starting state, if
the environment is reversible. In the first case, the student’s task is to
reach the teacher’s final state, while in the second case, the student starts
from the teacher’s final state with the aim of reverting the environment to
its original initial state. The student’s goal is to minimize the number of
actions it needs to complete the task. The teacher, on the other hand, tries
to maximize the difference between the actions taken by the student to execute
the task, and the actions spent by the teacher to set up the task. The
teacher, therefore, tries to identify a state that strikes a balance between
being the simplest goal (in terms of number of teacher actions) for itself to
find, and the most difficult goal for the student to achieve. This process is
iterated to automatically generate a curriculum of intrinsic exploration.
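The interdependent objectives can be written down compactly. In the following sketch, `t_teacher` and `t_student` are the action counts of the two agents on one proposed task, and the `scale` factor is a hypothetical stand-in for the reward scaling used in the paper.

```python
def self_play_rewards(t_teacher, t_student, scale=1.0):
    """Asymmetric self-play objectives: the student minimizes the effort of
    completing the proposed task, while the teacher is rewarded for tasks
    that are cheap for it to set up but costly for the student to solve."""
    r_student = -scale * t_student
    r_teacher = scale * max(0, t_student - t_teacher)
    return r_teacher, r_student
```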
Another example of jointly training a pair of agents adversarially for policy
learning in single-agent RL tasks is Robust Adversarial RL (RARL) by Pinto et
al. (2017). Unlike asymmetric self-play (Sukhbaatar et al., 2018), in which
the teacher defines the goal for the student, RARL trains a protagonist and an
adversary, where the protagonist learns to complete the original RL task while
being robust to the disturbance forces applied by the adversarial agent. RARL
is targeted at robotic systems that are required to generalize effectively
from simulation, and learn robust policies with respect to variations in
physical parameters. Such variations are modeled as disturbances controlled by
an adversarial agent, and the adversarial agent’s goal is to learn the optimal
sequence of destabilizing actions via a zero-sum game training procedure. The
adversarial agent tries to identify the hardest conditions under which the
protagonist agent may be required to act, increasing the agent’s robustness.
Learning takes place in turns, with the protagonist learning against a fixed
adversary policy, and then the adversary learning against a fixed protagonist
policy. Each agent tries to maximize its own return, and the returns are
zero-sum. The set of “destabilizing actions” available to the adversary is
assumed to be domain knowledge, and given to it ahead of time.
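The alternating zero-sum procedure can be sketched as below; the `env.step(a_pro, a_adv)` interface and the policy objects with `act`/`update` methods are hypothetical placeholders for the simulator and policy-gradient learner used in practice.

```python
def collect(env, protagonist, adversary):
    """Roll out one episode in which both agents act at every step."""
    trajectory, state, done = [], env.reset(), False
    while not done:
        a_pro, a_adv = protagonist.act(state), adversary.act(state)
        next_state, reward, done = env.step(a_pro, a_adv)
        trajectory.append((state, a_pro, a_adv, reward))
        state = next_state
    return trajectory

def train_rarl(env, protagonist, adversary, n_iters=100, n_rollouts=10):
    for _ in range(n_iters):
        # Phase 1: improve the protagonist against the frozen adversary.
        for _ in range(n_rollouts):
            protagonist.update(collect(env, protagonist, adversary),
                               maximize=True)
        # Phase 2: improve the adversary against the frozen protagonist;
        # returns are zero-sum, so the adversary drives the return down.
        for _ in range(n_rollouts):
            adversary.update(collect(env, protagonist, adversary),
                             maximize=False)
```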
For multi-agent RL tasks, several works have shown how simple interaction
between multiple learning agents in an environment can result in emergent
curricula. Such ideas were explored early on in the context of evolutionary
algorithms by Rosin and Belew (1997). They showed that competition between two
groups of agents, dubbed hosts and parasites, could lead to an “arms race,”
where each group drives the other to acquire increasingly complex skills and
abilities. Similar results have been shown in the context of RL agents by
Baker et al. (2020). They demonstrated that increasingly complex behaviors can
emerge in a physically grounded task. Specifically, they focus on a game of
hide and seek, where there are two teams of agents. One team must hide with
the help of obstacles and other items in the environment, while the other team
needs to find the first team. They were able to show that as one team
converged on a successful strategy, the other team was pressured to learn a
counter-strategy. This process was repeated, inducing a curriculum of
increasingly competitive agents.
A similar idea was explored by Bansal et al. (2018). They proposed to use
multi-agent curriculum learning as an alternative to engineering dense shaping
rewards. Their method interpolates between dense “exploration” rewards, and
sparse multi-agent competitive rewards, with the exploration reward gradually
annealed over time. In order to prevent the adversarial agent from getting too
far ahead of the learning agent and making the task impossible, the authors
propose to additionally sample older versions of the opponent. Lastly, in
order to increase robustness, the stochasticity of the tasks is increased over
time.
Curriculum learning approaches have also been proposed for cooperative multi-
agent systems (Wang et al., 2020; Yang et al., 2020). In these settings, there
is a natural curriculum created by starting with a small number of agents, and
gradually increasing them in subsequent tasks. The schedule with which to
increase the number of agents is usually manually defined, and the emphasis
instead is on how to perform transfer when the number of agents change.
Therefore, we discuss these approaches in more detail in Section 4.3.
Finally, while self-play has been successful in a wide variety of domains,
including solving games such as Backgammon (Tesauro, 1995) and Go (Silver et
al., 2016), such an approach alone was not sufficient for producing strong
agents in a complex, multi-agent, partially-observable game like Starcraft.
One of the primary new elements of Vinyals et al. (2019) was the introduction
of a Starcraft League, a group of agents that have differing strategies
learned from a combination of imitation learning from human game data and
reinforcement learning. Rather than have every agent in the league maximize
their own probability of winning against all other agents like in standard
self play, there were some agents that did this, and some whose goal was to
optimize against the main agent being trained. In effect, these agents were
trained to exploit weaknesses in the main agent and help it improve. Training
against different sets of agents over time from the league induced a
curriculum that allowed the main agents to achieve grandmaster status in the
game.
#### 4.2.3 Reward and Initial/Terminal State Distribution Changes
Thus far, the curriculum consisted of ordering experience from the target task
or modifying agents in the target environment. In the next two sections, we
begin to examine approaches that explicitly create different MDPs for
intermediate tasks, by changing some aspect of the MDP. First we consider
approaches that keep the state and action spaces the same, as well as the
environment dynamics, but allow the reward function and initial/terminal state
distributions to vary.
One of the earliest examples of this type of method was _learning from easy
missions_. Asada et al. (1996) proposed this method to train a robot to shoot
a ball into a goal based on vision inputs. The idea was to create a series of
tasks, where the agent’s initial state distribution starts close to the goal
state, and is progressively moved farther away in subsequent tasks, inducing a
curriculum of tasks. In this work, each new task starts one “step” farther
away from the goal, where steps from the goal is measured using a domain
specific heuristic: a state is closer to the terminal state if the goal in the
camera image gets larger. The heuristic implicitly requires that the state
space can be categorized into “substates,” such as goal size or ball position,
where the ordering of state transitions in a substate to a goal state is
known. Thus, each substate has a dimension for making the task simpler or more
complex. Source tasks are manually created to vary along these dimensions of
difficulty.
Recently, Florensa et al. (2017) proposed more general methods for performing
this reverse expansion. They proposed reverse curriculum generation, an
algorithm that generates a distribution of starting states that get
increasingly farther away from the goal. The method assumes at least one goal
state is known, which is used as a seed for expansion. Nearby starting states
are generated by taking a random walk from existing starting states by
selecting actions with some noise perturbation. In order to select the next
round of starting states to expand from, they estimate the expected return for
each of these states, and select those whose return falls within a manually
set interval. This interval is tuned to expand
states where progress is possible, but not too easy. A similar approach by
Ivanovic et al. (2019) considered combining the reverse expansion phase for
curriculum generation with physics-based priors to accelerate learning by
continuous control agents.
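One expansion step of this procedure might look like the sketch below; `env.reset_to`, the random-walk noise, and the `estimate_return` callback (e.g., Monte Carlo rollouts of the current policy) are assumed interfaces, and the filtering mirrors the manually set return interval described above.

```python
import numpy as np

def expand_starts(env, estimate_return, starts, r_min, r_max,
                  n_candidates=100, walk_len=10):
    """One reverse-expansion step: random-walk away from the current start
    states, then keep only candidates of intermediate difficulty."""
    candidates = []
    for _ in range(n_candidates):
        state = env.reset_to(starts[np.random.randint(len(starts))])
        for _ in range(walk_len):
            state, _, _ = env.step(env.action_space.sample())  # noisy walk
        candidates.append(state)
    # Keep starts where progress is possible but the task is not yet easy.
    return [s for s in candidates if r_min <= estimate_return(s) <= r_max]
```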
An opposite “forward” expansion approach has also been considered by Florensa
et al. (2018). This method allows an agent to automatically discover different
goals in the state space, and thereby guide exploration of the space. They do
this discovery with a Generative Adversarial Network (GAN) (Goodfellow et al.,
2014), where the generator network proposes goal regions (parameterized
subsets of the state space) and the discriminator evaluates whether the goal
region is of appropriate difficulty for the current ability of the agent. Goal
regions are specified using an indicator reward function, and policies are
conditioned on the goal in addition to the state, like in a universal value
function approximator (Schaul et al., 2015). The agent trains on tasks
suggested by the generator. In detail, the approach consists of three parts: 1)
First, goal regions are labelled according to whether they are of appropriate
difficulty. Appropriate goals are those that give a return between
hyperparameters $R_{min}$ and $R_{max}$. Requiring at least $R_{min}$ ensures
there is a signal for learning progress. Requiring less than $R_{max}$ ensures
that it is not too easy. 2) They use the labeled goals to train a Goal GAN. 3)
Goals are sampled from the GAN as well as a replay buffer, and used for
training to update the policy. The goals generated by the GAN shift over time
to reflect the difficulty of the tasks, and gradually move from states close
to the starting state to those farther away.
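The labelling criterion from part 1 is simple enough to state directly; in this sketch the returns would come from evaluating the current policy on each goal, and the threshold values are illustrative.

```python
def label_goals(goals, returns, r_min=0.1, r_max=0.9):
    """Goal GAN labelling sketch: a goal is a positive training example for
    the generator exactly when the agent's return on it lies in
    [R_min, R_max], i.e. it is of intermediate difficulty."""
    return [(g, int(r_min <= r <= r_max)) for g, r in zip(goals, returns)]
```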
Racaniere et al. (2019) also consider an approach to automatically generate a
curriculum of goals for the agent, but for more complex goal-conditioned tasks
in dynamic environments where the possible goals vary between episodes. The
idea was to train a “setter” model to propose a curriculum of goals for a
“solver” agent to attempt to achieve. In order to help the setter balance its
goal predictions, they proposed three objectives which lead to a combination
of three losses to train the setter model: goal validity (the goal should be
valid or achievable by the current solver), goal feasibility (the goal should
match the feasibility estimates for the solver with current skill), and goal
coverage (encourage the setter to choose more diverse goals to encourage
exploration in the space of goals). In addition, a “judge” model was trained
to predict the reward the current solver agent would achieve on a goal (the
feasibility of a goal) proposed by the setter. Their experimental results
demonstrate the necessity of all three criteria for building useful curricula
of goals. They also show that their approach is more stable and effective than
the goal GAN method (Florensa et al., 2018) on complex tasks.
An alternative to modifying the initial or terminal state distribution is to
modify the reward function. Riedmiller et al. (2018) introduce SAC-X
(Scheduled Auxiliary Control), an algorithm for scheduling and executing
auxiliary tasks that allow the agent to efficiently explore its environment
and also make progress towards solving the final task. Auxiliary tasks are
defined to be tasks where the state, action, and transition function are the
same as the original MDP, but where the reward function is different. The
rewards they use in auxiliary tasks correspond to changes in raw or high level
sensory input, similar to Jaderberg et al. (2017). However, while Jaderberg et
al. (2017) only used auxiliary tasks for improving learning of the state
representation, here they are used to guide exploration, and are sequenced.
The approach is a hierarchical RL method: they need to 1) learn intentions,
which are policies for the auxiliary tasks, and 2) learn the scheduler, which
sequences intention policies and auxiliary tasks. To learn the intentions,
they learn to maximize the action-value function of each intention from a
starting state distribution that comes as a result of following each of the
other intention policies. This process makes the policies compatible. The
scheduler can be thought of as a meta-agent that performs sequencing, whose
goal is to maximize the return on the target task MDP. The scheduler selects
intentions, whose policy is executed on the extrinsic task, and is used to
guide exploration.
Heuristic-based methods have also been designed to sequence tasks that differ
in their reward functions. One such approach is SAGG-RIAC (Self-Adaptive Goal
Generation - Robust Intelligent Adaptive Curiosity) (Baranes and Oudeyer,
2013). They define _competence_ as the distance between the achieved final
state and the goal state, and _interest_ as the change in competence over time
for a set of goals. A region of the task space is deemed more _interesting_
than others, if the latest tasks in the region have achieved a high increase
in competence. The approach repeatedly selects goals by first picking a region
with a probability proportional to its interest, and then choosing a goal at
random within that region. With a smaller probability the system also selects
a goal at random over the whole task set or a goal close to a previously
unsuccessful task. The bias towards interesting regions causes the goals to be
more dense in regions where the competence increases the fastest, creating a
curriculum. Because of the stochastic nature of the goal generating process,
however, not every task is necessarily beneficial in directly increasing the
agent’s ability on the target task, but contributes to updating the competence
and interest measures. Since the intermediate tasks are generated online as
the agent learns, in this approach both sequencing and generation result from
the same sampling process.
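The goal-selection loop can be sketched as follows; the region objects with a `competences` history and a `sample()` method, the window size, and the exploration probability are illustrative assumptions.

```python
import numpy as np

def interest(competences, window=20):
    """Interest of a region: the recent change in competence, where
    competence measures how close achieved final states are to the goals."""
    recent = competences[-window:]
    if len(recent) < 2:
        return 0.0
    half = len(recent) // 2
    return abs(np.mean(recent[half:]) - np.mean(recent[:half]))

def sample_goal(regions, p_random=0.2):
    """Pick a region with probability proportional to its interest, then a
    goal at random within it; occasionally explore the whole task space."""
    if np.random.rand() < p_random:
        return regions[np.random.randint(len(regions))].sample()
    scores = np.array([interest(r.competences) for r in regions]) + 1e-8
    probs = scores / scores.sum()
    return regions[np.random.choice(len(regions), p=probs)].sample()
```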
Finally, Wu and Tian (2017) also consider changing the transition dynamics and
the reward functions of the intermediate tasks. They propose a novel framework
for training an agent in a partially observable 3D Doom environment. Doom is a
First-Person Shooter game, in which the player controls the agent to fight
against enemies. In their experiment, they first train the agent on some
simple maps with several curricula. Each curriculum consists of a sequence of
progressively more complex environments with varying domain parameters (e.g.,
the movement speed or initial health of the agent). After learning a capable
initial task model, the agent is then trained on more complicated maps and
more difficult tasks with a different reward function. They also design an
adaptive curriculum learning strategy in which a probability distribution over
different levels of curriculum is maintained. When the agent performs well on
the current distribution, the probability distribution is shifted towards more
difficult tasks.
#### 4.2.4 No Restrictions
Next, there is a class of methods that create a curriculum using intermediate
tasks, but make no restrictions on the MDPs of these intermediate tasks. We
categorize them in three ways by how they address the task sequencing problem:
treating sequencing 1) as an MDP/POMDP, 2) as a combinatorial optimization
over sequences, and 3) as learning the connections in a directed acyclic task
graph. Because there are no limitations on the types of intermediate tasks
allowed, some assumptions are usually made about the transfer learning
algorithm, and additional information about the intermediate tasks (such as
task descriptors) is typically assumed. Finally, we also discuss work on an
auxiliary problem to sequencing: how long to spend on each task.
#### MDP-based Sequencing
The first formalization of the sequencing problem is as a Markov Decision
Process. These methods formulate curriculum generation as an interaction
between two types of MDPs. The first is the standard MDP, which models a
_learning agent_ (i.e., the student) interacting with a task. The second is a
higher level meta-MDP for the _curriculum agent_ (i.e., the teacher), whose
goal is to select tasks for the learning agent.
Narvekar et al. (2017) denote the meta-MDP as a curriculum MDP (CMDP), where
the state space $\mathcal{S}$ is the set of policies the learning agent can
represent. These can be represented parametrically using the weights of the
learning agent. The action space $\mathcal{A}$ is the set of tasks the
learning agent can train on next. Learning a task updates the learning agent’s
policy, and therefore leads to a transition in the CMDP via a transition
function $p$. Finally, the reward function $r$ is the time in steps or
episodes that it took to learn the selected task. Under this model, a
curriculum agent typically starts in an initial state corresponding to a
random policy for the learning agent. The goal is to reach a terminal state,
which is defined as a policy that can achieve some desired performance
threshold on the target task, as fast as possible.
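The interaction loop implied by this formulation is sketched below; `evaluate`, `choose_task`, and the `learner.train_on` interface are hypothetical placeholders for the learning agent and the curriculum agent's policy.

```python
def run_curriculum_episode(tasks, learner, choose_task, evaluate,
                           target, threshold, max_steps=100):
    """One CMDP episode: the state is the learner's current policy, an
    action trains the learner on a task (the CMDP transition), and the
    episode ends once the target task is performed above `threshold`.
    Each step costs one unit of negative reward, so shorter curricula
    are preferred."""
    curriculum = []
    while evaluate(learner, target) < threshold and len(curriculum) < max_steps:
        task = choose_task(learner, tasks)  # curriculum policy picks a task
        learner.train_on(task)              # learner's policy (CMDP state) changes
        curriculum.append(task)
    return curriculum
```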
Matiisen et al. (2017) consider a similar framework, where the interaction is
defined as a POMDP. The state and action spaces of the meta-POMDP are the same
as in Narvekar et al. (2017), but access to the internal parameters of the
learning agent is not available. Instead, an observation of the current score
of the agent on each intermediate task is given. The reward is the change in
the score on the task from this timestep to the previous timestep when the
same task was trained on. Thus, while Narvekar et al. (2017) focused on
minimizing time to threshold performance on the target task, the design of
Matiisen et al. (2017) aims to maximize the sum of performance in all tasks
encountered.
While both approaches are formalized as (PO)MDPs, learning on them is
computationally expensive. Thus, both propose heuristics to guide the
selection of tasks. Narvekar et al. (2017) take a sample-based approach, where
a small amount of experience samples gathered on the target and intermediate
tasks are compared to identify relevant intermediate tasks. The task that
causes the greatest change in policy as evaluated on the target task samples
is selected. In contrast, Matiisen et al. (2017) select tasks where the
absolute value of the slope of the learning curve is highest. Thus it selects
tasks where the agent is making the most progress or where the agent is
forgetting the most about tasks it has already learned. Initially tasks are
sampled randomly. As one task starts making progress, it will be sampled more,
until the learning curve plateaus. Then another will be selected, and the
cycle will repeat until all the tasks have been learned.
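A minimal version of the slope heuristic is sketched below, assuming a recorded score history per task; the window size and the treatment of unexplored tasks are illustrative choices.

```python
import numpy as np

def select_task(score_history, window=10):
    """Pick the task whose recent learning curve has the largest absolute
    slope, i.e. where the agent is improving or forgetting the fastest.
    `score_history[i]` is the list of recorded scores on task i."""
    slopes = []
    for scores in score_history:
        recent = scores[-window:]
        if len(recent) < 2:
            slopes.append(np.inf)  # unexplored tasks get sampled first
            continue
        slope, _ = np.polyfit(np.arange(len(recent)), recent, 1)
        slopes.append(abs(slope))
    return int(np.argmax(slopes))
```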
Subsequently, Narvekar and Stone (2019) explored whether learning was possible
in a curriculum MDP, thus avoiding the need for heuristics in task sequencing.
They showed that you can represent a CMDP state using the weights of the
knowledge transfer representation. For example, if the agent uses value
function transfer, the CMDP state is represented using the weights of the
value function. By utilizing function approximation over this state space,
they showed it is possible to learn a policy over this MDP, termed a
curriculum policy, which maps from the current status of learning progress of
the agent, to the task it should learn next. In addition, the approach
addresses the question of how long to train on each intermediate task. While
most works have trained on intermediate tasks until learning plateaus, this is
not always necessary. Narvekar and Stone (2019) showed that training on each
intermediate task for a few episodes, and letting the curriculum policy
reselect tasks that require additional time, results in faster learning.
However, while learning a curriculum policy is possible, doing so
independently for each agent and task is still very computationally expensive.
#### Combinatorial Optimization and Search
A second way of approaching sequencing is as a combinatorial optimization
problem: given a fixed set of tasks, find the permutation that leads to the
best curriculum, where best is determined by one of the CL metrics introduced
in Section 3.3. Finding the optimal curriculum is a computationally difficult
black-box optimization problem. Thus, typically fast approximate solutions are
preferred.
One such popular class of methods are metaheuristic algorithms, which are
heuristic methods that are not tied to specific problem domains, and thus can
be used as black boxes. Foglino et al. (2019a) adapt and evaluate four
representative metaheuristic algorithms to the task sequencing problem: beam
search (Ow and Morton, 1988), tabu search (Glover and Laguna, 1998), genetic
algorithms (Goldberg, 1989), and ant colony optimization (Dorigo et al.,
1991). The first two are trajectory-based, which start at a guess of the
solution, and search the neighborhood of the current guess for a better
solution. The last two are population-based, which start with a set of
candidate solutions, and improve them as a group towards areas of increasing
performance. They evaluate these methods for 3 different objectives: time to
threshold, maximum return (asymptotic performance), and cumulative return.
Results showed that the trajectory-based methods outperformed their
population-based counterparts on the domains tested.
While metaheuristic algorithms are broadly applicable, it is also possible to
create specific heuristic search methods targeted at particular problems, such
as task sequencing with a specific transfer metric objective. Foglino et al.
(2019b) introduce one such heuristic search algorithm, designed to optimize
for the cumulative return. Their approach begins by computing transferability
between all pairs of tasks, using a simulator to estimate the cumulative
return attained by using one task as a source for another. The tasks are then
sorted according to their potential of being a good source or target, and
iteratively chained in curricula of increasing length. The algorithm is
anytime, and eventually exhaustively searches the space of all curricula with
a predefined maximum length.
Jain and Tulabandhula (2017) propose four different online search methods to
sequence tasks into a curriculum. Their methods also assume a simulator is
available to evaluate learning on different tasks, and use the learning
trajectory of the agent on tasks seen so far to select new tasks. The 4
approaches are: 1) Learn each source task for a fixed number of steps, and add
the one that gives the most reward. The intuition is that high reward tasks
are the easiest to make progress on. 2) Calculate a transferability matrix for
all pairs of tasks, and create a curriculum by chaining tasks backwards from
the target tasks greedily with respect to it. 3) Extract a feature vector for
each task (as in Narvekar et al., 2016), and learn a regression model to
predict transferability using the feature vector. 4) Extract pair wise feature
vectors between pairs of tasks, and learn a regression model to predict
transferability.
Finally, instead of treating the entire problem as a black box, it has also
been treated as a gray box. Foglino et al. (2019c) propose such an approach,
formulating the optimization problem as the composition of a white box
scheduling problem and black box parameter optimization. The scheduling
formulation partially models the effects of a given sequence, assigning a
utility to each task, and a penalty to each pair of tasks, which captures the
effect on the objective of learning two tasks one after the other. The white-
box scheduling problem is an integer linear program, with a single optimal
solution that can be computed efficiently. The quality of the solution,
however, depends on the parameters of the model, which are optimized by a
black-box optimization algorithm. This external optimization problem searches
the optimal parameters of the internal scheduling problem, so that the output
of the two chained optimizers is a curriculum that maximizes cumulative
return.
#### Graph-based Sequencing
Another class of approaches explicitly treats the curriculum sequencing
problem as connecting nodes with edges into a directed acyclic task graph.
Typically, the task-level curriculum formulation is used, where nodes in the
graph are associated with tasks. A directed edge from one node to another
implies that one task is a source task for another.
Existing work has relied on heuristics and additional domain information to
determine how to connect different task nodes in the graph. For instance,
Svetlik et al. (2017) assume the set of tasks is known in advance, and that
each task is represented by a task feature descriptor. These features encode
properties of the domain. For example, in a domain like Ms. Pac-Man, features
could be the number of ghosts or the type of maze. The approach consists of
three parts. First, a binary feature vector is extracted from the feature
vector to represent non-zero elements. This binary vector is used to group
subsets of tasks that share similar elements. Second, tasks within each group
are connected into subgraphs using a novel heuristic called _transfer
potential_. Transfer potential is defined for discrete state spaces, and
trades off the applicability of a source task against the cost needed to learn
it. Applicability is defined as the number of states that a value function
learned in the source can be applied to a target task. The cost of a source
task is approximated as the size of its state space. Finally, once subgraphs
have been created, they are linked together using directed edges from
subgraphs that have a set of binary features to subgraphs that have a superset
of those features.
Da Silva and Reali Costa (2018) follow a similar procedure, but formalize the
idea of task feature descriptors using an object-oriented approach. The idea
is based on representing the domain as an object-oriented MDP, where states
consist of a set of objects. A task OO-MDP is specified by the set of specific
objects in this task, and the state, action, transition, and reward functions
of the task. With this formulation, source tasks can be generated by selecting
a smaller set of objects from the target task to create a simpler task. To
create the curriculum graph, they adapt the idea of transfer potential to the
object-oriented setting: instead of counting the number of states that the
source task value function is applicable in, they compare the sets of objects
between the source and target tasks. While the sequencing is automated, human
input is still required to make sure the tasks created are solvable.
#### Auxiliary Problems
Finally, we discuss an additional approach that tackles an auxiliary problem
to sequencing: how long to spend on each intermediate task in the curriculum.
Most existing work trains on intermediate tasks until performance plateaus.
However, as we mentioned previously, Narvekar and Stone (2019) showed that
this is unnecessary, and that better results can be obtained by training for a
few episodes, and reselecting or changing tasks dynamically as needed.
Bassich et al. (2020) consider an alternative method for this problem based on
_progression_ functions. Progression functions specify the pace at which the
difficulty of the task should change over time. The method relies on the
existence of a task-generation function, which maps a desired complexity
$c_{t}\in[0,1]$ to a task of that complexity. The most complex task, for which
$c_{t}=1$, is the final task. After every episode, the progression function
returns the difficulty of the task that the agent should face at that time.
The authors define two types of progression functions: fixed progressions, for
which the learning pace is predefined before learning takes place; and
adaptive progressions, which adjust the learning pace online based on the
performance of the agent. Linear and exponential progressions are two examples
of fixed progression functions, and increase the difficulty of the task
linearly and exponentially, respectively, over a prespecified number of time
steps. The authors also introduce an adaptive progression based on a friction
model from physics, which increases $c_{t}$ as the agent’s performance is
increasing, and slows down the learning pace if performance decreases.
Progression functions allow the method to change the task at every episode,
solving the problem of deciding how long to spend in each task, while
simultaneously creating a continually changing curriculum.
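As an illustration, the sketch below implements a linear and an exponential fixed progression, plus a crude adaptive progression that raises the difficulty only while performance improves. The exact functional forms and the `task_generator` interface are assumptions for illustration, not the formulations of Bassich et al. (2020).

```python
import math

def linear_progression(t, horizon):
    """Fixed progression: difficulty c_t rises linearly to 1 over `horizon`."""
    return min(1.0, t / horizon)

def exponential_progression(t, horizon, k=5.0):
    """Fixed progression: difficulty rises slowly at first, then sharply."""
    return min(1.0, (math.exp(k * t / horizon) - 1.0) / (math.exp(k) - 1.0))

def adaptive_progression(c_prev, perf_delta, step=0.01):
    """Crude adaptive progression: increase c_t while the agent's
    performance is improving, and slow down (decrease c_t) otherwise."""
    return min(1.0, max(0.0, c_prev + (step if perf_delta > 0 else -step)))

# After every episode: task = task_generator(c_t)   # task_generator assumed
```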
#### 4.2.5 Human-in-the-Loop Curriculum Generation
Thus far, all the methods discussed in Section 4.2 create a curriculum
_automatically_ using a sequencing algorithm, which either reorders samples
from the final task or progressively alters how much intermediate tasks in the
curriculum may differ. Bengio et al. (2009) and Taylor (2009) both emphasize
the importance of better understanding how _humans_ approach designing
curricula. Humans may be able to design good curricula by considering which
intermediate tasks are “too easy” or “too hard,” given the learner’s current
ability to learn, similar to how humans are taught with the zone of proximal
development (Vygotsky, 1978). These insights could then be leveraged when
designing automated curriculum learning systems. Therefore, in this section,
we consider curriculum sequencing approaches that are done _manually_ by
humans who are either _domain experts_ , who have specialized knowledge of the
problem domain, or _naive users_ , who do not necessarily know about the
problem domain and/or machine learning.
One example of having domain experts manually generate the curriculum is the
work done by Stanley et al. (2005), in which they explore how to keep video
games interesting by allowing agents to change and to improve through
interaction with the player. They use the NeuroEvolving Robotic Operatives
(NERO) game, in which simulated robots start the game with no skills and have
to learn complicated behaviors in order to play the game. The human player
takes the role of a trainer and designs a curriculum of training scenarios to
train a team of simulated robots for military combat. The player has a natural
interface for setting up training exercises and specifying desired goals. An
ideal curriculum would consist of exercises with increasing difficulty so that
the agent can start by learning basic skills and gradually build on them.
In their experiments, the curriculum is designed by several NERO programmers
who are familiar with the game domain. They show that the simulated robots
could successfully be trained to learn different sophisticated battle tactics
using the curriculum designed by these domain experts. It remains unclear whether a human player who is unfamiliar with the game could design an equally good curriculum.
A more recent example is by MacAlpine and Stone (2018). They use a very
extensive manually constructed curriculum to train agents to play simulated
robot soccer. The curriculum consists of a training schedule over 19 different
learned behaviors. It encompasses skills such as moving to different positions
on the field with different speeds and rotation, variable distance kicking,
and accessory skills such as getting up when fallen. Optimizing these skills
independently can lead to problems at the intersection of these skills. For
example, optimizing for speed in a straight walk can lead to instability if
the robot needs to turn or kick due to changing environment conditions. Thus,
the authors of this work hand-designed a curriculum to train related skills
together using an idea called overlapping layered learning. This curriculum is
designed using their domain knowledge of the task and agents.
While domain experts usually generate good curricula to facilitate learning,
most existing work does not explicitly explore their curriculum design
process. It is unclear what kind of design strategies people follow when
sequencing tasks into a curriculum. Published research on Interactive
Reinforcement Learning (Thomaz and Breazeal, 2006; Knox and Stone, 2009; Suay
and Chernova, 2011; Knox and Stone, 2012; Griffith et al., 2013; Subramanian
et al., 2016; Loftin et al., 2016; MacGlashan et al., 2017) has shown that RL
agents can successfully speed up learning using human feedback, demonstrating
the significant role humans can play in teaching an agent to learn a (near-)
optimal policy. This large body of work mainly focuses on understanding how
human teachers want to teach the agent and how to incorporate these insights
into the standard RL framework. Similarly, the way we define curriculum design
strategies still leaves a lot to be defined by human teachers. As pointed out
by Bengio et al. (2009), the notion of simple and complex tasks is often based
on human intuition, and there is value in understanding how humans identify
“simple” tasks. Along these lines, some work has been done to study whether
curriculum design is a prominent teaching strategy that naive users choose to
teach the agent and how they approach designing curricula.
Figure 4: One example of curricula designed by human users. (a) Given final
task. (b) A curriculum designed by one human participant.
To study the teaching strategies followed by naive users, Khan et al. (2011)
conduct behavioral studies in which human participants need to teach a robot
the concept of whether an object can be grasped with one hand. In their
experiment, participants are provided with 31 cards with photos of common
objects (e.g., food, furniture, and animals) for them to select. The
experiment consists of two subtasks. In the first subtask, participants sort
the objects on the table based on their subjective ratings of their
graspability. In the second subtask, participants pick up the cards from the
table and show them to the robot while teaching the robot the concept of
graspability, using as few cards as possible. While teaching the robot the
object’s graspability, participants can either use unconstrained natural language or say only “graspable” or “not graspable,” depending on which of the two conditions they are randomly assigned to. They observe that participants follow three
distinct teaching strategies, one of which is consistent with the curriculum
learning principle, i.e., starting simple and gradually increasing the
difficulty of the task. Furthermore, they propose a novel theoretical
framework as a potential explanation for the teaching strategy that follows
the curriculum learning principle, which shows that it is the result of
minimizing per-iteration expected error of the learner.
Peng et al. (2018) also explore how naive users design a curriculum of tasks
for an agent, but in a more complex sequential decision-making task.
Specifically, a simple simulated home environment is used, where the agent
must learn to perform tasks in a variety of environments. The tasks are
specified via text commands and the agent is trained to perform the task via
reinforcement and punishment feedback from a human trainer. It uses the goal-
directed Strategy-Aware Bayesian Learning (SABL) algorithm (Loftin et al.,
2016) for learning from human feedback. In the user study, participants are
asked to design a set of training assignments for the agent to help it quickly
learn to complete the given final assignment (shown in Figure 4a). A set of
source tasks are provided for human participants to select and sequence. One
example of curricula designed by human participants is shown in Figure 4b.
Their empirical results show that, compared to directly learning the pre-
specified final task from scratch, non-expert humans can successfully design
curricula that result in better overall agent performance on learning both the
entire curriculum and the final task. They also discover that humans are more
likely to select commands for intermediate tasks that include concepts that
are important for the final task, and that doing so results in curricula that
lead to better overall agent performance. Furthermore, they demonstrate that
by taking advantage of this type of non-expert guidance, their curriculum-
learning algorithm can be adapted to learn the human-generated curricula more
efficiently.
There is also some work that does not explicitly ask humans to design a
curriculum, but uses human data to help generate the curriculum. One example
is the work done by Hosu and Rebedea (2016), in which they propose a deep RL
method that combines online agent experiences with offline human experiences
to train the agent more efficiently. In some sparse-reward Atari games such as
Montezuma’s Revenge and Private Eye, the agent needs to execute a long
sequence of specific actions to receive the first positive reward from the
environment, which makes the exploration problem much harder. Thus, the
commonly used $\epsilon$-greedy strategy could not find any game paths to
reach a first state with positive reward, preventing the neural network from
learning relevant features to good states. Inspired by curriculum learning and
the human starts evaluation metric used for testing Atari agents, they use
checkpoints sampled from a human player’s game experience as starting points
for the learning process. The main intuition behind this approach is that at
least some of the checkpoints will be an “easier” starting point, which is
closer to some states with positive reward that the agent can benefit from.
While this method belongs to the class of sequencing approaches, as discussed
in Section 4.2.1, that reorders samples in the final task to derive a
curriculum, it additionally considers more informative sample data generated
by naive human users in order to build a more efficient curriculum.
We find that very limited work has been done on investigating how humans
design curricula. While the work discussed in this section enriches our
empirical understanding of human teaching and gives us some insights into the
development of new machine-learning algorithms and interfaces that can better
accommodate machine- or human-created curricula, we believe more work needs to
be done along this line.
### 4.3 Knowledge Transfer
While we view sequencing, as covered in Section 4.2, to be the core concept of
curriculum learning, the whole premise of CL depends on an agent’s ability to
transfer knowledge among tasks. While a full discussion of transfer learning
for RL is beyond the scope of this survey, this subsection is designed to
provide the reader a brief introduction to the area so that they can
effectively leverage it as part of their own explorations in curriculum
learning.
In curriculum learning, transfer learning methods are used to allow the agent
to reuse knowledge learned from one intermediate task to another within the
curriculum. It is worth noting that when creating a curriculum using only
samples from the target task (discussed in Section 4.2.1), there is no
transfer as there is only a single task (the target task) and correspondingly
no change in the environment. However, when creating a curriculum using
multiple intermediate tasks, which may differ in state/action space, reward
function, or transition function from the final task, transfer learning is
needed to extract and pass on reusable knowledge acquired in one intermediate
task to the next. The type of knowledge transferred also directly affects the
type of learner that is applicable to the learning process.
Transferred knowledge can be low-level, such as an entire policy, a value
function, a full task model, or some training instances, which can be directly
used to initialize the learner in the target task. The knowledge can also be
high-level, such as partial policies or options, skills, shaping rewards, or
subtask definitions. This type of information may not fully initialize the
learner in the target task, but it could be used to guide the agent’s learning
process in the target task. In this subsection, we discuss different transfer
learning approaches used in curricula.
In policy transfer, a policy learned in a source or intermediate task is used
to initialize the policy in the target task. When transferring policies
between different tasks, the tasks may differ in some aspect of the MDP, such
as starting states (Florensa et al., 2017), reward functions (Florensa et al.,
2018; Riedmiller et al., 2018), or transition functions (Clegg et al., 2017).
For instance, Clegg et al. (2017) demonstrate that an arm-like manipulator can
successfully learn the control policy for a simulated dressing task, by
transferring policies between tasks with different transition functions. In a
dressing task, the goal is to achieve a desired relative positioning of the
garment and the limb. To do this, they first train a sphere to move through a
funnel-like geometry to reach some target location. They then directly apply
the learned policy to a different scenario in which a manipulator with
arbitrary shape navigates through a simulated garment. The main trick is to
train multiple spheres using a curriculum learning strategy and then aggregate
them to control the manipulator in the dressing task.
Citation | Intermediate Task Generation | Curriculum Representation | Transfer Method | Curriculum Sequencer | Curriculum Adaptivity | Evaluation Metric | Application Area
---|---|---|---|---|---|---|---
Clegg et al. (2017) | domain experts | sequence | policies | domain experts | static | asymptotic, time to threshold | sim robotics
Fujii et al. (1998) | domain experts | sequence | partial policies | domain experts | static | asymptotic | real robotics
Karpathy and Van De Panne (2012) | domain experts/target | sequence/single | partial policies /no transfer | domain experts/automatic | static/adaptive | time to threshold | sim robotics
Rusu et al. (2016) | domain experts | sequence | policies | domain experts | static | asymptotic | video games
Shao et al. (2018) | domain experts | sequence | task model | domain experts | static | asymptotic, total reward | video games
Sinapov et al. (2015) | automatic | sequence | value function | automatic | static | jump start | video games
Tessler et al. (2017) | domain experts | sequence | partial policies | domain experts | static | asymptotic | video games
Vezhnevets et al. (2016) | automatic | sequence | partial policies | automatic | static | asymptotic, total reward | video games
Wang et al. (2020) | domain experts | sequence | policies | domain experts | static | asymptotic | video games
Yang and Asada (1996) | domain experts | sequence | partial policies | automatic | adaptive | asymptotic, time to threshold | real robotics
Yang et al. (2020) | domain experts | sequence | policies | domain experts | static | asymptotic, time to threshold | toy, other
Zimmer et al. (2018) | domain experts | sequence | partial policies | domain experts | static | asymptotic, total reward | sim robotics
Table 3: The papers discussed in Section 4.3, categorized along the dimensions
presented in Section 3.4. Bolded values under evaluation metric indicate
strong transfer.
In Shao et al. (2018), a learned task model is transferred between tasks,
which is used to initialize the policy network. Thus, it is similar to
transferring policies. Their work aims to solve the problem of multi-agent
decision making in StarCraft micromanagement, where the goal is to control a
group of units to destroy the enemy under certain terrain conditions. A
parameter sharing multi-agent gradient-descent Sarsa($\lambda$) (PS-MAGDS)
method is proposed to train the units to learn an optimal policy, which is
parametrized by a feed-forward neural network. PS-MAGDS simply extends the
traditional Sarsa($\lambda$) to multiple units by sharing parameters of the
policy network among units to encourage cooperative behaviors. A reward
function including small immediate rewards is also designed to accelerate the
learning process. When using transfer learning in their experiments, the
agents are first trained in some small scale source scenarios using PS-MAGDS.
The well-trained model is then used to initialize the policy network to learn
micromanagement in the target scenarios. To scale combat to a large-scale
scenario, they combine curriculum learning and transfer learning where the
agents are trained with a sequence of progressively more complex
micromanagement tasks. The difficulty of the micromanagement task is
controlled by changing the number and type of units.
Value function transfer is another common method for transferring low-level
knowledge between intermediate tasks within a curriculum. In most existing
work (Sinapov et al., 2015; Narvekar et al., 2017; Da Silva and Reali Costa,
2018), value function transfer is achieved by using the parameters of a value
function learned in one intermediate task to initialize the value function in
the next intermediate task in the curriculum, such that the agent learns the
final task with some initial policy that is better than random exploration.
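In the simplest tabular case, such a transfer amounts to initializing the new task’s value table with the old one, as in the minimal sketch below; the shared (state, action) encoding is an assumption, and with function approximation one would instead copy network parameters.

```python
from collections import defaultdict

def transfer_value_function(source_q):
    """Initialize the next task's Q-function from the previous task's,
    assuming both tasks share a (state, action) encoding. Unseen pairs
    fall back to 0, so learning starts better than random exploration
    but remains free to overwrite the transferred estimates."""
    target_q = defaultdict(float)
    target_q.update(source_q)
    return target_q
```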
For example, Sinapov et al. (2015) focus on addressing the task selection
problem in curriculum learning using value function transfer, under the
assumption that no samples from the final tasks are available. They propose to
use meta-data (i.e., a fixed-length feature vector that describes the task)
associated with each task to identify suitable intermediate tasks. The main
idea is to use such meta-data to learn the benefits of transfer between
different ‘source-target’ task pairs, and have this generalize to new unseen
task pairs to guide task selection.
When transferring low-level policies or value functions across tasks, there
are several challenges that arise, particularly in the modern context of deep
reinforcement learning. First is the problem of catastrophic forgetting, where
knowledge from previously learned tasks is lost as information on a new task
is incorporated. This effect occurs because the weights of the neural network
optimized for a first task must be changed to meet the objectives of a new
task, often resulting in poorer performance on the original task. Typically,
in the curriculum setting, we only care about performance in the final tasks.
However, if information from two orthogonal tasks needs to be combined (such
as two independent skills), this challenge needs to be addressed. One approach
is progressive neural networks (Rusu et al., 2016), which trains a new network
“column” for each new task, and leverages lateral connections to previously
learned network columns to achieve transfer. When training subsequent columns,
parameters from previous columns are frozen, which prevents catastrophic
forgetting. The limitation is that the number of parameters grows with the
number of tasks, and at inference time, the task label is needed to know which
column to extract output from.
A second problem is the case where the state and action spaces differ between
tasks. One alternative is to transfer higher-level knowledge across tasks,
such as partial policies or options. A partial policy is a policy that is not
necessarily defined for all states in the state space of an MDP. We use
partial policies as an umbrella term to represent closely related ideas such
as options, skills, and macro-actions. Yang and Asada (1996) transfer learned
control parameters between tasks, which are similar to partial policies. To
solve the impedance learning problem for high-speed robotic assembly, they
allow the system to learn impedance parameters associated with different
dynamic motions separately, rather than to learn all the control parameters
simultaneously. For instance, they first learn only the parameters associated
with quasistatic motion by driving the system slowly, leaving other parameters
unlearned. After the quasistatic parameters have been learned, they then
slightly increase the motion speed, and use the learned values to initialize
the quasistatic parameters when learning other parameters. Another example of
transferring partial policies between tasks is the work done by Zimmer et al.
(2018). Their main idea is to progressively increase the dimensionality of the
tackled problem by increasing the (continuous) state and action spaces of the
MDP, while an agent is learning a policy. The agent first learns to solve the
source task with reduced state and action spaces until the increase in
performance stagnates. Then, the partial policy learned by the agent is used
as an initialization to learn the full policy in the target task with full
state and action spaces. A developmental layer (like a dropout layer) is added
to the network to filter dimensions of the states and actions.
Similarly, Fujii et al. (1998) transfer options between tasks. To train mobile
robots to learn collision avoidance behaviors in multi-robot systems more
efficiently, they develop a multi-layered RL mechanism. Rather than gradually
increasing the level of task complexity based on the learner’s performance as
in Yang and Asada (1996), their learning process consists of four stages like
a curriculum in which each stage learns a pre-defined controller. Each
controller learns an option to solve a pre-defined sub-task. For instance, the
first controller learns to move toward a specific goal. Then the output (goal-
directed behavior) of the first controller is used as input for the second
controller, which aims to learn to avoid collisions with a single robot, and
so on.
Vezhnevets et al. (2016) also transfer high-level macro-actions between tasks,
which are simpler instances of options. In their experiment, the agent is
trained with a curriculum where the goal state is first set to be very close
to the start state and is then moved further away during learning process.
Although the task gets progressively harder, the temporally abstracted macro-
actions remain the same. The macro-actions learned early on can also be easily
adapted using their proposed architecture. Specifically, a deep recurrent
neural network architecture is used to maintain a multi-step action plan. The
network learns when to commit to the action plan to generate macro-actions and
when to update the plan based on observations.
Another mechanism for transfer are skills. Tessler et al. (2017) propose a
deep RL method that effectively retains and transfers learned skills to solve
lifelong learning in MineCraft. In their work, a set of $N$ skills are trained
a priori on various sub-tasks, which are then reused to solve the harder
composite task. In their MineCraft experiment, the agent’s action space
includes the original primitive actions as well as the set of pre-learned
skills (e.g., navigate and pickup). A hierarchical architecture is developed
to learn a policy that determines when to execute primitive actions and when
to reuse pre-learned skills, by extending the vanilla DQN architecture (Mnih
et al., 2015). The skills could be sub-optimal when they are directly reused
for more complex tasks, and this hierarchical architecture allows the agent to
learn to refine the policy by using primitive actions. They also show the
potential for reusing the pre-learned skill to solve related tasks without
performing any additional learning.
Rather than selectively reusing pre-learned skills, Karpathy and Van De Panne
(2012) focus on learning motor skills in an order of increasing difficulty.
They decompose the acquisition of skills into a two-level curriculum: a
_high-level_ curriculum specifies the order in which different motor skills
should be learned, while the _low-level_ curriculum defines the learning
process for a specific skill. The high-level curriculum orders the skills in a
way such that each skill is relatively easy to learn, using the knowledge of
the previously learned skills. For instance, the Acrobot first learns the Hop
(easy to learn from scratch) and Flip (similar to hopping very slowly) skills,
and then learns the more complex Hop-Flip skill. The learned skill-specific
task parameters for easier skills will highly constrain the states that the
Acrobot could be in, making it easier to learn more complex skills. For
example, the Hop-Flip skills begin from a hopping gait of some speed, which
can be reached by repeatedly executing the previously learned Hop skill.
In multi-agent settings, several specific methods have been designed for
curricula that progressively scale the number of agents between tasks. In
these settings, the state and action spaces often scale based on the number of
agents present. One common assumption in many of these methods is that the
state space can be factored into elements for the environment $s^{env}$, the
agent $s^{n}$, and all other agents $s^{-n}$. For example, Yang et al. (2020)
propose CM3, which takes a two-stage approach. In the first stage, a single
agent is trained without the presence of other agents. This is done by
inducing a new MDP that removes all dependencies on agent interactions (i.e.,
removing $s^{-n}$) and training a network on this subspace. Then in the second
stage, cooperation is learned by adding the parameters for the other agents
into the network.
Wang et al. (2020) propose 3 different approaches for multi-agent settings.
The first is buffer reuse, which saves the replay buffers from all previous
tasks, and samples experience from all of them to train in the current task.
Samples from lower dimensional tasks are padded with zeros. The second is
curriculum distillation, which adds a distillation loss based on KL divergence
between policies/q-values between tasks. The third is transferring the model
using a new network architecture called Dynamic Agent-number Network (DyAN).
In this architecture, the state space elements related to the agent and
environment go through a fully connected network, while the observations for
each teammate agent are passed through a graph neural network (GNN) and then
aggregated. These networks are subsequently combined to produce q-values or
policies.
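A minimal sketch of the buffer-reuse idea is shown below, assuming states are 1-D NumPy vectors whose dimensionality grows with the number of agents; the padding convention (zeros appended at the end) is our assumption, and full transitions are omitted for brevity.

```python
import numpy as np

def pad_state(state, target_dim):
    """Zero-pad a state saved from a lower-dimensional (fewer-agent)
    task so it can be replayed while training in the current task."""
    padded = np.zeros(target_dim, dtype=np.float32)
    padded[: state.shape[0]] = state
    return padded

def sample_reused_batch(task_buffers, target_dim, batch_size, rng=None):
    """Sample uniformly from the union of all saved per-task buffers."""
    rng = rng or np.random.default_rng()
    pool = [s for buffer in task_buffers for s in buffer]
    indices = rng.choice(len(pool), size=batch_size, replace=True)
    return np.stack([pad_state(pool[i], target_dim) for i in indices])
```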
## 5 Related Areas and Paradigms
Curriculum learning is an idea that has been studied in other areas of machine
learning and human education, and is similar to several existing paradigms in
reinforcement learning. In this section, we first relate curriculum learning
to approaches in reinforcement learning that aim to improve sample complexity,
and that consider learning multiple sets of tasks (Section 5.1). Then we
describe approaches to learn curricula in supervised learning (Section 5.2)
and for teaching and human education (Section 5.3). We include these
approaches with the idea that the insights discovered in these areas could be
adapted to apply to the reinforcement learning setting with autonomous agents.
### 5.1 Related Paradigms in Reinforcement Learning
One of the central challenges in applying reinforcement learning to real world
problems is sample complexity. Due to issues such as a sparse reward signal or
complex dynamics, difficult problems can take an RL agent millions of episodes
to learn a good policy, with many suboptimal actions taken during the course
of learning. Many different approaches have been proposed to deal with this
issue. To name a few, one method is imitation learning (Schaal, 1997), which
uses demonstrations from a human as labels for supervised learning to
bootstrap the learning process. Another example is off-policy learning (Hanna
et al., 2017), which uses existing data from an observed behavior policy, to
estimate the value of a desired target policy. Model-based approaches (Sutton
and Barto, 1998) first learn a model of the environment, which can then be
used for planning the optimal policy.
Each of these methods comes with its advantages and disadvantages. For
imitation learning, the assumption is that human demonstrations are available.
However, these are not always easy to obtain, especially when a good policy
for the task is not known. In off-policy learning, in order to make full use
of existing data, it is assumed that the behavior policy has a nonzero
probability of selecting each action, and typically that every action the target policy might select has been seen at least once. Finally, model-
based approaches typically first learn a model of the environment, and then
use it for planning. However, any inaccuracies in the learned model can
compound as the planning horizon increases. Curriculum learning takes a
different approach, and makes a different set of assumptions. The primary
assumption is that the environment can be configured to create different
subtasks, and that it is easier for the agent to discover _on its own_
reusable pieces of knowledge in these subtasks that can be used for solving a
more challenging task.
Within reinforcement learning, there are also several paradigms that consider
learning on a set of tasks so as to make learning more efficient. Multitask
learning, lifelong/continuous learning, active learning, and meta-learning are
four such examples.
In _multitask learning_ , the goal is to learn how to solve _sets_ of
prediction or decision making tasks. Formally, given a set of tasks
$m_{1},m_{2},\ldots m_{n}$, the goal is to _co-learn_ all of these tasks, by
optimizing the performance over all $n$ tasks simultaneously. Typically, this
optimization is facilitated by learning over some shared basis space. For
example, Caruana (1997) considers multitask learning for supervised learning
problems, and shares layers of a neural network between tasks. In supervised
learning, these tasks are different classification or regression problems.
Similar ideas have been applied in a reinforcement learning context by Wilson
et al. (2007). In reinforcement learning, different tasks correspond to
different MDPs.
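For concreteness, with shared parameters $\theta$ and $J_{m_{i}}(\theta)$ denoting the expected return (or predictive performance) achieved on task $m_{i}$, the co-learning objective can be written, in our notation, as
$\max_{\theta}\;\sum_{i=1}^{n}J_{m_{i}}(\theta).$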
_Lifelong learning_ and _continual learning_ can be viewed as an online
version of multitask learning. Tasks are presented one at a time to the
learner, and the learner must use shared knowledge learned from previous tasks
to more efficiently learn the presented task. As in multitask learning,
typically the goal is to optimize performance over all tasks given to the
learner. Lifelong and continual learning have been examined in both the
supervised setting (Ruvolo and Eaton, 2013a) and the reinforcement learning
setting (Ring, 1997; Ammar et al., 2014). The distinguishing feature of
curriculum learning compared to these works is that in curriculum learning, we
have full control over the _order_ in which tasks are selected. Indeed, we may
have control over the _creation_ of tasks as well. In addition, the goal is to
optimize performance for a specific target task, rather than all tasks. Thus,
source tasks in curriculum learning are designed solely to improve performance
on the target task—we are not concerned with optimizing performance in a
source.
In _active learning_ , the learner chooses which task or example to learn or
ask about next, from a given set of tasks. Typically, active learning has been
examined in a semi-supervised learning setting: a small amount of labeled data
exists whereas a larger amount of unlabeled data is present. The labeled data
is used to learn a classifier to infer labels for unlabeled data. Unlabeled
data about which the classifier is not confident is sent to a human user to be labeled. For example, Ruvolo and Eaton (2013b) consider active learning
in a lifelong learning setting, and show how a learner can actively select
tasks to improve learning speed for all tasks in a set, or for a specific
target task. The selection of which task to be learned next is similar to the
_sequencing_ aspect of curriculum learning. However, the full method of
curriculum learning is much broader, as it also encompasses creating the space
of tasks to consider. Ruvolo and Eaton (2013b) and similar active learning
work typically assume the set of tasks to learn and select from are already
given. In addition, typically active learning has been examined for supervised
prediction tasks, whereas we are concerned with reinforcement learning tasks.
Finally, in _meta-learning_ (Finn et al., 2017), the goal is to train an agent
on a variety of tasks such that it can quickly adapt to a new task within a
small number of gradient descent steps. Typically, the agent is not given
information identifying the task it is training on. In contrast, in curriculum
learning, the learning agent may or may not have information identifying the
task. However, the process that designs the curriculum by sequencing tasks
usually does have this information. Like in the lifelong setting, there is no
significance attached to the order in which tasks are presented to the
learner. In addition, the objective in meta-learning is to train for fast
adaptability, rather than for a specific final task as is the case in
curriculum learning.
### 5.2 Curricula in Supervised Machine Learning
In addition to reinforcement learning, curriculum learning has been examined
for supervised learning. While it is beyond the scope of this article to
extensively survey supervised CL methods, we would like to highlight a few
that could inspire ideas and draw parallels to the RL setting.
Bengio et al. (2009) first formalized the idea of curriculum learning in the
context of supervised machine learning. They conducted case studies examining
when and why training with a curriculum can be beneficial for machine learning
algorithms, and hypothesized that a curriculum serves as both a continuation
method and a regularizer. A continuation method is an optimization method for
non-convex criteria, where a smoothed version of the objective is optimized
first, with the smoothing gradually reduced over training iterations.
Typically, “easy” examples in a curriculum correspond to a smoother objective.
Using a simple shape recognition and language domain, they showed that
training with a curriculum can improve both learning speed and performance.
While many papers before Bengio et al. (2009) _used_ the idea of a curriculum
to improve training of machine learning algorithms, most work considering how
to systematically _learn_ a curriculum came after. One recent example is work
by Graves et al. (2017). They introduced measures of _learning progress_ ,
which indicate how well the learner is currently improving from the training
examples it is being given. They introduce 2 main measures based on 1) rate of
increase in prediction accuracy and 2) rate of increase of network complexity.
These serve as the reward to a non-stationary multi-armed bandit algorithm,
which learns a stochastic policy for selecting tasks. These signals of
learning progress could in theory be applied or adapted to the reinforcement
learning setting as well. Graves et al. (2017) also make an interesting
observation, which is that using a curriculum is similar to changing the step
size of the learning algorithm. Specifically, in their experiments, they found
that a random curriculum still serves as a strong baseline, because all tasks in the set provide a gradient. (Note, however, that in the reinforcement learning setting, because the policy affects the distribution of states an agent encounters, random training can be significantly worse.) Easier tasks provide a stronger gradient while harder tasks provide a gradient closer to 0.
Thus, choosing easy, useful tasks allows the algorithm to take larger steps
and converge faster.
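To illustrate the mechanism, here is a heavily simplified, Exp3-style sketch of bandit-based task selection driven by a learning-progress reward; the non-stationarity handling used in practice is omitted, and `train_and_measure_progress` is an assumed callback that trains briefly on a task and returns a progress signal in [0, 1].

```python
import numpy as np

def bandit_task_selection(num_tasks, num_rounds,
                          train_and_measure_progress, eta=0.1):
    """Exp3-style sketch: arms are tasks, rewards are learning progress."""
    log_weights = np.zeros(num_tasks)
    probs = np.full(num_tasks, 1.0 / num_tasks)
    for _ in range(num_rounds):
        probs = np.exp(log_weights - log_weights.max())
        probs /= probs.sum()
        task = np.random.choice(num_tasks, p=probs)
        progress = train_and_measure_progress(task)  # assumed in [0, 1]
        log_weights[task] += eta * progress / probs[task]  # importance weight
    return probs
```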
More recently, Fan et al. (2018) frame curriculum learning as “Learning to
Teach,” where a teacher agent learned to train a learning agent using a
curriculum. The process is formulated as an MDP between these two interacting
agents, similar to the MDP approaches discussed in Section 4.2.4: the teacher
agent selects the training data, loss function, and hypothesis space, while
the learning agent trains given the parameters specified by the teacher. The
state space of the MDP is represented as a combination of features of the
data, features of the student model, and features that represent the
combination of both data and learner models. The reward signal is the accuracy
on a held-out development set. Training a teacher agent can be computationally
expensive. They amortize this cost by using a learned teacher agent to teach a
new student with the same architecture. For example, they train the teacher
using the first half of MNIST, and use the learned teacher to train a new
student from the second half of MNIST. Another way they amortize the cost is
to train a new student with a different architecture (e.g., changing from
ResNet32 to ResNet110). Similar ideas have been explored in the reinforcement learning setting, but there the test distribution differs from the training distribution, which makes performing these kinds of evaluations more challenging. Nevertheless, showing that the cost of training a teacher can be amortized is an important direction for future work.
Finally, Jiang et al. (2015) explore the idea of self-paced curriculum
learning for supervised learning, which unifies and takes advantage of the
benefits of self-paced learning and curriculum learning. In their terminology,
curriculum learning uses prior knowledge, but does not adapt to the learner.
Specifically, a curriculum is characterized by a ranking function, which
orders a dataset of samples by priority. This function is usually derived by
predetermined heuristics, and cannot be adjusted by feedback from the learner.
In contrast, self-paced learning (SPL) adjusts to the learner, but does not
incorporate prior knowledge and leads to overfitting. In SPL, the curriculum
design is implicitly embedded as a regularization term into the learning
objective. However, during learning, the training loss usually dominates over
the regularization, leading to overfitting. This paper proposes a framework
that unifies these two ideas into a concise optimization problem, and
discusses several concrete implementations. The idea is to replace the
regularization term in SPL with a self-paced function, such that the weights
lie within a predetermined curriculum region. In short, the curriculum region
induces _a weak ordering_ over the samples, and the self-paced function
determines the actual learning scheme within that ordering. The idea has
parallels to a task-level curriculum for RL, where the curriculum induces a
weak ordering over samples from all tasks, and with the learning algorithm
determining the actual scheme within that ordering.
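Schematically, the unified objective of Jiang et al. (2015) takes roughly the following form (our paraphrase of their notation), jointly optimizing model parameters $\mathbf{w}$ and per-sample weights $\mathbf{v}$:
$\min_{\mathbf{w},\,\mathbf{v}\in[0,1]^{n}}\;\sum_{i=1}^{n}v_{i}\,L\big(y_{i},f(x_{i};\mathbf{w})\big)+g(\mathbf{v};\lambda)\quad\text{subject to}\quad\mathbf{v}\in\Psi,$
where $L$ is the training loss, $g$ is the self-paced function that determines the learning scheme, and $\Psi$ is the predetermined curriculum region that encodes the prior weak ordering.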
### 5.3 Algorithmically Designed Curricula in Education
Curriculum learning has also been widely used for building effective
Intelligent Tutoring Systems (ITS) for human education (Iglesias et al., 2003,
2009; Green et al., 2011; Brunskill and Russell, 2011; Doroudi et al., 2016).
An ITS system involves a student interacting with an intelligent tutor (a
computer-based system), with the goal of helping the student to master all
skills quickly, using as little learning content as possible. Given that
students have different learning needs, styles, and capabilities, the
intelligent tutor should be able to provide customized instructions to them.
To achieve this goal, one common strategy is called _curriculum sequencing_ ,
which aims to provide the learning materials in a meaningful order that
maximizes learning of the students with different knowledge levels. The main
problem this strategy must solve is to find the most effective lesson to
propose next, given the student’s current learning needs and capabilities.
Reinforcement learning is one of the machine learning techniques that has been
used with intelligent tutors to partially automate construction of the student
model and to automatically compute an optimal teaching policy (Woolf, 2007).
One advantage of using RL methods in tutoring is that the model can learn
adaptive teaching actions based on each individual student’s performance in
real time, without needing to encode complex pedagogical rules that the system
requires to teach effectively (e.g., how to sequence the learning content,
when and how to provide an exercise). Another advantage is that it is a
general domain-independent technique that can be applied in any ITS.
As a concrete example, Iglesias et al. (2003, 2009) adapt $Q$-learning
(Watkins, 1989) to an adaptive and intelligent educational system to allow it
to automatically learn how to teach each student. They formulate the learning
problem as an RL problem, where the state is defined as the description of the
student’s knowledge, indicating whether the student has learned each knowledge
item. The set of actions the intelligent tutor can execute includes selecting
and showing a knowledge item to the student. A positive reward is given when
all required content has been learned, otherwise no reward is given. The
system evaluates the student’s knowledge state through tests, which shows how
much the student knows about each knowledge item. The $Q$-value estimates the
usefulness of executing an action when the student is in a particular
knowledge state. Then, the tutoring problem can be solved using the
traditional $Q$-learning algorithm.
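A toy version of this formulation is sketched below, assuming a simulated `student_model(state, item)` that returns the student’s updated knowledge state (and eventually masters every item); the state representation, reward scale, and hyperparameters are illustrative, not those of Iglesias et al.

```python
import random
from collections import defaultdict

def q_learning_tutor(items, student_model, episodes=500,
                     alpha=0.1, gamma=0.95, epsilon=0.1):
    """Toy ITS tutor: states are sets of mastered knowledge items,
    actions show one item, reward arrives once everything is learned."""
    q = defaultdict(float)
    for _ in range(episodes):
        state = frozenset()  # student starts knowing nothing
        while len(state) < len(items):
            if random.random() < epsilon:
                action = random.choice(items)  # explore
            else:
                action = max(items, key=lambda i: q[(state, i)])
            next_state = student_model(state, action)
            done = len(next_state) == len(items)
            reward = 1.0 if done else 0.0
            target = 0.0 if done else max(q[(next_state, i)] for i in items)
            q[(state, action)] += alpha * (reward + gamma * target
                                           - q[(state, action)])
            state = next_state
    return q
```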
Green et al. (2011) propose using a multi-layered Dynamic Bayes Net (DBN) to
model the teaching problem in an ITS system. The main idea is to model the
dynamics of a student’s skill acquisition using a DBN, which is normally used
in RL to represent transition functions for state spaces. More specifically,
they formulate the problem as a factored MDP, where the state consists of one
factor for each skill, corresponding to the student’s proficiency on that
particular skill. The actions are to either provide a hint or to pose a
problem about a particular skill to the student. From a history of teacher-
student interaction, the teacher can model the student’s proficiency state,
with the goal of teaching the student to achieve the highest possible
proficiency value on each skill, using as few problems and hints as possible.
Subsequently, the learned DBN model is used by a planning algorithm to search
for the optimal teaching policy, mapping proficiency states of student
knowledge to the most effective problem or hint to pose next.
To allow the automated teacher to select a sequence of pedagogical actions in
cases where learner’s knowledge may be unobserved, a different problem
formulation is posed by Rafferty et al. (2016). They formulate teaching as a
partially observable Markov decision process (POMDP), where the learner’s
knowledge state is considered as a hidden state, corresponding to the
learner’s current understanding of the concept being taught. The automated teacher selects a sequence of pedagogical actions, such as
examples or short quizzes. The learner’s next knowledge state is dependent on
her current knowledge state and the pedagogical action the teacher chooses.
Changes in the learner’s knowledge state reflect learning. In this framework,
the automated teacher makes some assumptions about student learning, which is
referred to as the learner model: it specifies the space of possible knowledge
states and how the knowledge state changes. Then the teacher can update its
beliefs about the learner’s current knowledge state based on new observations,
given this learner model. Using this POMDP framework, they explore how
different learner models affect the teacher’s selection of pedagogical
actions.
While most approaches seek to solely maximize overall learning gains,
Ramachandran and Scassellati (2014) propose an RL-based approach that uses a
personalized social robot to tutor children, that maximizes learning gains and
sustained engagement over the student-robot interaction. The main goal of the
social robot is to learn the ordering of questions presented to a child, based
on difficulty level and the child’s engagement level in real time. To
represent the idea that children with different knowledge levels need a
different curriculum, each child is categorized into a given group based on
knowledge level at the start of the one-on-one tutoring interaction. An
optimal teaching policy is then learned specific to each group. In particular,
their approach consists of a training phase and an interaction phase. In the
training phase, participants are asked to complete a tutoring exercise. A
pretest and post-test are used to evaluate each participant’s relative learning gains, which also serve as the reward function for learning an optimal policy during the training phase. Subsequently, in the interaction phase, the child’s real-time engagement is detected, serving as another
reward signal for the RL algorithm to further optimize the teaching policy.
Non-RL-based algorithms have been considered as well. Ballera et al. (2014)
leverage the roulette wheel selection algorithm (RWSA) to perform personalized
topic sequencing in e-learning systems. RWSA is typically used in genetic
algorithms to arrange the chromosomes based on their fitness function, such
that individuals with higher fitness value will have higher probability of
being selected (Goldberg, 1989). Similarly, in an e-learning system, a
chromosome is denoted by a lesson. Each lesson has a fitness value that
dynamically changes based on the student’s learning performance. This fitness
value indicates how well the topic was learned by the student, depending on
three performance parameters: exam performance, study performance, and review
performance of the learner. A lower fitness value means that the student has a
poorer understanding of the topic. Thus, a reversed mechanism of RWSA is
implemented, so as to select the lessons with lower fitness values more often
for reinforcement. Then, this reversed RWSA algorithm is combined with a linear ranking algorithm to sort the lessons.
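A minimal sketch of the reversed selection step is given below; the inverse-fitness weighting is one simple way to realize the reversal, and the linear ranking combination used by Ballera et al. is omitted.

```python
import random

def reversed_roulette_select(lessons, fitness):
    """Select a lesson with probability inversely proportional to its
    fitness, so poorly understood topics are reviewed more often."""
    weights = [1.0 / max(fitness[lesson], 1e-9) for lesson in lessons]
    total = sum(weights)
    spin = random.uniform(0.0, total)
    cumulative = 0.0
    for lesson, weight in zip(lessons, weights):
        cumulative += weight
        if spin <= cumulative:
            return lesson
    return lessons[-1]  # numerical safety fallback
```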
## 6 Open Questions
Through our survey of the literature, we have identified several open problems
that have not been sufficiently studied in past work, and could be useful
avenues for future research.
### 6.1 Fully Automated Task Creation
Task creation is an important piece of the method of curriculum learning.
Whether tasks are created “on-demand” or all in advance, the quality of the
pool of tasks generated directly affects the quality of curricula that can be
produced. In addition, the _quantity_ of tasks produced affect the search
space and efficiency of curriculum sequencing algorithms. Despite this, very
limited work (see Section 4.1) has been done on the problem of automatically
generating tasks. Existing work either assumes the pool of tasks are manually
crafted and specified beforehand, or defines a set of rules for semi-
automatically creating tasks. However, these rules often have hyper-parameters
that control how many tasks are created, and are also usually manually tuned.
Reducing the amount of manual input required by these methods remains an
important area for future work.
### 6.2 Transferring Different Types of Knowledge
Between each pair of tasks in a curriculum, knowledge must be transferred from
one task to the subsequent task. In virtually all of the works surveyed, the
type of knowledge transferred has been fixed. For example, a value function
was always transferred between tasks by Narvekar et al. (2017) while a shaping
reward was always transferred by Svetlik et al. (2017). However, this
limitation opens the question of whether different tasks could benefit from
extracting different types of knowledge. For instance, it may be useful to
extract an option from one task, and a model from another. Thus, in addition
to deciding _which_ task to transfer from, we could also ask _what_ to extract
and transfer from that task. Past transfer learning literature has shown that
many forms of transfer are possible. The best type of knowledge to extract may
differ based on task, and techniques will need to be developed to effectively
combine these different types of knowledge.
### 6.3 Reusing Curricula and Sim-to-Real Curriculum Learning
Another limitation of many curriculum learning approaches is that the time to
generate a curriculum can be greater than the time to learn the target task
outright. This shortcoming stems from the fact that curricula are typically
learned independently for each agent and target task. However, in areas such
as human education, curricula are used to train multiple students in multiple
subjects. Thus, one way to amortize the cost would be to learn a curriculum to
train multiple different agents, or to solve multiple different target tasks
(Narvekar and Stone, 2020).
Another option for amortizing the cost is to learn curricula for a sim-to-real
setting on physical robots, where a curriculum is learned in simulation and
then used to train a physical robot. While the exact weights of the policy
learned in simulation would not apply in the real world, the semantics of the
curriculum tasks might. Therefore, the physical robot could go through the
same training regimen, but learn using the physics and dynamics of the real
world.
### 6.4 Combining Task Generation and Sequencing
The curriculum learning method can be thought of as consisting of 3 parts:
task generation, sequencing, and transfer learning. For the most part,
previous work has tackled each of these pieces independently. For example,
sequencing methods typically assume the tasks are prespecified, or a task
generation method exists. However, an interesting question is whether the task
generation and task sequencing phases can be done simultaneously, by directly
generating the next task in the curriculum. Some very preliminary work has
been done in this direction in the context of video game level generation. For
example, Green et al. (2019) used an evolutionary algorithm to generate maps
for a gridworld, where each tile had a different element. The generator was
optimized to maximize the loss of a deep RL agent’s network, inducing a training
curriculum.
Combining task generation and sequencing has additional challenges, such as
specifying the space of possible maps, ensuring those maps are valid/solvable,
and creating maps that are challenging, but not too difficult to solve. In
addition, training the generator can be very expensive. However, it promises
an end-to-end solution that could reduce the amount of human intervention
needed to design curricula.
### 6.5 Theoretical Results
There have been many practical applications of curricula to speed up learning
in both supervised and reinforcement learning. However, despite empirical
evidence that curricula are beneficial, there is a lack of theoretical results
analyzing when and why they are useful, and how they should be created. An
initial analysis in the context of supervised learning was done by Weinshall
et al. (2018) and Weinshall and Amir (2018). They analyzed whether reordering
samples in linear regression and binary classification problems could improve
the ability to learn new concepts. They did this analysis by formalizing the
idea of an Ideal Difficulty Score (IDS), which is the loss of the example with
respect to the optimal hypothesis, and the Local Difficulty Score (LDS), which
is the loss of the example with respect to the current hypothesis. These are 2
ways to classify the difficulty of a sample, which can be used as a means to
sequence samples. They showed that the convergence of an algorithm like
stochastic gradient descent monotonically decreases with the IDS, and
monotonically increases with the LDS. An open question is whether similar
grounded metrics for difficulty of tasks can be identified in reinforcement
learning, and what kind of convergence guarantees we can draw from them.
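Stated formally, in our notation, for a labeled example $(x,y)$, loss function $\ell$, optimal hypothesis $h^{\ast}$, and current hypothesis $h_{t}$, these scores are
$\mathrm{IDS}(x,y)=\ell\big(h^{\ast}(x),y\big),\qquad\mathrm{LDS}_{t}(x,y)=\ell\big(h_{t}(x),y\big),$
so the IDS of an example is fixed, while its LDS changes as the hypothesis is updated during training.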
### 6.6 Understanding General Principles for Curriculum Design
Determining the difficulty of a training example for an agent, and ensuring
that each example presented to the agent is suitable given its current
ability, is a major challenge in curriculum learning. In most existing work,
the curriculum is generated either automatically (see Section 4.2), by
ordering samples from the target tasks or iteratively selecting intermediate
tasks with increasing difficulty tailored to the current ability of the
learner; or manually by domain experts, who will typically have specialized
knowledge of the problem domain. Very limited work (see Section 4.2.5) has
been done to better understand how non-expert humans design curricula. The way
we define curriculum design strategies still leaves a lot to be defined by
human teachers.
Can non-expert humans design effective curricula for a given final task? What
kind of curriculum design strategies do they tend to follow when building
curricula? If we could find some general principles non-expert humans follow
for designing and/or sequencing more “interesting” intermediate tasks into a
curriculum, we could incorporate these insights into the automatic process of
generating useful source tasks for any task domain. Furthermore, can we adapt
curriculum learning algorithms to better take advantage of this type of non-
expert guidance to learn more efficiently? We believe a better understanding
of the curriculum-design strategies used by non-expert humans may help us to
1) understand the general principles that make some curriculum strategies work
better than others, and 2) inspire the design of new machine-learning
algorithms and interfaces that better accommodate the natural tendencies of
human trainers.
## 7 Conclusion
This survey formalized the concept of a curriculum, and the method of
curriculum learning in the context of reinforcement learning. Curriculum
learning is a 3-part approach consisting of 1) task generation, 2) sequencing,
and 3) transfer learning. We systematically surveyed existing work addressing
each of these parts, with a particular focus on sequencing methods. We broke
down sequencing methods into five categories, based on the assumptions they
make about intermediate tasks in the curriculum. The simplest of these are
sample sequencing methods, which reorder samples from the final task itself,
but do not explicitly change the domain. These were followed by co-learning
methods, where a curriculum emerges from the interaction of several agents in
the same environment. Next we considered methods that explicitly changed the
MDP to produce intermediate tasks. Some of these assumed that the environment
dynamics stay the same, but that the initial/terminal state distribution and
reward function can change. Others made no restrictions on the differences
allowed from the target task MDP. Finally, we also discussed how humans
approach sequencing, to shed light on manually designed curricula in existing
work. Our survey of the literature concluded with a list of open problems,
which we think will serve as worthwhile directions for future work. As a
budding area in reinforcement learning, we hope that this survey will provide
a common foundation and terminology to promote discussion and advancement in
this field.
## Acknowledgments
We would like to sincerely thank Brad Knox, Garrett Warnell, and the anonymous
reviewers for helpful comments and suggestions that improved the presentation
of many ideas in this article. Part of this work has taken place in the
Learning Agents Research Group (LARG) at the Artificial Intelligence
Laboratory, The University of Texas at Austin. LARG research is supported in
part by grants from the National Science Foundation (CPS-1739964, IIS-1724157,
NRI-1925082), the Office of Naval Research (N00014-18-2243), Future of Life
Institute (RFP2-000), Army Research Office (W911NF-19-2-0333), DARPA, Lockheed
Martin, General Motors, and Bosch. The views and conclusions contained in this
document are those of the authors alone. Peter Stone serves as the Executive
Director of Sony AI America and receives financial compensation for this work.
The terms of this arrangement have been reviewed and approved by the
University of Texas at Austin in accordance with its policy on objectivity in
research. Part of this work has taken place in the Sensible Robots Research
Group at the University of Leeds, which is partially supported by the
Engineering and Physical Sciences Research Council of the UK (EP/R031193/1,
EP/S005056/1), and the British Council. Part of this work has taken place in
the Control, Robotics, Identification and Signal Processing (CRISP) Laboratory
at Tufts University which is partially supported by DARPA (W911NF-19-2-0006),
the Verizon Foundation, PTC Inc., and the Center for Applied Brain and
Cognitive Sciences (CABCS). Part of this work has taken place in the Whiteson
Research Lab at the University of Oxford, which is partially supported by the
European Research Council (ERC), under the European Union’s Horizon 2020
research and innovation programme (grant agreement number 637713). Part of
this work has taken place in the Intelligent Robot Learning (IRL) Lab at the
University of Alberta, which is supported in part by research grants from the
Alberta Machine Intelligence Institute.
# Fake News Detection with Different Models
Sairamvinay Vijayaraghavan, Zhiyuan Guo, Ye Wang, John Voong, Wenda Xu,
Armand Nasseri, Jiaru Cai, Linda Li, Kevin Vuong, and Eshan Wadhwa
###### Abstract
Problem: The problem we intend to solve is modelled as a binary classification
problem. We intend to find the relation between the words and the context in
which they appear within a text, and how that relation can be used to classify
texts as real (negative cases) or fake (positive cases).
High-level description: Many news sources contain false information and are
therefore “fake news.” Because there are many “fake news” articles and much
fabricated, misleading information on the web, we would like to determine
which texts are legitimate (real) and which are illegitimate (fake). To solve
this as a binary classification problem, we investigate the effectiveness of
different natural language processing models that convert character-based
texts into numeric representations, namely TF-IDF, CountVectorizer, and
Word2Vec, and determine which model best preserves the contextual information
of the texts in a fake news data set and how effective it is in detecting
whether a text is fake news or not.
Results: We find that of the three pre-training vectorizing algorithms,
Word2Vec performs comparatively the worst in general, and CountVectorizer
performs slightly better than the TF-IDF models in most cases. Of the five
fine-tuning algorithms, the neural networks (ANNs and LSTMs) perform better. A
combination of CountVectorizer with an LSTM achieves the best performance.
Contribution to the machine learning field: We present a simple model which
can be used to classify a given text as “real” or “fake” with high accuracy.
This approach of pre-training embedding algorithms and then fine-tuning on the
downstream supervised task (binary classification) proves to be efficient and
effective in classifying suspect news text.
## 1 Introduction
For this report, we are exploring the field of natural language processing:
the broad study of how computers and machines can understand human-to-human
communication and how machines analyze texts based on contextual information.
In particular, we are using natural language processing to classify news
articles as real news or “fake news”. Fake news is misinformation masked under
the guise of a real news article, and is used to deceptively influence
people’s beliefs.
For this report, we are classifying news articles as “real” or “fake”, a
binary classification problem: classifying each sample as positive (fake news)
or negative (not fake news). Many studies have used machine learning
algorithms to build classifiers based on features like the content and the
author’s name and job title, using models such as the convolutional neural
network (CNN), recurrent neural network (RNN), feed-forward neural network
(FFNN), long short-term memory (LSTM) network, and logistic regression to find
the most optimal model. In [1], the author built a classifier using natural
language processing with models like CNN, RNN, FFNN, and logistic regression
and concluded that the CNN classifiers could not be as competitive as the RNN
classifiers. The authors in [2] suggest that their study could be improved
with more features, such as the history of lies told by the reporter or
speaker.
Moreover, apart from the traditional machine learning methods, new models have
also been developed. One of the newer models, TraceMiner, creates an LSTM-RNN
model that infers embeddings of social media users from the social network
structure and propagates them along the paths of messages, and it has provided
high classification accuracy [5]. FAKEDETECTOR is another inference model
developed to detect the credibility of fake news, and it is considered quite
reliable and accurate [7].
There have also been studies with a different approach. One paper surveys the
current state-of-the-art technologies that are imperative when adopting and
developing fake news detection, and provides a classification of several
accurate assessment methods that analyze the text and detect anomalies [3].
These previous approaches lack the clear contextual analysis used in NLP. We
considered the semantic meaning of each word, reasoning that the presence of
particular words influences the meaning of a text. We considered this
important because the contextual meaning of the text needs to be preserved and
analyzed for better classification. Other studies emphasize the user and
features related to them. In [4], “45 features…[were used] for predicting
accuracy…across four types: structural, user, content, and temporal,” so the
features included characteristics beyond the text. Article [6] "learn[s] the
representations of news articles, creators and subjects simultaneously." In
our project, we emphasize the content by working with articles whose labels
relate only to the text, nothing outside that scope, and we have used SVM,
logistic regression, ANN, LSTM, and random forest models.
We divided this problem into three phases: pre-processing, text-to-numeric
representation conversion using pre-trained algorithms, and evaluation of the
representations using state-of-the-art machine learning models. We analyzed
the data set, and in particular the text part of the data, to understand how
it is distributed, and then converted each text into a numeric representation
using the pre-training models TF-IDF, CountVectorizer (CV), and Word2Vec (W2V).
Finally, we evaluated the numeric representations using several significant
machine learning models, such as neural networks and classical classification
algorithms, to perform the classification.
## 2 Methods
### 2.1 The Dataset
The training data set has five features: ID, title, author, text, and label.
The ID uniquely identifies the news article. The title and author are the
title and author of the news article respectively. The text is the content of
the article, and may be incomplete. The label indicates whether the article is
reliable (real) or not (fake):
label = $\begin{cases}0&\textrm{if reliable news}\\ 1&\textrm{if fake news}\end{cases}$
The training data set contains roughly 20,800 samples.
The provided test data set does not have labels, so we do not use it; instead,
a test set is randomly selected from the training data when we evaluate our
models.
In our project, since we hypothesized that the text, and the words used within
it, are key to distinguishing between real and fake news samples, we decided
to investigate only the text column.
#### 2.2.1 Removed numbers
Within the context of a news article title or text, numbers simply quantify
claims and do not change the meaning of the text. Therefore it is best to
remove all numbers to minimize noise in our data. We use the string.digits
constant in Python together with the built-in str.translate and str.maketrans
methods to map all numerical digits to the empty string, effectively removing
all digits.
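A minimal sketch of this step (the helper name `remove_digits` is ours, not
from the paper's code):

```python
import string

def remove_digits(text):
    # Map every character in string.digits to None, i.e. delete all digits.
    return text.translate(str.maketrans("", "", string.digits))

print(remove_digits("Turnout rose 12 points in 2016"))  # "Turnout rose  points in "
```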
#### 2.2.2 Removed punctuation and special characters
In addition, while pre-processing the textual data, we removed all characters
that are not textual (punctuation, extra delimiters, etc.). We used the
string.punctuation constant in Python to find all punctuation characters, and
we removed those characters from every word in the texts, with the exception
of the symbols ‘#’ and ‘@’. Because these characters are used for Twitter
hashtags and mentions, we handle them later.
Next, we removed an assortment of special characters that don’t appear on
traditional American keyboards and don’t contribute to the meaning of the
texts. The long dash (“–”), single and double Asian quotation marks, ellipsis
characters (…), and bullet points (•) were all removed for this reason.
After removing all special characters, there are still a couple of pre-
processing cases we account for. For these cases, we used regular expressions
to detect certain patterns we wish to remove. One of the patterns is Twitter
hashtags and mentions. In a news setting, Twitter hashtags and mentions are
often added to try to obtain more search results and relevance, but often
distract from the overall meaning of the news content itself. In our problem,
we are primarily concerned with words and their contextual meanings in the
text, so we assumed that these characters were unnecessary. To detect
the hashtags and mentions, we simply use regular expressions to remove all
text after a hashtag (#) or @ symbol, and stop removing text when we reach the
next space. We also use regular expressions to handle em dashes (—) and runs
of two or more consecutive spaces. Em dashes are used in various linguistic
contexts, like joining independent clauses; they do not add to the meaning of
the text, but they are surrounded by words from different clauses, so we
replaced all em dashes with a single space to maintain the integrity of each
phrase. Lastly, we replace any run of two or more consecutive spaces with just
one space.
Proceeding further, we make all of our texts lowercase and then remove all
rows that have foreign-language characters in their text, since we are only
interested in identifying fake news in English. To do this we used the package
langid in Python to identify the language of each text, and removed all rows
with foreign characters. This ensures the text we preserve consists only of
English words with no non-alphabetic characters.
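The following condensed sketch illustrates the cleaning steps above; it is an
approximation of the pipeline, not the authors' exact code, and the helper
names are ours:

```python
import re
import string

# Keep '#' and '@' for now; hashtags and mentions are stripped by regex below.
PUNCT_TO_DROP = "".join(c for c in string.punctuation if c not in "#@")

def clean_text(text):
    text = text.translate(str.maketrans("", "", PUNCT_TO_DROP))
    text = re.sub(r"[#@]\S*", "", text)  # drop hashtags/mentions up to the next space
    text = text.replace("\u2014", " ")   # replace em dashes with a single space
    text = re.sub(r" {2,}", " ", text)   # collapse runs of two or more spaces
    return text.lower().strip()

print(clean_text("BREAKING: Senate votes \u2014 details at #news @reporter"))
# -> "breaking senate votes details at"
```

(Language filtering with langid would follow as a separate pass over the rows.)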
#### 2.2.3 Removed stop words
Stop words are the most common words in a language, such as “a”, “be”,
“quite”, “should”, etc. They are often void of meaning and do not add anything
to the content, and they appear most frequently in every text. Hence, we
presumed that removing stop words would have multiple advantages. First, it
decreases memory overhead, since we cut down a huge amount of text (and hence
narrow down the number of features to train our models on). Second, it reduces
noise, since eliminating stop words lets us focus on the more meaningful
content (the more distinct features between the two classes). Removing stop
words is not always optimal; sometimes the information we are looking for is
contained in the stop words themselves, for example in most cases of language
modeling or translation, where it is important to keep all the stop words.
However, in our circumstances, we are using the semantics of the text to make
a decision, so we can safely remove stop words to focus on the more meaningful
context words.
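As a sketch, stop word removal can be done with a standard stop word list; the
paper does not name its source, so the NLTK list here is an assumption:

```python
from nltk.corpus import stopwords  # assumes nltk is installed and "stopwords" downloaded

STOP_WORDS = set(stopwords.words("english"))

def remove_stop_words(text):
    # Keep only the tokens that are not in the English stop word list.
    return " ".join(w for w in text.split() if w not in STOP_WORDS)

print(remove_stop_words("this is quite a story about the election"))
# -> "quite story election"
```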
### 2.3 Data Distribution
We performed some analysis on the text to understand how it is distributed. We
analyzed and represented our data (text) distribution from a few different
perspectives: we graphed its sentiment polarity, its most popular unigrams and
bigrams, and the distribution of word types (parts of speech). We compare the
graphs before and after preprocessing, which includes stop word removal and
the removal of punctuation, special characters, and numbers.
#### 2.3.1 Sentiment Polarity
(Figure: polarity graphs before pre-processing)
(Figure: polarity graphs after pre-processing)
Both before and after pre-processing, the distributions of sentiment polarity
for fake news and real news are mostly the same. For both fake and real news,
there are slightly more positive articles than negative ones. However, there
is a noticeable difference in polarity: although not by much, fake news is a
little more polar than real news, with more outliers and a slightly wider
spread.
#### 2.3.2 Part of Speech Distribution
(Figure: part-of-speech graphs before pre-processing)
(Figure: part-of-speech graphs after pre-processing)
Although the differences are slight, there is a difference in the
part-of-speech distributions of real and fake news. Fake news has a higher
percentage of adverbs and adjectives relative to the other parts of speech,
and a lower percentage of proper nouns; real news has a higher percentage of
pronouns. We can interpret this as fake news containing more adverbs and
adjectives, and real news containing more pronouns. Perhaps this indicates
that fake news is more likely to use adverbs and adjectives to embellish its
sentences, while real news uses more pronouns as references to establish its
legitimacy.
#### 2.3.3 Unigram and Bigram
Unigrams
| Real News (Before) | Real News (After) | Fake News (Before) | Fake News (After) |
| --- | --- | --- | --- |
| the | nt | the | nt |
| to | trump | to | trump |
| of | people | of | people |
| and | clinton | and | clinton |
| in | hillary | in | hillary |
| that | said | that | said |
| for | like | is | like |
| on | new | for | new |
| he | time | it | time |
| is | world | on | world |
| it | state | as | state |
| was | election | with | election |
| said | government | are | government |
| mr | president | this | president |
| with | war | by | war |
| as | years | before | years |
| his | states | was | states |
| at | american | you | american |
| by | obama | have | obama |
| from | media | they | media |
Bigrams
| Real News (Before) | Real News (After) | Fake News (Before) | Fake News (After) |
| --- | --- | --- | --- |
| of the | mr trump | of the | hillary clinton |
| in the | united states | in the | donald trump |
| to the | new york | to the | united states |
| on the | mr trumps | on the | white house |
| mr trump | white house | and the | new york |
| at the | donald trump | that the | hillary clintons |
| and the | mrs clinton | to be | clinton campaign |
| that the | said mr | for the | clinton foundation |
| to be | york times | it is | secretary state |
| he said | islamic state | with the | nt know |
| with the | mr obama | from the | american people |
| from the | breitbart news | by the | mainstream media |
| by the | president trump | at the | foreign policy |
| it was | years ago | hillary clinton | bill clinton |
The comparison between the top unigrams and bigrams before and after
preprocessing demonstrates that our decision to remove stop words was the
correct choice. Before preprocessing, the top unigrams and bigrams consist
almost entirely of stop words, in other words, filler words that do not supply
any useful information.
After removing the stop words, we can see that the top unigrams and bigrams
become much more specific.
### 2.4 Unsupervised Pre-training to encode our texts into numeric
representations
#### 2.4.1 Natural Language Processing Models
After the texts have been cleaned, they are mapped into numeric vector
representations using three pre-training algorithms (CountVectorizer,
TF-IDFVectorizer, and Word2Vec). Each sample, originally consisting of all
text, is converted into a vector of features. Since only the text is passed
into these pre-training algorithms, this stage is unsupervised. In the cases
of CountVectorizer and TfidfVectorizer, the number of features is clipped at
10,000 to avoid memory overrun and overfitting due to the otherwise very large
number of features (the vocabulary).
#### 2.4.2 CountVectorizer
The CountVectorizer provides a simple way to both tokenize a collection of
text documents and build a vocabulary of known distinct words, as well as to
encode new documents using that vocabulary [13].
Given a collection of text documents, $S$ , CountVectorizer will generate a
sparse matrix $A$ of size $m$ by $n$, where $m=$ total number of documents,
$n=$ total number of distinct words used in $S$.
$A=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix}$
This matrix is the bag-of-words count representation of the different words
present in the corpus: entry $a_{ij}$ is the total number of times the $j$th
word appears in the $i$th document.
We converted the sparse matrix into a dense one. Many of the distinct words in
the corpus are absent from any given sample, so the matrix is largely
populated with zeros; we used the todense() method call, which returns a dense
representation of the sparse matrix, for the downstream models.
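A minimal sketch of this step with a toy two-document corpus (scikit-learn is
assumed, since CountVectorizer and the todense() call match its API):

```python
from sklearn.feature_extraction.text import CountVectorizer

texts = ["clinton said the vote was rigged", "trump said the media lies"]
cv = CountVectorizer(max_features=10000)  # vocabulary clipped at 10000 features
A = cv.fit_transform(texts)               # sparse m-by-n count matrix
A_dense = A.todense()                     # dense representation used downstream
```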
#### 2.4.3 TF-IDFVectorizer
Although TF-IDF is an old algorithm, it is simple and effective to use in the
pre-training phase [11]. The computation of TfidfVectorizer involves computing
the product of term frequency and inverse document frequency. As the term
implies, TF-IDF calculates a value for each word in a document through an
inverse proportion of the frequency of the word in a particular document to
the percentage of documents the word appears in [12].
The term frequency $tf(t,d)$ is the proportion of times that the term $t$
appears in the document $d$, where $V(d)=\sum_{t}n(t,d)$ is the total number
of term occurrences in $d$. Thus, if a word $t^{\prime}$ does not appear in a
document $d^{\prime}$, the term frequency $tf(t^{\prime},d^{\prime})$ is zero.
The idea of the term frequency is essentially the same as CountVectorizer.
$tf(t,d)=\frac{n(t,d)}{V(d)},\qquad n(t,d)=\textrm{occurrences of the word }t\textrm{ in the document }d$
Given a document collection $D$, the inverse document frequency $idf(t,D)$ is
the log of the number of documents $N$ divided by $df(t,D)$, the number of
documents $d\in D$ containing the term $t$. As a result, common words in $D$
will have a low inverse document frequency score, while infrequent words will
have a high one. The resulting TF-IDF score is therefore likely to help
separate fake news, which often contains less common (even ungrammatical)
words, from real news, which usually consists of common words.
$idf(t,D)=\log\left(\frac{N}{df(t,D)}\right)$
In summary, the TF-IDF score $w(t,d)$ for a word increases with its count, but
is counteracted if the word appears in too many documents.
$w(t,d)=tf(t,d)\times idf(t,D)$
Similar to CountVectorizer, we found that most of the entries within the
matrix were 0. Hence, we used the todense() call to obtain the dense
representation of the sparse TF-IDF matrix.
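A matching sketch for the TF-IDF features, reusing the toy corpus from the
CountVectorizer example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(max_features=10000)
W = tfidf.fit_transform(texts).todense()  # entries are the w(t, d) scores
```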
#### 2.4.4 Word2Vec
Word2Vec is another state-of-the-art model used to represent words as vectors.
Word2Vec is a simple neural network which essentially tries to predict a word
from the surrounding words within a context. Word2Vec represents each word in
the corpus as a vector, and this vector representation is given by the weights
of the connections from that word’s input-layer node to the hidden-layer
neurons. This vector mainly encodes the contextual information of the
particular word within the corpus (collection of texts) on which we train our
Word2Vec model.
In this project, we trained the Word2Vec model on our own corpus. We did this
because we felt the corpus contained very specific words whose contextual
meanings differ from their general usage; hence, we chose to train on the
existing texts in our corpus rather than use pre-trained Word2Vec models such
as Google’s. For training our Word2Vec models, we set the minimum count to the
average number of words per text, since we believed that texts shorter than
the mean length carry less context, and we rejected those sentences for
training. We used the default vector size of 100 features, since we wanted to
analyze a small number of features.
For this project, we decided on a very simple and plain approach: we obtained
the vector for each text by summing the vector representations of each word in
the text, including a word only if it belongs to the Word2Vec model’s
vocabulary. The summed vector is finally divided by the number of words in the
text; we wanted to make sure that the length of the text does not affect the
vector embedding, so we normalized our Word2Vec embedding in this way.
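A sketch of training and averaging with gensim (an assumption, since the paper
does not name its Word2Vec implementation); min_count=1 is used here only so
the toy corpus trains, whereas the paper set the minimum count to the mean
text length:

```python
import numpy as np
from gensim.models import Word2Vec

tokenized = [t.split() for t in texts]                   # toy corpus from above
w2v = Word2Vec(tokenized, vector_size=100, min_count=1)  # 100 features, as in the paper

def text_vector(words, model):
    # Average the embeddings of words the model knows; this normalizes for length.
    vecs = [model.wv[w] for w in words if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X_w2v = np.array([text_vector(t, w2v) for t in tokenized])
```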
### 2.5 Outlier Removal
For outlier removal, the Isolation Forest algorithm isolates observations by
randomly selecting a feature and then randomly selecting a split value between
the maximum and minimum values of the selected feature. In Isolation Forest,
an anomaly score can be calculated as the number of conditions required to
separate a given observation.
In our outlier detection and removal, Isolation Forest was applied to the
three different feature sets generated from TF-IDF, CountVectorizer, and
Word2Vec. The percentage of outliers in each feature set was calculated, and a
bar graph of the percentages of training outliers is included.
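A minimal sketch of the outlier filter with scikit-learn's IsolationForest
(default settings; the paper's exact parameters are not stated):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

X = np.asarray(A_dense)                                   # any one of the three feature sets
labels = IsolationForest(random_state=0).fit_predict(X)   # +1 inlier, -1 outlier
X_inliers = X[labels == 1]
print(f"fraction of outliers: {np.mean(labels == -1):.1%}")
```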
### 2.6 Fine-tuning
Once the representations of the text are pre-trained by the previous
unsupervised learning, they are fed into 5 different models to perform
supervised learning on the downstream task. In this case, the downstream task
is the binary classification of the news as either real or fake. A k-fold
prediction error is obtained from each of the 5 models, and since we have 3
different pre-training models, we have a total of 15 models to compare.
#### 2.6.1 Artificial Neural Network (ANN)
We trained simple artificial neural networks containing an input layer, a
number of hidden layers (specified by a hyperparameter) in which each hidden
layer has the same number of neurons and the same activation function, and an
output layer with just one node for the classification (real or fake) using
sigmoid as its activation function. We chose sigmoid for the output layer
activation and binary_crossentropy as the loss because this is a binary
classification problem: with a single output node, the normalization provided
by softmax is unnecessary, so we applied sigmoid to the output layer. We
performed a grid search strategy to find the best hyperparameters, such as the
activations, optimizers, and number of hidden layers and hidden neurons. We
used the Keras Sequential model with Dense layers, which connect every node to
every node of the next layer.
Due to the limitation of computing resources, the grid search for the neural
networks was divided into three sequential steps. Instead of performing a grid
search over all the hyperparameters at once, we searched over the activations
for the hidden layers, then the optimizers, and then the number of hidden
layers and hidden neurons (done together). We coupled the number of hidden
layers and the number of neurons because we believed these hyperparameters
interact with each other in improving model training. We also used a K-fold
split with 3 splits at each step and picked the hyperparameters that rendered
the highest accuracy.
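A sketch of the ANN architecture in Keras; the defaults below mirror the best
CountVectorizer hyperparameters reported in the Results section, and the
builder function itself is ours:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_ann(n_features, hidden_layers=2, neurons=600, activation="relu"):
    model = Sequential()
    model.add(Dense(neurons, activation=activation, input_shape=(n_features,)))
    for _ in range(hidden_layers - 1):
        model.add(Dense(neurons, activation=activation))
    model.add(Dense(1, activation="sigmoid"))  # single sigmoid output node
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```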
#### 2.6.2 Long Short Term Memory networks (LSTMs)
Long short-term memory networks (LSTMs) are a special kind of recurrent neural
network (RNN) introduced by Hochreiter & Schmidhuber (1997) [8].
(Figure from Christopher Olah, “Understanding LSTM Networks.”)
The chain-like nature of an RNN allows information to be passed from the
beginning all the way to the end. The prediction at time step $t$ depends on
all previous predictions at time steps $t^{\prime}<t$. However, when a typical
RNN is used in a larger context (i.e., over relatively many time steps), it
suffers from the vanishing gradient problem [9]. LSTMs, a special kind of RNN,
can solve this long-term dependency problem.
(Figure from Christopher Olah, “Understanding LSTM Networks.”)
Each cell in a typical LSTM network contains 3 gates (i.e., the forget gate,
input gate, and output gate) to decide whether or not information should be
maintained in the cell state $C_{t}$.
For CountVectorizer and TfidfVectorizer, each sample of text is converted into
a 1-d feature vector of size 10000. As a result, the number of time steps
(i.e., the maximum number of word vectors per sample) for these two can only
be set to 1, as the pre-trained representations are done at the sample level.
By contrast, the number of time steps for Word2Vec can either be 1, if we
simply take an average of the word embeddings, or the length of the sentence,
where each word has its own embedding and the pre-trained representations are
done at the word level. We choose the approach with 1 time step in our model
because it requires less computational power. Meanwhile, we also try the
sentence-length approach with 200 time steps, chosen because 200 is close to
the mean number of words per sample and is a fairly common choice in practice.
However, since we do not have enough computational power to fine-tune (grid
search) that model, we leave it out of our main model and include it only in
the final section.
In the LSTM layer, a dropout rate of 0.2, a common choice in practice [10], is
used to prevent overfitting. A grid search is performed in order to pick
decent values of the hyperparameters, including the number of hidden units in
the LSTM layer, the number of hidden layers, the activation functions and the
number of nodes in the hidden layers, and the optimizer. Relatively small
numbers of hidden layers (i.e., {0, 1, 2}) and nodes (i.e., {200, 400, 600})
are selected as the basis for the grid search, because this is a simple binary
classification task and too many of them would cause overfitting.
Due to the limitation of computing resources, the grid search for the LSTMs
was divided into four sequential steps. Instead of performing a grid search
over all the hyperparameters at once, the grid search is first done on the
number of hidden layers, with all other hyperparameters randomly selected from
the subsets above. Then, the grid search is done on the number of nodes in the
hidden layer(s), using the best number of hidden layers found in step 1. The
grid search completes when all four steps are finished. In each step we used
K-fold cross validation with $K=3$.
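A sketch of the one-time-step LSTM in Keras (again with defaults taken from
the best CountVectorizer configuration in the Results section; the builder is
ours):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def build_lstm(n_features, memcells=200, hidden_layers=2, neurons=200):
    model = Sequential()
    # One time step: each sample is fed in as a single pre-trained feature
    # vector, so inputs are reshaped to (n_samples, 1, n_features) beforehand.
    model.add(LSTM(memcells, dropout=0.2, input_shape=(1, n_features)))
    for _ in range(hidden_layers):
        model.add(Dense(neurons, activation="sigmoid"))
    model.add(Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```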
#### 2.6.3 Random Forest
A random forest is an ensemble classifier that makes predictions based on a
combination of different decision trees: it fits a number of decision tree
classifiers on various subsamples of the dataset, with each tree in the forest
built on a random subset of features. In the end, it selects the best subset
of features among all the random subsets considered.
In our project, 3 random forest models were fit on the CountVectorizer,
TF-IDF, and Word2Vec features. The random forest algorithm requires 4
hyperparameters to tune: the number of trees in the forest (i.e., {200, 400,
800}); the maximum depth of each tree (i.e., {1, 5, 9}); the minimum number of
samples required to be at a leaf node (i.e., {2, 4}), which has the effect of
smoothing the model, especially in regression; and the minimum number of
samples required to split an internal node (i.e., {5, 10}). All parameters
were included in the grid search, and the best set of parameters was
determined using K-fold cross validation with $K=3$.
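A sketch of this grid search with scikit-learn (the same GridSearchCV pattern
applies, with different grids, to the logistic regression and SVM models
below):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [200, 400, 800],
    "max_depth": [1, 5, 9],
    "min_samples_leaf": [2, 4],
    "min_samples_split": [5, 10],
}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=3)
# search.fit(X_train, y_train); search.best_params_ gives the winning set
```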
#### 2.6.4 Logistic Regression
Logistic regression is a statistical machine learning algorithm that
classifies data by modeling the probability of the outcome, providing a
discriminatory boundary between classes. Compared to another simple model,
linear regression, which would require a hard threshold for classification,
logistic regression avoids fixed threshold values by producing a logistic
curve, limited to values between 0 and 1, through the sigmoid function applied
at the end.
In regards to our project, three logistic regressions were fit on the
CountVectorizer, TF-IDF, and Word2Vec features. We did a grid search on the
solvers, including newton-cg, sag, lbfgs, and liblinear. A grid search was
also performed on the inverse regularization parameter $C$, drawn from a
logarithmically spaced grid (parameterized by {0, 4, 10}, i.e., 10 values
spaced logarithmically from $10^{0}$ to $10^{4}$). The best parameter sets
were determined using K-fold cross validation with $K=3$.
#### 2.6.5 Support Vector Machine (SVM)
SVM is a supervised machine learning algorithm in which a hyperplane is
created in order to separate and categorize features. The optimal hyperplane
is usually calculated by creating support vectors on both sides of the
hyperplane in which each vector must maximize the distance between each other.
In other words, the larger the distance between each vector around the
hyperplane, the more accurate the decision boundary will be between the
categories of features.
In our project, we fit three support vector machines on the CountVectorizer,
TF-IDF, and Word2Vec features. An SVM requires specific hyperparameters such
as the kernel type, $C$, and the maximum number of iterations; in our case, we
needed to determine the optimal $C$ as well as the optimal kernel for each
fit. A grid search over kernel types (linear and rbf) and $C$ values (0.25,
0.5, and 0.75) was performed, with each candidate evaluated using K-fold cross
validation with $K=3$, as in the sketch below.
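The SVM search follows the same pattern as above; a minimal sketch with
placeholder features is shown here.

```python
# Sketch: SVM grid search over kernel type and C with 3-fold CV.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)  # placeholder features
param_grid = {"kernel": ["linear", "rbf"], "C": [0.25, 0.5, 0.75]}
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```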
## 3 Results
Table: Grid Search Results (best hyperparameters per model and text representation)

| Model | CountVectorizer | TF-IDF | Word2Vec |
|---|---|---|---|
| SVM | Kernel = Linear, C = 0.25 | Kernel = Linear, C = 0.75 | Kernel = Linear, C = 0.75 |
| Logistic Regression | Solver = sag, C = 21.54 | Solver = sag, C = 7.74 | Solver = newton-cg, C = 3593.81 |
| Random Forest | Max depth = 9, Min_samples_leaf = 2, Min_samples_split = 10, N_estimators = 200 | Max depth = 9, Min_samples_leaf = 4, Min_samples_split = 5, N_estimators = 400 | Max depth = 9, Min_samples_leaf = 2, Min_samples_split = 10, N_estimators = 400 |
| ANN | Activation = relu, Optimizer = Adam, Hidden layers = 2, Num neurons = 600 | Activation = sigmoid, Optimizer = Adam, Hidden layers = 3, Num neurons = 400 | Activation = relu, Optimizer = Adam, Hidden layers = 1, Num neurons = 600 |
| LSTM | Activation = sigmoid, Optimizer = Adam, Hidden layers = 2, Memcells = 200, Num neurons = 200 | Activation = sigmoid, Optimizer = Adam, Hidden layers = 2, Memcells = 200, Num neurons = 600 | Activation = relu, Optimizer = Adam, Hidden layers = 2, Memcells = 200, Num neurons = 600 |
Table: Mean Test Scores (3-fold CV)

| | SVM | ANNs | LSTMs | Logistic Regression | Random Forest |
|---|---|---|---|---|---|
| CountVectorizer | 93.06% | 94.29% | 94.88% | 94.45% | 87.64% |
| TF-IDF | 94.58% | 93.73% | 93.89% | 94.79% | 87.64% |
| Word2Vec | 91.17% | 93.06% | 92.29% | 91.30% | 88.60% |
Figure: ANN loss and accuracy curves.
Figure: LSTM loss and accuracy curves.
The models are evaluated using 3-fold cross validation. Out of the fifteen
models, CountVectorizer with LSTMs performs the best. Word2Vec performs the
worst among the three pre-training algorithms, and random forest performs the
worst among the five fine-tuning algorithms.
## 4 Discussion
Among our three pre-training models, CountVectorizer generally achieves the
best performance, and Word2Vec performs relatively poorly. The essential idea
behind both CountVectorizer and TF-IDF is to compute a score that depends on
the frequency of each vocabulary word. However, compared to CountVectorizer,
TF-IDF includes an extra inverse document frequency factor that penalizes
words appearing frequently across documents, so that scores represent the
importance of a word within a document. The results may imply that, even
though the penalization is smoothed by a log function, the penalty may still
be too strong.
The results also show that, in general, the neural networks consistently do
best, as they serve as powerful universal approximators. However, the loss and
accuracy plots show that we are using too many epochs and thus have an
overfitting issue. This is because our pre-training model is already very
strong, so it learns a good contextual representation of the text; as a
result, few epochs are needed for the downstream task. In addition, one thing
to note is that logistic regression also performs very well, which implies
that our data are mostly linearly separable. While neural networks can fit the
data very well, they run the risk of overfitting it; as a result, neural
networks are not as good as SVM and logistic regression for TF-IDF.
A combination of CountVectorizer and LSTMs is the best among all the models.
While LSTMs with one timestep are very similar to ANNs in terms of
architecture, LSTMs have gates and a tanh activation function inside the
module; this design difference may let LSTMs perform slightly better than
ANNs.
Word2Vec does not perform well. One reason is that we simply take an average
of the word embedding vectors to get a generalized vector representation of
each paragraph sample; taking an average fails to represent the dependencies
between words, as sketched below. Another reason is that we do not use
pre-trained Word2Vec embeddings available online from a huge corpus but
instead build our own from the dataset. While we thought that building our own
Word2Vec would make the model specific to this task, the results show that
Word2Vec may need to be built from a larger dataset.
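For illustration, the following gensim sketch shows the averaging scheme; the
toy sentences are placeholders, and in our pipeline the model would be the
Word2Vec model trained on the training corpus.

```python
# Sketch: averaged-Word2Vec paragraph representation (word order is lost).
import numpy as np
from gensim.models import Word2Vec

sentences = [["fake", "news", "spreads", "fast"],
             ["this", "story", "is", "real"]]          # toy corpus
model = Word2Vec(sentences, vector_size=50, min_count=1)

def paragraph_vector(tokens, model):
    # Average the vectors of in-vocabulary words; dependencies between
    # words are not represented, which is one reason Word2Vec underperforms.
    vecs = [model.wv[w] for w in tokens if w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

print(paragraph_vector(sentences[0], model).shape)     # (50,)
```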
## 5 Conclusion
This report provides a fairly simple approach to encoding texts and shows how
the presence of words in general impacts the classification of texts as real
or fake.
We achieved high accuracy with most of our algorithms, and in particular the
neural networks generally do better than the others.
It is worth noting that our LSTMs only use a timestep of 1 and are essentially
multi-layer perceptrons. Still, as mentioned in the LSTM methods section,
LSTMs with real recurrence can be obtained by using Word2Vec representations
at the word level. In this case, each word has its own vector, and a sample is
a collection of vectors and thus a 2-D matrix. As mentioned before, each
vectorized word becomes a timestep, and a total of 200 timesteps is used (if a
paragraph has more than 200 words, only the first 200 words are kept). We ran
this model as a preliminary experiment.
The results seem solid, but this approach is not included in our final models
because it takes too much time to run and we did not have time to fine-tune
its hyperparameters. In future work, we believe that using LSTMs with real
recurrence will give even better results.
While we achieve strong performance on this dataset, the question remains
whether our best model (CountVectorizer with LSTMs) can still perform well on
tasks that classify news into more than two categories, such as the Fake News
Challenge. In that case, a simple unidirectional LSTM may not perform as well
and may need to be replaced by a bidirectional one. In addition, it would be
interesting to know how well our pre-trained models perform on other
downstream tasks, such as spam detection. Lastly, in our models, the
pre-training is done on the given dataset (which makes the model specific to
the task) instead of on a big corpus available online, such as Google's
pre-trained Word2Vec model. If the task were a classification into four or
eight categories, a model pre-trained on a large corpus might perform better,
as it has been pre-trained on more words.
We can also try to improve the training by using different word embeddings. We
chose only three types of embeddings; we could have tried others, such as
GloVe, whose features depend entirely on context words. Different forms of
encoding texts could likewise be trained with these algorithms to achieve a
better model.
State-of-the-art pre-trained models can be used if the task is no longer a
binary classification. Models like the Transformer and BERT would be strong
candidates, as they learn a very strong representation that takes the context
into account when computing an embedding for a word. Unlike LSTMs, whose
sequential nature prohibits parallelization, the Transformer and BERT achieve
parallelization by replacing recurrence with the attention mechanism; thus,
they require less computation power and can be easily fine-tuned on downstream
tasks.
## 6 Appendix
## Github Repo
https://github.com/Sairamvinay/Fake-News-Dataset
## Author Contributions
Sairamvinay Vijayaraghavan: Project Planning, Problem Formation, DataSet
Search, POS Distribution graph, Code for CountVectorizer, Word2Vec, ANN,
Randomforest,To parse csv files (readdata), Code integration for
TextVectorizer, Grid Search model running, ROC model running, Code Base
Cleanup and management (further cleanup), PowerPoint Checking, Report Analysis
for W2V, ANN, Report editing
Zhiyuan Guo: Project Planning, DataSet Search, Polarity Graphs, Code for LSTM,
RandomForest, Adding Functionality and Readability in each of the scripts,
Code Integration, Grid Search model running, ROC model running, PowerPoint
Development, Report Analysis for TFIDF and LSTM, Report Analysis for the
Abstract, the Discussion, Conclusion, Pipeline Diagram, Report editing
Ye Wang: Project Planning, DataSet Search, Code for TFIDF, PCA, Grid Search
model running, ROC model running, Report Integration into Latex, Report
Analysis of the Results (table creations), Report Analysis for the Outlier
Removal, Random Forest, Report editing
John Voong: Word2Vec, DataCleanup (StopWord Cleanup), Grid Search model
running, ROC model running, PowerPoint Development, Report Analysis for W2V,
Pipeline Diagram, Report editing, Paper structure
Wenda Xu: Code for PCA, ROC model running, Code Base Cleanup and management,
PowerPoint Development, Report Analysis about Count Vectorizer, Report
Analysis about Logistic Regression
Armand Nasseri: Project Planning, Dataset search, Code for SVM, Data Cleanup
(StopWord Cleanup), ROC model running, PowerPoint Development, Report Analysis
about SVM
Jiaru Cai: Outlier Removal, Accuracy and Loss Plots for Neural Network,
PowerPoint Framework
Kevin Vuong: DataCleanup (remove punctuations), Code for Logistic Regression,
Grid Search model running, PowerPoint Cleanup, Report Analysis about Data
Cleanup, Introduction and Abstract
Linda Li: Unigram and Bigram analysis, Code for ROC plots, Report Analysis of
the Data Cleanup section, Graph analysis
Eshan Wadhwa: Related Work, References and Citation (Introduction and Field
research), Report Editing, PowerPoint slides.
# Investigating an approach for low resource language dataset creation,
curation and classification: Setswana and Sepedi
###### Abstract
The recent advances in Natural Language Processing have been a boon for well
represented languages in terms of available curated data and research
resources. One of the challenges for low-resourced languages is the lack of
clear guidelines on the collection, curation and preparation of datasets for
different use-cases. In this work, we take on the task of creating two
datasets that are focused on news headlines (i.e short text) for Setswana and
Sepedi and creation of a news topic classification task. We document our work
and also present baselines for classification. We investigate an approach to
data augmentation, better suited to low-resource languages, to improve the
performance of the classifiers.
Vukosi Marivate${}^{1}{}^{,}{}^{2}$, Tshephisho Sefara2, Vongani Chabalala3,
Keamogetswe Makhaya4, Tumisho Mokgonyane5, Rethabile Mokoena6, Abiodun
Modupe${}^{7}{}^{,}{}^{1}$
---
University of Pretoria1, CSIR2, University of Zululand3, University of Cape
Town4,
University of Limpopo5, North-West University6, University of the
Witwatersrand7
<EMAIL_ADDRESS><EMAIL_ADDRESS>
## 1\. Introduction
The most pressing issue with low-resource languages is the insufficiency of
language resources. In this study, we introduce an investigation for
low-resource languages that provides automatic formulation and customization
of new capabilities from existing ones. While there are more than six thousand
languages spoken globally, the availability of resources for each is
extraordinarily unbalanced [Nettle, 1998]. For example, if we focus on
publicly available annotated language resources: as of November 2019, the AG
corpus (http://groups.di.unipi.it/~gulli) released about $496,835$ news
articles, in English only, from more than $200$ sources; the Reuters News
Dataset [Lewis, 1997] comprises roughly $10,788$ annotated texts from the
Reuters financial newswire; and the New York Times Annotated Corpus [Sandhaus,
2008] holds over $1.8$ million articles, while low-resource languages have no
comparably standardized annotated corpora. Google Translate only supports
around 100 languages [Johnson et al., 2017]. Most research attention focuses
on a small number of languages, neglecting the $17\%$ of the world's languages
labeled as low-resource [Strassel and Tracey, 2016], for which it remains
challenging to develop the various mechanisms of Natural Language Processing
(NLP).
In South Africa, several of the news websites (private and public) publish in
English, even though there are 11 official languages (including English). We
list the top premium newspapers by circulation as per the first quarter of
2019 [Bureau of Circulations, 2019] in Table 1. There is no distinct, diverse
collection of language datasets; most of the reported datasets exist in
English, Afrikaans and isiZulu. In this work, we aim to provide a general
framework that enables us to create an annotated linguistic resource for
Setswana and Sepedi news headlines. We use news headlines from the South
African Broadcast Corporation (SABC) (http://www.sabc.co.za/), their social
media streams and a few audio news bulletins. Unfortunately, we do not have
any direct access to full news reports, so we hope this study will promote
collaboration between the national broadcaster and NLP researchers.
Table 1: Top newspapers in South Africa with their languages

Paper | Language | Circulation
---|---|---
Sunday Times | English | 260132
Soccer Laduma | English | 252041
Daily Sun | English | 141187
Rapport | Afrikaans | 113636
Isolezwe | isiZulu | 86342
Sowetan | English | 70120
Isolezwe ngeSonto | isiZulu | 65489
Isolezwe ngoMgqibelo | isiZulu | 64676
Son | Afrikaans | 62842
The rest of the paper is organized as follows. Section 2. discusses prior work
on building local corpora in South Africa and how they have been used.
Section 3. presents the proposed approach to build a local news corpus and
annotate it with categories. From there, we focus on ways to gather data for
vectorization and for building word embeddings (which need an expanded
corpus); we also release pre-trained word embeddings for the two local
languages as part of this work [Marivate and Sefara, 2020a]. Section 4.
investigates building classification models for the Setswana and Sepedi news
headlines and improving those classifiers using a two-step augmentation
approach inspired by work on hierarchical language models [Yu et al., 2019].
Finally, Section 5. concludes and proposes a path forward for this work.
## 2\. Prior Work
Creating sizeable language resources for low-resource languages is important
for improving the data available for study [Zoph et al., 2016] and for
cultural preservation. If we focus our attention on the African continent, we
note that there are few annotated datasets openly available for tasks such as
classification. In South Africa, the South African Centre for Digital Language
Resources (SADILAR, www.sadilar.org) has worked to curate datasets of local
South African languages. Gaps remain, such as access to large corpora and to
data from sources such as broadcasters and news organizations, which have
sizeable catalogs that are yet to make it into the public domain. In this
work, we aim to fill such a gap by collecting, annotating and training
classifier models for news headlines in Setswana and Sepedi. As the data that
we could find publicly is still small, we also have to deal with the
challenges of machine learning on small data.
Machine learning systems perform poorly in the presence of small training sets
due to overfitting. To avoid this problem, data augmentation can be used. The
technique is well known in the field of image processing [Cubuk et al., 2019].
Data augmentation refers to augmenting the training set with artificial,
generated training examples. The technique is used less frequently in NLP, but
a few studies have applied it.
Silfverberg et al. (2017) use data augmentation to counteract overfitting in a
recurrent neural network (RNN) encoder-decoder implemented specifically for a
low-resource setting. The authors apply data augmentation by finding words
that share a word stem (for example, fizzle and fizzling share fizzl) and then
replacing the stem with another string.
Zhang et al. (2015) apply data augmentation by using synonyms as substitute
words for the original words. However, Kobayashi (2018) notes that synonyms
are very limited and that synonym-based augmentation cannot produce numerous
different patterns from the original texts. Hence, Kobayashi (2018) proposes
contextual data augmentation, replacing words with other words predicted by a
language model given the context surrounding the original words to be
augmented.
While Wei and Zou (2019) state that these techniques are valid, they observe
that the techniques are not often used in practice because they have a high
cost of implementation relative to the performance gain. They instead propose
easy data augmentation (EDA) techniques for boosting performance on text
classification tasks, involving synonym replacement, random insertion, random
swap, and random deletion of words. The authors observed good performance when
using increasing fractions of the dataset (1% through 100%): as the data size
increases, accuracy increases for both augmented and original data. The
original data obtained its highest accuracy of 88.3% at 100% of the data,
while the augmented data obtained 88.6% at only 50% of the data.
In this work, we investigate the development of a two-step text augmentation
method in order to improve classification models for Setswana and Sepedi. To
do this, we first had to identify a suitable data source, collect the data,
and then annotate the datasets with news categories. After the data was
collected and annotated, we worked to create classification models from the
data as-is and then using a word embedding and document embedding augmentation
approach.
## 3\. Developing news classification models for Setswana and Sepedi
Here we discuss how data was collected as well as the approach we use to build
classification models.
### 3.1. Data Collection, Cleaning and Annotation
Before we can train classification models, we first have to collect data for 2
distinct processes. First, we present our collected news dataset as well as
its annotation. We then discuss how we collected larger datasets for better
vectorization.
#### 3.1.1. News data collection and annotation
The news data we collected is from the SABC (http://www.sabc.co.za/) Facebook
pages. The SABC is the public broadcaster for South Africa. Specifically, data
was collected from Thobela FM, an SABC Sepedi radio station
(https://www.facebook.com/thobelafmyaka/), and Motsweding FM, an SABC Setswana
radio station (https://www.facebook.com/MotswedingFM/). We scraped the news
headlines that are published as posts on both Facebook pages. We claim no
copyright for the content but used the data for research purposes. We
summarize the datasets in Table 2 and visualize the token distributions in
Sepedi and Setswana in Figures 1 and 2, respectively.
Table 2: News Data Sets

| Setswana | Sepedi
---|---|---
Corpus Size (headlines) | 219 | 491
Number of Tokens (words) | 1561 | 3018
Figure 1: Setswana Wordcloud
Figure 2: Sepedi Wordcloud
As can be seen, the datasets are relatively small; as such, we have to look at
other ways to build vectorizers that generalize better, since the word token
diversity is very low.
We annotated the datasets by categorizing the news headlines into: _Legal_ ,
_General News_ ,_Sports_ , _Other_ , _Politics_ , _Traffic News_ , _Community
Activities_ , _Crime_ , _Business_ and _Foreign Affairs_. Annotation was done
after reading the headlines and coming up with categories that fit both
datasets. We show the distribution of the labels in both the Setswana and
Sepedi data sets in Figures 3 and 4 respectively. For this work, we only
explore single label categorization for each article. It remains future work
to look at the multi-label case. As such, there might be some noise in the
labels. Examples from the Sepedi annotated news corpus are shown next:
> _Tsela ya N1 ka Borwa kgauswi le Mantsole Weighbridge ka mo Limpopo ebe e
> tswaletswe lebakanyana ka morago ga kotsi yeo e hlagilego._ Traffic
>
> _Tona ya toka Michael Masutha,ore bahlankedi ba kgoro ya ditirelo tsa
> tshokollo ya bagolegwa bao ba tateditswego dithieeletsong tsa khomisene ya
> go nyakisisa mabarebare a go gogwa ga mmuso ka nko,ba swanetse go hlalosa
> gore ke ka lebaka la eng ba sa swanelwa go fegwa mesomong_ Legal
Figure 3: Setswana news title category distribution
Figure 4: Sepedi news title category distribution
The full dataset is made available online [Marivate and Sefara, 2020b] for
further research use and improvements to the annotation
(https://zenodo.org/record/3668495). As previously discussed, we used larger
corpora to create language vectorizers for downstream NLP tasks. We discuss
this next.
#### 3.1.2. Vectorizers
Before turning to the annotated dataset, we needed to create pre-trained
vectorizers in order to be able to build classifiers that generalize better
later on. For this reason we collected different corpora for each language in
such a way that we could create Bag of Words, TFIDF, Word2Vec [Mikolov et al.,
2013] and FastText [Bojanowski et al., 2017] vectorizers (Table 3). We also
make these vectorizers available for other researchers to use; a training
sketch follows the table.
Table 3: Vectorizer Corpora Sizes in number of lines (number of tokens)

Source | Setswana | Sepedi
---|---|---
Wikipedia (https://tn.wikipedia.org/, https://nso.wikipedia.org/) | 478 (_21924_) | 300 (_10190_)
JW300 (http://opus.nlpl.eu/JW300.php) | 874464 (_70251_) | 618275 (_53004_)
Bible | 3110 (_40497_) | 29723
Constitution (https://www.justice.gov.za/legislation/constitution/pdf.html) | 7077 (_3940_) | 6564 (_3819_)
SADILAR (https://www.sadilar.org/index.php/en/resources) | 33144 (_61766_) | 67036 (_87838_)
Total | 946264 (_152027_) | 721977 (_149355_)
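As a rough sketch, the vectorizers can be trained with gensim as below; the
vector size of 50 matches the experimental setup in Section 4.1., while the
toy corpus and the remaining hyperparameters are illustrative assumptions.

```python
# Sketch: training Word2Vec and FastText vectorizers on a tokenized corpus.
from gensim.models import FastText, Word2Vec

corpus = [["dikgang", "tsa", "gompieno"], ["kgosi", "e", "buile"]]  # toy lines
w2v = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=10)
ft = FastText(corpus, vector_size=50, window=5, min_count=1, epochs=10)  # subword-aware
w2v.save("setswana_w2v.model")   # saved for downstream reuse
```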
### 3.2. News Classification Models
We explore the use of a few classification algorithms to train news
classification models. Specifically we train
* Logistic Regression,
* Support Vector Classification,
* XGBoost, and
* MLP Neural Network.
To deal with the challenge of having a small amount of data on short text, we
use data augmentation methods, specifically a word-embedding-based
augmentation [Wang and Yang, 2015], an approach that has been shown to work
well on short text [Marivate and Sefara, 2019]. We use this approach since we
are not able to use other augmentation methods such as synonym-based
augmentation (which requires developed Wordnet synsets [Kobayashi, 2018]),
language models (which need larger corpora to train) and back-translation (not
readily available for South African languages). We develop and present the use
of both word and document embeddings (the latter as an augmentation quality
check), inspired by a hierarchical approach to augmentation [Yu et al., 2019].
## 4\. Experiments and Results
This Section presents the experiments and results. As this is still work in
progress, we present some avenues explored in both training classifiers and
evaluating them for the task of news headline classification for Setswana and
Sepedi.
### 4.1. Experimental Setup
For each classification problem, we perform 5-fold cross validation. For the
bag-of-words and TFIDF vectorizers, we use a maximum token size of 20,000. For
word embeddings and language embeddings we use size 50. All vectorizers were
trained on the large corpora presented earlier.
#### 4.1.1. Baseline Experiments
We run the baseline experiments with the original data using 5-fold cross
validation. We show the performance (in terms of weighted F1 score) in
Figures 5 and 6, with the baseline results labeled _orig_. For both
Bag-of-Words (TF) and TFIDF, the MLP performs very well compared to the other
methods; in general, TFIDF performs better.
Figure 5: Baseline classification model performance for Setswana news title
categorisation
Figure 6: Baseline classification model performance for Sepedi news title
categorisation
#### 4.1.2. Augmentation
We applied augmentation in different ways. First, for the Sepedi and Setswana
word embeddings (word2vec), we use word embedding-based augmentation. We
augment each dataset 20 times on the training data while the validation data
is left intact so as to be comparable to the earlier baselines. We show the
effect of augmentation in Figures 5 and 6 (performance labeled with _aug_).
The contextual, word2vec-based word augmentation improves the performance of
most of the classifiers. If we now introduce a quality check using doc2vec
(Algorithm 1), we also notice an impact on the performance for Sepedi
(Figure 6, _aug qual_). We were not able to complete the experiments with
Setswana for the contextual augmentation with a quality check, but will
continue working to better understand the impact of such an algorithm in
general. For example, it remains further work to investigate the effects of
different similarity thresholds on the overall performance, how such an
algorithm works on highly resourced versus low resourced languages, and how
the algorithm can be made more efficient.
Input: $s$: a sentence, $run$: maximum number of attempts at augmentation
Output: $\hat{s}$: a sentence with words replaced

def Augment(s, run):
  Let $\vv{V}$ be a vocabulary
  for $i$ in range($run$):
    $w_{i}\leftarrow$ randomly select a word from $s$
    $\vv{w}\leftarrow$ find similar words of $w_{i}$
    $s_{0}\leftarrow$ randomly select a word from $\vv{w}$, weighted by similarity
    $\hat{s}\leftarrow$ replace $w_{i}$ with the similar word $s_{0}$
    $\vv{s}\leftarrow Doc2vec(s)$
    $\vv{\hat{s}}\leftarrow Doc2vec(\hat{s})$
    $similarity\leftarrow$ Cosine Similarity($\vv{s}$, $\vv{\hat{s}}$)
    if $similarity > threshold$:
      return $\hat{s}$

Algorithm 1: Contextual (Word2vec-based) augmentation algorithm with a doc2vec
quality check
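A minimal Python sketch of Algorithm 1 is given below, assuming gensim models
`w2v` (Word2Vec) and `d2v` (Doc2Vec) trained on the corpora described earlier;
the `topn` value and the default threshold are illustrative assumptions.

```python
# Sketch of Algorithm 1: word2vec replacement with a doc2vec quality check.
import random
import numpy as np

def augment(sentence, w2v, d2v, threshold=0.8, runs=10):
    tokens = sentence.split()
    for _ in range(runs):
        i = random.randrange(len(tokens))
        if tokens[i] not in w2v.wv:
            continue
        sims = w2v.wv.most_similar(tokens[i], topn=5)   # [(word, score), ...]
        words = [w for w, _ in sims]
        weights = [max(s, 1e-6) for _, s in sims]       # similarity as weight
        candidate = list(tokens)
        candidate[i] = random.choices(words, weights=weights)[0]
        v1, v2 = d2v.infer_vector(tokens), d2v.infer_vector(candidate)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        if cos > threshold:                              # doc2vec quality check
            return " ".join(candidate)
    return sentence                                      # give up: keep original
```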
Figure 7: Word2Vec feature based performance for news headline classification
Figure 8: Confusion Matrix of News headline classification models
It is also interesting to look at how classifiers trained only with word2vec
features would fare. Deep neural networks are not used in this current work,
so we did not use recurrent neural networks; however, we can create sentence
features from word2vec by using the mean of all word vectors in a sentence,
the median of all word vectors in a sentence, or the concatenated power means
[Rücklé et al., 2018], as sketched below. We show the performance of using
this approach, with the same classifiers used for Bag of Words and TFIDF
earlier, in Figure 7.
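A small sketch of these sentence features is shown here; `vecs` is assumed to
hold the word vectors of one headline, and the powers used ($p=1$ and
$p=\pm\infty$, i.e., mean, min and max) are common special cases of power
means.

```python
# Sketch: sentence features from word vectors (mean, median, power means).
import numpy as np

def sentence_features(vecs):
    vecs = np.asarray(vecs)                  # shape: (num_words, embedding_dim)
    mean = vecs.mean(axis=0)                 # p = 1 (arithmetic mean)
    median = np.median(vecs, axis=0)
    power_means = np.concatenate([mean, vecs.min(axis=0), vecs.max(axis=0)])
    return mean, median, power_means         # pick one as the feature vector

print(sentence_features(np.random.rand(7, 50))[2].shape)   # (150,)
```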
The performance of this approach is slightly worse, with the best results for
Sepedi news headline classification coming from XGBoost on the augmented data.
We hope to improve this performance by feeding word2vec feature vectors to
recurrent neural networks, but we are currently of the view that increasing
the sizes and diversity of the corpora used for the pre-trained word
embeddings may yield even better results.
Finally, we show the confusion matrix of the best Sepedi model on a test set
in Figure 8. The classifier categorises _General News_ , _Politics_ and
_Legal_ news headlines best; for the other categories there is more error. A
larger news headline dataset is required, and classification performance will
also need to be compared to models trained on full news data (with the article
body). For the Setswana classifiers, the confusion matrix shows that the data
skew results in models that can mostly only distinguish between the categories
_General News_ and _Other_. We need to look at re-sampling techniques to
improve this performance, as well as at increasing the initial dataset size.
## 5\. Conclusion and Future Work
This work introduced the collection and annotation of Setswana and Sepedi news
headline data. It remains a challenge that, in South Africa, 9 of the 11
official languages have very little data of this kind available to researchers
for building downstream models that can be used in different applications.
Through this work we hope to provide an example of what may be possible even
with a limited annotated dataset.
availability of other free text data in Setswana and Sepedi in order to build
pre-trained vectorisers for the languages (which are released as part of this
work) and then train classification models for news categories.
It remains future work to collect more local language news headlines and text
to train more models. We have identified other government news sources that
can be used. On training embedding models with the data we have collected,
further studies are needed to look at how augmentation using the embedding
models improves the quality of augmentation.
## 6\. Bibliographical References
* Bojanowski et al., 2017 Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.
* Bureau of Circulations, 2019 Bureau of Circulations, A. (2019). Newspaper circulation statistics for the period january–march 2019 (abc q1 2019).
* Cubuk et al., 2019 Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. (2019). Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 113–123.
* Johnson et al., 2017 Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F., Wattenberg, M., Corrado, G., et al. (2017). Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351.
* Kobayashi, 2018 Kobayashi, S. (2018). Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 452–457.
* Lewis, 1997 Lewis, D. D. (1997). Reuters-21578 text categorization collection data set.
* Marivate and Sefara, 2019 Marivate, V. and Sefara, T. (2019). Improving short text classification through global augmentation methods. arXiv preprint arXiv:1907.03752.
* Marivate and Sefara, 2020a Marivate, V. and Sefara, T. (2020a). African embeddings [nlp]. https://doi.org/10.5281/zenodo.3668481, February.
* Marivate and Sefara, 2020b Marivate, V. and Sefara, T. (2020b). South African news data dataset. https://doi.org/10.5281/zenodo.3668489.
* Mikolov et al., 2013 Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119.
* Nettle, 1998 Nettle, D. (1998). Explaining global patterns of language diversity. Journal of anthropological archaeology, 17(4):354–374.
* Rücklé et al., 2018 Rücklé, A., Eger, S., Peyrard, M., and Gurevych, I. (2018). Concatenated power mean word embeddings as universal cross-lingual sentence representations. arXiv preprint arXiv:1803.01400.
* Sandhaus, 2008 Sandhaus, E. (2008). The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752.
* Silfverberg et al., 2017 Silfverberg, M., Wiemerslage, A., Liu, L., and Mao, L. J. (2017). Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 90–99.
* Strassel and Tracey, 2016 Strassel, S. and Tracey, J. (2016). Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3273–3280.
* Wang and Yang, 2015 Wang, W. Y. and Yang, D. (2015). That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2557–2563.
* Wei and Zou, 2019 Wei, J. and Zou, K. (2019). Eda: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6383–6389.
* Yu et al., 2019 Yu, S., Yang, J., Liu, D., Li, R., Zhang, Y., and Zhao, S. (2019). Hierarchical data augmentation and the application in text classification. IEEE Access, 7:185476–185485.
* Zhang et al., 2015 Zhang, X., Zhao, J., and LeCun, Y. (2015). Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
* Zoph et al., 2016 Zoph, B., Yuret, D., May, J., and Knight, K. (2016). Transfer learning for low-resource neural machine translation. arXiv preprint arXiv:1604.02201.
# Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Tweets
for Emergency Services
Jitin Krishnan Department of Computer Science
George Mason University
Fairfax, VA, USA
<EMAIL_ADDRESS>Hemant Purohit Department of Information
Sciences & Technology
George Mason University
Fairfax, VA, USA
<EMAIL_ADDRESS>Huzefa Rangwala Department of Computer Science
George Mason University
Fairfax, VA, USA
<EMAIL_ADDRESS>
###### Abstract
During the onset of a natural or man-made crisis event, the public often shares
relevant information for emergency services on social web platforms such as
Twitter. However, filtering such relevant data in real-time at scale using
social media mining is challenging due to the short noisy text, sparse
availability of relevant data, and also, practical limitations in collecting
large labeled data during an ongoing event. In this paper, we hypothesize that
unsupervised domain adaptation through multi-task learning can be a useful
framework to leverage data from past crisis events for training efficient
information filtering models during the sudden onset of a new crisis. We
present a novel method to classify relevant social posts during an ongoing
crisis without seeing any new data from this event (fully unsupervised domain
adaptation). Specifically, we construct a customized multi-task architecture
with a multi-domain discriminator for crisis analytics: multi-task domain
adversarial attention network (MT-DAAN). This model consists of dedicated
attention layers for each task to provide model interpretability, which is
critical for real-world applications. As deep networks struggle with sparse datasets, we
show that this can be improved by sharing a base layer for multi-task learning
and domain adversarial training. The framework is validated with the public
datasets of TREC incident streams that provide labeled Twitter posts (tweets)
with relevant classes (Priority, Factoid, Sentiment) across 10 different
crisis events such as floods and earthquakes. Evaluation of domain adaptation
for crisis events is performed by choosing one target event as the test set
and training on the rest. Our results show that the multi-task model
outperformed its single-task counterpart. For the qualitative evaluation of
interpretability, we show that the attention layer can be used as a guide to
explain the model predictions and empower emergency services for exploring
accountability of the model, by showcasing the words in a tweet that are
deemed important in the classification process. Finally, we show a practical
implication of our work by providing a use-case for the COVID-19 pandemic.
###### Index Terms:
Social Media, Crisis Analytics, Text Classification, Unsupervised Domain
Adaptation, Interpretability
Figure 1: Problem Statement: Interpretably predict labels for tweets collected
during an ongoing crisis using only the past crisis data, given a)
unavailability of labeled data in the ongoing event, and b) need for
interpretability of machine reasoning behind data filtering for emergency
managers.
## I Introduction
During the sudden onset of a crisis situation, social media platforms such as
Twitter provide valuable information to aid crisis response organizations in
gaining real-time situational awareness [1]. Effective analysis of important
information such as affected individuals, infrastructure damage, medical
emergencies, or food and shelter needs can help emergency responders make
time-critical decisions and allocate resources in the most effective manner
[2, 3, 4].
Several machine learning systems have been deployed to help towards this
humanitarian goal of converting real-time social media streams into actionable
knowledge. Classification being the most common task, researchers have
designed models [5, 6, 7, 3] that classify tweets into various crisis-
dependent categories such as priority, affected individuals, type of damage,
type of assistance needed, usefulness of the tweet, etc. Social media streams
contain short, informal, and abbreviated content, with potential linguistic
errors, and are sometimes contextually ambiguous. These inherently challenging
properties of tweets make their classification task and formulation less
trivial than for traditional text classification tasks.
In this paper, we address two practically important and underdeveloped aspects
of current research in social media mining for crisis analytics to classify
relevant social web posts: a) a fully unsupervised domain adaptation, and b)
interpretability of predictions. A fully unsupervised domain adaptation uses
no data from the ongoing crisis to train the model. Nguyen et al., 2016 [5]
showed that their convolutional neural network (CNN) model does not require
feature engineering and performed better than the state-of-the-art methods,
with one of their models being completely unsupervised. Similarly, Alam et al.,
2018 [6] designed a CNN architecture with adversarial training on graph
embeddings, but utilizing unlabeled target data. Our goal is to construct an
unsupervised model that does not require any unlabeled target data with the
capability of being interpretable. We specifically address the problem of data
sparsity and limited labels by designing a multi-task classification model
with domain adversarial training, which, to the best of our knowledge, is not
explored in social media mining for crisis analytics. Another crucial
component of our model is interpretability. In prior works, when a top
performing model produces an accuracy of $78\%$, for instance, it is unclear
how trustworthy it is and what features are deemed important in the model’s
decision-making process. An interpretable model like ours can present
convincing evidence of which words the classifier deems important when making
a certain prediction, and helps ensure reliability for domain users, e.g.,
emergency managers.
Contributions: a) To address the problems of data sparsity and limited labels,
we construct a customized multi-task learning architecture (MT-DAAN) to filter
tweets for crisis analytics by training four different classification tasks
(c.f. examples in Fig. 3) across ten different crisis events under domain
shift. This multi-task domain adversarial model consists of dedicated
attention layers for each task for interpretability and a domain classifier
branch to promote the model to be domain-agnostic. b) We demonstrate that the
attention layers provide interpretability for the predictions made by the
classifiers, with the goal of aiding emergency services in a more meaningful way.
c) We empirically validate the performance of the underlying single-task
attention-based neural network architecture by comparing it to the state-of-
the-art methods, for improving generalizability and interpretability for
domain adaptation in unsupervised tweet classification tasks in general. d)
Additionally, through experiments, we show that deep networks struggle with
small datasets, and that this can be improved by sharing the base layer for
multi-task learning and domain adversarial training.
## II Related Work and Background
### II-A Domain Adaptation
Domain Adaptation in text classification tasks has a long line of fruitful
research [8, 9, 10] that try to minimize the difference between the domains so
that a model trained solely on one domain is generalizable to unseen test data
from a completely different domain. With the introduction of Domain-
Adversarial training of Neural Networks (DANN) [11], many state-of-the-art
models now utilize unlabeled target data to train classifiers that are
indiscriminate toward different domains. The speciality of this architecture
is that it consists of an extra branch, which performs domain classification
using unlabeled data from different domains. Thus, both task and domain
classifiers share some bottom layers but have separate layers towards the top.
A negative gradient (gradient reversal) from the domain classifier branch is
back-propagated so that the features in the lower layers of the network become
incapable of discriminating between the domains. Recent works such as Adversarial
Memory Network (AMN) [12] utilizes attention, in addition to DANN, to bring
interpretability to capture the pivots for sentiment classification.
Hierarchical Attention Network (HATN) [13] expands upon AMN by first
extracting pivots and then jointly training networks for both pivots and non-
pivots.
For filtering social web data for crisis analytics, these models do not
suffice and need customized expansions due to the following reasons: a)
Collecting and using large unlabeled target data from the new/ongoing crisis
event may not be practically viable, thus, we aim for a fully unsupervised
modeling. b) Having access to unlabeled data from multiple crisis events can
alleviate the above problem to an extent by using it to train the domain
classifier branch to push the model to be domain independent. c) Due to the
low-resource nature of the dataset, binary classifiers may miss important
lower level features that can be potentially improved by a multi-task model
that shares the lower layers of the network for all the tasks. This is also
evident from our results in Tables III and IV, which show that deep models that
perform much better than simple models on Amazon reviews do not significantly
outperform them on TREC tweet dataset for crises.
### II-B Multi-Task Learning
Multi-Task Learning (MTL) solves multiple tasks at the same time with a goal
to improve the overall generalization capability of the model [14]. Within the
context of Deep Learning, MTL is performed by sharing (or constraining) lower
level layers and using dedicated upper level layers for various tasks. A rich
overview of MTL in Deep Neural Networks is presented by Ruder (2017) [15]. MTL
has been a successful strategy over the past few years for many research
explorations such as relationship networks [16] in computer vision and Sluice
networks [17] in natural language processing. Similar problems in domain
adaptation of semantic classification and information retrieval were addressed
by jointly learning to leverage large amounts of cross-task data [18]. In low
resource datasets such as for crises, the chance of overfitting is very high.
Thus, it seems intuitively better for the model to find a shared
representation capturing different tasks and not just one, such that feature
commonalities across tasks can be exploited.
### II-C Attention Mechanism
Attention mechanism [19], originally designed for machine translation
problems, has become one of the most successful and widely used methods in
deep learning that can look at a part of a sentence at a time like humans.
This is particularly useful because of its ability to construct a context
vector by weighing on the entire input sequence unlike previous sequence-to-
sequence models [20] that used only the last hidden state of the encoder
network (typically BiLSTM [21], LSTM [22], or GRU [23]). For example, in a
sentence, the context vector is a dot product of the word activations and
weights associated with each word; thus leading to an improved contextual
memorization, especially for long sentences. Our method incorporates such
attention mechanisms to enhance interpretability of the classifier.
## III Methodology
### III-A Problem Statement: Unsupervised Domain Adaptation for Crisis Tweet
Classification
Using notations in Table I, consider a set $C$ of all crisis events such as
Guatemala Earthquake or Typhoon Yolanda. The task of unsupervised domain
adaptation for crisis analytics is to train a classifier for a specific target
crisis ($c_{t}$) using labeled ($L_{C-c_{t}}$) and unlabeled ($U_{C-c_{t}}$)
data from all other crises; where $C-c_{t}$ denotes the set of all crisis
events minus the target crisis. We assume that no data record from the target
crisis is available for training. Following the traditional domain adaptation
terminology, $X_{s}$ = $L_{C-c_{t}}$ represents the labeled data from the
source domain $S$ and $Y_{s}$ = $y_{C-c_{t}}$ represents the ground truth
labels on which the classifier is trained. And, $X_{t}$ = $L_{c_{t}}$
represents the labeled data from the target domain $T$ and $Y_{t}$ =
$y_{c_{t}}$ represents the ground truth labels; both of which are only used
for testing the classifier. $X_{d}$ = $U_{C-c_{t}}$ represents the unlabeled
data from different domains minus the target. To summarize:
Input: $X_{s}$, $Y_{s}$, $X_{d}$
Output: $Y_{t}^{pred}$ $\leftarrow$ $predict(X_{t})$
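For illustration, a minimal sketch of this leave-one-crisis-out split is shown
below; `data` is a hypothetical mapping from crisis events to their labeled
tweets.

```python
# Sketch: hold out the target crisis entirely; train on all other events.
def loco_split(data, target_event):
    X_s = [tweet for event, tweets in data.items()
           if event != target_event for tweet in tweets]   # source (training)
    X_t = data[target_event]                               # target (testing only)
    return X_s, X_t

data = {"nepal_earthquake": ["..."], "paris_attacks": ["..."],
        "typhoon_yolanda": ["..."]}
X_s, X_t = loco_split(data, "typhoon_yolanda")
```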
Notation | Definition
---|---
$C$ | Set of all crisis events $\\{c_{1},c_{2},...\\}$
$L_{c_{k}}$ | Set of labeled data from the event $c_{k}$
$y_{c_{k}}$ | Set of ground truth labels for $L_{c_{k}}$.
$m$ | Number of tasks (Number of bits in each label)
$U_{c_{k}}$ | Set of unlabeled data from the event $c_{k}$
$T_{x}$ | Number of words in a sentence
$x^{<k>}$ | $k$-th word of a sentence
$\alpha^{<k>}$ | attention from $k$-th word
$a^{<k>}$ | BiLSTM activation from $k$-th word
TABLE I: Notations
### III-B Overview
In the following sections, we describe three models: Single-Task Attention
Network (ST), Single-Task Domain Adversarial Attention Network (ST-DAAN), and
Multi-Task Domain Adversarial Attention Network (MT-DAAN). ST is the model we
adopt from [24] to build the single-task attention based baseline. ST-DAAN is
constructed on top of ST to make the model domain agnostic by performing
adversarial training using gradient reversal. Finally, MT-DAAN is constructed
on top of ST-DAAN with dedicated attention layers for each task on a shared
BiLSTM layer. This is shown in Figure 2.
Figure 2: Fully Unsupervised Domain Adaptation Set-up for Multi-Task Crisis
Tweet Classification.
### III-C Single-Task Attention Network (ST)
We first describe the single-task attention network [24] on top of which we
build our models. This model aligns with our goals of interpretability and
unsupervised domain adaptation. This BiLSTM based model with Attention gives
us three main advantages:
1. 1.
Unlike several existing domain adaptation methods that use unlabeled target
data to train the domain adversarial component via gradient reversal, this
method is a fully unsupervised baseline which also can be customized for
multi-task learning.
2. 2.
The method uses attention mechanism which in turn weighs each word in a
sentence based on its importance. This can be directly utilized for
interpretability.
3. 3.
The method also runs much faster (only a few minutes), i.e. highly useful in
crisis times, as compared to the top performing semi-supervised models such as
HATN [13] (hours).
This model [24] consists of a BiLSTM layer which produces $T_{x}$ activations,
each corresponding to a word in the sentence. These activations are passed
through dense and softmax layers and are combined by dot product to produce
the context vector $\sum_{k=1}^{T_{x}}\alpha^{<k>}a^{<k>}$, where $a^{<k>}$ is
the BiLSTM activation from $k$-th word and $\alpha^{<k>}$ is the attention
weight of $k$-th word. Sentences with words greater than $T_{x}$ are stripped
and those with words lower than $T_{x}$ are padded. This single-task ($m=1$)
attention network is the building block with which rest of the following
models are constructed. The single-task binary cross entropy loss function is
shown below.
$\footnotesize
L_{T}=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\log\hat{y_{i}}+(1-y_{i})\log(1-\hat{y_{i}})]$
(1)
where $T$ represents the task, $y$ is the true label, and $\hat{y}$ is the
predicted label.
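A minimal Keras sketch of this building block is shown below; the dimensions
used (sequence length $T_x$, vocabulary size, embedding and LSTM sizes) are
illustrative assumptions rather than the exact values used in the paper.

```python
# Sketch: BiLSTM with word-level attention producing a context vector.
import tensorflow as tf
from tensorflow.keras import layers

T_x, vocab, emb = 30, 20000, 50                       # assumed dimensions
inp = layers.Input(shape=(T_x,))
x = layers.Embedding(vocab, emb)(inp)
a = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)  # a^<k>
scores = layers.Dense(1)(a)                           # one score per word
alpha = layers.Softmax(axis=1)(scores)                # attention weights alpha^<k>
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([alpha, a])  # sum_k alpha a
out = layers.Dense(1, activation="sigmoid")(context)  # single-task prediction
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```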
### III-D Single-Task Domain Adversarial Attention Network
(ST-DAAN)
To study the specific contribution of domain adversarial training, we
construct a secondary baseline over the ST architecture by constructing an
additional branch with gradient reversal layer which is represented by the
green blocks in Figure 2. This is a single-task binary classifier with $m=1$.
Domain Adversarial Training of Neural Networks (DANN) [11] was introduced with
a goal to confuse the classifier by back-propagating a negative gradient from
a separate domain classifier branch (right-most branch, as shown in Figure 2).
This makes the classifier agnostic to difference in domains. This back-
propagation is implemented using a gradient reversal layer [11] which does
nothing during the forward pass but pushes a negative gradient
($-\lambda\frac{\partial L_{d}}{\partial\theta_{f}}$) during the backward
(gradient update) pass. $L_{d}$ is the domain classification loss, $\lambda$
is the strength of the reversal, and $f$ represents the lower level layers or
features over which the negative gradient update is performed. In our
architecture, the goal is to make the BiLSTM layer indiscriminate towards
various crisis domains such that the multi-task classification does not depend
on the domain from which the tweet/sentence is coming from. The ST-DAAN loss
function is shown below.
$\footnotesize L_{T}^{\prime}=L_{T}+w_{d}L_{d}$ (2)
where $w_{d}$ is the domain adversarial loss weight. $L_{d}$ represents the
categorical cross entropy loss for multi-domain discriminator shown below.
$\footnotesize
L_{d}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{|C-c_{t}|}[y_{ij}\log\hat{y_{ij}}]$
(3)
where $C-c_{t}$ is the set of all crisis events without the target event.
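A TensorFlow sketch of the gradient reversal layer is shown below: the forward
pass is the identity, while the backward pass multiplies incoming gradients by
$-\lambda$. The commented usage line is an illustrative assumption of how the
domain branch attaches to the shared features.

```python
# Sketch: gradient reversal layer for domain-adversarial training.
import tensorflow as tf

def make_reverse_gradient(lam):
    @tf.custom_gradient
    def reverse(x):
        def grad(dy):
            return -lam * dy              # flip and scale the gradient
        return tf.identity(x), grad       # identity in the forward pass
    return reverse

class GradientReversal(tf.keras.layers.Layer):
    def __init__(self, lam=1.0, **kwargs):
        super().__init__(**kwargs)
        self.reverse = make_reverse_gradient(lam)

    def call(self, x):
        return self.reverse(x)

# Usage sketch: domain discriminator on top of the shared BiLSTM features.
# domain_probs = tf.keras.layers.Dense(num_domains, activation="softmax")(
#     GradientReversal(lam=1.0)(shared_features))
```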
### III-E Multi-Task Domain Adversarial Attention Network
(MT-DAAN)
Building on top of ST-DAAN, we construct MT-DAAN, which is intended to
classify problems with multiple tasks or labels. For each task, a dedicated
attention layer is allocated from which it predicts binary labels. The BiLSTM
layer remains exactly the same as in the single-task model but multiple
attention blocks are added for each task along with a domain classifier. In
the architecture decision process, we first investigated a multi-label
classifier where all layers are shared with the final softmax layer making
multi-label predictions. In low resource settings, constructing a multi-label
classifier using a shared architecture is challenging for two reasons: a)
jointly balancing positive and negative samples across all classes is not
trivial and potentially challenging to make it extensible when new classes
need to be added, and b) attention layer may not always produce class-specific
insights as the weights are assigned to train for the combination of labels.
On the other hand, in the multi-task architecture with separate attention
layers, it is easy to add more classes. If some classes require more training,
it is trivial to further tune a model specific to that class. More
importantly, the $context^{<t_{j}>}$ vector for the $j$-th task identifies the
influential words from each sentence for that specific task. The complete
architecture is shown in Figure 2. The MT-DAAN loss function is shown below:
$\footnotesize L_{MT-DAAN}=\sum_{k=1}^{m}(w_{k}L_{T_{k}})+w_{d}L_{d}$ (4)
where $m$ is the number of tasks, $w_{k}$ is the loss weight and $L_{T_{k}}$
is the loss term for each task, $w_{d}$ is the domain adversarial loss weight,
and $L_{d}$ is the domain adversarial loss term.
### III-F Model Interpretability
The output ($\alpha$) of the attention layer ($ATT$) of each task, is a
$T_{x}$-dimensional vector; $T_{x}$ being the number of words in the sentence.
The context vector ($\sum_{k=1}^{T_{x}}\alpha^{<k>}a^{<k>}$) is the product of
these attention weights and the $T_{x}$-dimensional activation ($a$) from the
$BiLSTM$ layer. $\alpha$ essentially weighs how much each word in the sentence
contributes to the classification result. Thus, $\alpha$ is the component that
is evaluated for model interpretability.
## IV DATASETS
CRISIS EVENTS | Total Tweets | Vocab | Avg #words | P | F | S | I
---|---|---|---|---|---|---|---
2012 Guatemala Earthquake | 154 | 422 | 18.74 | 104 | 108 | 12 | 15
2013 Typhoon Yolanda | 564 | 1746 | 19.47 | 249 | 46 | 119 | 51
2013 Australia Bushfire | 677 | 2102 | 20.21 | 152 | 213 | 167 | 36
2013 Boston Bombings | 535 | 1755 | 19.30 | 147 | 28 | 234 | 198
2013 Queensland Floods | 713 | 2301 | 19.08 | 293 | 54 | 173 | 215
2014 Chile Earthquake | 311 | 919 | 16.54 | 48 | 26 | 50 | 10
2014 Typhoon Hagupit | 1470 | 2893 | 15.36 | 469 | 375 | 276 | 101
2015 Nepal Earthquake | 2048 | 4026 | 13.77 | 1067 | 377 | 741 | 133
2015 Paris Attacks | 2066 | 4152 | 18.62 | 306 | 183 | 782 | 429
2018 Florida School Shooting | 1118 | 2940 | 21.40 | 329 | 64 | 206 | 70
TABLE II: TREC Dataset Statistics; Showing the number of positive samples for
each of the 4 classes. $P$=Priority, $F$=Factoid, $S$=Sentiment, and
$I$=Irrelevant.
### IV-A TREC Dataset
TREC-IS111http://dcs.gla.ac.uk/~richardm/TREC_IS/ (Text Retrieval Conference -
Incident Streams) is a program that encourages research in information
retrieval from social media posts, with the goal of improving the
state-of-the-art social media based crisis analytics solutions. We use the
dataset from the 2018 track proposal. Statistics of this curated Twitter
dataset downloaded from TREC are shown in Table II. The original dataset
consisted of 15 crisis events. However, due to very sparse data, we trimmed
the events and tasks so that there are at least 10 positive samples for each
task.
The four tasks used in our experiments are shown below (a label-conversion
sketch follows the list):
1. Priority: Different priority levels are assigned to each tweet: low, medium, high, critical. We convert this into a binary classification problem where $low=0$ and $\{medium$, $high$, $critical\}=1$.
2. Factoid: ‘Factoid’ is a categorical label that represents whether a tweet is stating a fact. E.g., ‘death toll rises ...’
3. Sentiment: ‘Sentiment’ is a categorical label that represents whether a tweet expresses a sentiment. E.g., ‘Worried.. Thoughts and prayers.’
4. Irrelevant: ‘Irrelevant’ is a categorical label for tweets that do not provide any relevant information.
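The sketch below illustrates the Priority conversion described in item 1; the function and constant names are hypothetical, not tied to the actual TREC field names.

```python
# Hypothetical label-conversion helper: low -> 0, everything else -> 1.
PRIORITY_MAP = {"low": 0, "medium": 1, "high": 1, "critical": 1}

def binarize_priority(labels):
    return [PRIORITY_MAP[p.lower()] for p in labels]

assert binarize_priority(["Low", "Critical"]) == [0, 1]
```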
### IV-B Amazon Reviews Dataset
The standard benchmark
dataset222http://www.cs.jhu.edu/~mdredze/datasets/sentiment/ of Amazon reviews
[25] is widely used for cross-domain sentiment analysis. We chose four
domains: Books (B), Kitchen (K), DVD (D), and Electronics (E). The raw
data333https://github.com/hsqmlzno1/HATN/tree/master/raw_data used in this
work, a part of Blitzer’s original raw dataset, is from HATN [13]. This
dataset consists of $3000$ positive and $3000$ negative samples for each of
the $4$ domains. It is used for two purposes: 1) to validate the performance
of the state-of-the-art methods, including the single-task baseline, and 2) to
compare and contrast the performance of deep models when trained with rich
versus sparse datasets.
### IV-C COVID-19 Tweet Dataset
For the COVID-19 use-case, we use Twitter posts collected using CitizenHelper
[26] system in March 2020, for the geo-bounding box of the Washington D.C.
Metro region. These tweets were annotated by volunteers of regional Community
Emergency Response Teams (CERTs), with the ‘Relevant’ label denoting how
relevant a tweet is for crisis response operations. The label values range on
a scale of $1$-$4$. We convert them into binary classes by considering values
$1$ and $2$ as the $-$ve ($0$) class and values $3$ and $4$ as the $+$ve ($1$)
class. This dataset consists of $4911$ tweets in the $-$ve ($Relevant$=$0$)
class and $637$ tweets in the $+$ve ($Relevant$=$1$) class. Following the
unsupervised domain adaptation criteria, the filtering models are trained
using only the TREC dataset and evaluated on the COVID-19 tweets. For each
independent run of the experiment, a balanced subset of size $637$ for both
classes is selected for testing.
S $\rightarrow$ T | LR | SVM | CNN | BiLSTM | AMN | HATN | ST
---|---|---|---|---|---|---|---
B $\rightarrow$ K | 76.40 | 75.95 | 81.20 | 84.45 | 81.88 | 87.03 | 87.22
B $\rightarrow$ E | 75.53 | 74.05 | 80.44 | 84.61 | 80.55 | 85.75 | 85.51
B $\rightarrow$ D | 81.08 | 81.43 | 82.94 | 83.52 | 85.62 | 87.07 | 86.32
K $\rightarrow$ B | 76.12 | 75.78 | 78.78 | 80.67 | 79.05 | 84.88 | 81.85
K $\rightarrow$ E | 80.37 | 81.20 | 85.17 | 87.37 | 86.68 | 89.00 | 87.09
K $\rightarrow$ D | 73.32 | 74.98 | 76.41 | 78.49 | 79.50 | 84.72 | 81.13
E $\rightarrow$ B | 74.85 | 74.18 | 78.08 | 81.18 | 77.52 | 84.03 | 81.50
E $\rightarrow$ K | 81.85 | 81.85 | 86.59 | 89.00 | 87.83 | 90.08 | 89.21
E $\rightarrow$ D | 75.82 | 75.83 | 78.35 | 78.46 | 85.03 | 84.32 | 81.37
D $\rightarrow$ B | 81.17 | 82.20 | 82.26 | 84.83 | 84.53 | 87.78 | 87.02
D $\rightarrow$ K | 76.42 | 77.58 | 81.09 | 85.21 | 81.67 | 87.47 | 86.37
D $\rightarrow$ E | 72.47 | 73.68 | 79.56 | 83.66 | 80.42 | 86.32 | 85.63
AVG | 77.12 | 77.39 | 80.91 | 83.45 | 82.52 | 86.54 | 85.02
TABLE III: Performance comparison (accuracy) of various models on the standard benchmark dataset of Amazon reviews. Methods in blue do not use any unlabeled target data; hence relevant in our context. Each reported score is an average of 10 independent runs of each experiment.
Target | LR | SVM | CNN | BiLSTM | ST
---|---|---|---|---|---
Guatemala Earthquake | 60.14 | 56.76 | 60.47 | 65.54 | 59.97
Typhoon Yolanda | 65.39 | 65.97 | 63.05 | 65.49 | 65.53
Australia Bushfire | 65.61 | 63.23 | 62.10 | 60.10 | 62.44
Boston Bombings | 71.47 | 75.45 | 69.72 | 71.43 | 72.08
Queensland Floods | 65.56 | 64.81 | 64.13 | 66.01 | 66.21
Chile Earthquake | 43.09 | 37.94 | 43.37 | 35.45 | 39.23
Typhoon Hagupit | 49.86 | 46.22 | 49.21 | 54.13 | 52.61
Nepal Earthquake | 57.11 | 55.39 | 58.61 | 60.49 | 61.35
Paris Attacks | 71.43 | 71.72 | 72.50 | 72.14 | 71.31
Florida School Shooting | 58.79 | 63.02 | 58.82 | 59.71 | 60.55
AVG | 60.85 | 60.05 | 60.20 | 61.05 | 61.13
TABLE IV: Performance comparison (accuracy) of unsupervised models on the
TREC-Priority (tweet) dataset, showing that deep models are not strictly
superior to simpler models due to data sparsity. Each reported score is an
average of 10 independent runs of each experiment. $Source$ = $Everything$
$-$ $Target$.
TARGET | Priority | Factoid
---|---|---
| ST | ST-DAAN | MT-DAAN | ST | ST-DAAN | MT-DAAN
| Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1
Guatemala Earthquake | 59.97 | 62.39 | 69.07 | 69.66 | 69.05 | 69.34 | 68.92 | 68.47 | 79.90 | 80.76 | 84.05 | 97.01
Typhoon Yolanda | 65.53 | 65.47 | 66.07 | 63.73 | 67.42 | 67.30 | 80.50 | 84.42 | 82.71 | 85.61 | 84.36 | 86.93
Australia Bushfire | 62.44 | 66.69 | 61.07 | 63.42 | 61.93 | 64.28 | 64.58 | 60.69 | 65.64 | 60.53 | 65.04 | 60.13
Boston Bombings | 72.08 | 74.29 | 72.34 | 73.37 | 73.80 | 74.74 | 83.10 | 88.51 | 81.42 | 85.90 | 85.82 | 88.82
Queensland Floods | 66.21 | 65.94 | 67.19 | 66.97 | 66.74 | 66.46 | 37.56 | 48.90 | 50.46 | 59.82 | 49.52 | 59.21
Chile Earthquake | 39.23 | 40.92 | 38.91 | 42.37 | 41.80 | 46.33 | 30.38 | 33.97 | 39.87 | 48.68 | 45.28 | 54.58
Typhoon Hagupit | 52.61 | 50.59 | 58.97 | 58.94 | 57.50 | 57.52 | 68.98 | 70.79 | 71.42 | 72.44 | 69.49 | 70.08
Nepal Earthquake | 61.35 | 59.44 | 60.18 | 57.80 | 61.65 | 59.49 | 74.04 | 76.08 | 80.72 | 81.00 | 81.04 | 81.02
Paris Attacks | 71.31 | 76.26 | 70.42 | 74.08 | 74.44 | 77.21 | 75.78 | 80.35 | 82.35 | 84.89 | 82.52 | 85.63
Florida School Shooting | 60.55 | 61.75 | 65.47 | 64.07 | 62.51 | 63.24 | 76.73 | 82.67 | 84.55 | 87.51 | 85.80 | 88.15
AVG | 61.13 | 62.37 | 62.97 | 63.44 | 63.68 | 64.59 | 66.06 | 69.49 | 71.90 | 74.71 | 73.29 | 77.16
TARGET | Sentiment | Irrelevant
---|---|---
| ST | ST-DAAN | MT-DAAN | ST | ST-DAAN | MT-DAAN
| Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1
Guatemala Earthquake | 96.96 | 97.03 | 96.45 | 96.68 | 96.76 | 92.73 | 89.36 | 89.03 | 91.22 | 91.06 | 93.11 | 92.73
Typhoon Yolanda | 75.81 | 77.62 | 77.54 | 79.01 | 76.82 | 78.35 | 76.05 | 79.77 | 78.49 | 80.59 | 80.46 | 82.31
Australia Bushfire | 75.95 | 77.58 | 78.80 | 79.12 | 78.54 | 78.92 | 35.42 | 47.164 | 53.78 | 65.11 | 51.76 | 63.36
Boston Bombings | 81.39 | 81.11 | 80.73 | 80.70 | 82.13 | 82.10 | 58.15 | 55.73 | 58.15 | 57.43 | 61.49 | 61.45
Queensland Floods | 81.69 | 80.39 | 81.05 | 81.39 | 81.53 | 81.32 | 65.68 | 65.36 | 67.26 | 65.72 | 67.88 | 67.27
Chile Earthquake | 92.69 | 92.91 | 93.10 | 93.21 | 93.62 | 93.68 | 75.16 | 84.98 | 80.46 | 86.38 | 80.64 | 86.56
Typhoon Hagupit | 84.98 | 85.86 | 85.15 | 86.14 | 85.43 | 86.38 | 63.21 | 75.04 | 71.50 | 78.25 | 70.22 | 77.27
Nepal Earthquake | 67.75 | 68.42 | 70.20 | 70.51 | 69.96 | 70.31 | 31.79 | 42.10 | 36.97 | 47.41 | 41.49 | 52.87
Paris Attacks | 76.01 | 76.63 | 73.65 | 73.98 | 74.47 | 74.60 | 33.91 | 35.25 | 44.52 | 48.32 | 47.17 | 51.32
Florida School Shooting | 68.77 | 71.77 | 67.06 | 70.03 | 68.14 | 71.05 | 32.66 | 40.90 | 44.22 | 55.27 | 47.64 | 58.65
AVG | 80.20 | 80.93 | 80.37 | 81.08 | 80.74 | 80.94 | 56.14 | 61.53 | 62.66 | 67.55 | 64.19 | 69.38
TABLE V: Unsupervised domain adaptation results on the TREC dataset showing a performance boost for the Priority, Factoid, and Irrelevant tasks. The Sentiment task, however, did not show a significant improvement. See the performance evaluation section for details. Each reported score is an average of 10 independent runs of each experiment.
TARGET | Relevant
---|---
| ST | ST-DAAN | MT-DAAN
| Acc | F1 | Acc | F1 | Acc | F1
COVID-19 | 73.25 | 77.36 | 74.55 | 77.51 | 77.00 | 78.09
TABLE VI: Unsupervised domain adaptation results for COVID-19 tweets using
only the TREC dataset for training. Each reported score is an average of 10
independent runs of each experiment.
## V Results & Discussion
We first validate the performance of the adopted unsupervised ST model [24] by
comparing it with the following standard neural network architectures and
state-of-the-art models used for domain adaption in text. We use the standard
benchmark dataset of Amazon reviews. Following the traditional domain
adaptation experimental setup, each experiment represented as S $\rightarrow$
T consists of a source domain (S) on which the model is trained and a target
domain (T) on which the model is tested. We use Keras deep learning library
for our implementations; with $T_{x}$=$200$ for Amazon reviews and $30$ for
Tweets. We use Adam optimizer with a dropout of $0.4$, maximum epoch of $50$,
early stopping patience of $3$, batch size of $32$, and validation split of
$0.15$.
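A minimal sketch of this training configuration is shown below, with a trivial stand-in model and random data in place of the actual architectures of Figure 2.

```python
import numpy as np
from tensorflow import keras

# Stand-in data and model so the snippet runs; the real inputs are the
# BiLSTM/attention architectures of Figure 2.
X_train = np.random.randn(256, 30, 300).astype("float32")
y_train = np.random.randint(0, 2, size=(256,))
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(30, 300)),
    keras.layers.Dropout(0.4),                   # dropout of 0.4
    keras.layers.Dense(2, activation="softmax"),
])

early_stop = keras.callbacks.EarlyStopping(patience=3)  # patience of 3
model.compile(optimizer=keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, batch_size=32, epochs=50,
          validation_split=0.15, callbacks=[early_stop])
```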
1. Simple Baselines: We construct simple baseline classifiers [27]: Logistic Regression (LR) and Support Vector Machines (SVM). The input to these models is constructed by aggregating the $300$-dimensional word embeddings of the words in each review.
2. CNN: A standard Convolutional Neural Network inspired by Kim, 2014 [28] is constructed with the following architecture (a Keras sketch of this baseline follows the list): $Word\ Embeddings(T_{x},300)\rightarrow Conv1D(128,5)\rightarrow MaxPooling1D(5)\rightarrow Conv1D(128,5)\rightarrow MaxPooling1D(5)\rightarrow Conv1D(128,5)\rightarrow GlobalMaxPooling1D()\rightarrow Dense(128)\rightarrow Dense(2)\rightarrow y$. This is combined with dropouts, relu activations, and a final softmax activation producing labels for binary classification. State-of-the-art deep learning methods in existing social media mining approaches for crisis analytics [6, 5] use a similar architecture.
3. BiLSTM: This is the bottom-most layer in Figure 2, with the activation $a^{<T_{x}>}$ passed through $Dense(10)\rightarrow Dense(2)\rightarrow y$, also including dropouts, relu activation, and a final softmax.
4. AMN and HATN: AMN [12] and HATN [13] are attention-based methods that use gradient reversal to perform domain adversarial training on the unlabeled data from the source and target domains. HATN extends AMN by adding the hierarchical component and jointly training pivot and non-pivot networks.
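Below is a hedged Keras sketch of the CNN baseline from item 2, using $T_{x}=200$ (the Amazon setting; the tweet setting $T_{x}=30$ would require `padding='same'` for these layer sizes to fit). The dropout placement and rate are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

T_x, EMB = 200, 300  # Amazon reviews: 200 tokens, 300-d word embeddings

cnn = keras.Sequential([
    layers.Input(shape=(T_x, EMB)),           # pre-computed word embeddings
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(5),
    layers.Conv1D(128, 5, activation="relu"),
    layers.MaxPooling1D(5),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.4),                      # assumed placement and rate
    layers.Dense(2, activation="softmax"),    # binary labels via softmax
])
```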
Inputs to all the models are word
vectors444https://code.google.com/archive/p/word2vec/ [29]. The evaluation on
Amazon reviews shows how well the single-task (ST) model performs compared to
the existing top-performing domain adaptation models on a benchmark dataset.
Table III shows accuracy scores on the Amazon cross-domain sentiment analysis
dataset. HATN uses unlabeled target data, gradient reversal, explicit pivot
extraction, and joint training, making it a computationally expensive method.
As shown in the experimental evaluation, we use the same Amazon dataset and
GoogleNews word vectors for our experiments. ST, being unsupervised with no
need for unlabeled target data, performed competitively with an overall
accuracy of 85.02%, thus establishing a strong, fully unsupervised building
block for us to build upon.
### V-A Crisis Tweets vs Amazon Reviews
Tables III and IV show that deep models struggle with small datasets such as
TREC-IS tweets. While the ST model outperformed Logistic Regression by $\sim
8\%$ on the Amazon reviews dataset, the difference was less than $1\%$, with
no statistical significance, on the TREC-Priority dataset. Note that we
conduct experiments with various parameter combinations on the deep models
when using tweets; for example, $T_{x}=200$ for Amazon reviews and $T_{x}=30$
for tweets, due to the difference in their average word length. The Books
domain of Amazon reviews averages $182$ tokens per review with a vocab size of
$105920$. On the other hand, the event with the highest number of tweets in
the TREC dataset (Paris Attacks) averages only 18.62 tokens per tweet with a
vocab size of $4152$. This difference makes it intuitively challenging to
train deep models, whose many parameters may lead the model to memorize the
entire dataset, resulting in poor generalization. Multi-task learning and
domain adversarial training try to alleviate this problem by training the
shared BiLSTM layer with much more data from different tasks and with
unlabeled data.
### V-B MT-DAAN Performance Evaluation
The primary purpose of the MT-DAAN model is to show that sharing the bottom
layer of the model (i.e., the shared representation) across different tasks,
along with domain adversarial training, can help improve the generalizability
of some of the tasks that are otherwise trained alone in the single-task
model. The experiments for MT-DAAN are set up in the same unsupervised way as
for the single-task model: no data from the test crisis is used for training.
For example, if we are testing our model on the event ‘Typhoon Yolanda’, no
data from this crisis is used for training. Note that the domain classifier
component uses unlabeled data only from the remaining crises, making it a
fully unsupervised domain adaptation approach. Performance scores of the four
tasks (Priority, Factoid, Sentiment, and Irrelevant) are shown in Table V. The
results show clear performance improvements for the Priority, Factoid, and
Irrelevant tasks. However, the Sentiment task did not show significant
improvement. We speculate that this is because the other tasks do not
generalize the bottom layer enough to boost sentiment classification
performance. These results show the usefulness of multi-task learning as well
as domain adversarial training, where different tasks in multiple domains help
each other when data is sparse and labels are limited.
Figure 3: Examples of interpretable results using attention; the darker the
shade, the higher the attention. Recall that no data from the test crisis
event is used for training the model. Even then, relevant keywords such as
‘police urging’, ‘death toll rises’, ‘worried’, and ‘thoughts with people’ are
correctly picked up by the attention layers of their respective tasks.
### V-C Word Vectors
We use fastText [30] as our word embeddings for tweets because of its sub-word
modeling and its ability to create vectors for arbitrary, out-of-vocabulary
words. Although many alternatives exist, picking the one that works well for a
specific dataset is not trivial. We conducted experiments using four choices
of word embeddings: fastText [30], GoogleNews [29], Glove [31], and CrisisNLP
[32]. Averaging over 10 crises, we obtained the following accuracy scores (in
%), respectively, for the above word embeddings: {$80.20$, $81.82$, $81.88$,
$80.73$}. Unlike with fastText, we fine-tune these pre-trained vectors to
create vectors for out-of-vocabulary words; vectors for words that are already
in the vocabulary are locked while tuning, for consistency in evaluation. The
tweet-based embeddings such as Glove and CrisisNLP did not significantly
outperform the other models. Glove vectors are 200-dimensional while the rest
are 300-dimensional, which biases the experiment in favor of Glove word
vectors. This experiment shows that finding a strictly superior word vector
model for tweets remains a challenging task.
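A minimal sketch of the out-of-vocabulary behavior that motivates this choice, using the `fasttext` Python package; the model file name assumes the official pre-trained English vectors.

```python
import fasttext  # pip install fasttext

# fastText composes vectors from character n-grams, so even unseen,
# tweet-style tokens receive a vector. Model file name is an assumption.
ft = fasttext.load_model("cc.en.300.bin")
vec = ft.get_word_vector("prayersfornepal")  # OOV hashtag-like token
print(vec.shape)  # (300,)
```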
Figure 4: Examples of interpretable results using attention for relevancy
prediction of COVID-19 tweets. With $77\%$ accuracy, although the highly
attended words in the ‘Relevant’ tweets provide some intuitive sense of
interpretability, the highlighted words in the ‘Irrelevant’ tweets are
somewhat ambiguous because it is unclear if those words are chosen due to
their specific or generic nature. This shows both the benefits and challenges
of unsupervised and interpretable domain adaptation.
### V-D Interpretability: Attention Visualization
The attention weights used to create the context vector by the dot product
operation with word activations represent the interpretable layer in our
architecture. These weights represent the importance of each word in the
classification process. Some examples are shown in Figures 3 and 4. The
stronger the color intensity, the stronger the word attention. In the first
example, ‘boston police urging’ is the reason why the tweet is classified as
$+$ve priority. Similarly, ‘death toll rises’ in the Factoid example,
‘worried, prayers’ in the Sentiment example, and ‘thoughts with people’ in the
Irrelevant example are clear, intuitive indicators of $+$ve predictions. These
examples show the importance of having interpretability as a key criterion in
crisis domain adaptation tasks for social media.
To the best of our knowledge, in social media mining for crisis analytics,
there does not exist a ground truth dataset that highlights the words that
explain the labels for tweets. Using our model as a guide, we hope to build a
robust evaluation dataset as our immediate next step so that the models can be
quantitatively evaluated using robust trust-evaluation methods such as LIME
[33]. It is also crucial to note that binary classification tasks such as
sentiment analysis of Amazon reviews have a clear class divide that produces
intuitive keywords such as ‘good’, ‘excellent’, or ‘great’ for $+$ve reviews
and ‘bad’, ‘poor’, or ‘horrible’ for $-$ve reviews. However, for short texts
such as the tweets shown in Figure 4, ‘relevancy’ can depend on the context,
and it is unclear which keywords truly represent the examples in the
‘irrelevant’ class.
## VI COVID-19 Use-Case
We show a practical implication of our work by applying it to the COVID-19
tweets described in Section IV-C. Our goal is to interpretably predict whether
a COVID-19 tweet is relevant or not: a binary classification task. The models
are trained using only the TREC dataset and evaluated on the COVID-19 tweets
(a balanced subset of size $637$ for the $+$ve and $-$ve labels). We found
that a combination of the ‘Priority’ and ‘Irrelevant’ labels from TREC
performs better at predicting COVID-19’s ‘Relevant’ label (this can be
trivially verified by constructing two binary classifiers). We augment all
three methods (ST, ST-DAAN, and MT-DAAN) with an additional condition before
label prediction: $R_{c}=P_{t}\cap\overline{I_{t}}$, which means that a
COVID-19 tweet is ‘Relevant’ only if it is predicted both ‘Priority’ = $1$ and
‘Irrelevant’ = $0$. The scores are reported in Table VI and the attention
results are shown in Figure 4, demonstrating the effectiveness of our proposed
method.
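The rule $R_{c}=P_{t}\cap\overline{I_{t}}$ amounts to a single Boolean combination of the two classifiers' outputs, as in this sketch (the example predictions are hypothetical):

```python
import numpy as np

def covid_relevant(priority_pred, irrelevant_pred):
    """R_c = P_t AND (NOT I_t): a tweet is 'Relevant' only if the Priority
    classifier predicts 1 and the Irrelevant classifier predicts 0."""
    return np.logical_and(priority_pred == 1, irrelevant_pred == 0).astype(int)

# Hypothetical predictions for three tweets:
print(covid_relevant(np.array([1, 1, 0]), np.array([0, 1, 0])))  # -> [1 0 0]
```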
## VII Conclusion
We presented a novel approach of unsupervised domain adaptation with multi-
task learning to classify relevant information from Twitter streams for crisis
management, while addressing the problems of data sparsity and limited labels.
We showed that a multi-task learning model that shares the lower layers of the
neural network, with dedicated attention layers for each task and a domain
classifier branch, can help improve the generalizability and performance of
deep models in settings with limited data. Furthermore, we showed that using
an attention-based architecture can help in interpreting the classifier’s
predictions by highlighting the important words that justify the predictions.
We also presented an in-depth empirical analysis of the state-of-the-art
models on both benchmark dataset of Amazon reviews and TREC dataset of crisis
events. The application of our generic approach for interpretable and
unsupervised domain adaptation within a multi-task learning framework can
benefit social media mining systems in diverse domains beyond crisis
management.
Reproducibility: Source code and instructions for deployment are available at
https://github.com/jitinkrishnan/Crisis-Tweet-Multi-Task-DA.
## References
* [1] C. Castillo, _Big crisis data: social media in disasters and time-critical situations_. Cambridge University Press, 2016.
* [2] M. Imran, P. Mitra, and C. Castillo, “Twitter as a lifeline: Human-annotated twitter corpora for nlp of crisis-related messages,” _arXiv preprint arXiv:1605.05894_ , 2016.
* [3] H. Li, D. Caragea, C. Caragea, and N. Herndon, “Disaster response aided by tweet classification with a domain adaptation approach,” _Journal of Contingencies and Crisis Management_ , vol. 26, no. 1, pp. 16–27, 2018.
* [4] S. Vieweg, C. Castillo, and M. Imran, “Integrating social media communications into the rapid assessment of sudden onset disasters,” in _International Conference on Social Informatics_. Springer, 2014, pp. 444–461.
* [5] D. T. Nguyen, K. A. A. Mannai, S. Joty, H. Sajjad, M. Imran, and P. Mitra, “Rapid classification of crisis-related data on social networks using convolutional neural networks,” _arXiv preprint arXiv:1608.03902_ , 2016.
* [6] F. Alam, S. Joty, and M. Imran, “Domain adaptation with adversarial training and graph embeddings,” _arXiv preprint arXiv:1805.05151_ , 2018.
* [7] R. Mazloom, H. Li, D. Caragea, C. Caragea, and M. Imran, “A hybrid domain adaptation approach for identifying crisis-relevant tweets,” _International Journal of Information Systems for Crisis Response and Management (IJISCRAM)_ , vol. 11, no. 2, pp. 1–19, 2019.
* [8] J. Blitzer, R. McDonald, and F. Pereira, “Domain adaptation with structural correspondence learning,” in _Proceedings of the 2006 conference on empirical methods in natural language processing_ , 2006, pp. 120–128.
* [9] S. J. Pan, X. Ni, J.-T. Sun, Q. Yang, and Z. Chen, “Cross-domain sentiment classification via spectral feature alignment,” in _Proceedings of the 19th international conference on World wide web_. ACM, 2010, pp. 751–760.
* [10] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” _Journal of machine learning research_ , vol. 11, no. Dec, pp. 3371–3408, 2010.
* [11] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” _The Journal of Machine Learning Research_ , vol. 17, no. 1, pp. 2096–2030, 2016.
* [12] Z. Li, Y. Zhang, Y. Wei, Y. Wu, and Q. Yang, “End-to-end adversarial memory network for cross-domain sentiment classification.” in _IJCAI_ , 2017, pp. 2237–2243.
* [13] Z. Li, Y. Wei, Y. Zhang, and Q. Yang, “Hierarchical attention transfer network for cross-domain sentiment classification,” in _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018.
* [14] R. Caruana, “Multitask learning,” _Machine learning_ , vol. 28, no. 1, pp. 41–75, 1997.
* [15] S. Ruder, “An overview of multi-task learning in deep neural networks,” _arXiv preprint arXiv:1706.05098_ , 2017.
* [16] M. Long and J. Wang, “Learning multiple tasks with deep relationship networks,” _arXiv preprint arXiv:1506.02117_ , vol. 2, p. 1, 2015.
* [17] S. Ruder, J. Bingel, I. Augenstein, and A. Søgaard, “Sluice networks: Learning what to share between loosely related tasks,” _stat_ , vol. 1050, p. 23, 2017.
* [18] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y.-Y. Wang, “Representation learning using multi-task deep neural networks for semantic classification and information retrieval,” 2015.
* [19] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” _arXiv preprint arXiv:1409.0473_ , 2014.
* [20] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in _Advances in neural information processing systems_ , 2014, pp. 3104–3112.
* [21] M. Schuster and K. K. Paliwal, “Bidirectional recurrent neural networks,” _IEEE Transactions on Signal Processing_ , vol. 45, no. 11, pp. 2673–2681, 1997.
* [22] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural computation_ , vol. 9, no. 8, pp. 1735–1780, 1997.
* [23] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Gated feedback recurrent neural networks,” in _International conference on machine learning_ , 2015, pp. 2067–2075.
* [24] J. Krishnan, H. Purohit, and H. Rangwala, “Diversity-based generalization for neural unsupervised text classification under domain shift,” _ECML-PKDD_ , 2020.
* [25] J. Blitzer, M. Dredze, and F. Pereira, “Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification,” in _Proceedings of the 45th annual meeting of the association of computational linguistics_ , 2007, pp. 440–447.
* [26] R. Pandey and H. Purohit, “Citizenhelper-adaptive: expert-augmented streaming analytics system for emergency services and humanitarian organizations,” in _2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)_. IEEE, 2018, pp. 630–633.
* [27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine Learning in Python ,” _Journal of Machine Learning Research_ , vol. 12, pp. 2825–2830, 2011.
* [28] Y. Kim, “Convolutional neural networks for sentence classification,” _arXiv preprint arXiv:1408.5882_ , 2014.
* [29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in _Advances in neural information processing systems_ , 2013, pp. 3111–3119.
* [30] T. Mikolov, E. Grave, P. Bojanowski, C. Puhrsch, and A. Joulin, “Advances in pre-training distributed word representations,” in _Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)_ , 2018\.
* [31] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in _Empirical Methods in Natural Language Processing (EMNLP)_ , 2014, pp. 1532–1543. [Online]. Available: http://www.aclweb.org/anthology/D14-1162
* [32] M. Imran, P. Mitra, and C. Castillo, “Twitter as a lifeline: Human-annotated twitter corpora for nlp of crisis-related messages,” in _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_. Paris, France: European Language Resources Association (ELRA), may 2016.
* [33] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_ , 2016, pp. 1135–1144.
# Self-Sovereign Identity for IoT environments: A Perspective
Geovane Fedrecheski1, Jan M. Rabaey4, Laisa C. P. Costa1,
Pablo C. Calcina Ccori1, William T. Pereira1, Marcelo K. Zuffo1
{geovane, laisa}<EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
This research was partially funded by CAPES.
1Interdisciplinary Center on Interactive Technologies, Polytechnic School, University of Sao Paulo, Brazil
4Berkeley Wireless Research Center, Electrical Engineering and Computer Science Department, University of California, Berkeley, US
###### Abstract
This paper analyses the concept of Self-Sovereign Identity (SSI), an emerging
approach for establishing digital identity, in the context of the Internet of
Things (IoT). We contrast existing approaches for identity on the Internet,
such as cloud-based accounts and digital certificates, with SSI standards such
as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs). To the
best of our knowledge, this is the first thorough comparison of these
approaches. The benefits and challenges of using DIDs and VCs to identify and
authenticate IoT devices and their respective users are discussed. In the end,
we establish that SSI, with its owner-centric, privacy-aware, and
decentralized approach, provides a viable and attractive option for secure
identification of IoT devices and users.
## I Introduction
The Internet was developed as a research project to interconnect computers
[1]. Protocols like TCP/IP, developed as open standards, allowed computers to
connect on a global scale. However, even after the world-changing impact the
Internet has had on society over the last decades, it has no pervasive,
privacy-preserving, and easy-to-use mechanism for managing digital identities.
Where human activity is involved, a common abstraction is the account, i.e. a
digital record, often containing personally identifiable information (PII),
that is protected by a password and saved on a webserver. Although this method
has been working for several decades, it has many security drawbacks, such as
the use of weak passwords [2] and the potential for privacy violations.
Furthermore, the manual nature of password-protected accounts makes them
unsuitable for machine-to-machine interactions, a common scenario in the IoT.
More automated solutions can be achieved by using Public Key Certificates
(PKCs) that bind names to public keys [3]. Widespread use of PKC, however, is
limited to organizations, due to the complexity of current methods. For
instance, while websites usually prove their identities to web browsers using
certificates, users do not use certificates in the same way, i.e. to prove
their identity to the website. Moreover, existing standards were not designed
for privacy, as evidenced by the use of real names in known certificate
formats such as PGP [4] and X.509 [5]. To aggravate the situation, the
assignment of unique names often requires centralized architectures, which are
inadequate for distributed IoT applications.
A recent development towards online identification of users, organizations,
and devices has been referred to as “Self-Sovereign Identity” (SSI). The basic
premise of SSI is that subjects should own and control their own identity,
instead of having it stored and managed by a third party. This approach brings
several benefits, including enhanced privacy, control, and decentralization.
Two new standards are being proposed to realize SSI, namely, Decentralized
Identifiers (DIDs) and Verifiable Credentials (VCs) [6, 7]. While DIDs focus
on cryptographic identification, VCs provide a means for privacy-aware and
authenticated attribute disclosure.
In this paper we analyze existing approaches to identity in the Internet, such
as X.509, PGP [4], and SSI. We present a detailed comparison focusing on the
data models used to represent identity across different standards. Finally, we
discuss the benefits of using SSI in the Internet of Things and identify
challenges that must be overcome.
## II Self-Sovereign Identity
Self-Sovereign Identity is an approach in which subjects are in full control
of their own digital identities [8]. SSI is analogous to offline identifiers,
which are carried by the owner (within a physical wallet), but contrasts with
current digital identity solutions, which are either based on accounts or
digital certificates, and have privacy and centralization issues.
While initially proposed by members [8] of online communities, a formal
definition of SSI was released recently [9]. Considering an identity to be
composed of an identifier associated with a set of name-value attributes, the
full self-sovereign identity of an individual is the collection of all
identities (i.e. identifiers and attributes) that span a range of
decentralized domains, such that the individual is in full control of these
identities [9]. As digital privacy concerns have been growing in recent years,
interest in SSI has intensified. This led to the definition of a set of
technical specifications to implement SSI, which we describe below.
### II-A Decentralized Identifiers
Digital identifiers so far have been either centralized or non-resolvable. For
example, Uniform Resource Locators (URLs), which can be used to resolve HTML
documents, usually depend on domain names assigned by ICANN111Internet
Corporation for Assigned Names and Numbers - https://www.icann.org/, a
centralized authority. On the other hand, unique, user-generated identifiers
such as UUIDs cannot be used to resolve associated metadata.
To address this, a new specification for Decentralized Identifiers (DIDs) is
being developed with the support of the W3C [6]. The DID has the following
syntax: did:btcr:abcdefgh12345678. The did prefix is mandatory, and colons are
used to separate a method definition and a method-specific id. A method is a
specific set of rules for working with DIDs (the example above uses the
Bitcoin method), and the format of the id depends on that method. An open
directory of different DID methods is available for public access and open for
new submissions222https://w3c-ccg.github.io/did-method-registry/.
Each DID is associated with a DID Document (DDo) that contains the DID itself
along with public keys, service endpoints, and other metadata. The public key
is used to authenticate and encrypt messages, while the endpoint provides a
way to message the entity that controls that DID. To control a specific DID, a
subject just has to own a private key associated with a public key in the DDo.
A common storage mechanism for DDos is a Blockchain, from which they can be
resolved using the referred DID. On the other hand, in some cases individuals
may not want to publish their DIDs, e.g. to avoid identity correlation. In
this case, the special peer DID method can be used. Thus, DIDs are unique
identifiers that can be resolved to DID Documents, and they allow the
establishment of an end-to-end secure channel. What DIDs do not provide,
however, is a means for entities to prove claims (attributes) about
themselves.
### II-B Verifiable Credentials
Verifiable Credentials (VCs) is a W3C recommendation for portable and provable
claims about a subject. For instance, a person may claim to have the name
Alice, and a device may claim to be of type Camera. The relationship among
DIDs and VCs is shown in Figure 1. All VCs refer to the DID of the subject to
which they have been assigned (e.g. an IoT device). VCs also contain the DID
of its issuer along with a cryptographic proof. This allows a subject to
present a VC to a verifier, which can then resolve the DDo of the issuer (and
therefore its public key) from a public ledger, e.g. a Blockchain, and check
the authenticity of the VC. Figure 2 shows a use case where a user issues a VC
to a device.
A major incentive for SSI is privacy; therefore, VCs are expected to be
private, stored in a personal wallet, and shared only when necessary. To further
improve privacy, the VC specification supports zero-knowledge proofs, i.e. a
cryptographic technique “where an entity can prove to another entity that they
know a certain value without disclosing the actual value” [7].
Figure 1: A DID is the link between a DDo and a set of VCs, much like a
primary key can link different tables in a database. This allows a subject
associated with a DID to prove its identity. Figure 2: An owner-centric
scenario using SSI. Each subject generates its own DDo, while the VC is issued
by the device owner.
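A minimal illustrative VC for the Figure 2 scenario, where an owner asserts a device-type claim about her device, might look as follows; field names loosely follow the VC data model [7] and all values are placeholders.

```python
# Illustrative Verifiable Credential; all values are fabricated placeholders.
vc = {
    "@context": "https://www.w3.org/2018/credentials/v1",
    "type": ["VerifiableCredential"],
    "issuer": "did:example:owner-alice",         # the device owner's DID
    "issuanceDate": "2020-03-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:device-cam42",        # the device's DID
        "type": "Camera",
        "owner": "did:example:owner-alice",
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "jws": "eyJhbGciOiJFZERTQSJ9..placeholder",  # issuer's signature
    },
}
```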
### II-C Decentralization, privacy, and layered authentication
Public key cryptography can be used to derive a shared secret over an insecure
channel [10]. However, a known problem is how to trust the origin of the
public key. To solve this, a signed certificate that binds a name to a public
key was proposed [3]. Two common standards for digital certificates are X.509
[5, 11] and Pretty Good Privacy (PGP) [4]. Although they differ in details,
both follow the original definition in which names are tied to public keys and
signed by a third party [3].
A crucial challenge faced by certificate-based solutions was ensuring the
uniqueness of the names. The most common solution to this was to rely on
centralized architectures. For example, the name on the subject field in X.509
must be enforced by a global authority, and the PGP id uses the name of a
person plus her email address, which ultimately depends on DNS, which is
centralized as well.
More recently, the emergence of Blockchain technology allows decentralized
consensus for choosing unique names. One problem, however, is that solutions
based on certificates put sensitive information in the identifier, which
compromises the privacy of certificate holders, and therefore might not be
suitable for storage in public, immutable ledgers.
An approach to solve this is to limit the exposure of PII on the ledger by
only writing anonymous information to it, e.g., public keys. In particular,
this approach enables public key storage and lookup, which can be used to
create a confidential and non-repudiable channel. Higher-level abstractions
can then be used to implement authentication, since the attributes necessary
to authenticate users are usually application-specific.
This is the solution that results from combining the DID and VC
specifications. Containing only pseudonymous information, such as public keys
and service endpoints, DID Documents can be used to establish a
cryptographically secure channel between two entities. After the confidential
channel is created, the entities can exchange VCs, according to the levels of
trust necessary to each application. In other words, while DIDs are lower-
level and pseudonymous, VCs are application-specific and can be used to
authenticate attributes such as name or device type. Finally, it is worth
noting that as each DID is usually a high-entropy random string, name
collisions actually stop being a concern.
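A sketch of the lower, cryptographic layer using the Python `cryptography` package: once each peer knows the other's DDo public key, a Diffie-Hellman exchange plus a key derivation function yields a shared session key. The choice of X25519 and the HKDF parameters are assumptions for illustration.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()   # key published in Alice's DDo
device = X25519PrivateKey.generate()  # key published in the device's DDo

# Both sides derive the same session key from the shared DH secret.
shared = alice.exchange(device.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"did-channel").derive(shared)
assert session_key == HKDF(algorithm=hashes.SHA256(), length=32,
                           salt=None, info=b"did-channel").derive(
                               device.exchange(alice.public_key()))
```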
## III Data models for digital identity
This section provides analysis and comparisons of existing data models for
digital identity. We start by discussing the limitations of password-based
accounts, and then proceed to compare data models based on public key
cryptography.
### III-A Accounts
The most basic method to identify subjects in computer systems is the account:
a digital record, usually composed of at least a user name and a password,
that identifies a user. Accounts are commonly stored in a server controlled by
the service provider. For example, popular IoT vendors require that a device
owner have a cloud-based account, so that she can use this virtual identity to
configure her devices.
While accounts have been used for decades in a variety of systems, they are
among the most primitive solutions for digital identities. Among the problems
related to account-based authentication are privacy and the use of passwords.
With respect to privacy, issues arise because the user is forced to store
plaintext PII in a third-party system. Regarding passwords, the literature
indicates common problems such as password reuse and difficulty to enforce
strong passwords, and points that the most widespread solution is the use of
“recommendations” [2], which depends on human factors and are difficult to
enforce.
TABLE I: Comparison of standardized data models for digital identity. Columns are grouped by standard: PGP (PGP Key), X.509 (PKC and AC), and Self-Sovereign Identity (DDo and VC).
 | PGP Key | Public Key Certificate (PKC) | Attribute Certificate (AC) | DID Document (DDo) | Verifiable Credential (VC)
---|---|---|---|---|---
Goal | Prove control of public keys and identifier (plus optional attributes); publish public keys | Prove control of public keys and identifier (plus optional attributes); publish public keys | Prove possession of attributes | Prove control of identifier; publish public keys and service endpoints | Prove possession of attributes
Identifier | Name and Email | Qualified Name | Same as PKC | Method-specific DID | Same as DDo
Uniqueness of identifier | Global authority (DNS) | Global authority (CA) | Same as PKC | Ledger consensus / Random number gen. | Same as DDo
Public Key(s) | 1 primary, N subkeys | 1 | n/a (points to PKC) | N | n/a (points to DDo)
Attribute(s) | Attributes field | Extensions field | Attributes field | - | subjectCredential field
Endorsement | Signature of many peers | Signature of a CA | Signature of a CA | Self-signed (optional); indirect through VC | Signature of an Issuer
Service endpoints | - | - | n/a | Yes | n/a
Semantic schemas | - | - | - | Yes | Yes
### III-B Models based on public key cryptography
Pretty Good Privacy (PGP) [4] was created to allow individuals to prove a
binding between a public key and an identifier, the latter being composed of a
real name and an email address. This binding, along with optional attributes
and signatures, is stored in a document called a PGP Key. Conceived as a
distributed solution, individuals in the PGP scheme can sign the keys of other
individuals, so as to endorse that they are who they say they are, i.e. that
they are not impersonating someone or using a fake id. This scheme of peer
signatures is often referred to as the Web of Trust.
X.509 Certificates, created by the X.500 working group, define a format for
Public Key Certificates (PKC) that binds public keys to qualified names [5].
PKCs are widely used on the Internet to authenticate domain names and protect
communications. Although technically nothing prevents peer-to-peer signing of
X.509 certificates, the vast majority of their usage is under centralized
architectures, in which a trusted authority signs the certificate to make it
trustworthy. Finally, in certain cases it is useful to have a separate
document that, instead of holding a public key, contains only a name
associated with signed attributes. To meet this demand, X.509 proposed a new
standard called the Attribute Certificate (AC), which contains no public key,
but links to a PKC through its subject field [11].
Finally, as previously mentioned, Self-Sovereign Identity is a novel approach
that uses Decentralized Identifiers [6] and Verifiable Credentials [7] to
prove possession of identifiers and attributes, respectively.
### III-C High-level comparison
The following paragraphs compare the models used by the PGP, X.509, and SSI
standards, according to Table I.
#### Goal
Both PGP Keys and PKCs are used to publish and prove control of public keys
that are tied to identifiers. In these approaches, attributes can be provided
either in the same document as the public keys (PGP Key and PKC) or, in the
case of X.509, in a separate document (AC). On the other hand, documents in
the SSI paradigm have decoupled goals: DDos are used to prove control of an
identifier and to provide a means for establishing secure communication, while
VCs are used to prove possession of attributes.
#### Identifier (and uniqueness)
While PGP and X.509 use names and other identifiers that depend on centralized
entities, in SSI the identifiers are completely decentralized and can be
auto-generated, for example by using strong random number generators. Not only
does this make global uniqueness easy, but the pseudonymous character of DIDs
also enhances privacy when compared to previous approaches based on real names
or email addresses. Pseudonymous identifiers are also better suited for IoT,
since devices do not have names or email addresses by default.
#### Public Key(s)
PKCs are limited to only one public key, while PGP Keys and DDos can have
many. PGP still differs from DDos as the former uses a primary key that is
tied to an identifier and allows more subkeys to be included, while the latter
support multiple public keys without assumptions other than the key type,
which usually encodes its purpose, e.g. sign or encrypt.
#### Attribute(s)
Both PGP Keys and X.509 certificates support arbitrary attributes, either via
PKC extensions or dedicated ACs. In self-sovereign identity, a DDo does not
support attributes in order to stay anonymous. Instead, all PII is handled
only by VCs, which are private by default.
#### Endorsement(s)
PGP Keys can be signed by one or more peers, but X.509 certificates and VCs
can only be signed by a single issuer. DDos are not signed by external
entities, and may be self-signed. When a DDo is written to a ledger, however,
the transaction will be signed, which can be used to attest the validity of
the DDo. Another way of proving endorsement of a DID is to check the signature
of a VC associated with that DID: if the VC is signed by a trusted issuer, the
DID can be trusted. Furthermore, with respect to who can sign the
endorsements, technically it can be anyone, but there are philosophical
differences. X.509, for example, was devised to work within a centralized
architecture, where only trusted authorities can sign certificates. On the
other end of the spectrum, PGP expects peer-to-peer signatures, which
ultimately create a Web of Trust. Finally, VCs do not make strong assumptions
about the network structure, although decentralized approaches, especially
ones based on Blockchain, may be favorable.
#### Service endpoints
a novelty introduced by DDos is the association of a built-in mechanism to
reach the owner of a public key. This facilitates the establishment of secure
interactions between peers, from web to IoT environments.
#### Semantic schemas
only SSI-based data models allow extensibility through semantic annotations
over JSON documents. The main reason for this is that these technologies only
became popular after X.509 and PGP were developed.
### III-D Public key distribution
TABLE II: Comparison of data models for key distribution.
 | Raw Pub Key | PKC | DDo
---|---|---|---
Associates key material to metadata | | X | X
Privacy: no PII disclosed | X | | X
Key rotation does not require re-signing | n/a | | X
Serialization formats | Binary, Base64 | DER, PEM | JSON-LD, JWT
Semantic schemas | | | X
Decentralized: user generates the artifact | X | | X
Decentralized: user carries the artifact | X | X | X
Service endpoint | | | X
TABLE III: Comparison of data models for attributes.
 | PKC | AC | VC
---|---|---|---
Signed attributes about a subject | X | X | X
Key rotation does not require re-signing | | X | X
Identifier differs from key material | X | X | X
Attributes decoupled from key material | | X | X
Selective disclosure | | | X
Zero-knowledge proofs | | | X
Delegation | | X |
Revocation | X | X | X
Serialization formats | DER, PEM | DER, PEM | JSON-LD, JWT
Semantic schemas | | | X
Decentralized: user carries the artifact | X | X | X
Decentralized: Verifier decoupled from Issuer | | | X
An important aspect in the design of systems based on asymmetric encryption is
the data model used to support key distribution. In the following, we compare
three approaches, as shown in Table II: Raw Public Key, Public Key
Certificates, and DID Document.
#### Raw public key
this is the simplest approach, and consists of sharing a public key as a raw
array of bytes, often encoded in an ASCII-compatible format such as base64.
Although this approach is decentralized and discloses no personal information,
it does not allow associated metadata.
#### Public Key Certificate
as previously discussed, PKCs bind a name and other attributes to a public
key, which allows subjects to prove their identity. Created before privacy was
a major concern, X.509 PKCs always carry PII in the main identifier, and may
carry PII in other attributes. Other drawbacks of PKCs include the imposition
of specialized serialization formats (DER and PEM), tight coupling of keys and
data (which makes key rotation more difficult), and a centralized
architecture, i.e. the artifact is not self-generated.
#### DID Document
DDos associate public keys with pseudonymous metadata, while also allowing key
rotation without re-signing any associated metadata. The latter is possible
because all signed metadata actually lives in associated VCs. An important
difference to highlight is that DDos are not signed by third parties, thus
they cannot authenticate the origin of a public key; if this is necessary,
DDos can be composed with VCs to increase security. DDos support JSON-based
serialization formats, which are available in most programming languages and
platforms, and can benefit from publicly available semantic schemas. As each
user auto-generates their own DIDs and DDos, the management of the identifier
is decentralized. Finally, service endpoints in DDos provide a novel way for
peers to establish secure channels.
### III-E Attribute distribution
Four out of the five previously described formats can be used to prove control
over attributes: PGP Keys, Public Key Certificates, Attribute Certificates,
and Verifiable Credentials. Since PGP Keys are less widely used, we compare
only the latter three, as shown in Table III.
#### Public Key Certificates
the encoding of attributes in PKCs leverages the X.509 PKC extension field.
Although the reuse of an existing format may seem advantageous in terms of
compatibility, the whole certificate must be re-signed whenever a key is
rotated or selective disclosure of attributes is necessary. An important
drawback not mentioned so far is that it is impossible to disclose only a
subset of the attributes in a PKC without contacting the issuer for a new
signature.
#### Attribute Certificates
differing from PKCs, ACs contain a name and a list of attributes, but no
public key, which simplifies key rotation. Finally, while ACs support
delegation, in general they have the same drawbacks as PKCs.
#### Verifiable Credentials
similar to an AC, a VC does not contain public keys, as it focuses on binding
identifiers to attributes. Among the novelties in the VC standard is support
for selective disclosure without contacting the issuer, which is realized
using zero-knowledge cryptography. VCs also leverage JSON, a serialization
format that is both human readable and lightweight to parse, and can be
further specialized into two formats: JSON Linked Data
(JSON-LD)333https://json-ld.org/, a format to serialize linked data; and JSON
Web Token (JWT), a widely used format to express security
claims444https://jwt.io/.
## IV Benefits and Challenges of SSI for IoT
As the IoT continues to evolve, new paradigms that allow spontaneous
machine-to-machine interactions have started to appear [12, 13]. Necessarily
decentralized, the future IoT will require users to be the root of trust of
their devices, leading to an owner-centric IoT. As privacy concerns rise in
importance, solutions that minimize personal data sharing become paramount.
Full realization of these and other features will require novel, open, and
secure standards for identity in the IoT. The next paragraphs discuss aspects
of self-sovereign identity that are likely to improve decentralized IoT
security, while also pointing out factors that will require innovation to
bring SSI to the IoT, such as support for constrained devices.
### IV-A Benefits
The benefits of SSI for IoT, such as privacy and decentralization, are
discussed below.
#### Owner-Centric
The user can be the root of trust of her devices. Once a user is the owner and
controller of her identity, it is straightforward to create a network of
devices that belong to her, for example by provisioning an “owner=Alice”
credential to each device. One interesting consequence of this is that no
third party is needed to enforce security and administration of devices, as
the user herself will be able to do it. Note that in this approach devices can
have their own identity as well, and may only use the owner attribute to
facilitate the creation of trust relationships, i.e. devices that share the
same owner can automatically trust each other.
#### Privacy-preserving
Personal information is protected. By having the identity of owners and things
stored locally, sensitive data that would otherwise be stored in a service
provider will now live closer to the owner (usually in a digital wallet).
While the user can choose to back up her data for various reasons, she will be
able to do so in an encrypted way, as only she will possess the decryption
keys. Users and devices will also get to choose with whom they share their
credentials, and even be able to do so employing selective disclosure and
zero-knowledge proofs techniques, further improving privacy.
#### Decentralized
No single-point of failure. While identity providers may have been a
convenient way to authenticate users and devices so far, it is not clear what
happens when a provider stops providing, e.g. when it goes out of business. In
the self-sovereign approach, the user decides when her identity starts or
stops being valid, and she will have similar controls over her devices.
Finally, data breaches, information sharing without user consent, and other
issues are minimized when identities are not stored in a high-value data silo
that acts as a honeypot for hackers.
#### End-to-end security
Communications between two endpoints are secure. By exchanging DID Documents
and applying asymmetric cryptography, IoT devices can mutually authenticate,
derive short-lived symmetric keys, send encrypted messages, and enforce non-
repudiation. This approach can also be implemented in a transport-agnostic
way, enabling secure communication even among different protocols.
#### Layered authentication
Separates cryptographic and application-specific authentication. In the
former, two devices prove to each other that they are in possession of
specific public keys, while in the latter the devices prove different
attributes about themselves. This approach allows endpoints to always be
cryptographically protected, and leaves higher-level trust requirements to be
handled at the application layer.
#### Standardized and open approach
Fosters interoperability and robustness. Since both DIDs and VCs are being
developed as open W3C specifications, companies and researchers are free to
build solutions that are interoperable and rely on well-tested data models.
#### JSON-based encoding
Using JSON enables more applications to handle data extracted from DID
Documents and credentials, even if not originally designed to work with SSI.
### IV-B Challenges
We now discuss some challenges to apply SSI in IoT environments.
#### Constrained devices
Fully adopting SSI means that devices need to be able to run asymmetric
cryptography and cope with the communication overhead of transmitting
metadata, such as DID Documents and Verifiable Credentials.
#### Asymmetric Cryptography
SSI demands the execution of encryption algorithms based on asymmetric keys,
which can be challenging on devices with limited processing and energy
resources. While authors point out that constrained processors such as the
32-bit Cortex M0 are well equipped to execute Elliptic Curve Cryptography
(ECC) [14], the number of operations still must be controlled to avoid
draining the battery. A common tactic is to use long-lived session keys that
are less frequently updated, e.g. once a day.
#### Communication overhead
Depending on the communication protocol, the size of DDos and VCs may impose a
barrier. For example, low-energy protocols such as LoRa and BLE have maximum
packet sizes of 222 and 244 bytes, respectively, while DDos and VCs easily
reach 500 bytes or more. Therefore, strategies such as compression,
fragmentation, and infrequent document transmission will be necessary. In
extreme cases, SSI may not be possible at all, which will require proxy
approaches [15].
#### DID Resolution
Highly constrained devices may not be able to connect to the Internet to
download DID Documents at all. A possible solution is to create a local cache
of known DIDs, either managed by the device itself or by its gateway (see the
sketch below). On the other hand, if both devices use peer DIDs, they can
simply exchange their DIDs directly, which shifts the problem to securely
delivering the DIDs in the first place.
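A minimal sketch of such a cache; `resolve_from_ledger` is a hypothetical resolver function, not part of any specific SSI library.

```python
class DIDCache:
    """Local DDo cache for constrained deployments: hit the ledger
    (or the gateway's resolver) only on a cache miss."""

    def __init__(self, resolver):
        self._resolver = resolver  # e.g. the hypothetical resolve_from_ledger
        self._store = {}

    def get(self, did):
        if did not in self._store:
            self._store[did] = self._resolver(did)  # network hit
        return self._store[did]

# cache = DIDCache(resolve_from_ledger)
# ddo = cache.get("did:example:123456789abcdefghi")
```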
#### Software availability
The SSI ecosystem is new and there is limited software available for embedded
devices. Given the foundational importance of secure cryptographic algorithms
and protocols, applications based on SSI should rely on existing libraries
that encapsulate complexity and are well tested, which reduces the chances for
vulnerabilities. Although reference implementations exist [16], they are
focused on cloud and mobile use cases. To fully incorporate SSI into IoT,
portable and lightweight libraries tailored for constrained devices must be
created and made widely available.
## V Conclusion and perspective
As the primary motivation for the development of the Internet was to remotely
connect computers, the problem of secure identification of users and devices
was left aside. While identity solutions such as accounts and certificates
were eventually developed, they feature critical issues such as weak
passwords, lack of privacy, and centralization. As it is common for systems to
mature over time, as good (and bad) practices are learned, we argue that the
Self-Sovereign Identity approach represents an important step forward in the
area of digital identity. Particularly in the context of the IoT, this paper
showed how SSI can (1) empower owners to have full control over both their
identities and their devices, (2) improve privacy by decoupling pseudonymous
and sensitive identity records, and (3) allow decentralized identity
management by reducing the dependency on third parties. As for the next steps,
the realization of SSI in the IoT will demand implementations that are
optimized for constrained devices, both for cryptographic operations and low-
power communication. Furthermore, wide adoption of SSI will depend on the
availability of open software libraries to manipulate DIDs and VCs in IoT
devices. To conclude, we argue that, if adopted, SSI may significantly benefit
security and privacy of IoT applications, and potentially enable new use
cases, such as those that involve cross-owner decentralized interactions.
## References
* [1] B. M. Leiner, V. G. Cerf, D. D. Clark, R. E. Kahn, L. Kleinrock, D. C. Lynch, J. Postel, L. G. Roberts, and S. Wolff, “A brief history of the internet,” _ACM SIGCOMM Computer Communication Review_ , vol. 39, no. 5, pp. 22–31, 2009.
* [2] V. Taneski, M. Heričko, and B. Brumen, “Systematic overview of password security problems,” _Acta Polytechnica Hungarica_ , vol. 16, no. 3, 2019.
* [3] L. M. Kohnfelder, “Towards a practical public-key cryptosystem.” Ph.D. dissertation, Massachusetts Institute of Technology, 1978.
* [4] J. Callas, L. Donnerhacke, H. Finney, D. Shaw, and R. Thayer, “Openpgp message format,” Internet Requests for Comments, RFC Editor, RFC 4880, November 2007, http://www.rfc-editor.org/rfc/rfc4880.txt. [Online]. Available: http://www.rfc-editor.org/rfc/rfc4880.txt
* [5] D. Cooper, S. Santesson, S. Farrell, S. Boeyen, R. Housley, and W. Polk, “Internet x.509 public key infrastructure certificate and certificate revocation list (crl) profile,” Internet Requests for Comments, RFC Editor, RFC 5280, May 2008, http://www.rfc-editor.org/rfc/rfc5280.txt. [Online]. Available: http://www.rfc-editor.org/rfc/rfc5280.txt
* [6] M. Sporny, D. Longley, C. Allen, M. Sabadello, and D. Reed, “Decentralized identifiers (DIDs) v1.0,” W3C, W3C Working Draft, Dec. 2019, https://www.w3.org/TR/2019/WD-did-core-20191209/.
* [7] M. Sporny, G. Noble, D. Burnett, B. Zundel, and D. Longley, “Verifiable credentials data model 1.0,” W3C, W3C Recommendation, Nov. 2019, https://www.w3.org/TR/2019/REC-vc-data-model-20191119/.
* [8] C. Allen, “The path for self-sovereign identity,” http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html, accessed: 2020-02-13.
* [9] M. S. Ferdous, F. Chowdhury, and M. O. Alassafi, “In search of self-sovereign identity leveraging blockchain technology,” _IEEE Access_ , vol. 7, pp. 103 059–103 079, 2019.
* [10] W. Diffie and M. Hellman, “New directions in cryptography,” _IEEE transactions on Information Theory_ , vol. 22, no. 6, pp. 644–654, 1976.
* [11] S. Farrell, R. Housley, and S. Turner, “An internet attribute certificate profile for authorization,” Internet Requests for Comments, RFC Editor, RFC 5755, January 2010.
* [12] J. M. Rabaey, “The swarm at the edge of the cloud-a new perspective on wireless,” in _VLSI Circuits (VLSIC), 2011 Symposium on_. IEEE, 2011, pp. 6–8.
* [13] L. C. Costa, J. Rabaey, A. Wolisz, M. Rosan, and M. K. Zuffo, “Swarm os control plane: an architecture proposal for heterogeneous and organic networks,” _IEEE Transactions on Consumer Electronics_ , vol. 61, no. 4, pp. 454–462, 2015.
* [14] Y. Kortesniemi, D. Lagutin, T. Elo, and N. Fotiou, “Improving the privacy of iot with decentralised identifiers (dids),” _Journal of Computer Networks and Communications_ , vol. 2019, 2019.
* [15] D. Lagutin, Y. Kortesniemi, N. Fotiou, and V. A. Siris, “Enabling decentralised identifiers and verifiable credentials for constrained internet-of-things devices using oauth-based delegation,” in _Workshop on Decentralized IoT Systems and Security (DISS)_ , 2019.
* [16] H. Foundation, “Hyperledger aries,” accessed: 2020-02-15. [Online]. Available: https://www.hyperledger.org/projects/aries
# H–He collision-induced satellite in the Lyman-$\alpha$ profile of DBA white dwarf stars
Nicole F. Allard 1,2, John F. Kielkopf 3, Siyi Xu 4, Grégoire Guillon 5, Bilel
Mehnen 6, Roberto Linguerri 6, Muneerah Mogren Al Mogren 7, Majdi Hochlaf 6,
Ivan Hubeny 8
1GEPI, Observatoire de Paris, Université PSL, UMR 8111, CNRS, 61, Avenue de
l’Observatoire, F-75014 Paris, France
2Sorbonne Université, CNRS, UMR7095, Institut d’Astrophysique de Paris, 98bis
Boulevard Arago, PARIS, France
3Department of Physics and Astronomy, University of Louisville, Louisville,
Kentucky 40292 USA
4Gemini Observatory, 670 N. A‘ohoku Place, Hilo, HI 96720, USA
5Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR6303, CNRS, Université
de Bourgogne Franche Comté, 21078 Dijon Cedex, France
6Université Gustave Eiffel, COSYS/LISIS, 5 Bd Descartes 77454, Champs sur
Marne, France
7Chemistry Department, Faculty of Science, King Saud University, PO Box 2455,
Riyadh 11451, Kingdom of Saudi Arabia.
8Department of Astronomy, University of Arizona, 933 N Cherry Ave, Tucson, AZ
85719 USA E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The spectra of helium-dominated white dwarf stars with hydrogen in their
atmosphere present a distinctive broad feature centered around 1160 Å in the
blue wing of the Lyman-$\alpha$ line. It is extremely apparent in WD 1425+540
recently observed with HST COS. With new theoretical line profiles based on ab
initio atomic interaction potentials we show that this feature is a signature
of a collision-induced satellite due to an asymptotically forbidden
transition. This quasi-molecular spectral satellite is crucial to
understanding the asymmetrical shape of Lyman-$\alpha$ seen in this and other
white dwarf spectra. Our previous work predicting this absorption feature was
limited by molecular potentials that were not adequate to follow the atomic
interactions with spectroscopic precision to the asymptotic limit of large
separation. A new set of potential energy curves and electronic dipole
transition moments for the lowest electronic states of the H–He system were
developed to account accurately for the behaviour of the atomic interactions
at all distances, from the chemical regime within 1 Å out to where the
radiating H atoms are not significantly perturbed by their neighbors. We use a
general unified theory of collision-broadened atomic spectral lines to
describe a rigorous treatment of hydrogen Lyman-$\alpha$ with these potentials
and present a new study of its broadening by radiative collisions of hydrogen
and neutral helium. These results enable ab initio modeling of radiative
transport in DBA white dwarf atmospheres.
###### keywords:
(stars:) white dwarfs < Stars - stars: atmospheres < Stars - atomic data <
Physical Data and Processes - atomic processes < Physical Data and Processes -
line: profiles < Physical Data and Processes - molecular data < Physical Data
and Processes
## 1 Introduction
Theoretical studies of the effects of neutral atom collisions on atomic
spectral lines have often been hindered by our ignorance of the atomic
potentials. Even for systems as simple as H-H or H-He, the interactions and
the electric transition moments are quite difficult to compute with the
accuracy which is needed for evaluating a complete line profile. The
fundamental theory of calculating the spectral line profile (Allard et al.,
1999) requires knowledge of molecular potentials with high accuracy because
the shape and strength of the line profile are very sensitive to the details
of the molecular potential curves describing the atom-atom collisions. In
Allard & Christova (2009) we made an exhaustive study of the red wing of
Lyman-$\alpha$ line perturbed by H–He collisions, where we used the potentials
and electric dipole transition moments of Theodorakopoulos et al. (1984) and
Theodorakopoulos et al. (1987). We considered the high He densities met in
cool DZ white dwarfs and examined the range of validity of the one-perturber
approximation widely used to calculate the line wings. We have shown there
that the extension of the red wing of the Lyman-$\alpha$ line seen in DZ white
dwarf spectra depends strongly on the stellar temperature, while it is not
dependent on the helium density. We also predicted a blue satellite which only
very recently has been observed in Hubble Space Telescope Cosmic Origins
Spectrograph (HST COS) observations (Xu et al., 2017). The importance of a
correct determination of the blue wing of Lyman-$\alpha$ line to interpret the
asymmetrical shape of the Lyman-$\alpha$ line observed with COS is presented
in Sect. 2. An accurate prediction of the satellite and consequently the full
Lyman-$\alpha$ profile requires exacting new ab initio calculations to obtain
the ground and first excited potential energy curves and the corresponding
electric dipole transition moments for the H–He system. The new molecular data
in Sect. 3 corroborate the prediction of a line satellite in the
Lyman-$\alpha$ profile (Allard & Christova, 2009) that is described in Sect.
4. In Allard et al. (1999) we previously derived a classical path expression
for a pressure-broadened atomic spectral line shape that includes the effects
of a radiative electric dipole transition moment that is dependent on the
position of the radiating atom and its dynamic neighbors. Such a comprehensive
unified approach employing the precise molecular data is fundamentally
necessary to obtain an accurate absorption line profile that is valid over the
full breadth of spectral line for the range of densities and temperatures
found in stellar atmospheres.
Figure 1: COS observation of WD 1425+540. The broad distinctive collision-
induced satellite in the blue wing of the Lyman-$\alpha$ line about 1160 Å is
clearly visible (Xu et al., 2017). The strong emission at the center of
Lyman-$\alpha$ is from Earth’s geocoronal hydrogen above the HST orbit.
## 2 COS observation of WD 1425+540
WD 1425+540 (T=14,490 K, log g=7.95) is the prototype of DBA white dwarfs and
it is a helium-dominated white dwarf that also has a large amount of hydrogen
in its atmosphere (Bergeron et al., 2011). It was observed with HST COS under
program 13453, and the details of observation and data reduction strategy were
reported by Xu et al. (2017). Here, we focus on the spectrum of segment B of
the G130M grating, which covers 1130-1270 Å, as shown in Fig. 1. As described
in Xu et al. (2017), there are two unusual features of the Lyman-$\alpha$
profile in WD 1425+540. First, the line profile is very asymmetric, exhibiting
an extended blue wing with the satellite feature as noted. Second, previous
white dwarf spectral models cannot reproduce the strength of Lyman-$\alpha$
and Balmer-$\alpha$ simultaneously. The derived hydrogen abundance is more
than a factor of 10 higher from the Lyman-$\alpha$ measurement than from
Balmer-$\alpha$. While WD 1425+540 is the most extreme case so far, these
peculiarities have been observed in other DBA white dwarfs as well, e.g. Jura
et al. (2012).
The asymmetry also could not be produced by white dwarf models of Xu et al.
(2017) because the opacity data used for the Lyman-$\alpha$ profile did not
take into account the quasi-molecular line satellite predicted in Allard &
Christova (2009). Once this feature is included, the observed asymmetry is
reproduced (Gänsicke et al., 2018). The need to have both accurate data for
Lyman-$\alpha$ and for Balmer-$\alpha$ is essential to determine the hydrogen
abundance correctly. The goal of this paper is to develop the foundation of
the atomic and molecular physics needed to compute a complete profile without
making ad hoc assumptions. We emphasize the importance of accurate potentials
and electric dipole transition moment data for this purpose, and here we
provide that data for Lyman-$\alpha$. With new potentials of H-He we also
compute a model DBA white dwarf spectrum that demonstrates their validity.
## 3 H–He diatomic potentials
### 3.1 Methodology and benchmarks
The lowest electronic excited states of hydrogen and helium are at unusually
high energies for neutral atoms (> 10 eV) with respect to their ground states,
and close to the corresponding ionization thresholds. Hydrogen with $n$= 2 or
greater is a Rydberg atom in this sense (Gallagher, 1994).
The excited electronic states of the H–He diatomic system of interest in the
present work correlate adiabatically to these atomic states. Therefore, a
correct description of the electronic states of the H–He diatomic system,
consistent with its isolated atomic fragments, requires the inclusion of
diffuse functions that can flexibly represent the states. In addition,
computing the possible interactions that may occur between these electronic
states, and the subsequent mixing of their wavefunctions that results in an
apparent change in electric dipole transition moments, requires post
Hartree-Fock multi-configurational approaches. More specifically, we used
the Complete Active Space Self Consistent Field (CASSCF) (Knowles & Werner,
1985; Werner & Knowles, 1985) followed by the internally contracted Multi-
Reference Configuration Interaction (MRCI) (Knowles & Werner, 1988; Werner &
Knowles, 1988; Shamasundar et al., 2011) methods as implemented in the MOLPRO
2015 package (Werner et al., 2015). In MRCI, the complete CASSCF wave
functions are used as a reference. Furthermore, the Davidson correction
(MRCI+Q) (Langhoff & Davidson, 1974) has been applied to the resulting
energies to account for the lack of size-consistency of the MRCI method. These
computations were performed in the $C_{2v}$ point group, where the $B_{1}$ and
$B_{2}$ representations were treated on equal footing.
Benchmarks on valence-Rydberg electronic states of other molecular systems
(Spelsberg & Meyer, 2001; Ndome et al., 2008; Hochlaf et al., 2010) showed the
need to use a CASSCF active space larger than the full-valence space. The
atomic basis set for the H and He atoms had to be optimized as well. Thus, we
performed a series of benchmark computations at different levels of accuracy
to find the appropriate states for convergence.
Firstly, at the lowest level of accuracy, we adopted a small active space of 3
electrons in 7 molecular orbitals in conjunction with the aug-cc-pV5Z
(Dunning, 1989; Kendall et al., 1992) basis set. With this approach, we found
inconsistencies in the calculated energies, especially in the asymptotic
region. Indeed, with this simplest choice there is a large energy gap of $\sim
0.45$ eV between the two equivalent dissociation limits H($2p\,^{2}P$) +
He($1s^{2}\,{}^{1}S$) and H($2s\,^{2}S$) + He($1s^{2}\,{}^{1}S$). Obviously,
this gap is unphysical since these two asymptotes should be strictly
degenerate because the two H ($n=2$) states have the same energy apart from
Lamb shift and negligibly small fine and hyperfine structure. Moreover, we
found a spurious second potential well ($D_{e}$ $\sim$ 660 cm-1) in the
$C\,\Sigma$ state of H–He at large internuclear separations (for
$R_{\mathrm{H-He}}$ $\sim$ 4.2 Å). Thus, at this level of accuracy, a rather
poor chemical description of the H–He molecule is obtained in spite of the
relatively large size of the MRCI computations with $\sim$ 4.3 x 104
uncontracted configuration state functions (CSFs) per $C_{2v}$ symmetry. This
may be linked to some missing correlation energy in the MRCI wavefunctions
that can be recovered by means of larger active spaces in the reference CASSCF
vector and by adopting more diffuse atomic basis sets.
Secondly, we tried an enlarged CASSCF active space of 3 electrons in 14
molecular orbitals in conjunction with the aug-cc-pV6Z (Dunning, 1989; Kendall
et al., 1992) basis set. In the subsequent MRCI treatment, the multi-
configuration wave functions included $\sim$ 2.1 x 105 uncontracted CSFs per
$C_{2v}$ symmetry. With this ansatz, the energy difference between the above
mentioned asymptotes was reduced to $\sim$ 0.33 eV but still did not vanish.
For modeling based on unified spectral line shape theory an error of this size
would be unacceptable.
Finally, using the same active space as in the second series of computations,
we added a set of diffuse functions to the aug-cc-pV6Z basis set for H and He.
Hereafter, this enlarged set will be denoted as aug-cc-pV6Z⋆. The exponents of
the added Gaussian primitives, which were left uncontracted, are listed in
Table 1 in the Appendix.
This approach, compared to the previous ones, solved all the inconsistencies
mentioned above. That is, it yielded degenerate H($2p\,^{2}P$) +
He($1s^{2}\,{}^{1}S)$ and H($2s\,^{2}S$) + He($1s^{2}\,{}^{1}S)$ dissociation
limits and no spurious potential well in the $C\,\Sigma$ state. We note that
convergence was reached at this step since a further expansion of the aug-cc-
pV6Z⋆ set by inclusion of more diffuse functions led to almost identical
results. In these calculations, the MRCI wave functions included more than
$7.5\times 10^{5}$ uncontracted CSFs per $C_{2v}$ symmetry species. These
relatively large computations for such a small molecular system were necessary
to obtain the precision needed to model the Lyman-$\alpha$ profile accurately.
Figure 2: Top: short-range part of the potential curves of the H–He molecule:
$A$ (red dotted), $B$ (green dashed line) and $C$ (blue solid). Bottom: $X$
(black solid). Note the agreement at short distance with data of
Theodorakopoulos et al. (1984) that are overplotted in dotted cyan.
Figure 3: Top: long range part of the $C\,\Sigma$ potential curve correlated
with $2s$ state. This work (full line), Theodorakopoulos et al. (1984) (dotted
line). Bottom: $\Delta V(R)$ (black) and $\tilde{d}(R)$ (blue) at 14500 K for
the $C-X$ transition. The atomic separation for the maximum in the $C-X$
difference potential is $R_{\mathrm{max}}\approx 2.2\,$Å as shown in Fig. 4
Note that the $C-X$ transition in this work is forbidden asymptotically as it
is a transition between the $2s$ and $1s$ states of the free hydrogen atom at
large $R$.
### 3.2 Potential energy curves and transition moments
The electronic states investigated in the present contribution correlate, at
large internuclear distances, to the H($1s\;^{2}S$) + He($1s^{2}\;{}^{1}S$),
H($2s\;^{2}S$) + He($1s^{2}\;{}^{1}S$), and H($2p\;^{2}P$) +
He($1s^{2}\;{}^{1}S$) dissociation limits (see Fig. 2 and Table 2 in the
Appendix). The MRCI+Q/aug-cc-pV6Z⋆ potential energy curves of the four lowest
electronic states of H–He, obtained with the largest active space and basis
set as described in the previous section, are represented in Fig. 2 as a
function of the internuclear distance, $R_{\mathrm{H-He}}$. This figure shows
that the ground state possesses a repulsive potential correlating to the
H($1s\,^{2}S$) + He($1s^{2}\,{}^{1}S$) isolated atom asymptote at large
distances.
The ground $X\,^{2}\Sigma^{+}$ state is repulsive at short range with a
shallow well at $4\,$Å. The excited $A\,^{2}\Sigma^{+}$, $B\,^{2}\Pi$ and
$C\,^{2}\Sigma^{+}$ states have rather deep potential wells in the molecular
region closer than 1 Å, and complex behavior at longer range that can affect
transition probabilities and difference potential energies in subtle ways. We
refer to these as the $X\,\Sigma$, $A\,\Sigma$, $B\,\Pi$, and $C\,\Sigma$
states, or more succinctly by the letter designations $X$, $A$, $B$, and $C$
in the following. They correlate adiabatically to the H($n=2$) +
He($1s^{2}\,{}^{1}S$) dissociation limits at large internuclear separations
(see Table 2 in the Appendix). The ordering of the assignments of labels for
the states is with $A\,\Sigma$ the lowest and $C\,\Sigma$ the highest inside
this close 1 Å region with wells in all the states of the order of
$15\,000\;\mathrm{cm}^{-1}$ deep, with minima located at $R_{\mathrm{H-He}}$ =
0.7407, 0.7686, and 0.8095 Å for the $A$, $B$ and $C$ states, respectively
(see Table 3 in the Appendix). While the $A$ and $B$ states have potentials
with a simple short-range well, the $C$ state also exhibits a potential
maximum of $\approx 0.666$ eV at $R_{\mathrm{H-He}}=2.098$ Å. Its presence
causes a related maximum in the $C-X$ transition difference potential energy
curve which affects the blue wing of Lyman-$\alpha$.
Although the $C\,\Sigma$ H-He molecular state shown in Fig. 2 is correlated
asymptotically with the $2s$ atomic state, we find that at
$R_{\mathrm{H-He}}<7\;$Å the transition probability to the $X\,\Sigma$ ground
state is not zero. Detailed electric dipole transition moments between the
$X\,\Sigma$ ground state and the $A\,\Sigma$, $B\,\Pi$ and $C\,\Sigma$ excited
states as a function of the internuclear distance have been calculated at the
MRCI/aug-cc-pV6Z⋆ level. In this calculation almost all the transition moments
are rather large, particularly for the $C\,\Sigma$ $\leftarrow$ $A\,\Sigma$
and $B\,\Pi$ $\leftarrow$ $A\,\Sigma$ transitions, where corresponding matrix
elements of around -9.2 and -7.5 debye (D or $10^{-18}$ statcoulomb-cm) are
calculated, respectively. Fig. 7 in the Appendix offers a detailed view. These
transition moments correlate to the correct atomic values at dissociation. In
particular, the $\langle X\,\Sigma|DM|C\,\Sigma\rangle$ matrix element of the
electric dipole transition moment (DM) vanishes at large $R_{\mathrm{H-He}}$
where the $1s-2s$ transition in the isolated hydrogen atom is forbidden to
one-photon electric dipole transitions by parity conservation.
## 4 Lyman-alpha opacity
The theory of spectral line shapes, especially the unified approach we
developed, determines the contributions of specific spectral lines to stellar
opacities and may be incorporated into stellar atmosphere models to make
accurate synthesis of stellar spectra possible. The line shape theory accounts
for neutral atom broadening and shift in both the centers of spectral lines
and their extreme wings with one consistent treatment without ad hoc
assumptions about the line shape or potentials. Complete details and the
derivation of the theory are provided by Allard et al. (1999). The spectrum,
$I(\Delta\omega)$, is the Fourier transform (FT) of an electric dipole
transition autocorrelation function, $\Phi(s)$. For a perturber density
$n_{p}$, we have
$\Phi(s)=e^{-n_{p}g(s)}\;,$ (1)
where the decay of the autocorrelation function with time leads to atomic line
broadening. (See Eq. (121) of Allard et al. (1999).) Our approach introduces
the concept of a modulated electric dipole transition moment
$\tilde{d}_{if}(R(t))$ into the line shape calculation.
$\tilde{d}_{if}[R(t)]=d_{if}[R(t)]e^{-\frac{V_{i}[R(t)]}{2kT}}\;\;,$ (2)
where the potential energy for the initial state is
$V_{i}(R)=E_{i}(R)-E_{i}^{\infty}\;\;.$ (3)
The difference potential energy $\Delta V(R)$ for a transition $if$ is
$\Delta V(R)=V_{if}(R)=V_{f}(R)-V_{i}(R)\;\;.$ (4)
The Boltzmann factor $e^{-\frac{V_{i}(R)}{2kT}}$ in Eq. (2) appears because
the perturbing atoms or ions are in thermal equilibrium with the radiating
atom which affects the probability of finding them initially at a given $R$.
This treatment results in Lyman series line wing profiles that exhibit a
sensitive dependence on temperature. We had to use electric dipole moments
modulated by the Boltzmann factor in the comparison of emission spectra of
Lyman-$\alpha$ (Kielkopf & Allard, 1998) and Balmer $\alpha$ (Kielkopf et al.,
2002) measured in laboratory.
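To make the Boltzmann modulation of Eq. (2) concrete, the minimal numerical
sketch below evaluates $\tilde{d}_{if}(R)$ on a grid at several temperatures.
The model forms of $d(R)$ and $V_{X}(R)$ are illustrative placeholders only;
the actual inputs are the ab initio curves of Sect. 3.
```python
# Minimal sketch of Eq. (2): the transition dipole moment modulated by the
# Boltzmann factor of the initial-state potential. d(R) and V_X(R) below
# are toy placeholder curves, not the ab initio data of Sect. 3.
import numpy as np

K_B = 0.695035          # Boltzmann constant in cm^-1 per K

def modulated_dipole(d, V_i, T):
    """Eq. (2): d and V_i sampled on a common R grid; V_i in cm^-1, T in K."""
    return d * np.exp(-V_i / (2.0 * K_B * T))

R = np.linspace(0.5, 10.0, 500)        # H-He separation (Angstrom)
d = np.exp(-R / 3.0)                   # toy dipole moment (debye)
V_X = 4.0e4 * np.exp(-R / 0.5)         # toy repulsive ground-state wall

for T in (5000.0, 14500.0, 20000.0):
    d_tilde = modulated_dipole(d, V_X, T)
    # Higher T leaves d_tilde closer to d at small R: energetic collisions
    # reach the repulsive wall, enhancing the transition probability there.
    print(f"T = {T:7.0f} K, d_tilde at R = 0.5 A: {d_tilde[0]:.3e}")
```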
### 4.1 Study of the characteristics of the line satellite
In Allard & Christova (2009) we predicted a line satellite at 1157 Å in
spectra computed for the temperature range of cool DZ white dwarfs with
potentials published in Theodorakopoulos et al. (1984). However, we noticed an
unexpected well of about 150 cm-1 (upper Fig. 3) in the potential energy of
the $C\,\Sigma$ state at $R\sim 8$ Å which may be related to the choice of
basis states and has no clear physical origin. In this work we use the new ab
initio calculations of the potentials over the full range of distances $R$
between the H and He atoms since convergence at large $R$ is now reached. The
long range well of the $C\,\Sigma$ state of Theodorakopoulos et al. (1984) and
Theodorakopoulos et al. (1987) potentials is not found in these new
calculations as we see in Fig. 3.
Figure 4: Top: variation with temperature of the line satellite. The He
density is $1\times 10^{20}$ cm-3, the temperatures are 14500 K (full black
line), $20\,000$ K (blue stars), and $5\,000$ K (red dashed line). Bottom: for
the $C-X$ transition, $\Delta V(R)$ (black solid) and $\tilde{d}(R)$ at 5000 K
(black solid), $10\,000$ K (red dotted), 14500 K (green dashed), and 20000 K
(blue solid). At the highest temperatures the He can reach the inner regions
of the lower state $X\,^{2}\Sigma$ potential and enhance the transition
probability.
The prediction of a satellite in the blue wing of the H–He line profile is
related to a potential maximum at $R=2.1$ Å (see Sect. 3.2) of the $C\,\Sigma$
state. This leads to a maximum of the potential energy difference $\Delta
V(R)$ in Eq. (4) for this transition shown in Fig. 3.
The unified theory predicts that line satellites will be centered periodically
at frequencies corresponding to integer multiples of the extrema of $\Delta
V(R)$. In the quasi-static limit the first satellite on the line would be at
$\Delta\omega=5\,000$ cm-1 corresponding to $\lambda\sim 1150$ Å on the blue
side of Lyman-$\alpha$. In this case the maximum in $\Delta V$ occurs at
rather small internuclear distance, and is quite sharp. The correspondingly
short duration of the close collision leads to a broad satellite centered at
$\lambda$ $\sim$ 1160 Å for T=$14\,500$ K (Fig. 4).
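As a quick consistency check on these numbers, the quasi-static satellite
position can be converted to a wavelength by hand, using the Lyman-$\alpha$
wavenumber of 82259 cm-1 listed in Table 2 (the rounding here is ours):
$\frac{1}{\lambda_{\mathrm{sat}}}\approx 82259\;\mathrm{cm}^{-1}+5000\;\mathrm{cm}^{-1}=87259\;\mathrm{cm}^{-1},\qquad\lambda_{\mathrm{sat}}\approx 1146\;\mathrm{\AA},$
in line with the $\sim 1150$ Å quasi-static estimate quoted above.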
### 4.2 Temperature and density dependence
For a lower temperature, $T=5\,000$ K (Fig. 4), the duration of the collision
is longer, and the line satellite at $\lambda\sim 1153$ Å is sharper and
closer to the predicted quasi-static position than at higher temperatures. The
oscillations which appear on the red side of the quasi-molecular satellite are
due to interference effects described by Royer (1971) and Sando & Wormhoudt
(1973). They depend on the relative velocity and therefore on temperature.
Consequently velocity averaging would moderate their amplitude in observed
spectra. At temperatures below $10\,000$ K the blue wing of Lyman-$\alpha$
shortward of 1150 Å becomes significantly more transparent than at higher
temperature, an order of magnitude effect below 1120 Å. Thus this far blue
wing is a sensitive indicator of temperature in cool helium-rich WD
atmospheres.
The satellite amplitude depends on the value of the electric dipole transition
moment through the region of the potential extremum responsible for the
satellite and on the position of this extremum. The blue line wings shown in
Fig. 4 are unchanged in the range $14\,500$ to $20\,000$ K as there is no
change with $T$ of $\tilde{d}_{if}[R(t)]$ in the internuclear distance where
the potential difference goes through a maximum. $\tilde{d}_{if}[R(t)]$ at
$14\,500$ K for the $C-X$ transition is also plotted in Fig. 3. In the former
work we used electric dipole transition moments of Theodorakopoulos et al.
(1987) where the $C-X$ transition was allowed. Nevertheless the amplitude and
position of the line satellite are unchanged as they are due to a range of
internuclear distance where the potentials and the dipole moments are almost
identical as we see in Fig. 5. The main difference between the two potentials
concerns the red wing which is lowered using dipole moments of
Theodorakopoulos et al. (1987) where the $A-X$ transition was forbidden.
Figure 5: Comparison of the unified line profile using the dipole moments of
this work (black line) with the line profile using dipole moments of
Theodorakopoulos et al. (1987) (red dashed line). The He density is $10^{20}$
cm-3 and the temperature is 14500 K.
In summary, the unified line profile calculation leads to a flat blue wing due
to a line satellite. The resulting asymmetry of the Lyman-$\alpha$ line can be
easily appreciated in Fig. 5: the blue side of the line is wider than the red
side. Measured at the strength of the broad collision-induced 1160 Å
satellite, the asymmetry ratio of the width on the blue side to that on the
red is as large as 2.2. Consequently, the near wing is clearly far different
from a symmetric Lorentzian because the satellite is rather close to the
isolated atom line center. This was also the case for the Mg b triplet
perturbed by He (Allard et al., 2016). The existence of the asymmetrical shape
of these line profiles depends strongly on the maximum value of the potential
energy difference $\Delta V(R)$ which predicts the position of the line
satellite and on the atomic collision energies at the temperatures of
interest. These results enable computing atmosphere models and synthetic
spectra which we compare to an HST COS observation of WD 1425+540 in Section
5.
## 5 Model atmosphere and synthetic white dwarf spectrum
To demonstrate the importance of a proper treatment of He perturbers on
hydrogen lines, synthetic spectra of the white dwarf WD 1425+540 were computed
using the stellar atmosphere code TLUSTY (version 207) for computing the
atmospheric structure, and a companion program SYNSPEC (version 53) for
generating detailed synthetic spectra. For a description of the previous
versions (205 and 51) see the works of Hubeny & Lanz (2017) and Hubeny & Lanz
(2011a, b). This procedure allows us to study the effect of the H/He ratio on
the spectrum, and the development of line wings, though it is not fully self-
consistent with the stellar atmosphere model since that would require a
treatment of He I optical lines as well. We have computed a number of H-He
models, with the basic model parameters, $T_{\rm eff}=14,410$ K and $\log
g=7.89$, from Gänsicke et al. (2018), and with varying He/H ratio. For
treating the electron and proton broadening of the hydrogen lines we used
Tremblay & Bergeron (2009) data. The He/H ratio was adjusted to obtain a
reasonable agreement by eye with the observed spectrum, and we found a nominal
ratio of $4\times 10^{3}$ ($\log(N_{\mathrm{H}}/N_{\mathrm{He}})\approx-3.6$)
fitted the observed profile well. Liebert et al. (1979) found 3.7 from a
ground-based H$\beta$ profile, and recently Gänsicke et al. (2018) analyzed
the L$\alpha$ profile and adopted a somewhat larger
$\log(N_{\mathrm{H}}/N_{\mathrm{He}})\approx-4.0\pm 0.20$.
The potential energies for the $n=1$ and $n=2$ electronic states of H-He that
were used in our models are the ones described in this paper. Stellar
opacities were computed using H-He electric dipole moments from the previous
work of Theodorakopoulos et al. (1987) in which the $A-X$ transition is
forbidden, and also using new dipole transition moments from this work in
which the $A-X$ transition is allowed. As shown in Fig. 6, the observed red
wing of Lyman-$\alpha$ is consistent with a suppressed $A-X$ transition
probability in the region of atomic separation with difference potential
energy that would contribute.
We conclude that the additional basis states used for the new ab initio
potentials improve the calculation of the potential energy curves, but may not
capture the dipole transition moments of the real H-He system correctly for
the $A-X$ transition. However the combination of this work’s potentials and
the dipole moments of Theodorakopoulos et al. (1987) achieve a remarkable fit
in Fig. 6 to the HST COS spectrum of WD 1425+540 when incorporated into the
unified line shape theory we described here.
Figure 6: The observed spectrum of WD 1425+540 (also see Fig. 1) compared with
a synthetic white dwarf spectrum in the Lyman-$\alpha$ region. The synthetic
spectrum is computed with TLUSTY and SYNSPEC for a temperature of 14 500 K and
a He/H ratio of $4\times 10^{3}$ using the unified line profile with the
potentials of this work. For the dipole moments of Theodorakopoulos et al.
(1987) (red solid line) the $A-X$ transition is forbidden and its
contributions to the opacity are suppressed. For the dipole moments of this
work (blue dashed line), the $A-X$ transition contributes in the red wing of
the model but is absent in the observed spectrum.
## 6 Conclusions
The Lyman-$\alpha$ region of the spectrum of a helium-rich white dwarf with
hydrogen in its atmosphere is determined by the changes in transition energy
and transition probability during the H-He collisions that broaden the atomic
spectral line. We developed new H-He potential energies and transition dipole
moments for the hydrogen $1s$, $2s$, and $2p$ states as input data for a
unified theory calculation of the profile of WD 1425+540 to test the
potentials and dipole moments, and to confirm the origin of the short-
wavelength “blue” satellite. We found that the spectral line profile from the
new molecular data has a satellite feature in the blue wing that agrees with
previous work. These results provide a benchmark implementation of ab initio
atomic and molecular potentials for the most basic neutral non-resonant atom-
atom pair relevant to stellar atmosphere models. The new calculations show how
the profile depends on the variation of the electric dipole transition moment
and interaction potential energy with atomic separation. A comparison with the
observed spectrum of WD 1425+540 was made by using these theoretical opacities
in a stellar atmosphere and spectrum synthesis code. While it was not our goal
to refine the stellar model based on the new theoretical data, the profiles
reproduce the observed spectrum with a reasonable He/H ratio. Further, the
absence of an extended red wing of Lyman-$\alpha$ in the observed spectrum
suggests that the states of the difference potential that could contribute to
that region have the reduced transition dipole moment that was found in
previous molecular models. The new work presented here shows clearly that
there is an opportunity to use stellar spectra to improve the atomic and
molecular physics, ultimately to yield better models for astrophysical
applications. For H–He, the $A-X$ transition dipole moment remains uncertain.
The blue wing of Lyman-$\alpha$ is sensitive to He density and the structure
and temperature of the stellar atmosphere, with a profile that for wavelengths
shortward of $1150\,$Å will have reduced opacity from regions with
temperatures under $10\,000\,$K. Profiles computed with a unified theory of
collision broadening based on accurate data from ab initio molecular physics
take into account the strong dependence of the amplitude of the electric
dipole transition moment on atom-atom separation ($R$) where the potential
energy change $\Delta V(R)$ is an extremum. Incorporated into model
atmospheres, this dependence may be used to probe white dwarf or stellar
atmospheres for density and temperature. This emphasizes the importance of the
accuracy of both the potential energies and the electric dipole transition
moments for the line shape calculations that have traditionally assumed
electric dipole transition moments are constant (Allard & Kielkopf, 1982;
Allard & Koester, 1992; Allard et al., 1994).
The effect of collision broadening is central to understanding the opacity of
stellar atmospheres, yet there have been only a few definitive comparisons
with experimental work for atomic H (Kielkopf & Allard, 1995, 1998; Kielkopf
et al., 2004). This is because of the difficulty of creating an environment in
a laboratory experiment simulating a stellar atmosphere with accurate
diagnostics. On the theoretical side, the maturing capability of ab initio
methods now offers the possibility of accurately computing the interaction of
H with H (Drira, 1999; Spielfiedel, 2003; Spielfiedel et al., 2004) and H with
He atoms (this work). While an accurate determination of the broadening of
Balmer $\alpha$ with high density atomic hydrogen (that is H–H) has been done
by Allard et al. (2008), nothing comparable exists for H–He. Our calculations
reported in Allard et al. (2008) support the results of Barklem et al. (2000);
Barklem et al. (2002) that the Ali & Griem (1966) theory underestimates the
actual line width. Recent laboratory measurements show a similar result at
high density in environments comparable to white dwarf atmospheres (Kielkopf &
Allard, 2014). It would be possible now to similarly improve the calculation
of Balmer-$\alpha$ broadening and its contribution to the full white dwarf
opacity model. A major improvement to comprehensive theoretical models for DBA
white dwarf spectra is within reach that would determine H-He molecular data
for $n=3$ excited states, and use those to compute accurate Balmer-$\alpha$
profiles under white dwarf atmosphere conditions. Such results would help
understanding the differences in stellar parameters that are found from Balmer
and Lyman line profiles. In conclusion, complete unified line profiles based
on accurate atomic and molecular physics for both the Lyman-$\alpha$ and
Balmer-$\alpha$ lines should be incorporated into the analysis of DBA white
dwarf spectra to derive the hydrogen abundance.
## acknowledgements
The paper was based on observations made with the NASA/ESA Hubble Space
Telescope under program 13453, obtained from the data archive at the Space
Telescope Science Institute. STScI is operated by the Association of
Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
We thank the COST Action CM1405 MOLecules in Motion (MOLIM) of the European
Community for support. The authors would like to extend their sincere
appreciation to the Deanship of Scientific Research at King Saud University
for funding the research through the Research Group Project No. RGP-333. This
work was supported by the CNRS program Physique et Chimie du Milieu
Interstellaire (PCMI) co-funded by the Centre National d’Etudes Spatiales
(CNES).
## References
* Ali & Griem (1966) Ali A. W., Griem H. R., 1966, Physical Review, 144, 366
* Allard & Christova (2009) Allard N. F., Christova M., 2009, New Astron. Rev., 53, 252
* Allard & Kielkopf (1982) Allard N. F., Kielkopf J. F., 1982, Rev. Mod. Phys., 54, 1103
* Allard & Koester (1992) Allard N. F., Koester D., 1992, A&A, 258, 464
* Allard et al. (1994) Allard N. F., Koester D., Feautrier N., Spielfiedel A., 1994, A&A Suppl., 108, 417
* Allard et al. (1999) Allard N. F., Royer A., Kielkopf J. F., Feautrier N., 1999, Phys. Rev. A, 60, 1021
* Allard et al. (2008) Allard N. F., Kielkopf J. F., Cayrel R., van ’t Veer-Menneret C., 2008, A&A, 480, 581
* Allard et al. (2016) Allard N. F., Leininger T., Gadéa F. X., Brousseau-Couture V., Dufour P., 2016, A&A, 588, A142
* Barklem et al. (2000) Barklem P. S., Piskunov N., O’Mara B. J., 2000, A&A, 363, 1091
* Barklem et al. (2002) Barklem P. S., Stempels H. C., Allende Prieto C., Kochukhov O. P., Piskunov N., O’Mara B. J., 2002, A&A, 385, 951
* Bergeron et al. (2011) Bergeron P., et al., 2011, ApJ, 737, 28
* Drira (1999) Drira I., 1999, Journal of Molecular Spectroscopy, 198, 52
* Dunning (1989) Dunning Jr. T. H., 1989, J. Chem. Phys., 90, 1007
* Gallagher (1994) Gallagher T. F., 1994, Rydberg Atoms. Cambridge University Press, Cambridge, U.K.
* Gänsicke et al. (2018) Gänsicke B. T., Koester D., Farihi J., Toloza O., 2018, MNRAS, 481, 4323
* Hochlaf et al. (2010) Hochlaf M., Ndome H., Hammoutène D., Vervloet M., 2010, Journal of Physics B Atomic Molecular Physics, 43, 245101
* Hubeny & Lanz (2011a) Hubeny I., Lanz T., 2011a, TLUSTY: Stellar Atmospheres, Accretion Disks, and Spectroscopic Diagnostics (ascl:1109.021)
* Hubeny & Lanz (2011b) Hubeny I., Lanz T., 2011b, Synspec: General Spectrum Synthesis Program (ascl:1109.022)
* Hubeny & Lanz (2017) Hubeny I., Lanz T., 2017, A brief introductory guide to TLUSTY and SYNSPEC (arXiv:1706.01859)
* Jura et al. (2012) Jura M., Xu S., Klein B., Koester D., Zuckerman B., 2012, ApJ, 750, 69
* Kendall et al. (1992) Kendall R. A., Dunning Jr. T. H., Harrison R. J., 1992, J. Chem. Phys., 96, 6796
* Kielkopf & Allard (1995) Kielkopf J. F., Allard N. F., 1995, ApJ, 450, L75
* Kielkopf & Allard (1998) Kielkopf J. F., Allard N. F., 1998, Phys. Rev. A, 58, 4416
* Kielkopf & Allard (2014) Kielkopf J. F., Allard N. F., 2014, Journal of Physics B Atomic Molecular Physics, 47, 155701
* Kielkopf et al. (2002) Kielkopf J. F., Allard N. F., Decrette A., 2002, European Physical Journal D, 18, 51
* Kielkopf et al. (2004) Kielkopf J. F., Allard N. F., Huber J., 2004, ApJ, 611, L129
* Knowles & Werner (1985) Knowles P. J., Werner H.-J., 1985, Chemical Physics Letters, 115, 259
* Knowles & Werner (1988) Knowles P. J., Werner H.-J., 1988, Chemical Physics Letters, 145, 514
* Kramida (2010) Kramida A. E., 2010, Atomic Data and Nuclear Data Tables, 96, 586
* Langhoff & Davidson (1974) Langhoff S. R., Davidson E. R., 1974, J. Quant. Chem., 8, 61
* Liebert et al. (1979) Liebert J., Gresham M., Hege E. K., Strittmatter P. A., 1979, Astronomical Journal, 84, 1612
* Ndome et al. (2008) Ndome H., Hochlaf M., Lewis B. R., Heays A. N., Gibson S. T., Lefebvre-Brion H., 2008, J. Chem. Phys., 129, 164307
* Royer (1971) Royer A., 1971, Phys. Rev. A, 43, 499
* Sando & Wormhoudt (1973) Sando K. M., Wormhoudt J. G., 1973, Phys. Rev. A, 7, 1889
* Shamasundar et al. (2011) Shamasundar K. R., Knizia G., Werner H.-J., 2011, J. Chem. Phys., 135, 054101
* Spelsberg & Meyer (2001) Spelsberg D., Meyer W., 2001, J. Chem. Phys., 115, 6438
* Spielfiedel (2003) Spielfiedel A., 2003, J. Mol. Spectrosc., 217, 162
* Spielfiedel et al. (2004) Spielfiedel A., Palmieri P., Mitrushenkov A., 2004, Molec. Phys., 102, 2249
* Theodorakopoulos et al. (1984) Theodorakopoulos G., Farantos S. C., Buenker R. J., Peyerimhoff S. D., 1984, Journal of Physics B Atomic Molecular Physics, 17, 1453
* Theodorakopoulos et al. (1987) Theodorakopoulos G., Petsalakis I. D., Nicolaides C. A., Buenker R. J., 1987, J. Phys. B, 20, 2339
* Tremblay & Bergeron (2009) Tremblay P. E., Bergeron P., 2009, Astrophysical Journal, 696, 1755
* Werner & Knowles (1985) Werner H.-J., Knowles P. J., 1985, J. Chem. Phys., 82, 5053
* Werner & Knowles (1988) Werner H.-J., Knowles P. J., 1988, J. Chem. Phys., 89, 5803
* Werner et al. (2015) Werner H.-J., Knowles P. J., Knizia G., Manby F. R., Schütz M., et al., 2015, MOLPRO, version 2015.1, a package of ab initio programs
* Xu et al. (2017) Xu S., Zuckerman B., Dufour P., Young E. D., Klein B., Jura M., 2017, ApJ, 836, L7
## appendix
Parameters of the H–He molecular potentials are given in Tables 1 and 2.
Figure 7 shows the dependence on $R$ of the radiative transition moments
between the excited states and the perturbations of those states as the H and
He atoms approach from large $R$.
Table 1: Exponents of the diffuse uncontracted Gaussian primitives added to the aug-cc-pV6Z basis set to form the presently used aug-cc-pV6Z* basis sets for the H and He atoms. State | 1 | 2 | 3
---|---|---|---
H(_s_) | 0.00690204 | 0.002520537 | 0.000920468
H(_p_) | 0.026565598 | 0.010533298 | 0.004176468
H(_d_) | 0.055406537 | 0.024364162 | 0.010713761
H(_f_) | 0.106396067 | 0.046204584 | 0.020065249
H(_g_) | 0.168703345 | 0.069928301 | 0.028985598
H(_h_) | 0.175320015 | 0.045069073 | 0.011585793
He(_s_) | 0.017177900 | 0.006596920 | 0.002533450
He(_p_) | 0.050416903 | 0.019858313 | 0.007821833
He(_d_) | 0.094209988 | 0.036827891 | 0.014396494
He(_f_) | 0.151890237 | 0.056684629 | 0.021154402
He(_g_) | 0.232902520 | 0.079072280 | 0.026845675
He(_h_) | 0.248198125 | 0.060632194 | 0.014811808
Table 2: Dissociation fragments, experimental and calculated relative dissociation asymptotic energies, and molecular states for the four lowest electronic states of H–He. Experimental data are from Kramida (2010). Atomic | | Observed | Calculated | Molecular
---|---|---|---|---
H | He | (cm$^{-1}$) | (cm$^{-1}$) |
$1s\,^{2}S_{g}$ | $1s^{2}\,{}^{1}S_{g}$ | 0a | 0a | $X\,^{2}\Sigma^{+}$
$2p\,^{2}P_{u}$ | $1s^{2}\,{}^{1}S_{g}$ | 82259 | 82308 | $A\,^{2}\Sigma^{+}$, $B\,^{2}\Pi$
$2s\,^{2}S_{g}$ | $1s^{2}\,{}^{1}S_{g}$ | 82259 | 82308 | $C\,^{2}\Sigma^{+}$
a Reference energy. | | | |
Table 3: Spectroscopic constants and dissociation energies for the three lowest excited electronic states of H–He as deduced from the MRCI+Q /aug-cc-pV6Z* potential energy curves. $R_{e}$ corresponds to the equilibrium distance. $\omega_{e}$ and $\omega_{e}x_{e}$ are the vibrational constants. $\beta_{e}$ and $\alpha_{e}$ are the rotational constants. $D_{e}$ is the dissociation energy. State | $R_{e}$ | $\omega_{e}$ | $\omega_{e}x_{e}$ | $\beta_{e}$ | $\alpha_{e}$ | $D_{e}$
---|---|---|---|---|---|---
| Å | cm-1 | cm-1 | cm-1 | cm-1 | eV
$A\,^{2}\Sigma^{+}$ | 0.74074 | 3697.2 | 149.5 | 38.16 | 2.608 | 2.563
$B\,^{2}\Pi$ | 0.76863 | 3313.4 | 149.8 | 35.44 | 2.629 | 2.218
$C\,^{2}\Sigma^{+}$ | 0.80953 | 2906.3 | 144.0 | 31.95 | 2.551 | 1.638
Figure 7: Potential energy differences in cm-1 and electric dipole transition
moments in debye (D or $10^{-18}$ statcoulomb-cm) between the four lowest
electronic states of H–He calculated at the MRCI/aug-cc-pV6Z⋆ level. Note that
the $C\,\Sigma$ $\leftarrow$ $X\,\Sigma$ transition is asymptotically forbidden, while
transitions between excited states may occur. Upper panel: Energy differences
$A\,\Sigma-B\,\Pi$ (blue) and $A\,\Sigma-C\,\Sigma$ (red). Lower panel: Electric
dipole transition moments for H in the presence of He for states contributing
to H Lyman-$\alpha$.
# Role of generalized parity in the symmetry of fluorescence spectrum from
two-level systems under periodic frequency modulation
Yiying Yan<EMAIL_ADDRESS>Department of Physics, School of Science,
Zhejiang University of Science and Technology, Hangzhou 310023, China Zhiguo
Lü<EMAIL_ADDRESS>Key Laboratory of Artificial Structures and Quantum
Control (Ministry of Education), Department of Physics and Astronomy, Shanghai
Jiao Tong University, Shanghai 200240, China Collaborative Innovation Center
of Advanced Microstructures, Nanjing 210093, China JunYan Luo Department of
Physics, School of Science, Zhejiang University of Science and Technology,
Hangzhou 310023, China Hang Zheng<EMAIL_ADDRESS>Key Laboratory of
Artificial Structures and Quantum Control (Ministry of Education), Department
of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240,
China Collaborative Innovation Center of Advanced Microstructures, Nanjing
210093, China
###### Abstract
We study the origin of the symmetry of the fluorescence spectrum from the two-
level system subjected to a low-frequency periodic modulation and a near-
resonant high-frequency monochromatic excitation by using the analytical and
numerical methods based on the Floquet theory. We find that the fundamental
origin of symmetry of the spectrum can be attributed to the presence of the
generalized parity of the Floquet states, which depends on the driving
parameters. The absence of the generalized parity can lead to the asymmetry of
the spectrum. Based on the generalized parity, the conditions for the symmetry
and asymmetry of the spectrum can be derived, which succeeds in predicting
symmetry and asymmetry of the spectrum for the harmonic, biharmonic, and
multiharmonic modulations. Moreover, we find that the secular approximation
widely used in the analytical calculation may lead to artifact symmetry of the
spectrum that vanishes when such approximation is avoided. The present study
provides a significant perspective on the origin of the symmetry of the
spectrum.
## I Introduction
Resonance fluorescence, arising from a quantum emitter driven by an external
field and coupled to a radiative reservoir Mollow (1969); Scully and Zubairy
(1997); Cohen-Tannoudji _et al._ (1998), is not only an important concept in
quantum optics but also has potential application in quantum information
technology, for instance, it plays an important role in realizing the single-
photon source He _et al._ (2013); Santana _et al._ (2017); Kiršanskė _et
al._ (2017). Particularly, the resonance fluorescence of two-level systems has
attracted much interest and been studied in various aspects such as spectrum
Ficek and Freedhoff (1993); Agarwal _et al._ (1991); Ficek and Freedhoff
(1996); Ficek and Rudolph (1999); Peiris _et al._ (2014); Konthasinghe _et
al._ (2014); He _et al._ (2015); Toyli _et al._ (2016), squeezing Carmichael
(1985); Grünwald and Vogel (2012, 2013), photon statistics Kimble _et al._
(1977); D’Souza _et al._ (1990); Nazir (2008); Pastukhov _et al._ (2014),
photon antibunching Itano _et al._ (1988); Ficek _et al._ (1984); Damanet
_et al._ (2018), and so on. The line shape of the spectrum is found to depend
strongly on the external field that interacts with the quantum emitters as
well as the reservoirs to which the quantum emitters are coupled. As is well-
known, for a sufficiently strong monochromatic field, the spectrum has a
symmetric three-peak structure, known as the Mollow triplet Mollow (1969).
More recently, the bi- and multi-chromatically driven quantum systems are of
interest Kryuchkyan _et al._ (2017); Antón _et al._ (2017); Yan _et al._
(2018); Saiko _et al._ (2018). In such systems, the spectrum turns out to
have a complicated multipeak structure Ficek and Freedhoff (1993); Agarwal
_et al._ (1991); Ficek and Freedhoff (1996); Ficek and Rudolph (1999); Peiris
_et al._ (2014); Konthasinghe _et al._ (2014); He _et al._ (2015), which can
be either symmetric or asymmetric. In principle, the physical origin of the
triplet and multipeak structures can be understood in terms of the transitions
between the quantum dressed states Cohen-Tannoudji _et al._ (1998) or in
terms of the transitions between the semiclassical Floquet states Breuer and
Petruccione (1997); Yan _et al._ (2016a). The studies on the resonance
fluorescence have enriched the physics concerning the light-matter
interaction.
The origin of the symmetry of the spectrum has been found in the case of the
monochromatic field. Specifically, it is the detailed balance condition that
guarantees the symmetry of the Mollow triplet Cohen-Tannoudji _et al._
(1998). As is well-known, the breakdown of such a condition leads to the
asymmetry of the spectrum, for instance, in the presence of a pure dephasing
reservoir Roy and Hughes (2012); McCutcheon and Nazir (2013) or the counter-
rotating terms of the external field under certain conditions Browne and
Keitel (2000); Yan _et al._ (2013, 2016a). The dephasing-induced asymmetric
Mollow triplet has been experimentally observed in the quantum dots (the pure
dephasing arises because of the interaction between the quantum dot and its
solid-state environment) Ulrich _et al._ (2011); Ulhaq _et al._ (2013). For
the bi- and multi-chromatic fields, the origin of the symmetry of the spectrum
is rarely discussed, owing to the fact that a physically transparent
spectrum can hardly be derived analytically, and it has not been comprehensively
understood.
Recent studies show that the fluorescence spectrum from a driven two-level
system with a modulated transition frequency is symmetrically multipeaked for
the vanishing detuning while asymmetrically multipeaked for the finite
detuning Yan _et al._ (2016b); Kryuchkyan _et al._ (2017); Antón _et al._
(2017); Yan _et al._ (2018). Such an exotic bichromatically driven two-level
system with coexistence of the longitudinal and transversal coupling between
the system and the applied fields has been experimentally studied in the
superconducting qubits Li _et al._ (2013); Pan _et al._ (2017), single
molecule Brunel _et al._ (1998), and nitrogen-vacancy spin qubits Rohr _et
al._ (2014). The quantum systems under frequency modulation are also of
interest in theoretical studies Kibis _et al._ (2009); Macovei and Keitel
(2014); Zhao _et al._ (2015); Silveri _et al._ (2013); Macovei _et al._
(2015), the intriguing phenomena of which were reviewed recently Silveri _et
al._ (2017). It is worthwhile to note that the bichromatically driven two-
level system with frequency modulation differs from those considered in Refs.
Agarwal _et al._ (1991); Ficek and Freedhoff (1993), where the two-level
systems are transversely driven by a bichromatic field. In such a case, the
symmetry of the fluorescence spectrum is found to depend on the average
detuning if the strengths of the two components of the bichromatic field are
the same; the pronounced asymmetry of the spectrum is revealed when the
average detuning is finite and/or the strengths of the two components of the
field are unequal Agarwal _et al._ (1991); Ficek and Freedhoff (1993). For a
bichromatically amplitude-modulated field, the spectrum is also found to be
symmetric and asymmetric for the vanishing and finite detuning, respectively
Wilkens and Rzążewski (1989). So far the fundamental origin of such a
detuning-dependent symmetry remains obscure.
In this work, we use both analytical and numerical methods based on the
Floquet theory to study the fundamental origin of the symmetry of the
fluorescence spectrum from the two-level system under a low-frequency periodic
modulation and a near-resonant monochromatic excitation. We address the
symmetry and asymmetry of the spectrum by considering the generalized parity
of Floquet states rather than the behaviors of the bare-state or dressed-state
populations as considered in Refs. Das and Macovei (2013); Macovei _et al._
(2015); Antón _et al._ (2017). The generalized parity is found to guarantee
the symmetry of the spectrum while the breaking of such a parity can yield
pronouncedly asymmetric spectrum even in the vanishing detuning case. Based on
the generalized parity, the conditions for the symmetric and asymmetric
spectra are derived, which are not given in the previous works and cannot be
derived from the behaviors of the bare or dressed state population. The
generalized-parity-induced symmetry of the spectrum is verified and
illustrated in the context of the biharmonic modulation by the comparison
between the analytical and numerical results. The analytical results are found
to be in agreement with the numerically exact results in the regimes where the
perturbation theory and secular approximation can be justified. In addition,
we find that the spectrum with the secular approximation may have artifact
symmetry under certain conditions, i.e., the spectrum with secular
approximation is symmetric while the numerically exact calculation shows
asymmetric spectra because of the broken parity. The present finding simply
interprets the detuning-dependent symmetry in the harmonic modulation case and
can also be extended to analyze the symmetry and asymmetry of the spectrum in
the multiharmonic modulation cases. Our results suggest that it is feasible to
control the symmetry and asymmetry of the spectrum via engineering the
generalized parity of the Floquet states.
The rest of the paper is organized as follows. In Sec. II, we first discuss
the generalized-parity-induced symmetry of the fluorescence spectrum without
the secular approximation and further elucidate the symmetry of the spectrum
with a physically transparent formal spectrum with the secular approximation.
In Sec. III, we analytically and numerically calculate the fluorescence
spectrum in the context of the biharmonic modulation to verify the symmetry
and asymmetry of the spectrum predicted based on the generalized parity. In
the last section, the conclusions are given.
## II Fluorescence spectrum and generalized parity
We consider that the transition frequency of the two-level system is modulated
periodically via a low-frequency external field $f(t)$ and the two-level
system is also excited by a near-resonant monochromatic field, which is
described by the following Hamiltonian ($\hbar=1$)
$H(t)=\frac{1}{2}[\omega_{0}+f(t)]\sigma_{z}+\frac{\Omega_{x}}{2}(\sigma_{+}e^{-i\omega_{x}t}+\sigma_{-}e^{i\omega_{x}t}),$
(1)
where $\sigma_{z(x,y)}$ is the usual Pauli matrix, $\omega_{0}+f(t)$ is the
modulated transition frequency, $\sigma_{\pm}=(\sigma_{x}\pm i\sigma_{y})/2$
are the raising and lowering operators, and $\Omega_{x}$ ($\omega_{x}$) is the
strength (frequency) of the monochromatic driving. Here we choose
$f(t)=f(t+T)$ with $T$ being the fundamental period of the modulation and much
greater than $2\pi/\omega_{x}$. This model generalizes the ones previously
considered in Refs. Yan _et al._ (2016b); Kryuchkyan _et
al._ (2017); Antón _et al._ (2017).
To study the emission processes, we need to take account of the spontaneous
decay. Thus, the time evolution of the driven two-level system under study is
modeled by the Lindblad master equation. In the frame rotating at the
frequency $\omega_{x}$, the Lindblad master equation takes the form
$\frac{d}{dt}\tilde{\rho}(t)={\cal L}(t)\tilde{\rho}(t),$ (2)
where $\tilde{\rho}(t)$ is the reduced density matrix in the rotating frame
and the superoperator ${\cal L}(t)$ is given by ${\cal
L}(t)\tilde{\rho}(t)=-i[\tilde{H}(t),\tilde{\rho}(t)]-\kappa/2[\\{\sigma_{+}\sigma_{-},\tilde{\rho}(t)\\}-2\sigma_{-}\tilde{\rho}(t)\sigma_{+}]$
with $\kappa$ being the radiative decay rate. $\tilde{H}(t)$ is the effective
Hamiltonian and reads
$\tilde{H}(t)=\frac{\Omega_{x}}{2}\sigma_{x}+\frac{1}{2}[\delta+f(t)]\sigma_{z},$
(3)
with $\delta=\omega_{0}-\omega_{x}$ being the detuning between the bare
transition frequency and monochromatic excitation frequency. This master
equation is actually a set of first-order differential equations with periodic
coefficients. It can be directly solved by the so-called Floquet-Liouville
(FL) approach with a desired accuracy Ho _et al._ (1986); Yan _et al._
(2016b). Although such a Floquet-theory-based numerical method is simple and
efficient, it is not physically transparent to analyze the role of generalized
parity of Floquet states in the symmetry of the fluorescence spectrum. We use
an alternative method which is developed in our previous works Yan _et al._
(2016a, 2018) to solve the master equation and calculate the fluorescence
spectrum. We first calculate the Floquet states for $\tilde{H}(t)$ and use
them as the bases to reformulate Eq. (2) and derive its analytical formal
solutions with the aid of the secular approximation in the Floquet picture.
### II.1 The symmetry of fluorescence spectrum without secular approximation
The steady-state fluorescence spectrum is given by the Fourier transform of
the time-averaged first-order correlation function Mollow (1969); Ho _et al._
(1986)
$S(\Delta)\propto{\rm
Re}\frac{1}{T}\int_{0}^{\infty}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\left\langle\tilde{\sigma}_{+}(t^{\prime}+\tau)\tilde{\sigma}_{-}(t^{\prime})\right\rangle
e^{-i\Delta\tau}dt^{\prime}d\tau,$ (4)
where $\Delta=\omega-\omega_{x}$ and
$\left\langle\tilde{\sigma}_{+}(t^{\prime}+\tau)\tilde{\sigma}_{-}(t^{\prime})\right\rangle$
is the first-order correlation function and the tilde indicates that it is
evaluated in the rotating frame. In general, it is difficult to derive an
exact analytical spectrum. Nevertheless, we find that it is possible to show
that the spectrum is exactly symmetric about $\Delta=0$ when
$\delta+f(t)=-[\delta+f(t+T/2)]$ by exploiting the fact that the driven two-
level system possesses a generalized parity symmetry, i.e.,
$\sigma_{x}\tilde{H}(t+T/2)\sigma_{x}=\tilde{H}(t).$ (5)
Here, the generalized parity transformation consists of an exchange between
the up and down states of two-level system
($\sigma_{z}\rightarrow-\sigma_{z}$) and a time shift of half period of the
modulation ($t\rightarrow t+T/2$).
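A minimal numerical check of Eq. (5) (in Python, assuming NumPy; the parameter values are illustrative and the harmonic choice of $f(t)$ is one example satisfying $\delta+f(t)=-[\delta+f(t+T/2)]$) reads:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameters (arbitrary units); delta = 0 and f(t) = -f(t+T/2)
Omega_x, Omega_z, omega_z, delta = 10.0, 40.0, 40.0, 0.0
T = 2 * np.pi / omega_z
f = lambda t: Omega_z * np.cos(omega_z * t)

# Effective Hamiltonian of Eq. (3)
H = lambda t: 0.5 * Omega_x * sx + 0.5 * (delta + f(t)) * sz

# Generalized parity, Eq. (5): sx H(t + T/2) sx == H(t)
err = max(np.linalg.norm(sx @ H(t + T / 2) @ sx - H(t))
          for t in np.linspace(0.0, T, 201))
print(f"max deviation from Eq. (5): {err:.2e}")  # ~machine precision
```

Setting $\delta\neq 0$, or choosing an $f(t)$ with $f(t)\neq-f(t+T/2)$, makes the deviation finite, in line with the discussion below.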
We state briefly how the generalized parity guarantees the symmetry of the
spectrum. Owing to Eq. (5), we can construct a generalized parity
transformation in the Liouville space, the details of which can be found in
Appendix A. When $\delta+f(t)=-[\delta+f(t+T/2)]$, the superoperator ${\cal
L}(t)$ is similarly found to be invariant under the generalized parity
transformation. Based on this property, it can be derived from the master
equation (2) without the secular approximation that in the steady-state limit,
the time-averaged first-order correlation function is a real-valued function
in the rotating frame. As a result, the fluorescence spectrum is symmetric
about $\Delta=0$. This finding shows that the symmetry of the spectrum occurs
when $\delta+f(t)=-[\delta+f(t+T/2)]$ and results from the generalized parity.
We will numerically verify the generalized-parity-induced symmetry in Sec.
III.
### II.2 The symmetry of fluorescence spectrum with secular approximation
To further elucidate the role of the generalized parity in determining the
symmetry of the spectrum, we calculate the spectrum in the Floquet picture
which allows us to derive a physically transparent formal spectrum with the
aid of the secular approximation.
According to the Floquet theory Shirley (1965); Sambe (1973), the time-
dependent Schrödinger equation governed by $\tilde{H}(t)$ possesses a set of
formal solutions
$|\tilde{\psi}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t)\rangle
e^{-i\tilde{\varepsilon}_{\alpha}t}$, where
$|\tilde{u}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t+T)\rangle$ is the Floquet
state and $\tilde{\varepsilon}_{\alpha}$ is the corresponding real-valued
quasienergy. The index $\alpha$ labels independent Floquet states.
Substituting the formal solution into the Schrödinger equation, one readily
finds that
$[\tilde{H}(t)-i\partial_{t}]|\tilde{u}_{\alpha}(t)\rangle=\tilde{\varepsilon}_{\alpha}|\tilde{u}_{\alpha}(t)\rangle.$
(6)
On solving this equation, one obtains the Floquet states and quasienergies of
the driven two-level system.
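In practice, Eq. (6) can be solved either by diagonalizing the Floquet Hamiltonian in the Sambe space (as used for the ND below) or, equivalently, from the one-period propagator $U(T,0)$, whose eigenvalues are $e^{-i\tilde{\varepsilon}_{\alpha}T}$ and whose eigenvectors are $|\tilde{u}_{\alpha}(0)\rangle$. A minimal sketch of the latter route (in Python, assuming NumPy/SciPy; step number and parameters are illustrative):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameters: biharmonic modulation of Eq. (21) with p = 3
Omega_x, Omega_z, omega_z, delta = 10.0, 40.0, 40.0, 0.0
p, r, phi = 3, 1.0, 0.0
T = 2 * np.pi / omega_z

def H_eff(t):
    f = Omega_z * (np.cos(omega_z * t) + r * np.cos(p * omega_z * t + phi))
    return 0.5 * Omega_x * sx + 0.5 * (delta + f) * sz

# One-period propagator U(T, 0) from small piecewise-constant steps
n_steps = 4000
dt = T / n_steps
U = np.eye(2, dtype=complex)
for k in range(n_steps):
    U = expm(-1j * H_eff((k + 0.5) * dt) * dt) @ U

# Eigenvalues exp(-i eps_alpha T) -> quasienergies (defined modulo omega_z);
# eigenvectors are the Floquet states |u_alpha(0)> at t = 0
evals, evecs = np.linalg.eig(U)
quasienergies = -np.angle(evals) / T
print("quasienergies mod omega_z:", np.sort(quasienergies))
```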
We use $|\tilde{u}_{\alpha}(t)\rangle$ ($\alpha=\pm$) as the basis to
reformulate the master equation (2) and invoke the secular approximation Yan
_et al._ (2016a, 2018), yielding
$\displaystyle\frac{d}{dt}\tilde{\rho}_{++}(t)$ $\displaystyle=$
$\displaystyle-\Gamma_{{\rm rel}}\tilde{\rho}_{++}(t)+\Gamma_{{\rm s}},$ (7)
$\displaystyle\frac{d}{dt}\tilde{\rho}_{+-}(t)$ $\displaystyle=$
$\displaystyle-(i\Delta_{+-}+\Gamma_{{\rm deph}})\tilde{\rho}_{+-}(t),$ (8)
where
$\tilde{\rho}_{\alpha\beta}(t)=\langle\tilde{u}_{\alpha}(t)|\tilde{\rho}(t)|\tilde{u}_{\beta}(t)\rangle$
is the element of density operator,
$\Delta_{+-}=\tilde{\varepsilon}_{+}-\tilde{\varepsilon}_{-}$ is the
difference of two quasienergies, and $\Gamma_{{\rm
s}}=\kappa\sum_{l}|x_{-+,l}^{(+)}|^{2}$, where $x^{(+)}_{\alpha\beta,l}$ is a
time-averaged transition matrix element defined as follows:
$x^{(\pm)}_{\alpha\beta,l}=\frac{1}{T}\int^{T}_{0}\langle\tilde{u}_{\alpha}(t)|\sigma_{\pm}|\tilde{u}_{\beta}(t)\rangle
e^{-i2\pi lt/T}dt.$ (9)
The relaxation rate $\Gamma_{{\rm rel}}$ and dephasing rate $\Gamma_{{\rm
deph}}$ are given by
$\displaystyle\Gamma_{{\rm rel}}$ $\displaystyle=$
$\displaystyle\kappa\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2}),$ (10)
$\displaystyle\Gamma_{{\rm deph}}$ $\displaystyle=$
$\displaystyle\frac{\kappa}{2}\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2}+4|x_{++,l}^{(+)}|^{2}).$
(11)
The analytical formal solutions in the Floquet picture can be easily found as
follows:
$\displaystyle\tilde{\rho}_{++}(t)$ $\displaystyle=$
$\displaystyle\tilde{\rho}_{++}(0)e^{-\Gamma_{{\rm
rel}}t}+\tilde{\rho}_{++}^{{\rm ss}}(1-e^{-\Gamma_{{\rm rel}}t}),$ (12)
$\displaystyle\tilde{\rho}_{+-}(t)$ $\displaystyle=$
$\displaystyle\tilde{\rho}_{+-}(0)e^{-(\Gamma_{{\rm deph}}+i\Delta_{+-})t},$
(13)
where
$\tilde{\rho}_{++}^{{\rm ss}}=\frac{\Gamma_{{\rm s}}}{\Gamma_{{\rm
rel}}}=\frac{\sum_{l}|x_{-+,l}^{(+)}|^{2}}{\sum_{l}(|x_{+-,l}^{(+)}|^{2}+|x_{-+,l}^{(+)}|^{2})}$
(14)
is the steady-state population of the Floquet state. These solutions together
with the quantum regression theory enable us to derive a physically
transparent spectrum function Yan _et al._ (2016a, 2018)
$\displaystyle S(\Delta)$ $\displaystyle\propto$
$\displaystyle\sum_{l}\bigg{\\{}\pi|x_{++,l}^{(+)}|^{2}(\tilde{\rho}_{++}^{{\rm
ss}}-\tilde{\rho}_{--}^{{\rm ss}})^{2}\delta(\Delta-l\omega_{z})$
$\displaystyle+4|x_{++,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm
ss}}\tilde{\rho}_{--}^{{\rm ss}}\frac{\Gamma_{{\rm rel}}}{\Gamma_{{\rm
rel}}^{2}+(\Delta-l\omega_{z})^{2}}$
$\displaystyle+|x_{+-,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm
ss}}\frac{\Gamma_{{\rm deph}}}{\Gamma_{{\rm
deph}}^{2}+(\Delta-l\omega_{z}-\Delta_{+-})^{2}}$
$\displaystyle+|x_{-+,l}^{(+)}|^{2}\tilde{\rho}_{--}^{{\rm
ss}}\frac{\Gamma_{{\rm deph}}}{\Gamma_{{\rm
deph}}^{2}+(\Delta-l\omega_{z}+\Delta_{+-})^{2}}\bigg{\\}}.$ (15)
It is evident that the accuracy of Eq. (15) is limited by the
secular approximation when the transition matrix elements
$x^{(+)}_{\alpha\beta,l}$ and quasienergies are exactly calculated. As is
well-known, the secular approximation can be justified under the strong
driving condition, i.e., $\Delta_{+-}\gg\kappa$. In general, we can calculate
the quasienergies and transition matrix elements based on both analytical and
numerical diagonalization (ND) of the Floquet Hamiltonian
$\tilde{H}(t)-i\partial_{t}$ in the Sambe space Shirley (1965); Sambe (1973),
yielding the analytical and semianalytical spectra, respectively.
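To make the structure of Eq. (15) concrete, the following sketch (in Python, assuming NumPy) assembles the incoherent part of the spectrum from the rates (10) and (11) and the steady state (14), once the matrix elements $x^{(+)}_{\alpha\beta,l}$ are available. The random arrays below are dummy placeholders standing in for the output of Eq. (9), and the elastic $\delta$-function term of Eq. (15) is omitted:

```python
import numpy as np

kappa, omega_z, Delta_pm = 1.0, 40.0, 12.0  # decay rate, modulation freq., quasienergy gap
ls = np.arange(-5, 6)                        # retained Fourier indices l

# Placeholder matrix elements x^{(+)}_{alpha beta, l}; replace with Eq. (9) output
rng = np.random.default_rng(0)
x_pp = 0.1 * rng.standard_normal(ls.size)    # x^{(+)}_{++,l}  (dummy values)
x_pm = 0.1 * rng.standard_normal(ls.size)    # x^{(+)}_{+-,l}  (dummy values)
x_mp = 0.1 * rng.standard_normal(ls.size)    # x^{(+)}_{-+,l}  (dummy values)

# Rates and steady-state population, Eqs. (10), (11), (14)
G_rel = kappa * np.sum(np.abs(x_pm) ** 2 + np.abs(x_mp) ** 2)
G_deph = 0.5 * kappa * np.sum(np.abs(x_pm) ** 2 + np.abs(x_mp) ** 2
                              + 4 * np.abs(x_pp) ** 2)
rho_pp = np.sum(np.abs(x_mp) ** 2) / np.sum(np.abs(x_pm) ** 2 + np.abs(x_mp) ** 2)
rho_mm = 1.0 - rho_pp

def lorentz(D, G):
    return G / (G ** 2 + D ** 2)

def S_incoherent(Delta):
    """Incoherent part of Eq. (15): Lorentzians at l*wz and l*wz -/+ Delta_pm."""
    s = 0.0
    for i, l in enumerate(ls):
        s += 4 * abs(x_pp[i]) ** 2 * rho_pp * rho_mm * lorentz(Delta - l * omega_z, G_rel)
        s += abs(x_pm[i]) ** 2 * rho_pp * lorentz(Delta - l * omega_z - Delta_pm, G_deph)
        s += abs(x_mp[i]) ** 2 * rho_mm * lorentz(Delta - l * omega_z + Delta_pm, G_deph)
    return s

Delta_grid = np.linspace(-100, 100, 2001)
S = np.array([S_incoherent(D) for D in Delta_grid])
```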
Next, we discuss the parity phenomenon of the Floquet states resulting from
Eq. (5). We consider the behavior of the Floquet states under the generalized
parity transformation ${\cal P}_{G}$, which is defined as
${\cal
P}_{G}|\tilde{u}_{\alpha}(t)\rangle:=\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle.$
(16)
By differentiating $\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle$ with respect
to $t$, we readily obtain
$\left[\sigma_{x}\tilde{H}\left(t+T/2\right)\sigma_{x}-i\partial_{t}\right]\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle=\tilde{\varepsilon}_{\alpha}\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle.$
(17)
When $\delta+f(t)=-[\delta+f(t+T/2)]$,
$\sigma_{x}|\tilde{u}_{\alpha}(t+T/2)\rangle$ satisfies the same differential
equation as $|\tilde{u}_{\alpha}(t)\rangle$ because of Eq. (5). Recalling the
uniqueness of solutions of the differential equations, in such cases we must
have
$\sigma_{x}\left|\tilde{u}_{\alpha}\left(t+T/2\right)\right\rangle=\lambda_{\alpha}|\tilde{u}_{\alpha}(t)\rangle,$
(18)
where $\lambda_{\alpha}$ is a constant. Furthermore, we have
$\lambda_{\alpha}=\pm 1$ because of ${\cal
P}_{G}^{2}|\tilde{u}_{\alpha}(t)\rangle=\lambda_{\alpha}^{2}|\tilde{u}_{\alpha}(t)\rangle=|\tilde{u}_{\alpha}(t)\rangle.$
Specifically, when $\delta+f(t)=-[\delta+f(t+T/2)]$, the Floquet states may be
even or odd functions under the generalized parity transformation, which is
referred to as the generalized parity of the Floquet states. The generalized
parity has been previously investigated in other phenomena such as the
coherent destruction of tunneling Grossmann _et al._ (1991) and the laser-
induced electronic transport Lehmann _et al._ (2003).
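Eq. (18) can also be checked numerically: since $|\tilde{\psi}_{\alpha}(t)\rangle=U(t,0)|\tilde{u}_{\alpha}(0)\rangle=e^{-i\tilde{\varepsilon}_{\alpha}t}|\tilde{u}_{\alpha}(t)\rangle$, Eq. (18) at $t=0$ implies $\sigma_{x}U(T/2,0)|\tilde{u}_{\alpha}(0)\rangle\propto|\tilde{u}_{\alpha}(0)\rangle$. A minimal sketch of this check (in Python, assuming NumPy/SciPy; the biharmonic $f(t)$ of Eq. (21) with odd $p$ and $\delta=0$ is anticipated here, with illustrative parameters):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega_x, Omega_z, omega_z, delta = 10.0, 40.0, 40.0, 0.0
p, r, phi = 3, 1.0, 0.3          # odd p and delta = 0: parity present
T = 2 * np.pi / omega_z

def H_eff(t):
    f = Omega_z * (np.cos(omega_z * t) + r * np.cos(p * omega_z * t + phi))
    return 0.5 * Omega_x * sx + 0.5 * (delta + f) * sz

def propagator(t0, t1, n=2000):
    """U(t1, t0) from piecewise-constant steps."""
    U = np.eye(2, dtype=complex)
    dt = (t1 - t0) / n
    for k in range(n):
        U = expm(-1j * H_eff(t0 + (k + 0.5) * dt) * dt) @ U
    return U

U_half = propagator(0.0, T / 2)
U_full = propagator(T / 2, T) @ U_half   # U(T,0) = U(T,T/2) U(T/2,0)

evals, evecs = np.linalg.eig(U_full)
for a in range(2):
    v = evecs[:, a] / np.linalg.norm(evecs[:, a])
    w = sx @ U_half @ v
    overlap = abs(np.vdot(v, w)) / np.linalg.norm(w)
    print(f"state {a}: |<u|sx U(T/2)|u>| = {overlap:.6f}")  # ~1 iff Eq. (18) holds
```

Repeating the check with even $p$ or with $\delta\neq 0$ gives an overlap below unity, signaling the broken parity.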
Clearly, if $\delta+f(t)\neq-\left[\delta+f\left(t+T/2\right)\right]$, Eq.
(18) cannot hold as
$\sigma_{x}\tilde{H}\left(t+T/2\right)\sigma_{x}\neq\tilde{H}(t)$, i.e., the
effective Hamiltonian is no longer invariant under the generalized parity
transformation. Consequently, the Floquet states also do not have the
generalized parity.
Figure 1: The incoherent components of the fluorescence spectrum for $p=3$,
$\Omega_{x}=10\kappa$, $\delta=0$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$,
and various phases. “Ana.” and “Num.” denote the analytical and the FL
numerical results, respectively.
We show that the symmetry of the spectrum may be a consequence of the
generalized parity of the Floquet states. By using Eq. (18) and
$x_{\alpha\beta,l}^{(+)}=\left[x_{\beta\alpha,-l}^{(-)}\right]^{\ast}$, it is
straightforward to show the following identity for arbitrary integer $l$ from
the definition (9) of the transition matrix element:
$x_{\alpha\beta,l}^{(+)}=(-1)^{l}\lambda_{\alpha}\lambda_{\beta}\left[x_{\beta\alpha,-l}^{(+)}\right]^{\ast},$
(19)
provided $\delta+f(t)=-[\delta+f(t+T/2)]$. It follows that
$|x_{\alpha\beta,l}^{(+)}|=|x_{\beta\alpha,-l}^{(+)}|$ (20)
also holds for any integer $l$. We emphasize that the relation (20) can be
deduced from relation (19); however, the relation (19) cannot be derived from
relation (20). With the relation (20), it is straightforward to show that the
spectrum (15) is symmetric about $\Delta=0$ Yan _et al._ (2018).
Specifically, since $|x_{++,l}^{(+)}|=|x_{++,-l}^{(+)}|$, the emission lines
at $\Delta=\pm l\omega_{z}$ (the positions are symmetric about $\Delta=0$)
have the equal weights. Moreover, since $|x_{+-,l}^{(+)}|=|x_{-+,-l}^{(+)}|$,
we also have $\tilde{\rho}_{++}^{{\rm ss}}=\tilde{\rho}_{--}^{{\rm ss}}$
according to Eq. (14), leading to $|x_{+-,l}^{(+)}|^{2}\tilde{\rho}_{++}^{{\rm
ss}}=|x_{-+,-l}^{(+)}|^{2}\tilde{\rho}_{--}^{{\rm ss}}$. That is to say, the
emission lines at $\Delta=\pm(l\omega_{z}+\Delta_{+-})$ (the positions are
symmetric about $\Delta=0$) have the same weights. It turns out that the
symmetry of the spectrum fundamentally originates from the generalized parity
of the Floquet states when $\delta+f(t)=-[\delta+f(t+T/2)]$. Conversely, one
may expect that the symmetry of the spectrum may break when such a parity is
absent. However, it is a formidable task to analytically prove that the
spectrum is asymmetric in the absence of the generalized parity.
Let us discuss what happens to the formal spectrum if
$\delta+f(t)\neq-[\delta+f(t+T/2)]$. Under such a condition, the generalized
parity is absent, and thus we cannot have the relation (19). In principle, the
absence of the generalized parity will result in two possible situations. One
is that the spectrum becomes asymmetric about $\Delta=0$ because the relation
$|x^{(+)}_{\alpha\beta,l}|\neq|x^{(+)}_{\beta\alpha,-l}|$ can be derived at
least for a certain $l$. The other is that the spectrum is symmetric because
the equality $|x^{(+)}_{\alpha\beta,l}|=|x^{(+)}_{\beta\alpha,-l}|$ still
holds for any $l$, originating from other kinds of identities between the
transition matrix elements rather than the generalized-parity-induced identity
(19). Apparently the first situation is more trivial than the second one. Most
importantly, the present analysis suggests that the formal spectrum may be
symmetric even without the generalized parity. Consequently, we cannot
conclude from the formal spectrum (15) that the symmetry of the
spectrum breaks whenever the generalized parity is absent.
To end this section, we give some remarks on the above findings based on the
formal spectrum. First, we find that the symmetry of the spectrum may result
from the generalized parity and requires $\delta+f(t)=-[\delta+f(t+T/2)]$.
This is consistent with the analysis above without the secular approximation.
Moreover, the generalized parity is found to be an important underlying cause
of the relation (20), which was numerically found in the harmonic modulation case
Yan _et al._ (2018). It turns out here that the relation (20) can be
established due to the generalized parity in the bi- and multi-harmonic cases.
Second, without the generalized parity, namely, when
$\delta+f(t)\neq-[\delta+f(t+T/2)]$, the formal spectrum can be either
trivially asymmetric or nontrivially symmetric. The symmetry requires that the
relation (20) be established in the absence of the generalized-parity-induced identity (19).
Third, the formal spectrum is derived with the secular approximation and thus
the present analysis needs further verification. In what follows we consider a
concrete biharmonic modulation to verify whether the generalized parity
guarantees the symmetry of the spectrum when the secular approximation is not
invoked and we also check whether the relation (20) can be established without
the generalized parity and whether such relations lead to the symmetry of the
spectrum without the secular approximation.
Figure 2: The incoherent components of the fluorescence spectrum for $p=3$,
$\delta=5\kappa$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$,
$r=1$, and various phases.
## III Verification of symmetry and asymmetry of the spectrum
To calculate fluorescence spectrum, without loss of generality, we mainly
consider the biharmonic modulation in this work, namely, the modulation
consists of two harmonics
$f(t)=\Omega_{z}[\cos(\omega_{z}t)+r\cos(p\omega_{z}t+\phi)],$ (21)
where $\Omega_{z}$ and $\omega_{z}=2\pi/T$ are the amplitude and fundamental
frequency of the modulation, respectively, $p$ is a positive integer, $r$ is
the ratio of the amplitude of the second harmonic to that of the first one,
and $\phi$ is a relative phase. Since $\frac{1}{T}\int^{T}_{0}f(t)dt=0$, the
condition for the presence of the generalized parity
$\delta+f(t)=-[\delta+f(t+T/2)]$ is equivalent to $\delta=0$ and
$f(t)=-f(t+T/2)$. The condition for the absence of the generalized parity
$\delta+f(t)\neq-[\delta+f(t+T/2)]$ is simply divided into three cases:
$\left\\{\begin{array}[]{c}\delta\neq 0\,{\rm and}\,f(t)=-f(t+T/2);\\\
\delta=0\,{\rm and}\,f(t)\neq-f(t+T/2);\\\ \delta\neq 0\,{\rm
and}\,f(t)\neq-f(t+T/2).\end{array}\right.$ (22)
It is noted that for the biharmonic modulation (21), both $f(t)=-f(t+T/2)$ and
$f(t)\neq-f(t+T/2)$ can be realized by setting $p$ odd and even numbers,
respectively. To verify the above analysis, we calculate the numerically exact
fluorescence spectrum from master equation (2) with the FL formalism Ho _et
al._ (1986); Yan _et al._ (2016b), which is compared with the analytical and
semianalytical results from Eq. (15). The analytical and
semianalytical results are obtained by using the transition matrix elements
and quasienergies calculated with the Van Vleck perturbation theory and the ND
of the Floquet Hamiltonian, respectively. The detailed analytical calculation
is presented in Appendix B. In addition, we focus on the incoherent
components of the fluorescence spectrum, which are of interest in
experiments. In principle, a similar analysis is applicable to the coherent
components. In this work, we mainly consider the parameter regime
$\omega_{z}\sim\Omega_{z}\gg\Omega_{x}\gg\kappa$, in which case both the Van
Vleck perturbation theory (up to second order in $\Omega_{x}$) and secular
approximation can be justified. Importantly, this regime is experimentally
accessible in the artificial atoms, e.g., the transmon qubit Li _et al._
(2013). We should emphasize that if the perturbation theory is inapplicable,
we can obtain the transition matrix elements and quasienergies by the ND of
the Floquet Hamiltonian.
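As an alternative to the Sambe-space ND, the matrix elements of Eq. (9) can also be obtained directly from the one-period propagator. The sketch below (in Python, assuming NumPy/SciPy; grid size and parameters are illustrative) reconstructs the Floquet modes on a time grid and checks the relation (20) for $p=3$ and $\delta=0$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+

Omega_x, Omega_z, omega_z, delta = 10.0, 40.0, 40.0, 0.0
p, r, phi = 3, 1.0, 0.0
T = 2 * np.pi / omega_z
N = 2048                                 # time-grid points per period
dt = T / N

def H(t):
    f = Omega_z * (np.cos(omega_z * t) + r * np.cos(p * omega_z * t + phi))
    return 0.5 * Omega_x * sx + 0.5 * (delta + f) * sz

# Propagators U(t_k,0); Floquet modes u_a(t_k) = e^{i eps_a t_k} U(t_k,0) u_a(0)
Us = [np.eye(2, dtype=complex)]
for k in range(N):
    Us.append(expm(-1j * H((k + 0.5) * dt) * dt) @ Us[-1])
lam, vecs = np.linalg.eig(Us[N])
eps = -np.angle(lam) / T                 # quasienergies (mod omega_z)
u = np.array([[np.exp(1j * eps[a] * k * dt) * (Us[k] @ vecs[:, a])
               for k in range(N)] for a in range(2)])   # u[a, k, :]

def x_plus(a, b, l):
    """Transition matrix element x^{(+)}_{ab,l} of Eq. (9) on the grid."""
    g = np.einsum('ki,ij,kj->k', u[a].conj(), sp, u[b])
    t = np.arange(N) * dt
    return np.mean(g * np.exp(-2j * np.pi * l * t / T))

for l in range(-2, 3):                   # Eq. (20): |x_{ab,l}| = |x_{ba,-l}|
    print(l, abs(x_plus(0, 1, l)), abs(x_plus(1, 0, -l)))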
We first verify whether the generalized parity guarantees the symmetry of the
spectrum. In Fig. 1, we display the incoherent component of fluorescence
spectra obtained by the FL numerical method (solid line) and analytical result
(dashed line) for $p=3$, $\delta=0$, and various values of $\phi$. Apparently
the spectra are symmetric as expected. The analytical results are in agreement
with the FL results. These results also show that the spectrum depends weakly
on the relative phase $\phi$. In addition, it is straightforward to verify
that for other driving parameters, the spectrum is symmetric as well when $p$
is an odd number and $\delta=0$. In Appendix C, we show that when $\delta=0$
and $p$ is odd, the transition matrix elements indeed satisfy Eq. (19), which
guarantees the symmetry of the spectrum. The present results suggest that the
symmetry of the spectrum appears as long as $\delta=0$ and $f(t)=-f(t+T/2)$
and fundamentally originates from the generalized parity of the Floquet states
in such a situation.
Figure 3: The incoherent components of the fluorescence spectrum for $p=2$,
$\delta=0$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$,
and various phases.
We move to examine whether the symmetry of the spectrum breaks when the
generalized parity is absent, namely, under the conditions
$\delta+f(t)\neq-[\delta+f(t+T/2)]$. We calculate the spectra with the
parameters being the same as in Fig. 1 except for the detuning
$\delta=5\kappa$, corresponding to the case of $\delta\neq 0$ and
$f(t)=-f(t+T/2)$. In Fig. 2, the analytical and FL numerical spectra agree
with each other and are found to be asymmetric for the finite detuning,
indicating that in spite of $f(t)=-f(t+T/2)$, the asymmetry of spectrum
appears when $\delta\neq 0$.
Let us consider the case of $\delta=0$ and $f(t)\neq-f(t+T/2)$ by setting $p$
being even. We calculate the spectrum for $p=2$ and the other parameters being
the same as in Fig. 1. Figure 3 displays that the analytical and numerical
spectra are pronouncedly asymmetric even though $\delta=0$ except for
$\phi=\pi/2$ in which case the analytical spectrum is found to be strictly
symmetric (see discussion below) while the numerical spectrum is slightly
asymmetric [in particular, the intensities of emission lines at
$\Delta=\pm\omega_{z}$ are unequal as shown in Fig. 6(a)]. These results
confirm that the formal spectrum (15) may be symmetric without
the generalized parity of the Floquet states. However, the numerically exact
spectrum is asymmetric in the absence of the generalized parity. This shows
that the generalized parity plays an important role in determining the
symmetry of the exact spectrum. We will further analyze such discrepancy
between the analytical and numerical results later. In addition, we find that
in contrast with $p=3$, the spectrum depends strongly on the relative
phase $\phi$ when $p=2$.
Finally we calculate the spectra for $\delta\neq 0$ and $f(t)\neq-f(t+T/2)$.
Figure 4 shows the spectra obtained for the detuning $\delta=5\kappa$ and the
other parameters being the same as in Fig. 3. The spectra are still
asymmetric. In general, it is straightforward to verify the asymmetry of the
spectrum under the condition that $\delta+f(t)\neq-[\delta+f(t+T/2)]$. All in
all, it turns out that the symmetry of the spectrum breaks in the absence of
the generalized parity. Conversely, we can say that the symmetry of the
spectrum can be fully attributed to the presence of the generalized parity. In
contrast to the previous studies, we ascribe the asymmetry to the breaking of
the generalized parity rather than the unequal populations of dressed states
Antón _et al._ (2017) or the breakdown of relation (20) Yan _et al._ (2018).
Figure 4: The incoherent components of fluorescence spectrum for $p=2$,
$\delta=5\kappa$, $\Omega_{x}=10\kappa$, $\Omega_{z}=\omega_{z}=40\kappa$,
$r=1$, and various phases.
Let us explore how the analytical spectrum becomes symmetric in the absence of
the generalized parity of the Floquet states. To this end, we show that the
relation (20) can originate from the identities different from Eq. (19). Based
on the results from the Van Vleck perturbation theory, we analytically derive
the identities for the transition matrix elements in the case of vanishing
detuning and even $p$. The derivation is given in Appendix C. When $p$ is
even, $\delta=0$, and $\phi=\left(1/2+n\right)\pi$ ($n=0,\pm 1,\pm 2,\ldots$),
we find that the following relations hold for arbitrary integer $l$:
$\displaystyle x^{(+)}_{++,-l}$ $\displaystyle=$
$\displaystyle(-1)^{l}x^{(+)}_{++,l},$ (23) $\displaystyle x^{(+)}_{-+,-l}$
$\displaystyle=$ $\displaystyle-(-1)^{l}e^{-i2\theta_{0}}x^{(+)}_{+-,l},$ (24)
where $\theta_{0}$ is a phase defined in Eq. (86). Although the relations (23)
and (24) are derived based on the perturbation theory, we find that they also
hold in the nonperturbative regimes. In Fig. 5, we calculate
$x^{(+)}_{++,l}$ $(l=\pm 1,\pm 2)$ with the variation of $\Omega_{x}$ by using
the analytical and ND methods. We see that the deviation between the
analytical and numerical results becomes larger and larger as $\Omega_{x}$
increases, which is due to the breakdown of the perturbation calculation.
Nevertheless, $x^{(+)}_{++,l}$ obtained by the ND method still satisfies Eq.
(23). This suggests that the relations (23) and (24) are not limited to the
perturbative regimes. More importantly, it follows from the identities (23)
and (24) that $|x_{\alpha\beta,l}^{(+)}|=|x_{\beta\alpha,-l}^{(+)}|$, which
leads to the symmetry of the formal spectrum (15). That is to
say, without the generalized parity of the Floquet states, the relation (20)
can also be established from other kinds of the identities for the transition
matrix elements instead of the generalized-parity-induced identity (19) under
certain conditions.
The discrepancy in the symmetry predicted by the analytical and numerical
methods shown in Fig. 3(b) indicates that the relations (23) and (24) cannot
guarantee the symmetry of the spectrum without the secular approximation. To
further verify this, in Fig. 6, we use semianalytical and FL numerical methods
to calculate the weights of the emission lines at $\Delta=\pm\omega_{z}$ as
the increasing of $\Omega_{x}$ for $p=2$, $\delta=0$, and two values of
$\phi$. It is evident that the weights calculated from the semianalytical
method (solid and dashed lines) are the same while the weights from the
numerical method (dot-dashed and dotted lines) are unequal, indicating that
the semianalytical spectrum is symmetric but the numerical spectrum is not
symmetric. The present results illustrate that, provided the relation (20)
is established in the absence of the generalized parity, the secular
approximation can induce an artifact symmetry that vanishes if such an
approximation is not invoked.
Figure 5: Transition matrix elements $x^{(+)}_{++,l}$ versus driving strength
$\Omega_{x}$, calculated from the analytical method and the numerical method
based on the ND of the Floquet Hamiltonian for $p=2$, $\delta=0$,
$\Omega_{z}=\omega_{z}=40\kappa$, $\phi=\pi/2$, and $r=1$.
Apart from the biharmonic modulation, we find that the conditions for the
symmetry and asymmetry of the spectrum, which are derived based on the
generalized parity, are applicable to the simple harmonic and multiharmonic
modulation cases. For the simple harmonic modulation
$f(t)=\Omega_{z}\cos(\omega_{z}t)$, $f(t)=-f(t+T/2)$ is met. Therefore, the
symmetry and asymmetry of the spectrum are uniquely controlled by the detuning
$\delta$, which simply interprets the detuning-dependent symmetry of the
spectrum. Specifically, the spectrum is expected to be symmetric when
$\delta=0$ and asymmetric when $\delta\neq 0$. This is consistent with the
findings of previous studies Yan _et al._ (2016b); Antón _et al._ (2017);
Yan _et al._ (2018). For the multiharmonic modulation
$f(t)=\sum_{p=1}^{N}\Omega_{z,p}\cos(p\omega_{z}t+\phi_{p})$, where
$\Omega_{z,p}$ and $\phi_{p}$ are the amplitude and phase of the $p$th
harmonic, respectively, either $f(t)=-f(t+T/2)$ or $f(t)\neq-f(t+T/2)$ can be
met, similarly to the biharmonic case. We have calculated the spectrum with
the FL and semianalytical methods for the cases of $N=3$, $N=4$, and $N=5$.
The results (not shown here) further confirm that the symmetry and asymmetry
of the spectrum fundamentally originate from the presence and absence of the
generalized parity of the Floquet states, respectively.
Figure 6: Weights of emission lines at $\Delta=\pm\omega_{z}$ versus driving
strength $\Omega_{x}$, calculated from the semianalytical method and the FL
method, for $p=2$, $\delta=0$, $\Omega_{z}=\omega_{z}=40\kappa$, $r=1$, and
two values of $\phi$. “Semiana.” denotes the semianalytical result.
## IV Conclusions
In summary, we have studied the fundamental origin of the symmetry of the
resonance fluorescence from the two-level system subjected to a periodic
frequency modulation and a near-resonant high-frequency monochromatic
excitation by using both analytical and numerical methods based on the Floquet
theory. In such a driven two-level system, we have found that the generalized
parity of Floquet states plays a fundamental role in the symmetry of the
spectrum. Specifically, the generalized parity guarantees the symmetry of the
spectrum. On the other hand, when the generalized parity is broken, the
spectrum becomes asymmetric. This has been illustrated in the context of the
biharmonic modulation, the parameters of which can be tuned to induce or break
the generalized parity. For the biharmonic modulation, we find that when
$\delta=0$ and $f(t)=-f(t+T/2)$, the generalized parity exists and the
spectrum is symmetric. When $\delta+f(t)\neq-[\delta+f(t+T/2)]$, the
generalized parity is broken and the spectrum is found to be asymmetric.
Interestingly, we can obtain a pronouncedly asymmetric spectrum by requiring the
modulation $f(t)\neq-f(t+T/2)$ even though $\delta=0$. Moreover, these
conditions for the symmetry and asymmetry of the spectrum are found to be
applicable to the simple harmonic and multiharmonic modulation cases. In
addition, we illustrated that the secular approximation may induce artifact
symmetry that vanishes if the secular approximation is avoided under certain
conditions. The present study gives a deep insight into the origin of the
symmetry of the spectrum and reveals a simple relation between the symmetry of
the spectrum and the generalized parity of the Floquet states.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China
(Grants No. 11647082, No. 11774311, No. 11774226, and No. 11874260).
## Appendix A Derivation of symmetry of the spectrum without the secular
approximation
The master equation can be rewritten in a matrix form
$\frac{d}{dt}\vec{\tilde{\rho}}(t)={\cal L}(t)\vec{\tilde{\rho}}(t).$ (25)
Here the vector is defined as
$\vec{\tilde{\rho}}(t)=(\langle\tilde{\sigma}_{+}(t)\rangle,\langle\tilde{\sigma}_{-}(t)\rangle,\langle\tilde{\pi}_{+}(t)\rangle,\langle\tilde{\pi}_{-}(t)\rangle)^{{\rm
T}},$ (26)
where $\pi_{\pm}=(1\pm\sigma_{z})/2$ and
$\langle\tilde{\hat{o}}(t)\rangle\equiv{\rm Tr}[\hat{o}\tilde{\rho}(t)]$. The
superoperator ${\cal L}(t)$ in the Liouville space spanned by the matrix bases
$\\{\sigma_{\pm},\pi_{\pm}\\}$ is given by
${\cal
L}(t)=\left(\begin{array}[]{cccc}i[\delta+f(t)]-\frac{\kappa}{2}&0&-\frac{i\Omega_{x}}{2}&\frac{i\Omega_{x}}{2}\\\
0&-i[\delta+f(t)]-\frac{\kappa}{2}&\frac{i\Omega_{x}}{2}&\frac{-i\Omega_{x}}{2}\\\
\frac{-i\Omega_{x}}{2}&\frac{i\Omega_{x}}{2}&-\kappa&0\\\
\frac{i\Omega_{x}}{2}&\frac{-i\Omega_{x}}{2}&\kappa&0\end{array}\right).$ (27)
If $\delta+f(t)=-[\delta+f(t+T/2)]$, in which case the Hamiltonian is
invariant under the generalized parity transformation, one readily finds that
${\cal T}{\cal L}(t+T/2){\cal T}={\cal L}(t),$ (28)
where the transformation matrix is given by
${\cal T}=\left(\begin{array}[]{cccc}0&1&0&0\\\ 1&0&0&0\\\ 0&0&-1&0\\\
0&0&0&-1\end{array}\right),$ (29)
and ${\cal T}^{2}=I$ with $I$ being the identity matrix. Similarly to the
Hamiltonian, the matrix ${\cal L}(t)$ is invariant under the transformation
defined in Eq. (28), which can be regarded as the generalized parity
transformation in the Liouville space, similarly to that defined in Eq. (16)
of the main text.
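Eq. (28) is easy to verify numerically from the explicit forms (27) and (29); a minimal sketch (in Python, assuming NumPy; illustrative parameters, with the biharmonic $f(t)$ of Eq. (21) at odd $p$ and $\delta=0$):

```python
import numpy as np

def L_super(t, delta, f, Omega_x, kappa):
    """Superoperator of Eq. (27) in the basis {sigma_+, sigma_-, pi_+, pi_-}."""
    d = 1j * (delta + f(t))
    o = 0.5j * Omega_x
    return np.array([[ d - kappa / 2, 0,             -o,      o],
                     [ 0,            -d - kappa / 2,  o,     -o],
                     [-o,             o,             -kappa,  0],
                     [ o,            -o,              kappa,  0]], dtype=complex)

Tmat = np.array([[0, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, -1, 0],
                 [0, 0, 0, -1]], dtype=complex)      # Eq. (29)

Omega_x, Omega_z, omega_z, kappa, delta = 10.0, 40.0, 40.0, 1.0, 0.0
p, r, phi = 3, 1.0, 0.0
T = 2 * np.pi / omega_z
f = lambda t: Omega_z * (np.cos(omega_z * t) + r * np.cos(p * omega_z * t + phi))

err = max(np.linalg.norm(Tmat @ L_super(t + T / 2, delta, f, Omega_x, kappa) @ Tmat
                         - L_super(t, delta, f, Omega_x, kappa))
          for t in np.linspace(0.0, T, 101))
print(f"max deviation from Eq. (28): {err:.2e}")  # ~machine precision for odd p
```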
Let us derive the specific property of the steady state in the long-time limit
[as $\det{\cal L}(t)=0$, there exists a nontrivial steady state]. It follows
from Eq. (25) that
$\frac{d}{dt}\vec{\tilde{\rho}}(t+T/2)={\cal
L}(t+T/2)\vec{\tilde{\rho}}(t+T/2),$ (30)
which leads to
$\displaystyle\frac{d}{dt}{\cal T}\vec{\tilde{\rho}}(t+T/2)$ $\displaystyle=$
$\displaystyle{\cal T}{\cal L}(t+T/2){\cal T}{\cal
T}\vec{\tilde{\rho}}(t+T/2)={\cal L}(t){\cal T}\vec{\tilde{\rho}}(t+T/2),$
(31)
which means that ${\cal T}\vec{\tilde{\rho}}(t+T/2)=c\vec{\tilde{\rho}}(t)$,
owing to the uniqueness of solutions of the differential equation. On using
the fact that $\vec{\tilde{\rho}}(t)=\vec{\tilde{\rho}}(t+T)$ as
$t\rightarrow\infty$ because of ${\cal L}(t)={\cal L}(t+T)$, we find that $c$
may be either $+1$ or $-1$. It is easy to prove by contradiction that $c=-1$.
Suppose that $c=1$, yielding
$\langle\tilde{\pi}_{+}(t+T/2)\rangle=-\langle\tilde{\pi}_{+}(t)\rangle$.
However, if one considers $\delta+f(t)=0$ in which case ${\cal L}(t)$ is time
independent while Eq. (28) still holds, the steady state becomes time
independent and one gets
$\langle\tilde{\pi}_{+}(t)\rangle=\langle\tilde{\pi}_{+}(t+T/2)\rangle$. By
contradiction, one finds that $c=-1$. Consequently, in the steady-state limit,
we have
${\cal
T}\vec{\tilde{\rho}}(t+T/2)=-\vec{\tilde{\rho}}(t)\quad(t\rightarrow\infty).$
(32)
Next, let us derive the property of the principal matrix solution
$\Pi(t,t^{\prime})$ of the master equation, which solves the differential
equation
$\frac{d}{dt}\Pi(t,t^{\prime})={\cal L}(t)\Pi(t,t^{\prime}),$ (33)
with the initial condition $\Pi(t^{\prime},t^{\prime})=I$. It is
straightforward to show that
$\displaystyle\frac{d}{dt}{\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}$
$\displaystyle=$ $\displaystyle{\cal T}{\cal L}(t+T/2){\cal T}{\cal
T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}={\cal L}(t){\cal
T}\Pi(t+T/2,t^{\prime}+T/2){\cal T},$ (34)
namely, ${\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}$ satisfies the same
differential equation and the same initial condition as $\Pi(t,t^{\prime})$.
As a result, we simply have
${\cal T}\Pi(t+T/2,t^{\prime}+T/2){\cal T}=\Pi(t,t^{\prime}).$ (35)
According to the quantum regression theory Mollow (1969), the two-time
correlation functions
$\vec{\tilde{g}}(t,t^{\prime})=(\langle\tilde{\sigma}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\sigma}_{-}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\pi}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle,\langle\tilde{\pi}_{-}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle)^{{\rm
T}}$ (36)
satisfy the same equation as $\vec{\tilde{\rho}}(t)$, albeit with a
different initial condition
$\vec{\tilde{g}}(t^{\prime},t^{\prime})=(\langle\tilde{\pi}_{+}(t^{\prime})\rangle,0,0,\langle\tilde{\sigma}_{-}(t^{\prime})\rangle)^{{\rm
T}}.$ (37)
Similarly, another set of two-time correlation functions
$\vec{\tilde{G}}(t,t^{\prime})=(\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\sigma}_{+}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\sigma}_{-}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\pi}_{+}(t)\rangle,\langle\tilde{\sigma}_{+}(t^{\prime})\tilde{\pi}_{-}(t)\rangle)^{{\rm
T}}$ (38)
also satisfy the same differential equation as $\vec{\tilde{g}}(t,t^{\prime})$
but with the initial condition
$\vec{\tilde{G}}(t^{\prime},t^{\prime})=(0,\langle\tilde{\pi}_{+}(t^{\prime})\rangle,0,\langle\tilde{\sigma}_{+}(t^{\prime})\rangle)^{{\rm
T}}.$ (39)
Using Eq. (32), we have
${\cal T}\vec{\tilde{g}}(t^{\prime},t^{\prime})=\left(\begin{array}[]{c}0\\\
\langle\tilde{\pi}_{+}(t^{\prime})\rangle\\\ 0\\\
-\langle\tilde{\sigma}_{-}(t^{\prime})\rangle\end{array}\right)=\left(\begin{array}[]{c}0\\\
\langle\tilde{\pi}_{+}(t^{\prime}+T/2)\rangle\\\ 0\\\
\langle\tilde{\sigma}_{+}(t^{\prime}+T/2)\rangle\end{array}\right)=\vec{\tilde{G}}\left(t^{\prime}+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\quad(t^{\prime}\rightarrow\infty).$
(40)
In the steady-state limit, the correlation functions are found to have the
following relation
$\displaystyle\vec{\tilde{g}}(t,t^{\prime})$ $\displaystyle=$
$\displaystyle\Pi(t,t^{\prime})\vec{\tilde{g}}(t^{\prime},t^{\prime})$ (41)
$\displaystyle=$ $\displaystyle{\cal
T}\Pi\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right){\cal
T}\vec{\tilde{g}}(t^{\prime},t^{\prime})$ $\displaystyle=$ $\displaystyle{\cal
T}\Pi\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\vec{\tilde{G}}\left(t^{\prime}+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)$
$\displaystyle=$ $\displaystyle{\cal
T}\vec{\tilde{G}}\left(t+\frac{T}{2},t^{\prime}+\frac{T}{2}\right)\quad(t^{\prime}\rightarrow\infty).$
It follows that as $t^{\prime}\rightarrow\infty$,
$\displaystyle\langle\tilde{\sigma}_{+}(t)\tilde{\sigma}_{-}(t^{\prime})\rangle$
$\displaystyle=$
$\displaystyle\langle\tilde{\sigma}_{+}(t^{\prime}+T/2)\tilde{\sigma}_{-}(t+T/2)\rangle$
(42) $\displaystyle=$
$\displaystyle\langle\tilde{\sigma}_{+}(t+T/2)\tilde{\sigma}_{-}(t^{\prime}+T/2)\rangle^{\ast}.$
In the steady-state limit, the first-order correlation function depends
explicitly on time $t^{\prime}$, however, the $t^{\prime}$ dependence can be
eliminated by setting $t=\tau+t^{\prime}$ and integrating over $t^{\prime}$
(because the contributions of $t^{\prime}$-dependent terms are negligible for a
long-time observation), yielding the $\tau$-dependent first-order correlation
function
$\displaystyle\bar{\tilde{g}}_{1}(\tau)$ $\displaystyle\equiv$
$\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle
dt^{\prime}$ (43) $\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime}+T/2)\tilde{\sigma}_{-}(t^{\prime}+T/2)\rangle^{\ast}dt^{\prime}$
$\displaystyle=$
$\displaystyle\frac{1}{T}\int_{T/2}^{T+T/2}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}dt^{\prime}$
$\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\lim_{t^{\prime}\rightarrow\infty}\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}dt^{\prime}$
$\displaystyle=$ $\displaystyle\bar{\tilde{g}}_{1}^{\ast}(\tau),$
where we used relation (42) and the fact that
$\langle\tilde{\sigma}_{+}(\tau+t^{\prime}+T)\tilde{\sigma}_{-}(t^{\prime}+T)\rangle^{\ast}=\langle\tilde{\sigma}_{+}(\tau+t^{\prime})\tilde{\sigma}_{-}(t^{\prime})\rangle^{\ast}$
as $t^{\prime}\rightarrow\infty$. This means that the generalized parity
guarantees that the correlation function is a real-valued function of $\tau$
in the rotating frame and thus results in the symmetry of the spectrum when
$\delta+f(t)=-[\delta+f(t+T/2)]$. This is consistent with the prediction from
the spectrum (15).
In general, it is a formidable task to show that the spectrum is asymmetric
when $\delta+f(t)\neq-[\delta+f(t+T/2)]$ with or without the secular
approximation. Nevertheless, from the above derivation, one readily notes that
the generalized parity plays an important role in determining the symmetry of
the spectrum. Consequently, if such parity breaks, it is not difficult to
imagine that the symmetry of the spectrum also breaks trivially if there is no
other symmetry-inducing mechanism.
## Appendix B Analytical calculation of quasienergies and transition matrix
elements in the biharmonic modulation case
We use the Van Vleck perturbation theory Cohen-Tannoudji _et al._ (1998);
Hausinger and Grifoni (2010) to analytically calculate the quasienergies and
transition matrix elements $x_{\alpha\beta,l}^{(+)}$ for the biharmonic
modulation, which leads to the analytical fluorescence spectrum. Since we are
interested in the regime of $\Omega_{z},\,\omega_{z}\gg\Omega_{x}$, which is
accessible in the experiment Li _et al._ (2013), we use $\Omega_{x}$ as the
perturbation parameter. We first transform Eq. (6) with the unitary
transformation
$e^{S(t)}[\tilde{H}(t)-i\partial_{t}]e^{-S(t)}e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle=\tilde{\varepsilon}_{\alpha}e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle,$
(44)
where
$S(t)=i\frac{\Omega_{z}}{2\omega_{z}}\left\\{\sin(\omega_{z}t)+\frac{r}{p}[\sin(p\omega_{z}t+\phi)-\sin\phi]\right\\}\sigma_{z}.$
(45)
We can define the transformed Floquet states and transformed Hamiltonian as
follows:
$|u_{\alpha}^{\prime}(t)\rangle=e^{S(t)}|\tilde{u}_{\alpha}(t)\rangle,$ (46)
$\displaystyle H^{\prime}(t)$ $\displaystyle=$ $\displaystyle
e^{S(t)}[\tilde{H}(t)-i\partial_{t}]e^{-S(t)}$ (47) $\displaystyle=$
$\displaystyle\frac{1}{2}\delta\sigma_{z}+\frac{1}{2}\sum_{l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})e^{il\omega_{z}t},$
where
$f_{l}=\Omega_{x}F_{l},$ (48)
and
$\displaystyle F_{l}$ $\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}e^{i\frac{\Omega_{z}}{\omega_{z}}\left\\{\sin(\omega_{z}t)+\frac{r}{p}[\sin(p\omega_{z}t+\phi)-\sin\phi]\right\\}-il\omega_{z}t}dt$
(49) $\displaystyle=$ $\displaystyle
e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)J_{l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{ik\phi},$
with $\Theta=\frac{r\Omega_{z}}{p\omega_{z}}\sin\phi$ and $J_{k}(z)$ being the
Bessel function of the first kind. To proceed, we introduce an extended
Hilbert space in which the time-dependent Floquet Hamiltonian
$H^{\prime}(t)-i\partial_{t}$ becomes time independent Sambe (1973). One
readily introduces the Fourier basis $|l\rangle\equiv\exp(il\omega_{z}t)$ and
inner product $\langle
l|n\rangle\equiv\frac{1}{T}\int_{0}^{T}\exp[i(n-l)\omega_{z}t]dt=\delta_{l,n}$,
where $\delta_{l,n}$ is the Kronecker delta function. Denoting
$|\uparrow\rangle$ and $|\downarrow\rangle$ as the eigenstates for
$\sigma_{z}$ with the eigenvalues $+1$ and $-1$, respectively, one gets the
composite bases
$|\uparrow(\downarrow),l\rangle=|\uparrow(\downarrow)\rangle\otimes|l\rangle$.
In the extended Hilbert space spanned by such bases, we can obtain the
explicit form of the Floquet Hamiltonian, which is written as
$\displaystyle H_{{\cal F}}^{\prime}$ $\displaystyle=$ $\displaystyle
H^{\prime}(t)-i\partial_{t}$ (50) $\displaystyle=$
$\displaystyle\frac{1}{2}\delta\sigma_{z}+\sum_{n}n\omega_{z}|n\rangle\langle
n|+\frac{1}{2}\sum_{n,l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})$
$\displaystyle\otimes|n+l\rangle\langle n|.$
The Floquet Hamiltonian has an infinite size and is difficult to
diagonalize exactly in an analytical calculation. To carry out the perturbation
calculation, we transform the Floquet Hamiltonian with a further unitary
transformation with the Hermitian generator $K$, leading to
$\displaystyle H_{{\cal F}}^{\prime\prime}$ $\displaystyle=$ $\displaystyle
e^{iK}H_{{\cal F}}^{\prime}e^{-iK}$ (51) $\displaystyle=$ $\displaystyle
H_{\cal F}^{\prime}+[iK,H_{{\cal F}}^{\prime}]+\frac{1}{2!}[iK,[iK,H_{{\cal
F}}^{\prime}]]+\ldots,$
where the explicit form of $K$ is to be determined by requiring $H_{{\cal
F}}^{\prime\prime}$ to be block diagonal. The generator is expanded as
$K=K^{(1)}+K^{(2)}+K^{(3)}+\ldots,$ (52)
where the superscripts indicate the orders in the perturbation. We use
$H_{0}=\frac{1}{2}\delta\sigma_{z}+\sum_{n}n\omega_{z}|n\rangle\langle n|$ and
$V=\frac{1}{2}\sum_{n,l}(f_{l}\sigma_{+}+f_{-l}^{\ast}\sigma_{-})\otimes|n+l\rangle\langle
n|$ as the dominant and perturbation components, respectively. Up to the
second order in $\Omega_{x}$, we have
$\displaystyle H_{{\cal F}}^{\prime\prime}$ $\displaystyle\simeq$
$\displaystyle H_{0}+V+[iK^{(1)},H_{0}]+[iK^{(1)},V]+[iK^{(2)},H_{0}]$ (53)
$\displaystyle+\frac{1}{2}[iK^{(1)},[iK^{(1)},H_{0}]].$
Next, we discuss under which condition the transformed Hamiltonian may
reasonably be block diagonal. For the dominant component $H_{0}$, we simply
have
$H_{0}|\uparrow(\downarrow),n\rangle=[+(-)\delta/2+n\omega_{z}]|\uparrow(\downarrow),n\rangle\equiv\tilde{\varepsilon}^{(0)}_{+(-),n}|\uparrow(\downarrow),n\rangle$.
Provided that
$\tilde{\varepsilon}^{(0)}_{+,n}-\tilde{\varepsilon}^{(0)}_{-,n+m}=\delta-m\omega_{z}\approx
0$, we have a subspace spanned by two almost degenerate unperturbed states
$|\uparrow,n\rangle$ and $|\downarrow,n+m\rangle$, where $n$ is an arbitrary
integer and $m$ is the integer nearest to $\delta/\omega_{z}$. The projection
onto such a subspace is realized by the operator:
$\Pi_{n}=|\uparrow,n\rangle\langle\uparrow,n|+|\downarrow,n+m\rangle\langle\downarrow,n+m|.$
(54)
The eigenvalues of the dominant component $H_{0}$ in the $n$th subspace are
well-separated from those in the $(n+l)$th subspace as long as
$|l\omega_{z}|\gg|\delta-m\omega_{z}|$ for any $l\neq 0$. Moreover, if we
assume that
$|\langle\uparrow,n|V|\downarrow,n+l+m\rangle|\ll|\tilde{\varepsilon}^{(0)}_{+,n}-\tilde{\varepsilon}^{(0)}_{-,n+l+m}|,$
(55)
which is simply $|f_{-l-m}/2|\ll|l\omega_{z}|$, the transitions between the
states in the different subspaces can be neglected up to a certain order in
the perturbation Cohen-Tannoudji _et al._ (1998), yielding the following
condition
$\Pi_{n}H_{{\cal F}}^{\prime\prime}\Pi_{l}=0,$ (56)
for $n\neq l$. Therefore, $H_{{\cal F}}^{\prime\prime}$ is block diagonal. The
second condition that $K$ cannot have matrix elements inside each subspace of
two almost degenerate states is also assumed, i.e.,
$\Pi_{n}K\Pi_{n}=0.$ (57)
The generator can now be fully determined via Eqs. (56) and (57). The
nonvanishing elements of $K^{(1)}$ and $K^{(2)}$ are given by
$\langle\uparrow,n|iK^{(1)}|\downarrow,l\rangle=\frac{1}{2}\frac{f_{n-l}}{\delta+(n-l)\omega_{z}},$
(58)
$\langle\downarrow,l|iK^{(1)}|\uparrow,n\rangle=-\frac{1}{2}\frac{f_{n-l}^{\ast}}{\delta+(n-l)\omega_{z}},$
(59)
for $n-l\neq-m$, and
$\displaystyle\langle\uparrow,n|iK^{(2)}|\uparrow,l\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{4(n-l)\omega_{z}}\left\\{\sum_{k\neq
n+m,l+m}\frac{f_{n-k}f_{l-k}^{\ast}}{2}\left[\frac{1}{\delta+(n-k)\omega_{z}}+\frac{1}{\delta+(l-k)\omega_{z}}\right]\right.$
(60)
$\displaystyle\left.+\frac{f_{l-n-m}^{\ast}f_{-m}}{\delta+(l-n-m)\omega_{z}}+\frac{f_{n-l-m}f_{-m}^{\ast}}{\delta+(n-l-m)\omega_{z}}\right\\},$
$\displaystyle\langle\downarrow,n|iK^{(2)}|\downarrow,l\rangle$
$\displaystyle=$ $\displaystyle-\frac{1}{4(n-l)\omega_{z}}\left\\{\sum_{k\neq
l-m,n-m}\frac{f_{k-n}^{\ast}f_{k-l}}{2}\left[\frac{1}{\delta+(k-n)\omega_{z}}+\frac{1}{\delta+(k-l)\omega_{z}}\right]\right.$
(61)
$\displaystyle+\left.\frac{f_{l-n-m}^{\ast}f_{-m}}{\delta+(l-n-m)\omega_{z}}+\frac{f_{n-l-m}f_{-m}^{\ast}}{\delta+(n-l-m)\omega_{z}}\right\\},$
for $n\neq l$. The remaining elements of $K^{(1)}$ and $K^{(2)}$ vanish.
The transformed Hamiltonian has the $2\times 2$ submatrices $H_{{\cal
F}}^{\prime\prime(n)}$ on the diagonal, which read Cohen-Tannoudji _et al._
(1998)
$\displaystyle H_{{\cal F}}^{\prime\prime(n)}$ $\displaystyle=$ $\displaystyle
H_{0}\Pi_{n}+\Pi_{n}V\Pi_{n}+\frac{1}{2}\Pi_{n}[iK^{(1)},V]\Pi_{n}$ (64)
$\displaystyle=$
$\displaystyle\left(\begin{array}[]{cc}\frac{\delta}{2}+n\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{4(\delta+j\omega_{z})}&\frac{f_{-m}}{2}\\\
\frac{f_{-m}^{\ast}}{2}&-\frac{\delta}{2}+(n+m)\omega_{z}-\sum_{j\neq-m}\frac{|f_{j}|^{2}}{4(\delta+j\omega_{z})}\end{array}\right).$
One can diagonalize the submatrix $H_{{\cal F}}^{\prime\prime(n)}$
analytically. Its eigenvalues (quasienergies) are
$\tilde{\varepsilon}_{\pm,n}=\frac{1}{2}\left(m\omega_{z}\pm\Omega_{m}\right)+n\omega_{z},$
(65)
where
$\Omega_{m}=\sqrt{\left[\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right]^{2}+|f_{-m}|^{2}}.$
(66)
The eigenvectors are given by
$\displaystyle|\Psi_{+,n}^{\prime\prime}\rangle$ $\displaystyle=$
$\displaystyle u|\uparrow,n\rangle+v|\downarrow,n+m\rangle,$ (67)
$\displaystyle|\Psi_{-,n}^{\prime\prime}\rangle$ $\displaystyle=$
$\displaystyle v|\uparrow,n\rangle-u^{\ast}|\downarrow,n+m\rangle,$ (68)
with
$\displaystyle u$ $\displaystyle=$
$\displaystyle\frac{f_{-m}}{|f_{-m}|}\sqrt{\frac{1}{2}\left[1+\frac{1}{\Omega_{m}}\left(\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right)\right]},$
(69) $\displaystyle v$ $\displaystyle=$
$\displaystyle\sqrt{\frac{1}{2}\left[1-\frac{1}{\Omega_{m}}\left(\delta-m\omega_{z}+\sum_{j\neq-m}\frac{|f_{j}|^{2}}{2(\delta+j\omega_{z})}\right)\right]}.$
(70)
The eigenvectors for $H_{{\cal F}}^{\prime}$ can be derived as follows:
$|\Psi_{\pm,n}^{\prime}\rangle=e^{-iK}|\Psi_{\pm,n}^{\prime\prime}\rangle\simeq\left(1-iK^{(1)}-iK^{(2)}+\frac{1}{2!}iK^{(1)}iK^{(1)}\right)|\Psi_{\pm,n}^{\prime\prime}\rangle.$
(71)
It is straightforward to derive the explicit form of the eigenvectors, which
reads
$\displaystyle|\Psi_{+,n}^{\prime}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{{\cal N}}\left\\{uB|\uparrow,n\rangle-\sum_{j\neq
0}P_{j}|\uparrow,n+j\rangle+vB|\downarrow,n+m\rangle+\sum_{j\neq
0}Q_{j}|\downarrow,n+m+j\rangle\right\\},$ (72)
$\displaystyle|\Psi_{-,n}^{\prime}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{{\cal N}}\left\\{vB|\uparrow,n\rangle+\sum_{j\neq
0}Q_{-j}^{\ast}|\uparrow,n+j\rangle-u^{\ast}B|\downarrow,n+m\rangle+\sum_{j\neq
0}P_{-j}^{\ast}|\downarrow,n+m+j\rangle\right\\},$ (73)
where
$B=1-\frac{1}{8}\sum_{l\neq-m}\frac{|f_{l}|^{2}}{(\delta+l\omega_{z})^{2}},$
(74) $\displaystyle P_{j}$ $\displaystyle=$
$\displaystyle\frac{f_{j-m}}{2[\delta+(j-m)\omega_{z}]}\left(v+\frac{uf_{-m}^{\ast}}{2j\omega_{z}}\right)+\frac{u}{4j\omega_{z}}\sum_{k\neq-m}\frac{f_{k+j}f_{k}^{\ast}}{\delta+k\omega_{z}},$
(75) $\displaystyle Q_{j}$ $\displaystyle=$
$\displaystyle\frac{f_{-j-m}^{\ast}}{2[\delta-(j+m)\omega_{z}]}\left(u+\frac{vf_{-m}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq-m}\frac{f_{k-j}^{\ast}f_{k}}{\delta+k\omega_{z}},$
(76)
and ${\cal N}=\sqrt{B^{2}+\sum_{j\neq 0}(|P_{j}|^{2}+|Q_{j}|^{2})}$ is the
normalization factor. The Floquet states $|u_{\alpha,n}^{\prime}(t)\rangle$
with the quasienergy $\tilde{\varepsilon}_{\alpha,n}$ can be derived from
$|\Psi_{\alpha,n}^{\prime}\rangle$ by replacing $|n\rangle$ with
$e^{in\omega_{z}t}$.
With the above results at hand, we can analytically calculate the transition
matrix element
$\displaystyle x_{\alpha\beta,l}^{(+)}$ $\displaystyle=$
$\displaystyle\frac{1}{T}\int_{0}^{T}\langle\tilde{u}_{\alpha}(t)|\sigma_{+}|\tilde{u}_{\beta}(t)\rangle
e^{-il\omega_{z}t}dt=\frac{1}{T}\int_{0}^{T}\langle
u_{\alpha}^{\prime}(t)|e^{S(t)}\sigma_{+}e^{-S(t)}|u_{\beta}^{\prime}(t)\rangle
e^{-il\omega_{z}t}dt$ (77) $\displaystyle=$
$\displaystyle\sum_{n}\frac{1}{T}\int_{0}^{T}F_{n}\langle
u_{\alpha}^{\prime}(t)|\sigma_{+}|u_{\beta}^{\prime}(t)\rangle
e^{i(n-l)\omega_{z}t}dt=\sum_{n}F_{n+l}\langle\Psi_{\alpha,0}^{\prime}|\sigma_{+}|\Psi_{\beta,n}^{\prime}\rangle,$
and
$\displaystyle\langle\Psi_{+,0}^{\prime}|\sigma_{+}|\Psi_{+,n}^{\prime}\rangle$
$\displaystyle=$ $\displaystyle\frac{1}{{\cal
N}^{2}}\left\\{u^{\ast}vB^{2}\delta_{n,-m}-\sum_{j\neq
0,n+m}P_{j}^{\ast}Q_{j-n-m}+(u^{\ast}Q_{-n-m}-vP_{n+m}^{\ast})B(1-\delta_{n,-m})\right\\},$
(78)
$\displaystyle\langle\Psi_{+,0}^{\prime}|\sigma_{+}|\Psi_{-,n}^{\prime}\rangle$
$\displaystyle=$ $\displaystyle\frac{1}{{\cal
N}^{2}}\left\\{-(u^{\ast})^{2}B^{2}\delta_{n,-m}-\sum_{j\neq
0,n+m}P_{j}^{\ast}P_{n+m-j}^{\ast}+2u^{\ast}P_{n+m}^{\ast}B(1-\delta_{n,-m})\right\\},$
(79)
$\displaystyle\langle\Psi_{-,0}^{\prime}|\sigma_{+}|\Psi_{+,n}^{\prime}\rangle$
$\displaystyle=$ $\displaystyle\frac{1}{{\cal
N}^{2}}\left\\{v^{2}B^{2}\delta_{n,-m}+\sum_{j\neq
0,n+m}Q_{-j}Q_{j-n-m}+2vQ_{-n-m}B(1-\delta_{n,-m})\right\\},$ (80)
$\displaystyle\langle\Psi_{-,0}^{\prime}|\sigma_{+}|\Psi_{-,n}^{\prime}\rangle$
$\displaystyle=$ $\displaystyle\frac{1}{{\cal
N}^{2}}\left\\{-u^{\ast}vB^{2}\delta_{n,-m}+\sum_{j\neq
0,n+m}P_{j}^{\ast}Q_{j-n-m}+(vP_{n+m}^{\ast}-u^{\ast}Q_{-n-m})B(1-\delta_{n,-m})\right\\},$
(81)
where $(1-\delta_{n,-m})$ indicates that the term vanishes for $n=-m$.
Clearly, the validity of the perturbation theory is limited to the condition
(55). For $\delta\approx 0$, roughly speaking, the above results can be
justified when $r\sim 1$ and $\omega_{z}\sim\Omega_{z}\gg\Omega_{x}$.
## Appendix C Equalities for transition matrix elements in the vanishing
detuning case
For the biharmonic modulation, we show the equalities that the transition
matrix elements satisfy under the vanishing detuning condition ($\delta=0$)
using the above analytical results, which helps us to understand the symmetry
of the spectrum in the main text. It follows from Eq. (49) that
$\displaystyle F_{-l}$ $\displaystyle=$ $\displaystyle
e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)J_{-l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{ik\phi}$
(82) $\displaystyle=$
$\displaystyle(-1)^{l}e^{-i\Theta}\sum_{k}J_{k}\left(\frac{r\Omega_{z}}{p\omega_{z}}\right)(-1)^{k(p+1)}$
$\displaystyle\times
J_{l-kp}\left(\frac{\Omega_{z}}{\omega_{z}}\right)e^{-ik\phi},$
where we used the relation $J_{-n}(z)=(-1)^{n}J_{n}(z)$. It is evident that
when $p$ is an odd number, $p+1$ is even and thus $(-1)^{k(p+1)}=1$, leading
to
$F_{-l}=(-1)^{l}e^{-i2\Theta}F_{l}^{\ast}.$ (83)
When $p$ is an even number, $(-1)^{k(p+1)}=(-1)^{k}$ may be either $+1$ or
$-1$. Nevertheless, we can obtain a simple relation between $F_{l}$ and
$F_{-l}$ by setting
$(-1)^{k}e^{-ik\phi}=e^{ik\phi},$ (84)
which yields that $\phi=\left(1/2+n\right)\pi$ $(n=0,\pm 1,\pm 2,\ldots)$.
With an even $p$ and such values of phase, we have
$F_{l}=(-1)^{l}F_{-l}.$ (85)
We should emphasize that Eqs. (83) and (85) hold under different conditions:
the former holds when $p$ is odd, regardless of $\phi$, while the
latter holds when $p$ is even and $\phi=(1/2+n)\pi$.
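Both identities are easy to check numerically from the Bessel sum in Eq. (49); a minimal sketch (in Python, assuming NumPy/SciPy; parameter values are illustrative):

```python
import numpy as np
from scipy.special import jv

Omega_z = omega_z = 40.0
r = 1.0

def F(l, p, phi, kmax=40):
    """Fourier coefficient F_l of Eq. (49)."""
    Theta = r * Omega_z / (p * omega_z) * np.sin(phi)
    k = np.arange(-kmax, kmax + 1)
    return np.exp(-1j * Theta) * np.sum(jv(k, r * Omega_z / (p * omega_z))
                                        * jv(l - k * p, Omega_z / omega_z)
                                        * np.exp(1j * k * phi))

# Odd p, arbitrary phi: Eq. (83), F_{-l} = (-1)^l e^{-2i Theta} F_l^*
p, phi = 3, 0.7
Theta = r * Omega_z / (p * omega_z) * np.sin(phi)
for l in range(-3, 4):
    assert abs(F(-l, p, phi)
               - (-1) ** l * np.exp(-2j * Theta) * np.conj(F(l, p, phi))) < 1e-10

# Even p with phi = (1/2 + n) pi: Eq. (85), F_l = (-1)^l F_{-l}
p, phi = 2, np.pi / 2
for l in range(-3, 4):
    assert abs(F(l, p, phi) - (-1) ** l * F(-l, p, phi)) < 1e-10

print("Eqs. (83) and (85) verified numerically")
```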
Provided that $\delta=0$, we get $m=\delta/\omega_{z}=0$. We define the phase
of $F_{0}$ via
$F_{0}=e^{-i\theta_{0}}|F_{0}|.$ (86)
Together with Eqs. (69) and (70), we simply have
$v=ue^{i\theta_{0}}$ (87)
with the aid of Eq. (83) or (85). Such an equality between $u$ and $v$ is
valid only for $\delta=0$ and in the valid regime of Eq. (83) or (85).
### C.1 Odd $p$
We consider that $p$ is an odd number. It follows from Eq. (49) that
$\theta_{0}=\Theta$. Using $\delta=0$ and Eqs. (83) and (87), one readily gets
from Eqs. (75) and (76) that
$\displaystyle Q_{j}$ $\displaystyle=$
$\displaystyle-\frac{f_{-j}^{\ast}}{2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq
0}\frac{f_{k-j}^{\ast}f_{k}}{k\omega_{z}}$ (88) $\displaystyle=$
$\displaystyle\frac{(-1)^{j+1}e^{i2\Theta}f_{j}}{2j\omega_{z}}\left(u+\frac{vf_{0}^{\ast}e^{-i2\Theta}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq
0}\frac{f_{-k-j}^{\ast}f_{-k}}{-k\omega_{z}}$ $\displaystyle=$
$\displaystyle\frac{(-1)^{j+1}e^{i\Theta}f_{j}}{2j\omega_{z}}\left(v+\frac{uf_{0}^{\ast}}{2j\omega_{z}}\right)+\frac{e^{i\Theta}u}{4j\omega_{z}}\sum_{k\neq
0}\frac{(-1)^{j+1}f_{k+j}f_{k}^{\ast}}{k\omega_{z}}$ $\displaystyle=$
$\displaystyle(-1)^{j+1}e^{i\Theta}P_{j}.$
From this relation and Eqs. (77)-(80), it is straightforward to show that
$\displaystyle\left[x_{-+,-l}^{(+)}\right]^{\ast}$ $\displaystyle=$
$\displaystyle\sum_{n}\frac{F_{n-l}^{\ast}}{{\cal
N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq
0,n}Q_{-j}^{\ast}Q_{j-n}^{\ast}+2vBQ_{-n}^{\ast}(1-\delta_{n,0})\right\\}$
(89) $\displaystyle=$ $\displaystyle\sum_{n}\frac{F_{-n-l}^{\ast}}{{\cal
N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq
0,-n}Q_{-j}^{\ast}Q_{j+n}^{\ast}+2vBQ_{n}^{\ast}(1-\delta_{n,0})\right\\}$
$\displaystyle=$
$\displaystyle\sum_{n}\frac{(-1)^{n+l}F_{n+l}e^{i2\Theta}}{{\cal
N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq
0,n}Q_{j}^{\ast}Q_{n-j}^{\ast}+2vBQ_{n}^{\ast}(1-\delta_{n,0})\right\\}$
$\displaystyle=$
$\displaystyle\sum_{n}\frac{(-1)^{n+l}F_{n+l}e^{i2\Theta}}{{\cal
N}^{2}}\left\\{v^{2}B^{2}\delta_{n,0}+\sum_{j\neq
0,n}(-1)^{n}e^{-i2\Theta}P_{j}^{\ast}P_{n-j}^{\ast}+2vB(-1)^{n+1}e^{-i\Theta}P_{n}^{\ast}(1-\delta_{n,0})\right\\}$
$\displaystyle=$ $\displaystyle(-1)^{l}\sum_{n}\frac{F_{n+l}}{{\cal
N}^{2}}\left\\{(u^{\ast})^{2}B^{2}\delta_{n,0}+\sum_{j\neq
0,n}P_{j}^{\ast}P_{n-j}^{\ast}-2u^{\ast}BP_{n}^{\ast}(1-\delta_{n,0})\right\\}$
$\displaystyle=$ $\displaystyle-(-1)^{l}x_{+-,l}^{(+)}.$
Similarly, we find that
$\left[x^{(+)}_{++,-l}\right]^{\ast}=(-1)^{l}x^{(+)}_{++,l}$. Not
surprisingly, due to the generalized parity of the Floquet states, the
transition matrix elements satisfy Eq. (19) as long as
$\delta+f(t)=-[\delta+f(t+T/2)]$. For the biharmonic modulation, such
equalities are established when $p$ is odd and $\delta=0$.
### C.2 Even $p$
We move to consider that $p$ is an even number. In such a case, the
generalized parity of the Floquet states is broken even if $\delta=0$. Thus,
we cannot expect that the transition matrix elements satisfy Eq. (19).
However, we have another type of equality. With Eqs. (85) and (87), one gets
$\displaystyle Q_{j}$ $\displaystyle=$
$\displaystyle\frac{f_{-j}^{\ast}}{-2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq
0}\frac{f_{k-j}^{\ast}f_{k}}{k\omega_{z}}$ (90) $\displaystyle=$
$\displaystyle\frac{(-1)^{j+1}f_{j}^{\ast}}{2j\omega_{z}}\left(u+\frac{vf_{0}}{2j\omega_{z}}\right)+\frac{v}{4j\omega_{z}}\sum_{k\neq
0}\frac{(-1)^{j+1}f_{j-k}^{\ast}f_{-k}}{-k\omega_{z}}$ $\displaystyle=$
$\displaystyle\frac{(-1)^{j+1}e^{-i\theta_{0}}f_{j}^{\ast}}{2j\omega_{z}}\left(v+\frac{u^{\ast}f_{0}}{2j\omega_{z}}\right)+\frac{e^{-i\theta_{0}}u^{\ast}}{4j\omega_{z}}\sum_{k\neq
0}\frac{(-1)^{j+1}f_{j+k}^{\ast}f_{k}}{k\omega_{z}}$ $\displaystyle=$
$\displaystyle(-1)^{j+1}e^{-i\theta_{0}}P_{j}^{\ast}.$
It is straightforward to derive Eqs. (23) and (24) in the main text via Eqs.
(77)-(80) and (90). We stress that the conditions for establishing such
relations require that $p$ is even, $\phi=(1/2+n)\pi$, and $\delta=0$.
# A variability measure for estimates of parameters in interval data fitting
Sergey P. Shary
Institute of Computational Technologies SB RAS
and Novosibirsk State University,
Novosibirsk, Russia
E-mail: <EMAIL_ADDRESS>
###### Abstract
The paper presents a construction of a quantitative measure of variability for
parameter estimates in the data fitting problem under interval uncertainty. It
shows the degree of variability and ambiguity of the estimate, and the need
for its introduction is dictated by non-uniqueness of answers to the problems
with interval data. A substantiation of the new variability measure is given,
its application and motivations are discussed. Several examples and a series
of numerical tests are considered, showing the features of the new
characteristic and the specifics of its use.
Keywords: data fitting problem, linear regression, interval data uncertainty,
maximum compatibility method, strong compatibility, variability measure.
MSC 2010: 65G40, 62J10, 90C90
## 1 Introduction and problem statement
The purpose of this work is to present a quantitative variability measure for
estimates of parameters of functional dependencies in the statistics of
interval data. This is a relatively young branch of modern data science that
does not rely on the probability theory, but makes extensive use of interval
analysis methods (see, e. g., the surveys in [4, 7, 10]).
Fig. 1: A variability measure can be an estimate
of the size of the set of possible solutions.
By the term “variability”, we understand the degree of variation and ambiguity
of the estimate, and the need for its introduction is dictated by the fact
that, in processing interval data, the answer is typically not unique.
Usually, we get a whole set of different estimates that are equally consistent
(compatible) with the source data and, thus, suitable as solutions to the
problem. The extent to which this set is large or small is, partly,
characterized by the term “variability”. In traditional probabilistic
statistics, estimates of parameters are known to be random variables
themselves, and the measure of their variability can be the variance of the
estimates, mean absolute difference, median absolute deviation, average
absolute deviation, and such like. What could be their analogues in the
statistics of interval data?
At first glance, the answer to this question seems quite obvious: it can be
any value that characterizes the size of the set of solutions to the problem,
if it is non-empty. We can even take an enclosure of the solution set obtained
by an interval method. A certain disadvantage of this variant is the excessive
detailing of the answer given as a box in $\mathbb{R}^{n}$, a large amount of
information that still needs to be “digested” and reduced to a compact and
expressive form. Sometimes, an interval estimate in the form of an axes-
aligned box may inadequately represent the solution set. Another disadvantage
is the complexity of finding such an estimate.
It is desirable to have a relatively simple and efficiently computable
quantity, expressed in a single number, because it would give a general
aggregate view of the subject of interest. Similarly to variance and other
probabilistic measures, it can serve as an approximate characteristic of the
quality of parameter estimation. The greater the variability of an estimate,
the less its certainty and the worse its quality, and this can serve as a
basis for conclusions about the quality of the estimate.
At the same time, the introduced variability measure should not be simply the
“size of the solution set”. If this solution set, for example, is unstable and
changes abruptly with arbitrarily small changes in the data, then its size is,
to some extent, misleading and disorienting (see example in Section 4). A
practically useful variability measure should take into account this possible
instability of the solution set to the problem and give us a robust value.
Fig. 2: An illustration for the data fitting problem under interval
uncertainty.
In our article, we are within the framework of the data fitting problem (often
called regression analysis problem): given results of measurements or
observations, it is required to construct a functional dependence of a fixed
type that “best fit” these data. Specifically, we need to determine the
parameters $x_{1}$, $x_{2}$, …, $x_{n}$ of a linear function of the form
$b=x_{1}a_{1}+\ldots+x_{n}a_{n}$ (1)
from a number of values of the independent variables $a_{1}$, $a_{2}$, …,
$a_{n}$ (also called _exogenous_ , _explanatory_ , _predictor_ or _input_
variables), and the corresponding values of the dependent variable $b$ (also
called _endogenous_ , _response_ , _criterion_ or _output_ variable). Both
$a_{1}$, $a_{2}$, …, $a_{n}$ and $b$ are not known precisely, and we only have
intervals of their possible values (see Fig. 2). To find estimates of the
coefficients $x_{1}$, $x_{2}$, …, $x_{n}$, we use the so-called maximum
compatibility method (previously called “maximum consistency method”), which
was proposed and developed in the works [6, 16, 17, 19] and others. After the
estimates for $x_{1}$, $x_{2}$, …, $x_{n}$ are found, we need to somehow
evaluate their variability. Our article presents a construction of the
variability measure in the above data fitting problem.
Note that traditional methods of data fitting and regression analysis, such as
the least squares method and its modifications, the least modulus method,
etc., cannot be applied to the solution of our problem, since they are
unsuitable for situations where the source data are intervals rather than
points.
## 2 Formulation of the main results
### 2.1 Maximum compatibility method and tolerable solution set
The initial data for our problem is a set of values of independent and
dependent variables for function (1), which are obtained as a result of $m$
measurements (observations):
$\begin{array}[]{ccccc}\text{\boldmath$a$}_{11},&\text{\boldmath$a$}_{12},&\ldots&\text{\boldmath$a$}_{1n},&\text{\boldmath$b$}_{1},\\\
\text{\boldmath$a$}_{21},&\text{\boldmath$a$}_{22},&\ldots&\text{\boldmath$a$}_{2n},&\text{\boldmath$b$}_{2},\\\
\vdots&\vdots&\ddots&\vdots&\vdots\\\
\text{\boldmath$a$}_{m1},&\text{\boldmath$a$}_{m2},&\ldots&\text{\boldmath$a$}_{mn},&\text{\boldmath$b$}_{m}.\end{array}$
(2)
These are intervals as we assume that these data are inaccurate and have
interval uncertainty due to measurement errors, etc. Both the data (2) and
other interval values throughout the text are highlighted in bold mathematical
font according to the informal international standard [5]. The first index of
the interval values from (2) means the measurement number, and the second one,
at $\text{\boldmath$a$}_{ij}$’s, is the number of the independent variable
that takes the corresponding value in this measurement.
To find an estimate $(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{n})$ of the
parameters of the linear function (1), we “substitute” data (2) into equality
(1), thus getting an interval system of linear algebraic equations
$\left\\{\
\begin{array}[]{ccccccccc}\text{\boldmath$a$}_{11}x_{1}&+&\text{\boldmath$a$}_{12}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{1n}x_{n}&=&\text{\boldmath$b$}_{1},\\\\[1.0pt]
\text{\boldmath$a$}_{21}x_{1}&+&\text{\boldmath$a$}_{22}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{2n}x_{n}&=&\text{\boldmath$b$}_{2},\\\\[1.0pt]
\vdots&&\vdots&&\ddots&&\vdots&&\vdots\\\\[1.0pt]
\text{\boldmath$a$}_{m1}x_{1}&+&\text{\boldmath$a$}_{m2}x_{2}&+&\ldots&+&\text{\boldmath$a$}_{mn}x_{n}&=&\text{\boldmath$b$}_{m},\end{array}\right.$
(3)
or, briefly,
$\text{\boldmath$A$}x=\text{\boldmath$b$}$ (4)
with an interval $m\times n$-matrix
$\text{\boldmath$A$}=(\text{\boldmath$a$}_{ij})$ and interval $m$-vector
$\text{\boldmath$b$}=(\text{\boldmath$b$}_{i})$ in the right-hand side. The
sets of parameters which are compatible, in this or that sense, with the
measurement data (2) form various solution sets for the equations system (3).
The most popular of them are the _united solution set_ and _tolerable solution
set_. The united solution set, defined as
$\varXi_{uni}(\text{\boldmath$A$},\text{\boldmath$b$})=\bigl{\\{}\,x\in\mathbb{R}^{n}\mid\text{
$Ax=b\,$ for some $A\in\text{\boldmath$A$}$ and
$b\in\text{\boldmath$b$}$}\,\bigr{\\}},$
corresponds to the so-called weak compatibility between the parameters of
function (1) and data (2) (see [6, 16, 17]). The tolerable solution set,
defined as
$\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})=\bigl{\\{}\,x\in\mathbb{R}^{n}\mid\text{
$Ax\in\text{\boldmath$b$}\,$ for each matrix
$A\in\text{\boldmath$A$}$}\,\bigr{\\}},$
corresponds to the so-called strong compatibility between the parameters of
function (1) and data (2) (see [19]).
Fig. 3: An illustration of the strong compatibility between interval data and
a linear function.
Further, we assume that the solution to the data fitting problem for function
(1) is found by the maximum compatibility method (see [16, 17, 19]). As an
estimate of the parameters of function (1), it takes the maximum point of the
_recognizing functional_ , a special function that gives a quantitative
“compatibility measure” of this estimate with empirical data (2).
The maximum compatibility method has two versions, “weak” and “strong”, that
differ in understanding how exactly the interval data should be “compatible”
with the function that we construct on them. Weak and strong compatibility
reflect two different situations that may occur in data processing. In the
weak version, it is required that the graph of the constructed function just
intersects the measurement uncertainty boxes (see [16, 17]). The strong
version implies more stringent condition: it requires that the function graph
passes within the “corridors” specified by the intervals
$\text{\boldmath$b$}_{i}$, $i=1,2,\ldots,m$, for _any_ values of the
independent variables $a_{1}$, $a_{2}$, …, $a_{n}$ from the respective
intervals $\text{\boldmath$a$}_{i1}$, $\text{\boldmath$a$}_{i2}$, …,
$\text{\boldmath$a$}_{in}$ obtained in the $i$-th measurement (see [19]). This
is illustrated in Fig. 3, where the straight line of the function graph goes
through the vertical faces of the measurement uncertainty boxes. The weak
compatibility is shown in Fig. 2 by two upper straight lines. The lower line
in Fig. 2 does not satisfy the compatibility condition at all, neither weak nor
strong, since it does not intersect some boxes.
The “strong version” of the maximum compatibility method has a number of
theoretical and practical advantages over the “weak version”. These are
polynomial complexity, robustness of estimates and their finite variability,
the fact that the strong compatibility partially overcomes the so-called
Demidenko paradox, etc. (see details in [19]). Hence, we consider below a
strong version of the maximum compatibility method, which corresponds to the
tolerable solution set $\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$
for the interval system of equations (4). Its recognizing functional is
usually denoted by “Tol”,
$\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\;=\,\min_{1\leq
i\leq
m}\left\\{\,\mathrm{rad}\,\text{\boldmath$b$}_{i}-\left|\;\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j=1}^{n}\,\text{\boldmath$a$}_{ij}x_{j}\,\right|\,\right\\},$
(5)
where
$\mathrm{rad}\,\text{\boldmath$b$}_{i}=\tfrac{1}{2}(\overline{\text{\boldmath$b$}}_{i}-\underline{\text{\boldmath$b$}}_{i}),\hskip
65.44133pt\mathrm{mid}\,\text{\boldmath$b$}_{i}=\tfrac{1}{2}(\overline{\text{\boldmath$b$}}_{i}+\underline{\text{\boldmath$b$}}_{i})$
are radii and midpoints of the components of the right-hand side $b$, the
arithmetic operations inside the modulus in (5) are those of the classical
interval arithmetic (see, e. g., [4, 8, 9]), and the modulus is understood as
the maximum absolute value of the points from the interval,
$|\text{\boldmath$a$}|=\max\,\\{\,|a|\mid
a\in\text{\boldmath$a$}\,\\}=\max\,\bigl{\\{}\,|\underline{\text{\boldmath$a$}}|,|\overline{\text{\boldmath$a$}}|\,\bigr{\\}}.$
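The functional (5) is inexpensive to evaluate in code. Below is a minimal Python sketch (not the author's implementation): it relies on the standard interval-arithmetic identities $\mathrm{mid}\,(\text{\boldmath$a$}x)=(\mathrm{mid}\,\text{\boldmath$a$})\,x$, $\mathrm{rad}\,(\text{\boldmath$a$}x)=(\mathrm{rad}\,\text{\boldmath$a$})\,|x|$ and $|\text{\boldmath$u$}|=|\mathrm{mid}\,\text{\boldmath$u$}|+\mathrm{rad}\,\text{\boldmath$u$}$, which reduce the $i$-th term of (5) to $\mathrm{rad}\,\text{\boldmath$b$}_{i}-|\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j}(\mathrm{mid}\,\text{\boldmath$a$}_{ij})x_{j}|-\sum_{j}(\mathrm{rad}\,\text{\boldmath$a$}_{ij})|x_{j}|$. The interval data are passed as endpoint arrays `A_inf`, `A_sup`, `b_inf`, `b_sup`; these names are ours, chosen for the sketch.

```python
import numpy as np

def tol(x, A_inf, A_sup, b_inf, b_sup):
    """Recognizing functional Tol(x, A, b) of formula (5), with the interval
    matrix A = [A_inf, A_sup] and right-hand side b = [b_inf, b_sup]."""
    A_mid, A_rad = (A_sup + A_inf) / 2, (A_sup - A_inf) / 2
    b_mid, b_rad = (b_sup + b_inf) / 2, (b_sup - b_inf) / 2
    # rad b_i - |mid b_i - sum_j mid(a_ij) x_j| - sum_j rad(a_ij)|x_j|, min over i
    terms = b_rad - np.abs(b_mid - A_mid @ x) - A_rad @ np.abs(x)
    return terms.min()
```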
Typical graphs of the functional Tol for the one-dimensional case are shown in
Fig. 4 and Fig. 5.
To solve the data fitting problem for the linear function (1) and data set
(2), it is necessary to find the unconstrained maximum, over all
$x\in\mathbb{R}^{n}$, of the functional
$\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$,
$\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\rightarrow\max,$
and the vector
$\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$
at which this maximum is attained provides an estimate of the parameters of
function (1).
If $\max\,\mathrm{Tol}\,\geq 0$, then the solution set
$\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$, i. e., the set of
parameters strongly compatible with the data is non-empty, and
$\hat{x}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$. If
$\max\,\mathrm{Tol}\,<0$, then the solution set
$\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$ is empty and there do
not exist parameters that are strongly compatible with data (2). However, the
argument $\hat{x}$ of $\max\,\mathrm{Tol}\,$ still provides the best
compatibility of the constructed linear function with data (2) (more
precisely, the least incompatibility).
To conclude this subsection, we give a useful result on the tolerable solution
set that allows us to investigate whether it is bounded or unbounded, i. e.,
whether the tolerable solution set is finite in size or extends infinitely.
Irene Sharaya’s boundedness criterion [13]: Let the tolerable solution set to
an interval linear system $\text{\boldmath$A$}x=\text{\boldmath$b$}$ be
nonempty. It is unbounded if and only if the matrix $\text{\boldmath$A$}$ has
linearly dependent noninterval columns.
The boundedness criterion shows that the tolerable solution set is unbounded
only under exceptional circumstances, which are almost never met in practice
when working with actual interval data. That is, the tolerable solution set is
almost always bounded, and the estimates obtained by the strong version of the
maximum compatibility method almost always have finite variability.
### 2.2 Variability measures
As a quantity characterizing the variability of the estimate of the parameter
vector $\hat{x}=(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{n})$ in the linear
function (1), which is obtained by the maximum compatibility method from data
(2), we propose
$\mathrm{IVE}\,(\text{\boldmath$A$},\text{\boldmath$b$})\;=\;\sqrt{n}\;\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,\cdot\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A\,\Bigr{)}\cdot\frac{\displaystyle\bigl{\|}\,\arg\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$
(6)
In this formula,
$n$ is the dimension of the parameter vector of function (1) under
construction,
$\|\cdot\|_{2}$ is the Euclidean norm (2-norm) of vectors from
$\mathbb{R}^{n}$, defined as
$\|x\|_{2}\;=\;\left(\;\sum_{i=1}^{n}|x_{i}|^{2}\,\right)^{1/2},$
$\mathrm{cond}_{2}\,A$ is the spectral condition number of the matrix $A$,
defined as
$\mathrm{cond}_{2}\,A\;=\;\frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},$
i. e., the ratio of the maximal $\sigma_{\max}(A)$ and minimal
$\sigma_{\min}(A)$ singular values of $A$; it is an extension, to the
rectangular case, of the concept of the condition number from computational
linear algebra (see e. g. [2, 24]);
$\hat{\text{\boldmath$b$}}$ is a certain “most representative” point from the
interval vector $b$, which is taken as
$\hat{\text{\boldmath$b$}}\;=\;\tfrac{1}{2}(|\mathrm{mid}\,\text{\boldmath$b$}+\mathrm{rad}\,\text{\boldmath$b$}|+|\mathrm{mid}\,\text{\boldmath$b$}-\mathrm{rad}\,\text{\boldmath$b$}|),$
(7)
where the operations “mid” and “rad” are applied in componentwise manner.
Fig. 4: The maximum value of the recognizing functional
gives an idea of the size of the tolerable solution set $\varXi_{tol}$.
Despite the definite formula (7) for $\hat{\text{\boldmath$b$}}$, it should be
noted that the introduction of this point is, to a large extent, a matter of
common sense. The general approach to the definition of
$\hat{\text{\boldmath$b$}}$ is that it must be a kind of “most representative”
point from the right-hand side vector $b$, and in some situations this choice
may be different from formula (7). For example, $\hat{\text{\boldmath$b$}}$
can be a point result of the measurement, around which the uncertainty
interval is built later, based on information about the accuracy of the
measuring device.
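To show how the ingredients of formulas (6) and (7) fit together, here is a minimal Python sketch built on the `tol` function above. The maximization of the concave functional Tol is delegated to a generic derivative-free optimizer as a mere placeholder for the nonsmooth optimization methods (Shor's $r$-algorithms) discussed in Sections 4 and 5, and the minimum condition number is assumed to be estimated separately and passed in as `min_cond`.

```python
import numpy as np
from scipy.optimize import minimize

def ive(A_inf, A_sup, b_inf, b_sup, min_cond, x0=None):
    """Variability measure IVE of formula (6); `min_cond` is an (externally
    estimated) value of min cond_2(A) over point matrices A in [A_inf, A_sup]."""
    n = A_inf.shape[1]
    x0 = np.zeros(n) if x0 is None else x0
    # stand-in for the nonsmooth r-algorithms used in the paper
    res = minimize(lambda x: -tol(x, A_inf, A_sup, b_inf, b_sup), x0,
                   method="Nelder-Mead", options={"xatol": 1e-12, "fatol": 1e-12})
    x_hat, max_tol = res.x, -res.fun
    b_mid, b_rad = (b_sup + b_inf) / 2, (b_sup - b_inf) / 2
    b_hat = (np.abs(b_mid + b_rad) + np.abs(b_mid - b_rad)) / 2   # formula (7)
    return (np.sqrt(n) * max_tol * min_cond
            * np.linalg.norm(x_hat) / np.linalg.norm(b_hat))
```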
Apart from (6), the value
$n\;\Bigl{(}\,\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\frac{\max_{\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}},$
(8)
can also be useful as a measure of relative variability of the parameter
estimate. Both IVE and value (8) are defined for interval linear systems (4)
with nonzero right-hand sides. They take either positive real values or are
infinite. The latter occurs only in the case
$\,\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A=\infty$, when all the
point matrices $A\in\text{\boldmath$A$}$ have incomplete rank, i. e., when
$\sigma_{\min}(A)=0$ for every $A\in\text{\boldmath$A$}$; then the variability
measures are set to be infinite.
The symbol IVE is built as an abbreviation of the phrase “interval variability
of the estimate”. Below, we show that the value IVE adequately characterizes
the size of non-empty tolerable solution set for a large class of practically
important situations. But it is useful to discuss informal motivations that
lead to the estimate IVE and to demonstrate that IVE has an intuitive, clear
and even visual meaning.
Fig. 5: In addition to the maximum of the recognizing functional, the size
of the tolerable solution set is also affected by the “steepness” of the graph.
The tolerable solution set of an interval system of linear algebraic equations
is the set of zero level of the recognizing functional Tol (see details in
[15]), or, in other words, the intersection of the hypograph of this
functional with the coordinate plane $\mathrm{Tol}\,=0$ (this is illustrated
in Fig. 4). As a consequence, the magnitude of the maximum of the recognizing
functional can, with other things being equal, be a measure of how extensive
or narrow the tolerable solution set is. The larger $\max\,\mathrm{Tol}\,$ is,
the larger the size of the tolerable solution set, and vice versa. An additional
factor that provides “other things being equal” is the slope (steepness) of
pieces of hyperplanes of which the polyhedral graph of the functional Tol is
compiled (these are straight lines in the 1D case in Fig. 4 and Fig. 5). The
slope of the hyperplanes is determined by the coefficients of the equations
that define them, which are the endpoints of the data intervals (2). The value
of this slope is summarized in terms of the condition number of point matrices
from the interval data matrix $A$. Finally, the multiplier
$\frac{\|\arg\max\,\mathrm{Tol}\,\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}\
=\ \frac{\|\hat{x}\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}$
is a scaling coefficient that helps to provide the commensurability of the
final value with magnitudes of the solution, $\arg\max\,\mathrm{Tol}\,$, and
the right-hand side vector of the equations system. Thus, formula (6) is
obtained.
## 3 A justification of the variability measure
Considering the most general case, we should assume that the number of
measurements $m$ may not coincide with the number $n$ of unknown parameters of
the linear function (1). In this section, we consider only the case $m\geq n$.
In other words, the number of measurements (observations) made is not less
than the number of function parameters. Then the interval system of linear
equations (4) is either square or tall (overdetermined). Of course, the data
fitting problem makes sense for $m<n$ too, the maximum compatibility method
also works for this case, and the variability measure IVE is then also
applicable (see Section 4), but the latter still needs a separate
substantiation.
### 3.1 Estimates of perturbations of the solution
to rectangular linear systems
The starting point of our constructions justifying the choice of (6) exactly
in the form described above is the well-known inequality that estimates
perturbation $\Delta x$ of a nonzero solution $x$ to the system of linear
algebraic equations $Ax=b$ depending on the change $\Delta b$ of the right-
hand side $b$ (see, e. g., [2, 24]):
$\frac{\|\Delta x\|_{2}}{\|x\|_{2}}\ \leq\
\mathrm{cond}_{2}\,A\,\cdot\frac{\|\Delta b\|_{2}}{\|b\|_{2}}.$ (9)
It is usually considered for square systems of linear equations, when $m=n$,
but in the case of the Euclidean vector norm and the spectral condition number
of matrices, this inequality holds true in the more general case with $m\geq
n$. Naturally, estimate (9) makes sense only for $\sigma_{\min}(A)\neq 0$,
when $\mathrm{cond}_{2}A<\infty$, i. e., when the matrix $A$ has full column
rank. Let us briefly recall its derivation for this case.
Given
$Ax=b\quad\text{ and }\quad A(x+\Delta x)=b+\Delta b,$
we have
$A\Delta x=\Delta b.$
Further,
$\displaystyle\frac{\|\Delta x\|_{2}/\|x\|_{2}}{\|\Delta b\|_{2}/\|b\|_{2}}\ $
$\displaystyle=\ \frac{\|\Delta x\|_{2}\,\|b\|_{2}}{\|x\|_{2}\,\|\Delta
b\|_{2}}\ =\ \frac{\|\Delta x\|_{2}\,\|Ax\|_{2}}{\|x\|_{2}\,\|A\Delta
x\|_{2}}\ =\ \frac{\|\Delta x\|_{2}}{\|A\Delta
x\|_{2}}\;\frac{\|Ax\|_{2}}{\|x\|_{2}}$ $\displaystyle\leq\;\max_{\Delta x\neq
0}\frac{\|\Delta x\|_{2}}{\|A\Delta x\|_{2}}\ \max_{x\neq
0}\frac{\|Ax\|_{2}}{\|x\|_{2}}\ =\ \left(\min_{\Delta x\neq 0}\frac{\|A\Delta
x\|_{2}}{\|\Delta x\|_{2}}\right)^{-1}\ \max_{x\neq
0}\frac{\|Ax\|_{2}}{\|x\|_{2}}$ $\displaystyle=\
\bigl{(}\sigma_{\min}(A)\bigr{)}^{-1}\,\sigma_{\max}(A)\,=\
\mathrm{cond}_{2}\,A$
by virtue of the properties of the singular values (see e. g. [3, 24]). A
comparison of the beginning and the end of this calculation leads to the
inequality (9), which, as is easy to understand, is attainable for some $x$
and $\Delta x$, or, equivalently, for some right-hand sides of $b$ and their
perturbations $\Delta b$. Naturally, the above calculations and the resulting
estimate make sense only for $\sigma_{\min}(A)\neq 0$.
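Inequality (9) is also easy to check numerically; here is a small sketch (assuming NumPy) with a random tall matrix of full column rank:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))            # m = 5 >= n = 3, full column rank
x = rng.standard_normal(3)
dx = 1e-3 * rng.standard_normal(3)
b, db = A @ x, A @ dx                      # then A(x + dx) = b + db

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
assert lhs <= rhs * (1 + 1e-12)            # inequality (9)
```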
### 3.2 Interval systems with point matrices
Let us consider an interval system of linear algebraic equations
$Ax=\text{\boldmath$b$}$ (10)
with a point (noninterval) $m\times n$-matrix $A$, $m\geq n$, and an interval
$m$-vector $b$ in the right-hand side. We assume that $A$ has full column rank
and, therefore, $\mathrm{cond}_{2}\,A<\infty$.
Suppose also that the tolerable solution set for system (10) is non-empty, i.
e. $\varXi_{tol}(A,\text{\boldmath$b$})=\bigl{\\{}\,x\in\mathbb{R}^{n}\mid
Ax\in\text{\boldmath$b$}\,\bigr{\\}}\neq\varnothing$. We want to estimate the
size of this solution set quickly and with little effort, and our answer will
be a “radius type” estimate for $\varXi_{tol}(A,\text{\boldmath$b$})$. More
precisely, we are going to evaluate $\max\|x^{\prime}-\hat{x}\|_{2}$ over all
$x^{\prime}\in\varXi_{tol}(A,\text{\boldmath$b$})$ and for a special fixed
point $\hat{x}\in\varXi_{tol}(A,\text{\boldmath$b$})$, which is taken as
$\hat{x}\ =\
\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$}).$
Recall that the argument $\hat{x}$ of the maximum of the recognizing
functional for system (10) is an estimate of parameters of linear function (1)
from empirical data. Strictly speaking, this point may be non-unique; in that
case, we let $\hat{x}$ be any one of the points at which the maximum is
attained.
Let $x^{\prime}$ be a point in the tolerable solution set
$\varXi_{tol}(A,\text{\boldmath$b$})$. How to evaluate
$\|x^{\prime}-\hat{x}\|_{2}$? It is clear that $x^{\prime}$ and $\hat{x}$ are
solutions of systems of linear algebraic equations with the matrix $A$ and
some right-hand sides $b^{\prime}$ and $\hat{b}$, respectively, from the
interval vector $b$. If $\hat{x}\neq 0$ and $\hat{b}\neq 0$, then we can apply
inequality (9), considering a perturbation of the solution $\hat{x}$ to the
system of linear algebraic equations $Ax=\hat{b}$. Then $\Delta
x=x^{\prime}-\hat{x}$, $\Delta b=b^{\prime}-\hat{b}$, and we get
$\frac{\|x^{\prime}-\hat{x}\|_{2}}{\|\hat{x}\|_{2}}\
\leq\;\mathrm{cond}_{2}\,A\cdot\frac{\|b^{\prime}-\hat{b}\|_{2}}{\|\hat{b}\|_{2}},$
from where the absolute estimate is obtained
$\|x^{\prime}-\hat{x}\|_{2}\
\leq\;\mathrm{cond}_{2}\,A\cdot\|\hat{x}\|_{2}\cdot\frac{\|b^{\prime}-\hat{b}\|_{2}}{\|\hat{b}\|_{2}}.$
(11)
The point $\hat{x}$ is found as the result of maximization of the recognizing
functional Tol, the point $\hat{b}$ coincides with $A\hat{x}$, the condition
number $\mathrm{cond}_{2}\,A$ can be computed by well-developed standard
procedures. Therefore, for practical work with inequality (11), one needs to
somehow evaluate $\|b^{\prime}-\hat{b}\|_{2}$.
But first, bearing in mind the further application of the deduced estimate in
a situation where the matrix $A$ may vary, we somewhat roughen (11) by taking
approximately $\|\hat{b}\|_{2}\approx\|\hat{\text{\boldmath$b$}}\|_{2}$, that
is, as the norm of the “most representative” point $\hat{\text{\boldmath$b$}}$
of the interval vector $b$, which we defined in Section 2.2:
$\|\hat{b}\|_{2}\,\approx\,\|\hat{\text{\boldmath$b$}}\|_{2},\qquad\text{
where }\
\hat{\text{\boldmath$b$}}\,=\,\tfrac{1}{2}\,\bigl{(}\,|\mathrm{mid}\,\text{\boldmath$b$}+\mathrm{rad}\,\text{\boldmath$b$}|+|\mathrm{mid}\,\text{\boldmath$b$}-\mathrm{rad}\,\text{\boldmath$b$}|\,\bigr{)}.$
In doing this, some coarsening is allowed, so instead of (11) we write
$\|x^{\prime}-\hat{x}\|_{2}\
\lessapprox\;\mathrm{cond}_{2}\,A\cdot\|\hat{x}\|_{2}\cdot\frac{\|\Delta
b\|_{2}}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$ (12)
Now it is necessary to determine the increment of the right-hand side $\Delta
b=b^{\prime}-\hat{b}$. Its obvious upper bound is
$2\,\mathrm{rad}\,\text{\boldmath$b$}$, but it is too crude. To get a more
accurate estimate of $\Delta b$, we also consider, along with system (10), a
system of linear algebraic equations
$Ax=\tilde{\text{\boldmath$b$}},$ (13)
for which the right-hand side is obtained by uniform “compressing” the
interval vector $b$:
$\tilde{\text{\boldmath$b$}}\,:=\,\bigl{[}\,\underline{\text{\boldmath$b$}}+M,\overline{\text{\boldmath$b$}}-M\,\bigr{]},$
(14)
where
$M\;:=\;\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$})\
\geq\ 0.$
Since the maximum $M$ is reached for a certain value of the argument,
$\hat{x}$, then
$M=\,\min_{1\leq i\leq
m}\left\\{\,\mathrm{rad}\,\text{\boldmath$b$}_{i}-\left|\;\mathrm{mid}\,\text{\boldmath$b$}_{i}-\sum_{j=1}^{n}\,\text{\boldmath$a$}_{ij}\hat{x}_{j}\,\right|\,\right\\}\
\leq\,\min_{1\leq i\leq m}\,\mathrm{rad}\,\text{\boldmath$b$}_{i}.$
As a result,
$\underline{\text{\boldmath$b$}}+M\leq\overline{\text{\boldmath$b$}}-M$ in
componentwise sense, and the endpoints in the interval vector (14) do not
“overlap” each other.
But the properties of the recognizing functional imply that, for the interval
system of linear algebraic equations (13) with the right-hand side (14), the
maximum of the recognizing functional is zero:
$\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\tilde{\text{\boldmath$b$}})\
=\ 0.$
Indeed, the values of $\mathrm{rad}\,\text{\boldmath$b$}_{i}$ are summands in
all expressions in (5), for which we take the minimum over $i=1,2,\ldots,m$.
Hence, if we simultaneously increase or decrease all
$\mathrm{rad}\,\text{\boldmath$b$}_{i}$ by the same value, keeping the
midpoints $\mathrm{mid}\,\text{\boldmath$b$}_{i}$ unchanged, then the total
value of the recognizing functional will increase or decrease by exactly same
value. In other words, if we take a constant $C\geq 0$ and the interval
$m$-vector $\text{\boldmath$e$}=([-1,1],\ldots,[-1,1])^{\top}$, then, for the
system $Ax=\text{\boldmath$b$}+C\text{\boldmath$e$}\,$ with all the right-hand
sides expanded by $[-C,C]$, we have
$\mathrm{Tol}\,(x,A,\text{\boldmath$b$}+C\text{\boldmath$e$})\ =\
\mathrm{Tol}\,(x,A,\text{\boldmath$b$})+C.$ (15)
Therefore,
$\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$}+C\text{\boldmath$e$})\
=\ \max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,A,\text{\boldmath$b$})+C.$ (16)
The uniform narrowing of the right-hand side vector acts on the tolerable
solution set and the recognizing functional in a completely similar way. If we
narrow down all the components by the same value $M$, then the maximum of the
recognizing functional of the new interval system also decreases by $M$.
By virtue of the properties of the recognizing functional, the tolerable
solution set $\varXi_{tol}(A,\tilde{\text{\boldmath$b$}})$ for system (13) has
empty interior (such sets are often called “non-solid” or “meager”), which we
will consider equivalent to “having zero size”. Naturally, this is a
simplifying assumption, since in reality the tolerable solution set
corresponding to the zero maximum of the recognizing functional may not be a
single-point set. Nevertheless, we accept this simplification. It is also supported
by the fact that the situation with the zero maximum of the recognizing
functional is unstable: the corresponding tolerable solution set can become
empty with an arbitrarily small data perturbation (see Section 4).
Another fact concerning the auxiliary system (13) with the narrowed right-hand
side, which follows from (15)–(16), is that the point $\hat{x}$ remains to be
the argument of the maximum of the recognizing functional:
$\hat{x}\ =\
\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\tilde{\text{\boldmath$b$}}).$
For this reason, the point $\hat{b}=A\hat{x}$ lies in the interval vector
$\tilde{\text{\boldmath$b$}}$ defined by (14).
From what has been said, it follows that the solution set for the system
$Ax=\text{\boldmath$b$}$ is obtained from the solution set of the system
$Ax=\tilde{\text{\boldmath$b$}}$, which has “negligible size” and for which
$\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,(x,\text{\boldmath$A$},\tilde{\text{\boldmath$b$}})=0$,
through expanding the right-hand side vector $\tilde{\text{\boldmath$b$}}$ in
each component simultaneously by $[-M,M]$, where
$M=\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$}).$
The interval vector $\tilde{\text{\boldmath$b$}}\ni\hat{b}$ may have non-zero size,
but we put
$[-\Delta b,\Delta b]=([-M,M],\ldots,[-M,M])^{\top}$
in order to make our estimate (12) attainable. Accordingly, in inequality (12)
we take
$\|\Delta
b\|=\max_{x\in\mathbb{R}^{n}}\;\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$}),$
if the Chebyshev norm ($\infty$-norm) is considered, or a value that differs
from it by a corrective factor from the equivalence inequality for vector
norms, if we take any other norm. As is known, for any vector
$y\in\mathbb{R}^{n}$ (see [2])
$\|y\|_{\infty}\leq\|y\|_{2}\leq\sqrt{n}\;\|y\|_{\infty}.$ (17)
Then
$\|x^{\prime}-\hat{x}\|_{2}\ \lessapprox\,\sqrt{n}\
\,\mathrm{cond}_{2}A\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$
(18)
What happens if the matrix $A$ does not have full column rank? Then, by
virtue of the Irene Sharaya criterion, the nonempty tolerable solution set to
the system (10) is unbounded. This is completely consistent with the fact that
then $\mathrm{cond}_{2}A=\infty$ and the value of the variability measure IVE
is infinite too.
### 3.3 General interval systems
Finally, we consider a general interval system of linear equations
$\text{\boldmath$A$}x=\text{\boldmath$b$}$, with an essentially interval
matrix, i. e., when $\mathrm{rad}\,\text{\boldmath$A$}\neq 0$. In view of the
properties of the tolerable solution set (see, e. g., [15]), it can be
represented as
$\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})\ =\
\bigcap_{A\in\text{\boldmath$A$}}\;\bigl{\\{}\,x\in\mathbb{R}^{n}\mid
Ax\in\text{\boldmath$b$}\,\bigr{\\}}\ =\
\bigcap_{A\in\text{\boldmath$A$}}\varXi_{tol}(A,\text{\boldmath$b$}),$ (19)
i. e., as the intersection of the solution sets to the individual systems
$Ax=b$ with point matrices $A\in\text{\boldmath$A$}$.
For each interval linear system $Ax=\text{\boldmath$b$}$ with
$A\in\text{\boldmath$A$}$, we have estimate (18), if $A$ has full column rank.
Otherwise, if the point matrix $A$ has incomplete column rank and the
corresponding solution set $\varXi_{tol}(A,\text{\boldmath$b$})$ is unbounded,
then we do not take it into account. Consequently, for the tolerable solution
set of the system $\text{\boldmath$A$}x=\text{\boldmath$b$}$, which is the
intersection of the solution sets $\varXi_{tol}(A,\text{\boldmath$b$})$ for
all $A\in\text{\boldmath$A$}$, the following should be true:
$\|x^{\prime}-\hat{x}\|_{2}\ \lessapprox\
\min_{A\in\text{\boldmath$A$}}\,\left\\{\;\sqrt{n}\;\,\mathrm{cond}_{2}A\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}\,\right\\}.$
(20)
The transition from representation (19) to inequality (20) can be both very
accurate and rather crude (as can be seen from considering the intersection of
two 1D intervals). It all depends on the size of the intersection of the
solution sets of individual systems $Ax=\text{\boldmath$b$}$. On the other
hand, the amount of this intersection is indirectly characterized by the
magnitude of $\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$.
Taking the above facts into account, we approximately estimate the right-hand
side of inequality (20) by moving the minimum over $A\in\text{\boldmath$A$}$
inside the curly brackets. First of all, we replace the factor
$\|\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\|_{2}$, which varies the
least, by the constant available to us after the numerical solution of the
data fitting problem:
$\bigl{\|}\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$})\bigr{\|}_{2}\,\approx\;\mathrm{const}\;=\;\bigl{\|}\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})\bigr{\|}_{2}.$
(21)
Next, the minimum of $\mathrm{cond}_{2}A$ naturally becomes
$\min\mathrm{cond}_{2}A$, and the most important factor
$\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,(x,A,\text{\boldmath$b$})$ is
changed to
$\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,(x,\text{\boldmath$A$},\text{\boldmath$b$})$.
This choice (as well as (21)) is rather rigidly dictated by the following
considerations.
The expression for our variability measure should preserve its simplicity and
be uniform for all cases and situations. In particular, if the interval matrix
$A$ squeezes to a point matrix $A$, then our measure should turn to the
estimate (18) for the point case. Finally, if $\max\,\mathrm{Tol}\,=0$, then
our measure must be zero too, since the size of the (stable) tolerable
solution set is also zero, and our variability measure should reliably detect
such situations. All this taken together leads to the estimate
$\|x^{\prime}-\hat{x}\|_{2}\;\,\lessapprox\ \sqrt{n}\
\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$
(22)
The same estimate as (22), by virtue of the equivalence inequality (17), is
also true for the Chebyshev norm:
$\max_{x^{\prime}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})}\|x^{\prime}-\hat{x}\|_{\infty}\
\lessapprox\
\sqrt{n}\;\,\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}\,A\,\Bigr{)}\cdot\bigl{\|}\,\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,\bigr{\|}_{2}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$
This completes the rationale for (6).
If we want to evaluate the relative size of the tolerable solution set,
expressing it relative to the norm of its points, then it is reasonable to
take $\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$ as the “most
typical” point from the tolerable solution set
$\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})$. Using (17) again, we
obtain
$\frac{\max_{x^{\prime}\in\varXi_{tol}(\text{\boldmath$A$},\text{\boldmath$b$})}\|x^{\prime}-\hat{x}\|_{\infty}}{\|\hat{x}\|_{\infty}}\
\lessapprox\
n\;\Bigl{(}\;\min_{A\in\text{\boldmath$A$}}\,\mathrm{cond}_{2}A\,\Bigr{)}\cdot\frac{\max_{x\in\mathbb{R}^{n}}\,\mathrm{Tol}\,}{\|\hat{\text{\boldmath$b$}}\|_{2}}.$
This gives value (8).
## 4 Numerical examples and some tests
First of all, we consider an example of an unstable tolerable solution set
changes abruptly with small perturbations in the system of equations. For all
interval $2\times 2$-systems of linear algebraic equations of the form
$\begin{pmatrix}[-1,1]&[-1,1]\\\\[2.0pt] 1&-1\\\\[2.0pt]
\end{pmatrix}\begin{pmatrix}x_{1}\\\\[2.0pt]
x_{2}\end{pmatrix}=\begin{pmatrix}{[-1,1]}\\\\[2.0pt]
{[1,1+\eta]}\end{pmatrix},\qquad\eta\geq 0,$ (23)
the tolerable solution sets are the same: this is the straight line segment
joining the points $(0,-1)$ and $(1,0)$ and depicted in Fig. 6. The diameter
of the solution set is essentially non-zero (namely, $\sqrt{2}$), but the
unconstrained maximum of the recognizing functional Tol for all such systems
is zero, and it is attained at the point $(0.5,-0.5)$.
Fig. 6: The tolerable solution set for the interval equation systems (23).
At the same time, any arbitrarily small increase in the lower endpoint of the
interval $[1,1+\eta]$ in the right-hand side of the second equation makes the
tolerable solution set empty. An arbitrarily small reduction of the upper
endpoint of the interval $[-1,1]$, located in the first component of the
right-hand side vector, produces a similar effect. It turns out that the
maximum value of the recognizing functional Tol characterizes very precisely
the instability of the original solution set and the zero size of the solution
sets of perturbed systems.
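This behavior is easy to reproduce with the `tol` sketch from Section 2.1, e. g. for $\eta=0.1$:

```python
import numpy as np

eta = 0.1                                    # any eta >= 0 gives the same picture
A_inf = np.array([[-1.0, -1.0], [1.0, -1.0]])
A_sup = np.array([[ 1.0,  1.0], [1.0, -1.0]])
b_inf = np.array([-1.0, 1.0])
b_sup = np.array([ 1.0, 1.0 + eta])

# maximum of Tol for system (23) is zero, attained at (0.5, -0.5)
print(tol(np.array([0.5, -0.5]), A_inf, A_sup, b_inf, b_sup))  # ~0 up to rounding
```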
As the second example, we consider the problem of constructing a linear
function of two variables $a_{1}$ and $a_{2}$,
$b=x_{1}a_{1}+x_{2}a_{2},$ (24)
from the interval data obtained in 3 measurements:
$\begin{array}[]{c|ccc}&\text{\boldmath$a$}_{1}&\text{\boldmath$a$}_{2}&\text{\boldmath$b$}\\\
\hline\cr\\\\[-8.53581pt] 1&[98,100]&[99,101]&[190,210]\\\\[3.0pt]
2&[97,99]&[98,100]&[200,220]\\\\[3.0pt]
3&[96,98]&[97,99]&[190,210]\end{array}$
Note that in these data the three-dimensional uncertainty boxes of
measurements 1 and 2, as well as 2 and 3, substantially “overlap” each other:
their intersections are boxes with non-empty interiors, the sizes of which are
comparable to the sizes of the original data boxes.
Fig. 7: The tolerable solution set of the system of equations (25)
in comparison with the box constructed by using the estimate IVE.
To determine the coefficients $x_{1}$ and $x_{2}$, we compose an interval
$3\times 2$-system of linear algebraic equations
$\begin{pmatrix}[98,100]&[99,101]\\\\[2.0pt] [97,99]&[98,100]\\\\[2.0pt]
[96,98]&[97,99]\end{pmatrix}\begin{pmatrix}x_{1}\\\\[2.0pt]
x_{2}\end{pmatrix}=\begin{pmatrix}{[190,210]}\\\\[2.0pt]
{[200,220]}\\\\[2.0pt] {[190,210]}\end{pmatrix}.$ (25)
Its matrix has incomplete rank, since it contains, as a member, the following
point matrix of rank 1:
$\begin{pmatrix}98&99\\\ 98&99\\\ 98&99\end{pmatrix}.$ (26)
The united solution set for system (25) is unbounded, therefore it is hardly
possible to determine, with certainty, the coefficients of the linear function
(24) satisfying the weak compatibility between parameters and data (see
Section 2). However, the interval matrix of system (25) does not contain
linearly dependent point columns, and therefore, according to the Irene
Sharaya criterion [13] (see Section 2.1), the tolerable solution set is
bounded. It is depicted in Fig. 7, which is drawn by the procedure EqnTol2D
from the package IntLinInc2D [14]. The minimum spectral condition number of
the point matrices contained in the interval matrix of (25) is $103.83$, and
it is reached on the matrix
$\begin{pmatrix}100&99\\\ 97&100\\\ 96&99\end{pmatrix}.$
This result can be obtained, for example, using the simulated annealing
algorithm on the set of point matrices contained in the interval matrix of
(25).
Numerical solution of the maximization problem for the recognizing functional
Tol can be carried out within the MATLAB environment, using the free program
tolsolvty2.m (available from the author’s page at ResearchGate [21]). It
implements a modified version of the so-called $r$-algorithms for non-
differentiable optimization proposed and developed by N.Z. Shor and N.G.
Zhurbenko [20]. Using its default precision settings, we get
$\max\,\mathrm{Tol}\,=1.9095$, which is reached at $\hat{x}=(5.1857\cdot
10^{-7},2.0603)^{\top}$. Then,
$\mathrm{IVE}\,=\sqrt{2}\cdot 1.9095\cdot
103.83\cdot\frac{\|\hat{x}\|_{2}}{\sqrt{200^{2}+210^{2}+200^{2}}}=1.6399.$
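For reference, this value of IVE is reproduced directly from the quantities reported above:

```python
import numpy as np

x_hat = np.array([5.1857e-7, 2.0603])       # arg max Tol for system (25)
b_hat = np.array([200.0, 210.0, 200.0])     # formula (7) applied to the rhs of (25)
ive_25 = np.sqrt(2) * 1.9095 * 103.83 * np.linalg.norm(x_hat) / np.linalg.norm(b_hat)
print(ive_25)                               # 1.6399...
```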
The interval hull of the tolerable solution set for system (25) (that is, its
optimal interval enclosure) is the box
$\begin{pmatrix}[-0.9620,3.0227]\\\\[2.0pt] [-0.9320,3.0127]\end{pmatrix},$
which can also be found by the procedure EqnTol2D. We see that the value of
IVE is in satisfactory agreement with the radii of the components of the
optimal estimate of the solution set, equal to $1.9924$ and $1.9724$
respectively.
In the maximum compatibility method, the argument
$\hat{x}=\arg\max_{x\in\mathbb{R}^{n}}\mathrm{Tol}\,$ of the unconstrained
maximum of the recognizing functional plays a crucial role, and, in fact, our
variability estimate IVE relies heavily on it. This is why it makes sense to
look at the box $\hat{\text{\boldmath$x$}}$ with the components
$[\hat{x}_{i}-\mathrm{IVE}\,,\hat{x}_{i}+\mathrm{IVE}\,]$, $i=1,2$. It is also
depicted in Fig. 7, and the substantial asymmetry of its location relative to
the solution set is, of course, explained by the specific position of the
center, the point $\hat{x}$, as well as the ill-conditioning of the point
systems from (25). With other data, the box $\hat{\text{\boldmath$x$}}$
estimates the tolerable solution sets significantly better (see further).
Next, we give an example of the opposite type (in a sense, dual to the
previous example), where a linear function of three variables
$b=x_{1}a_{1}+x_{2}a_{2}+x_{2}a_{3}$ (27)
is to be constructed from the data of two experiments summarized below:
$\begin{array}[]{c|cccc}&\text{\boldmath$a$}_{1}&\text{\boldmath$a$}_{2}&\text{\boldmath$a$}_{3}&\text{\boldmath$b$}\\\
\hline\cr\\\\[-8.53581pt] 1&[98,100]&[97,99]&[96,98]&[190,210]\\\\[3.0pt]
2&[99,101]&[98,100]&[97,99]&[200,220]\end{array}$
To find the parameters of function (27), we come to an underdetermined
interval system of linear algebraic equations
$\begin{pmatrix}[98,100]&[97,99]&[96,98]\\\\[2.0pt]
[99,101]&[98,100]&[97,99]\end{pmatrix}\begin{pmatrix}x_{1}\\\\[1.0pt]
x_{2}\\\\[1.0pt] x_{3}\end{pmatrix}=\begin{pmatrix}{[190,210]}\\\\[2.0pt]
{[200,220]}\end{pmatrix}.$ (28)
Its matrix is the transposed matrix of system (25), so
$\min_{A\in\text{\boldmath$A$}}\mathrm{cond}_{2}\,A$ is the same for it. Also,
the matrix of system (28) contains a point matrix of the incomplete rank 1,
which is transposed for (26) (and many more such matrices).
Fig. 8: The tolerable solution set for the interval equation system (28).
Again, the united solution set for system (28) is unbounded, and it is
difficult (if at all possible) to determine the coefficients of the linear
function (27), relying on the weak compatibility between parameters and data,
due to “infinite variability” of the resulting estimate. Nevertheless, in
these adverse conditions, the nonempty tolerable solution set to the interval
system of equations (28) is bounded by virtue of the Irene Sharaya criterion
[13] (see Section 2.1). In Fig. 8, the tolerable solution set is depicted as a
thin hexagonal plate. Computation of the maximum of the recognizing functional
for this system using the code tolsolvty2.m gives the value
$\max\mathrm{Tol}\,=3.9698$, which is reached at the point
$\hat{x}=\arg\max\mathrm{Tol}\,=\,\bigl{(}\,2.0603,3\cdot 10^{-6},2.1\cdot
10^{-6}\,\bigr{)}^{\top}.$
It can be taken as an estimate of the coefficients in (27). Then the
variability measure of the above estimate is
$\mathrm{IVE}\,=\sqrt{2}\cdot 3.9698\cdot
103.83\cdot\frac{\|\hat{x}\|_{2}}{\sqrt{200^{2}+210^{2}}}=4.1413.$
The interval hull (optimal interval enclosure) of the tolerable solution set for
system (28) is the box
$\begin{pmatrix}[-1.9747,4.0302]\\\\[2.0pt] [-1.9899,4.0759]\\\\[2.0pt]
[-1.9949,4.1071]\end{pmatrix},$
which can also be computed by the procedure EqnTolR3. The radii of the
components of this interval vector are $3.0024$, $3.0329$, $3.0510$
respectively, which is also not very different from the value of IVE. The
example shows that the value IVE works even in the case of $m<n$, when the
number of measurements is less than the number of parameters to be determined.
A rigorous justification of this fact, however, still awaits further study.
To conclude the section, we present, in Table 1, the results of numerical
tests for the interval linear $n\times n$-system
$\left(\begin{array}[]{cccc}\theta&{[0,2]}&\cdots&{[0,2]}\\\\[1.0pt]
{[0,2]}&\theta&\cdots&{[0,2]}\\\\[1.0pt]
\vdots&\vdots&\ddots&\vdots\\\\[1.0pt]
{[0,2]}&{[0,2]}&\cdots&\theta\end{array}\right)\;\left(\begin{array}[]{@{\,}c@{\,}}x_{1}\\\\[1.0pt]
x_{2}\\\\[1.0pt] \vdots\\\\[1.0pt]
x_{n}\end{array}\right)=\left(\begin{array}[]{@{\;}c@{\;}}{[1,K]}\\\\[1.0pt]
{[1,K]}\\\\[1.0pt] \vdots\\\\[1.0pt] {[1,K]}\end{array}\right),$ (29)
with various $n$ and $K$. System (29) resembles that proposed in [9], having
exactly same matrix. But the right-hand sides were taken as positive intervals
$[1,K]$, since the original balanced intervals $[-1,1]$ in the system from [9]
make the tolerable solution set “too symmetric”.
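For experimentation, the test system (29) is easy to generate programmatically; here is a small sketch in the endpoint-array convention used above:

```python
import numpy as np

def system_29(n, theta, K):
    """Interval n x n system (29): theta on the diagonal, [0, 2] off the
    diagonal, and all right-hand sides equal to [1, K]."""
    A_inf = np.zeros((n, n)); np.fill_diagonal(A_inf, theta)
    A_sup = np.full((n, n), 2.0); np.fill_diagonal(A_sup, theta)
    return A_inf, A_sup, np.ones(n), np.full(n, float(K))

# e.g., the case n = 5, theta = 8.0, K = 10 from Table 1 below:
A_inf, A_sup, b_inf, b_sup = system_29(5, 8.0, 10)
```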
Table 1: Results of the computational tests with system (29)

| $\theta$ | $\mathrm{IVE}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ | $\theta$ | $\mathrm{IVE}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ | $\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ |
|---|---|---|---|---|---|---|---|
| $n=5$, $K=10$ | | | | $n=10$, $K=10$ | | | |
| 2.0 | 1.019 | 1.25 | 2.795 | 6.0 | 0.894 | 0.5 | 1.581 |
| 4.0 | 1.081 | 0.875 | 1.957 | 9.0 | 1.491 | 0.389 | 1.230 |
| 6.0 | 0.786 | 0.639 | 1.429 | 12.0 | 0.582 | 0.313 | 0.988 |
| 8.0 | 0.681 | 0.5 | 1.118 | 15.0 | 0.495 | 0.26 | 0.822 |
| 10.0 | 0.534 | 0.41 | 0.917 | 20.0 | 0.396 | 0.203 | 0.640 |
| $n=5$, $K=20$ | | | | $n=10$, $K=20$ | | | |
| 2.0 | 2.953 | 3.75 | 8.385 | 6.0 | 2.489 | 1.333 | 4.216 |
| 4.0 | 2.698 | 2.125 | 4.752 | 9.0 | 1.831 | 0.944 | 2.987 |
| 6.0 | 2.015 | 1.472 | 3.292 | 12.0 | 1.432 | 0.729 | 2.306 |
| 8.0 | 1.591 | 1.125 | 2.516 | 15.0 | 1.255 | 0.593 | 1.876 |
| 10.0 | 1.378 | 0.91 | 2.035 | 20.0 | 0.985 | 0.453 | 1.431 |
The interval matrix of system (29) is known to be regular if and only if
$\theta>n$ for even $n$ and $\theta>\sqrt{n^{2}-1}$ for odd $n$ [9].
Consequently, in Table 1, the first two rows that correspond to each separate
case of $n$ and $K$ refer to systems with singular matrices. As the parameter
$\theta$ grows, the matrix of the system becomes regular and better
conditioned.
The values of IVE are compared with
$\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ and
$\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$, that is, with the Chebyshev
norm (max-norm) and the Euclidean norm of the radius of the interval hull of
the tolerable solution set (denoted by $\Box\,\varXi_{tol}$). We can see that,
with the exception of two cases corresponding to $n=5$ and $K=10,20$, the
values of IVE always lie within the two-sided bounds given by
$\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{\infty}$ (lower bound) and
$\|\,\mathrm{rad}\,(\Box\,\varXi_{tol})\|_{2}$ (upper bound), and that is
reasonable. Overall, our
numerical experiments confirm the adequacy of the new measure of variability,
which gives quite satisfactory approximate estimates of the size of the
tolerable solution sets in various situations.
## 5 Discussion
IVE is the first variability measure proposed in the statistics of interval
data for estimation by the maximum compatibility method, so we cannot compare
IVE with other similar measures: they simply do not exist yet.
However, it is useful to correlate the estimate IVE with the ideal
mathematical characteristics of the solution set, such as its diameter, in
terms of computational convenience and laboriousness.
First of all, IVE reflects instabilities in the solution set better than the
diameter (see the first example in Section 4). An instability of the tolerable
solution set for an interval linear system arises in the case when the maximum
value of the recognizing functional is zero, $\max\mathrm{Tol}\,=0$. Then the
tolerable solution set can be either a single-point or an extended set with
non-zero diameter and empty interior [15]. After an arbitrarily small
perturbation of data, the latter situation can abruptly turn into the empty
solution set. In any case, this phenomenon is signaled by the zero value of
the maximum of the recognizing functional. The corresponding variability
measure IVE is also zero, which is quite natural: it makes sense to show only
“stable size” of the solution set. The equality of IVE to zero or “almost
zero” thus allows us to diagnose unstable cases.
Next, the problem of computing the diameter, in the 2-norm, of the tolerable
solution set of an interval linear system of equations is NP-hard in general.
This follows from its reducibility to a quadratic programming problem with
linear constraints (see [22]). Indeed, the membership of a point in the
tolerable solution set of an interval $m\times n$-system of equations is
determined by a system of linear inequalities of size $2m\times 2n$ (the
Rohn theorem [12]). To compute the diameter of the tolerable solution set in
the 2-norm, we have to maximize the quadratic objective function
$\|x^{\prime}-x^{\prime\prime}\|^{2}_{2}$ over all pairs of points
$x^{\prime}$, $x^{\prime\prime}$ from the tolerable solution set, i.e.
satisfying the $2m\times 2n$ systems of linear inequalities. So, computing the
diameter of the tolerable solution set is not easy.
The diameter of the interval hull of the tolerable solution set can be
computed more simply, but this does not make it preferable to IVE: the
interval hull is not the solution set itself, and the coarsening resulting
from such a replacement may be large.
Calculation of IVE by formula (6) requires solving the data fitting problem,
that is, finding $\max\mathrm{Tol}$ and $\arg\max\mathrm{Tol}$, and then
evaluating the minimum of the condition number over the matrices contained in
the interval data matrix. The recognizing functional Tol is a concave
piecewise linear function [15], so computing its maximum has polynomial
complexity. The author solves this problem efficiently by the nonsmooth
optimization methods developed in recent decades, in particular the
$r$-algorithms proposed by N.Z. Shor [20] or separating plane algorithms
(see, e.g., [11, 23]). The most difficult part of calculating IVE is thus
evaluating the minimum condition number of the point matrices from a given
interval matrix.
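As an illustration of the first step, the sketch below (my own, not the
author's code) maximizes a recognizing functional of the commonly used form
$\mathrm{Tol}(x)=\min_{i}\bigl(\mathrm{rad}\,b_{i}-|\mathrm{mid}\,b_{i}-(\mathrm{mid}\,A\,x)_{i}|-(\mathrm{rad}\,A\,|x|)_{i}\bigr)$
by a single linear program; since Tol is concave piecewise linear, this is an
exact reformulation. The precise functional used in the paper may differ in
details.

```python
# A minimal sketch (illustration only): maximizing a recognizing functional
# of the form stated above via one linear program.
# Variable layout: z = [x (n), s (m), u (n), t], where at the optimum
# s_i = |mid b_i - (mid A x)_i|, u_j = |x_j|, and t is the value of Tol.
import numpy as np
from scipy.optimize import linprog

def max_tol(midA, radA, midb, radb):
    m, n = midA.shape
    nvar = n + m + n + 1
    c = np.zeros(nvar)
    c[-1] = -1.0                                  # minimize -t, i.e. maximize t
    rows, rhs = [], []
    for i in range(m):                            # t <= rad b_i - s_i - (rad A u)_i
        r = np.zeros(nvar)
        r[-1], r[n + i], r[n + m:n + m + n] = 1.0, 1.0, radA[i]
        rows.append(r); rhs.append(radb[i])
    for i in range(m):                            # +-(mid b_i - (mid A x)_i) <= s_i
        r = np.zeros(nvar); r[:n], r[n + i] = -midA[i], -1.0
        rows.append(r); rhs.append(-midb[i])
        r = np.zeros(nvar); r[:n], r[n + i] = midA[i], -1.0
        rows.append(r); rhs.append(midb[i])
    for j in range(n):                            # +-x_j <= u_j
        r = np.zeros(nvar); r[j], r[n + m + j] = 1.0, -1.0
        rows.append(r); rhs.append(0.0)
        r = np.zeros(nvar); r[j], r[n + m + j] = -1.0, -1.0
        rows.append(r); rhs.append(0.0)
    bounds = [(None, None)] * n + [(0, None)] * (m + n) + [(None, None)]
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=bounds)
    return -res.fun, res.x[:n]                    # (max Tol, arg max Tol)
```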
Computing the exact minimum of the condition number is not simple, but for the
practical problems in which the value of IVE is applied, it suffices to have
an approximate upper estimate of
$\min_{A\in\boldsymbol{A}}\mathrm{cond}_{2}\,A$. This follows from our
considerations in Section 3.3. Sometimes it is not necessary to compute
$\min\mathrm{cond}_{2}\,A$ at all, namely when we only need to compare the
variability of estimates obtained for the same data matrix $\boldsymbol{A}$.
If the interval matrix is "sufficiently narrow", that is, not very different
from a point matrix, then we can approximate
$\min_{A\in\boldsymbol{A}}\,\mathrm{cond}_{2}\,A\;\approx\;\mathrm{cond}_{2}(\mathrm{mid}\,\boldsymbol{A}).$
(30)
But in general this recipe may work poorly, since the left- and right-hand
sides of the approximate equality (30) can differ substantially. In the
examples with systems (25) and (28) from Section 4, the condition number of
the midpoint matrix is $2.38\cdot 10^{4}$, and using the simplified formula
(30) we would misestimate the variability measure IVE by a factor of more
than 20.
In the general case, for a more accurate evaluation of
$\min\mathrm{cond}_{2}\,A$, we can use popular evolutionary optimization
methods, such as genetic algorithms, simulated annealing, particle swarm
optimization, etc., over the interval matrix $\boldsymbol{A}$. In the
numerical experiments of Section 4, the minimum of the condition number was
found using the standard simulated annealing program of the free computer
mathematics system Scilab.
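The same idea can be expressed compactly with an off-the-shelf annealing
routine (a sketch of my own, not the Scilab code used by the author):

```python
# Estimating min cond_2(A) over the interval matrix [Ainf, Asup] by simulated
# annealing. Any value found this way is an upper estimate of the true
# minimum, which is exactly what the IVE computation needs.
# Note: assumes Ainf < Asup entrywise; treat zero-width entries separately.
import numpy as np
from scipy.optimize import dual_annealing

def min_cond2(Ainf, Asup, maxiter=500):
    shape = Ainf.shape
    bounds = list(zip(Ainf.ravel(), Asup.ravel()))   # entrywise interval bounds
    f = lambda a: np.linalg.cond(a.reshape(shape), 2)
    res = dual_annealing(f, bounds=bounds, maxiter=maxiter)
    return res.fun, res.x.reshape(shape)             # upper estimate and matrix
```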
Note that there is a fundamental difference between computing the variability
measure IVE and computing the diameter of the tolerable solution set: in the
first case we calculate a minimum, while in the second we have to find a
maximum. Using traditional optimization methods and various heuristics, in the
first case we compute an approximation to the minimum from above, and in the
second case an approximation to the maximum from below. If we want our
variability measure to give a guaranteed outer estimate of the solution set,
then the upper estimate obtained by calculating the minimum in IVE is
preferable.
There exists one more viewpoint at the variability measure IVE.
In traditional probabilistic statistics, the phenomenon of collinearity of
data (also called "multicollinearity") plays a large role. It is the presence
of a linear dependence between the input (predictor) variables of the
regression model. The $k$ variables of the model in question are usually
called _collinear_ if the vectors representing them lie in a linear space of
dimension less than $k$ [1], so that one of these vectors is a linear
combination of the others. In practice, such exact collinearity of data is
rare, but real computational problems in data fitting often start when the
data vectors are "almost linearly dependent". Then the parameter estimates are
unstable, which leads to increased statistical uncertainty, i.e., an increase
in the variance of the estimates.
According to modern views, the collinearity of data is most adequately
described by the condition number of the matrix made up of these data (see,
e.g., [1], Chapter 3). In this sense, our IVE is in fact a measure of the
collinearity of the data, corrected with the help of the actual value of the
estimate and the compatibility of this estimate with the data (indicated by
the maximal value of the recognizing functional). The minimum of
$\mathrm{cond}_{2}A$ over all $A\in\boldsymbol{A}$ is taken due to the
specifics of the strong compatibility of parameters and data, and it agrees
well with the regularizing role of the tolerable solution set (see [18]).
With this interpretation, IVE makes sense even for a negative maximum of the
recognizing functional, $\max\mathrm{Tol}$, when the tolerable solution set is
empty and parameters of the linear function (1) that are strongly compatible
with the data do not exist. The absolute value of IVE still shows, up to a
scaling coefficient, a measure of the collinearity of the data (a measure of
their ill-conditioning), while the negative sign indicates the status of the
solution to the problem, i.e., that the computed parameter vector is not
strongly compatible with the data but only provides the best possible
approximation for the input data of the problem.
The immediate goal of further research is to justify the use of IVE in
underdetermined situations, where the number $m$ of observations is less than
the number $n$ of parameters to be determined. The maximum compatibility
method works well in this case too; we obtain parameter estimates and can
calculate their IVE values, but this application needs to be justified, at
least at the same level of rigor as was done in this work for $m\geq n$.
#### Acknowledgements
The author is grateful to Alexander Bazhenov, Sergey Kumkov, and Sergei Zhilin
for stimulating discussions and valuable comments on the work. Also, the
author thanks the anonymous reviewers for their constructive criticism and
good suggestions.
## References
  * [1] D.A. Belsley, E. Kuh, R.E. Welsch, Regression Diagnostics, Wiley-Interscience, Hoboken, N.J., 1980, 2004.
* [2] G.H. Golub, Ch.F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, 1996, 2013.
* [3] R.A. Horn, Ch.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994.
* [4] L. Jaulin, M. Kieffer, O. Didrit, E. Walter, Applied Interval Analysis, Springer, London, 2001.
* [5] R.B. Kearfott, M. Nakao, A. Neumaier, S. Rump, S.P. Shary, P. van Hentenryck, Standardized notation in interval analysis, Computational Technologies 15 (2010), No. 1, pp. 7–13.
* [6] V. Kreinovich, S.P. Shary, Interval methods for data fitting under uncertainty: a probabilistic treatment, Reliable Computing 23 (2016), pp. 105–140. URL: http://interval.louisiana.edu/reliable-computing-journal/volume-23/reliable-computing-23-pp-105-140.pdf (accessed 10 March 2020).
* [7] M. Milanese, J. Norton, H. Piet-Lahanier, E. Walter (Eds.), Bounding Approaches to System Identification, Plenum Press, New York, 1996. DOI: 10.1007/978-1-4757-9545-5
* [8] R.E. Moore, R.B. Kearfott, M.J. Cloud, Introduction to Interval Analysis, SIAM, Philadelphia, 2009.
* [9] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, 1990.
* [10] H.T. Nguyen, V. Kreinovich, B. Wu, G. Xiang, Computing Statistics under Interval and Fuzzy Uncertainty. Applications to Computer Science and Engineering, Springer, Berlin-Heidelberg, 2012.
* [11] E.A. Nurminski, Separating plane algorithms for convex optimization, Mathematical Programming 76 (1997), pp. 373–391. DOI: 10.1007/BF02614389
* [12] J. Rohn, Inner solutions of linear interval systems, in: Nickel K. (Ed.), Interval Mathematics 1985, Lecture Notes on Computer Science 212, Springer, New York, 1986, pp. 157–158.
* [13] I.A. Sharaya, On unbounded tolerable solution sets, Reliable Computing 11 (2005), pp. 425–432. DOI: 10.1007/s11155-005-0049-9
* [14] I.A. Sharaya, IntLinInc2D, a MATLAB software package for visualization of solution sets to interval linear 2D systems. Novosibirsk, 2014. URL: http://www.nsc.ru/interval/sharaya
* [15] S.P. Shary, Solving the linear interval tolerance problem, Mathematics and Computers in Simulation 39 (1995), pp. 53–85. DOI: 10.1016/0378-4754(95)00135-K
* [16] S.P. Shary, Solvability of interval linear equations and data analysis under uncertainty, Automation and Remote Control 73 (2012), pp. 310–322. DOI: 10.1134/S0005117912020099
* [17] S.P. Shary, Maximum consistency method for data fitting under interval uncertainty, Journal of Global Optimization 66 (2016), pp. 111–126. DOI: 10.1007/s10898-015-0340-1
* [18] S.P. Shary, Interval regularization for imprecise linear algebraic equations. Deposited in arXiv.org on 27 Sep 2018, No. arXiv:1810.01481.
* [19] S.P. Shary, Weak and strong compatibility in data fitting under interval uncertainty, Advances in Data Science and Adaptive Analysis 12 (2020) (in press).
* [20] N.Z. Shor, N.G. Zhurbenko, A minimization method using space dilation in the direction of difference of two successive gradients, Cybernetics 7(3) (1971), pp. 450–459. DOI: 10.1007/BF01070454
* [21] TOLSOLVTY2, a MATLAB code for examining solvability of the interval linear tolerance problem. URL: https://www.researchgate.net/publication/294889566_TOLSOLVTY2
* [22] S.A. Vavasis, Complexity theory: Quadratic programming, in: C.A. Floudas and P.M. Pardalos (Eds.), Encyclopedia of Optimization. Second Edition, New York, Springer, 2009, pp. 451–453.
  * [23] E. Vorontsova, Extended separating plane algorithm and NSO-solutions of PageRank problem, in: Y. Kochetov, M. Khachay, V. Beresnev, E. Nurminski, P. Pardalos (Eds.), Discrete Optimization and Operations Research. Proceedings of 9th International Conference DOOR 2016, Vladivostok, Russia, September 19-23, 2016, Lecture Notes in Computer Science 9869, Cham, Switzerland, Springer International, 2016, pp. 547–560. DOI: 10.1007/978-3-319-44914-2_43
* [24] D.S. Watkins, Fundamentals of Matrix Computations, Wiley-Interscience, New York, 2002.
# Model Order Reduction in Neuroscience
Bülent Karasözen, Institute of Applied Mathematics & Department of Mathematics,
Middle East Technical University, Ankara, Turkey. <EMAIL_ADDRESS>
###### Abstract
The human brain contains approximately $10^{9}$ neurons, each with
approximately $10^{3}$ connections, synapses, with other neurons. Most
sensory, cognitive and motor functions of our brains depend on the interaction
of a large population of neurons. In recent years, many technologies have been
developed for recording large numbers of neurons either sequentially or
simultaneously. Increases in computational power and algorithmic developments
have enabled advanced analyses of neuronal populations, in parallel with the
rapid growth in quantity and complexity of the recorded neuronal activity.
Recent studies made use of dimensionality and model order reduction techniques
to extract coherent features which are not apparent at the level of individual
neurons. It has been observed that neuronal activity evolves on
low-dimensional subspaces. The aim of model reduction of large-scale neuronal
networks is the accurate and fast prediction of patterns and their propagation
in different areas of the brain. Spatiotemporal features of brain activity are
identified on low-dimensional subspaces with methods such as dynamic mode
decomposition (DMD), proper orthogonal decomposition (POD), discrete empirical
interpolation (DEIM) and combined parameter and state reduction. In this
paper, we give an overview of the dimensionality reduction and model order
reduction techniques currently used in neuroscience.
Keywords: neuroscience, dimensionality reduction, proper orthogonal
decomposition, discrete empirical interpolation, dynamic mode decomposition,
state and parameter estimation.

Classification [MSC 2010]: 93A15, 92C55, 37M10, 37M99, 37N40, 65R32.
## 1 Introduction
Due to advances in recording and imaging technologies, the number of recorded
signals from brain cells has increased significantly in the last few years.
The recorded spatio-temporal neural activity gives rise to networks with
complex dynamics. In neuroscience, molecular and cellular level details are
incorporated in large-scale models of the brain in order to reproduce
phenomena such as learning and behavior. The rapid growth of simultaneous
neuronal recordings in scale and resolution brings challenges for the
analysis of the neuronal population activity. New computational approaches
have to be developed to analyze, visualize, and understand large-scale
recordings of neural activity. While algorithmic developments and the
availability of significantly more computing power have enabled the analysis
of larger neuronal networks, these advances cannot keep pace with the
increasing size and complexity of the recorded activity. The activity of
complex networks of neurons can often be described by relatively few distinct
patterns. Model order reduction techniques enable us to identify these
coherent spatial-temporal patterns.
The presence or absence of a neural mechanism can be analyzed for neuronal
populations. Dimensionality reduction methods [6] are data-driven statistical
techniques for forming and evaluating hypotheses about population activity
structure; they are summarized in Section 2. One of the goals of neuroscience
is fast and accurate prediction of the potential propagation in neurons. The
differential equations describing the propagation of potential in neurons were
developed by Hodgkin and Huxley [12]. They consist of a coupled system of
ordinary and partial differential equations (ODEs and PDEs). The dimension of
the associated discretized systems is very large for accurately simulating
neurons with realistic morphological structure and synaptic inputs. In Section
3 we present two model order reduction approaches based on POD and DEIM [5]
which can accurately predict the potential propagation in large-scale neuronal
networks, leading to important speedups [17, 16, 2]. Using functional
neuroimaging data from electroencephalography (EEG) or functional magnetic
resonance imaging (fMRI), the effective connectivity between different regions
of the brain can be inferred by dynamic causal modeling (DCM) [7]. Effective
connectivity is parameterised in terms of coupling among unobserved brain
states and neuronal activity in different regions. In Section 4 we describe a
combined state and parameter reduction for parameter estimation and
identification [10] to extract effective connectivity in neuronal networks
from measured data, such as EEG or fMRI data. In Section 5 the data-driven,
equation-free model order reduction method dynamic mode decomposition (DMD) is
described for identifying sleep spindle networks [3]. Reduced order models
with POD and DEIM and four variants of them are presented for neuronal
synaptic plasticity and neuronal spiking networks in Section 6.
## 2 Dimensionality reduction methods
Coordination of responses across neurons exists only at the level of the
population and not at the level of single neurons. The presence or absence of
a neural mechanism can be analyzed for neuronal populations. Dimensionality
reduction methods are data-driven statistical techniques for forming and
evaluating hypotheses about population activity structure. They produce
low-dimensional representations of high-dimensional data with the aim of
extracting coherent patterns which preserve or highlight some feature of
interest in the data [6]. The recorded neurons, of dimension $D$, are likely
not independent of each other, because they belong to a common network of
neuronal populations. From the high-dimensional data of neuronal recordings, a
smaller number of explanatory variables $K$ ($K<D$) are extracted with the
help of dimensionality reduction methods. The explanatory variables are not
directly observed and are therefore referred to as latent variables. The
latent variables define a $K$-dimensional space representing coherent patterns
of the noisy neural activity of the $D$ neurons.

There exist several dimensionality reduction methods which differ in the
statistical interpretation of the preserved and discarded features of the
neuronal populations. We summarize the commonly used dimensionality reduction
methods following [6], where further references about the methods can be
found.
Principal component and factor analysis: The most widely known technique to
extract coherent patterns from high-dimensional data is modal decomposition.
A particularly popular modal decomposition technique is principal component
analysis (PCA), which derives modes ordered by their ability to account for
energy or variance in the data. PCA is a static technique and does not model
temporal dynamics of time-series data explicitly, so it often performs poorly
in reproducing dynamic data, such as recordings of neural activity. The
low-dimensional space identified by PCA captures variance of all types,
including firing rate variability and spiking variability, whereas factor
analysis (FA) discards the independent variance for each neuron and preserves
variance that is shared across neurons.
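As a toy illustration of this distinction (my own sketch, not from [6]), one
can compare PCA and factor analysis on a synthetic firing-rate matrix in which
each neuron carries its own independent noise variance on top of shared latent
activity:

```python
# Contrasting PCA and factor analysis on a matrix with rows = time bins,
# columns = neurons; FA explicitly models per-neuron (independent) variance.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
T, D, K = 1000, 30, 3                        # time bins, neurons, latent dims
Z = rng.standard_normal((T, K))              # shared latent activity
W = rng.standard_normal((K, D))              # loading of latents onto neurons
noise = rng.standard_normal((T, D)) * rng.uniform(0.1, 2.0, D)
X = Z @ W + noise                            # observed "firing rates"

pca = PCA(n_components=K).fit(X)             # captures shared + independent variance
fa = FactorAnalysis(n_components=K).fit(X)   # separates out per-neuron noise
print(pca.explained_variance_ratio_)
print(fa.noise_variance_[:5])                # estimated independent variance per neuron
```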
Time series methods: The temporal dynamics of the population activity can be
identified if the data come from a time series. The most commonly used time
series methods for dimensionality reduction of neural recordings are: hidden
Markov models (HMM) [18], kernel smoothing followed by a static dimensionality
reduction method, Gaussian process factor analysis (GPFA) [35], latent linear
dynamical systems (LDS) [4] and latent nonlinear dynamical systems (NLDS)
[26]. They produce latent neural trajectories that capture the shared
variability across neurons. The HMM is applied when there are jumps between
discrete states of neurons; the other methods identify smooth changes in
firing rates over time.
Methods with dependent variables: In many neuronal recordings the
high-dimensional firing rate space is associated with labels of one or more
dependent variables, like stimulus identity, decision identity or a time
index. The dimensionality reduction aims in this case to project the data such
that differences in these dependent variables are preserved. Linear
discriminant analysis (LDA) can be used to find a low-dimensional projection
in which the $G$ groups to which the data points belong are well separated.
Nonlinear dimensionality reduction methods: All the previous methods assume a
linear relationship between the latent and observed variables. When the data
lie on a low-dimensional, nonlinear manifold in the high-dimensional space, a
linear method may require more latent variables than the number of true
dimensions of the data. The most frequently used methods to identify nonlinear
manifolds are Isomap [31] and locally linear embedding (LLE) [28]. Because the
nonlinear methods use local neighborhoods to estimate the structure of the
manifold, and population responses may not evenly explore the high-dimensional
space, these methods should be used with care.
## 3 Proper orthogonal decomposition (POD) and discrete empirical
interpolation (DEIM) for the Hodgkin-Huxley model
One of the goals of neuroscience is fast and accurate prediction of the
potential propagation in neurons. The differential equations describing the
propagation of potential in neurons are the Hodgkin and Huxley (HH) cable
equations [12]. They consist of a coupled system of ordinary (ODEs) and
partial differential equations (PDEs). Accurate simulation of the morphology,
kinetics and synaptic inputs of neurons requires the solution of large systems
of nonlinear ODEs. The complexity of the models is determined by the synapse
density along the dendritic length (about one synapse per micron). In
simulations, one synapse per micron on a cell with a $5$ mm dendrite requires
$5{,}000$ compartments, each with $10$ variables, which results in a coupled
nonlinear system of $50{,}000$ ODEs [17, 16]. To recover the complex dynamics,
efficient reduced order neuronal models are developed using POD and DEIM from
snapshots of the coupled PDEs and ODEs discretized in space and time
[17, 16, 2]. In this section we describe two of them. They differ in the
formulation of the HH cable equation and of the equations for the gating
variables.
### 3.1 Morphologically accurate reduced order modeling
The neuronal full order models (FOMs) in [17, 16] consist of $D$ branched
dendritic neurons with $B=\sum_{d=1}^{D}B_{d}$ branches meeting at the soma,
where the $d$th neuron has $B_{d}$ branches. It is assumed that branch $b$
carries $C$ distinct ionic currents with associated densities $G_{bc}(x)$ and
reversal potentials $E_{c}$, $c=1,\ldots,C$. The kinetics of current $c$ on
branch $b$ is governed by the $F_{c}$ gating variables $w_{bcf}$,
$f=1,\ldots,F_{c}$. When subjected to input at $S_{b}$ synapses, the nonlinear
HH cable equation for the transmembrane potential $v_{b}(x,t)$, coupled with
the equations for the gating variables $w_{bcf}$, is given by (see [2],
Fig. 1, for a model network with three cables)
$a_{b}C_{m}\frac{\partial v_{b}}{\partial t}=\frac{1}{2R_{i}}\frac{\partial}{\partial x}\left(a_{b}^{2}\frac{\partial v_{b}}{\partial x}\right)-a_{b}\sum_{c=1}^{C}G_{bc}(x)(v_{b}-E_{c})\prod_{f=1}^{F_{c}}w_{bcf}^{q_{cf}}-\frac{1}{2\pi}\sum_{s=1}^{S_{b}}g_{bs}(t)\delta(x-x_{bs})(v_{b}-E_{bs}),$ (1)

$\frac{\partial w_{bcf}}{\partial t}=\frac{w_{cf,\infty}(v_{b})-w_{bcf}}{\tau_{cf}(v_{b})},\quad 0<x<l_{b},\;t>0,$ (2)
where $g_{bs}(t)$ (in nS) is the time course, $x_{bs}$ the spatial location,
and $E_{bs}$ the reversal potential of the $s$th synapse on branch $b$. The
remaining variables and parameters in (1) are described in [17, 16].
These branch potentials interact at $J$ junction points, where junction $J$
denotes the soma, at which the $D$ dendrites join. Continuity of the potential
and current balance at the soma lead to a common value denoted by
$v_{\sigma}(t)$. Then the networked form of (1) becomes
$C_{m}\frac{dv_{\sigma}}{dt}=\frac{\pi}{A_{\sigma}R_{i}}\sum_{d=1}^{D}a_{b_{J}^{d}}^{2}(l_{b_{J}^{d}})\frac{\partial v_{b_{J}^{d}}}{\partial x}(l_{b_{J}^{d}},t)-\sum_{c=1}^{C}G_{\sigma c}(v_{\sigma}-E_{c})\prod_{f=1}^{F_{c}}w_{\sigma cf}^{q_{cf}}(t)-\frac{1}{A_{\sigma}}\sum_{s=1}^{S_{\sigma}}g_{\sigma s}(t)(v_{\sigma}(t)-E_{\sigma s}),$ (3)

$\frac{dw_{\sigma cf}(t)}{dt}=\frac{w_{cf,\infty}(v_{\sigma}(t))-w_{\sigma cf}(t)}{\tau_{cf}(v_{\sigma}(t))},\quad t>0.$ (4)
When the cell is partitioned into $N$ compartments, with $C$ distinct ionic
currents per compartment and $F$ gating variables per current, the following
nonlinear ODEs are obtained:

$v^{\prime}(t)=Hv(t)-(\Phi(w(t))e).v(t)+\Phi(w(t))E_{i}+G(t).(v(t)-E_{s}),\quad v(t)\in\mathbb{R}^{N},$ (5)

$w^{\prime}(t)=(A(v(t))-w(t))./B(v(t)),\quad w(t)\in\mathbb{R}^{N\times C\times F},$ (6)

where $H\in\mathbb{R}^{N\times N}$ is the Hines matrix [11], $e=[1\;1\cdots
1]^{T}\in\mathbb{R}^{C}$, and the 'dot' operator, $a.b$, denotes element-wise
multiplication. $E_{i}$ and $E_{s}$ are the vector of channel reversal
potentials and the vector of synaptic reversal potentials, respectively.
Eq. (5) is discretized in time by the second-order discretized Euler scheme
[11].
In [16], using the snapshots of $v(t)$ and of the nonlinear term
$N(v(t),w(t))\equiv(\Phi(w(t))e).v(t)-\Phi(w(t))E_{i}$ at times
$t_{1},t_{2},\ldots,t_{n}$, the POD and DEIM modes are constructed. The
reduced membrane potential $v_{r}$ is constructed using the POD basis; the
reduced gating variables $w_{r}$ are obtained after applying DEIM to the
nonlinear terms. The reduced order model in [16] preserves the spatial
precision of synaptic input and captures accurately the subthreshold and
spiking behaviors.
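For concreteness, here is a generic sketch of the snapshot-based POD/DEIM
construction (the standard algorithm of [5], not the authors' implementation):
the POD basis comes from an SVD of the snapshot matrix, and the DEIM
interpolation indices are selected greedily from the basis of the nonlinear
snapshots.

```python
# Generic POD/DEIM construction from snapshot matrices (standard algorithm [5]).
import numpy as np

def pod_basis(S, k):
    """Leading k POD modes of a snapshot matrix S of shape (n, m)."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :k]

def deim_indices(U):
    """Greedy DEIM interpolation indices for a nonlinear-term basis U (n, p)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[idx, :j], U[idx, j])   # interpolate column j on idx
        r = U[:, j] - U[:, :j] @ c                   # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# With V = pod_basis(snapshots_v, k) and U, idx from the nonlinear snapshots,
# the nonlinearity is approximated as N(v) ~ U @ solve(U[idx], N(V @ v_r)[idx]).
```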
In [17] a linearized quasi-active reduced neuronal model is constructed using
balanced truncation and ${\mathcal{H}}_{2}$ approximation of the transfer
function in time. These ROMs preserve the input-output relationship but
reproduce only subthreshold dynamics.
### 3.2 Energy stable neuronal reduced order modeling
In [1, 2] a different form of the HH cable equation and of the ODEs for the
gating variables is considered. The intracellular potential $v(x,t)$ and three
gating variables $m(x,t),\;h(x,t)$, and $n(x,t)$ describe the activation and
deactivation of the ion channels: $m$ and $h$ for the sodium channels and $n$
for the potassium channels. For a single cable in the computational domain
$(x,t)\in[0,L]\times(0,T]$, the distribution of the potential $u(x,t)$ is
given by [1, 2]
$\frac{\partial u}{\partial
t}=\frac{\mu}{a(x)}\left(a(x)^{2}u_{x}\right)_{x}-\frac{1}{C_{m}}g(m,h,n)u+\frac{1}{C_{m}}f(m,h,n,x,t),$
(7)
where $a(x)$ is the radius of the neuron, $C_{m}$ is the specific membrane
capacitance, and $\mu=\frac{1}{2C_{m}R_{i}}>0$ is a constant involving the
axial resistivity $R_{i}$. The conductance $g(x,t)$ is a polynomial in the
gating variables,
$g(x,t)=g_{1}m^{3}h+g_{2}n^{4}+g_{3}>0,$ (8)
with the source term
$f(m,h,n,x,t)=g_{1}E_{1}m^{3}h+g_{2}E_{2}n^{4}+g_{3}E_{3}-i(x,t),$ (9)
where $E_{l},\;l=1,2,3$, are equilibrium potentials and $i(x,t)$ is the input
current at $x$,
$i(x,t)=\sum_{s=1}^{N_{s}}i_{s}(x,t),\quad x\in[0,L].$ (10)
The nonlinear ODEs for the gating variables are given by

$\frac{\partial m}{\partial t}=\alpha_{m}(v(x,t))(1-m(x,t))-\beta_{m}(v(x,t))\,m(x,t),$

$\frac{\partial h}{\partial t}=\alpha_{h}(v(x,t))(1-h(x,t))-\beta_{h}(v(x,t))\,h(x,t),$ (11)

$\frac{\partial n}{\partial t}=\alpha_{n}(v(x,t))(1-n(x,t))-\beta_{n}(v(x,t))\,n(x,t).$

Expressions for
$\alpha_{m},\;\alpha_{h},\;\alpha_{n},\;\beta_{m},\;\beta_{h},\;\beta_{n}$ and
the boundary conditions can be found in [2].
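As one concrete parameterization of (11) (the classic squid-axon rate
functions of Hodgkin and Huxley, with $v$ in mV relative to rest; the
expressions actually used in [2] may differ), the gating dynamics can be
written and stepped as follows:

```python
# Classic Hodgkin-Huxley rate functions (squid axon convention, v in mV
# relative to rest) -- one standard choice for alpha/beta in (11).
import numpy as np

alpha_m = lambda v: 0.1 * (25.0 - v) / (np.exp((25.0 - v) / 10.0) - 1.0)
beta_m  = lambda v: 4.0 * np.exp(-v / 18.0)
alpha_h = lambda v: 0.07 * np.exp(-v / 20.0)
beta_h  = lambda v: 1.0 / (np.exp((30.0 - v) / 10.0) + 1.0)
alpha_n = lambda v: 0.01 * (10.0 - v) / (np.exp((10.0 - v) / 10.0) - 1.0)
beta_n  = lambda v: 0.125 * np.exp(-v / 80.0)

def gate_step(x, v, dt, alpha, beta):
    """One forward-Euler step of dx/dt = alpha(v)(1 - x) - beta(v)x."""
    return x + dt * (alpha(v) * (1.0 - x) - beta(v) * x)
```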
In [1, 2], a model network with three cables connected to a soma is used. The
equations governing the potential propagation in a network of $N_{c}$ neuron
cables (dendrites and/or axons), labeled by the superscript
${}^{(c)},\;c=1,\ldots,N_{c}$, are given as

$\frac{\partial v^{(c)}}{\partial t}=\frac{\mu}{a^{(c)}(x^{(c)})}\left(\left(a^{(c)}(x^{(c)})\right)^{2}v^{(c)}_{x}\right)_{x}-\frac{1}{C_{m}}g\left(m^{(c)},h^{(c)},n^{(c)}\right)v^{(c)}+\frac{1}{C_{m}}f\left(m^{(c)},h^{(c)},n^{(c)},x^{(c)},t\right),$ (12)

$\frac{\partial m^{(c)}}{\partial t}=\alpha_{m}(v^{(c)})(1-m^{(c)})-\beta_{m}(v^{(c)})\,m^{(c)},$

$\frac{\partial h^{(c)}}{\partial t}=\alpha_{h}(v^{(c)})(1-h^{(c)})-\beta_{h}(v^{(c)})\,h^{(c)},$ (13)

$\frac{\partial n^{(c)}}{\partial t}=\alpha_{n}(v^{(c)})(1-n^{(c)})-\beta_{n}(v^{(c)})\,n^{(c)},$

for $x^{(c)}\in\Omega^{(c)}=[0,L^{(c)}]$, together with the boundary
conditions.
The semi-discrete form of these equations is approximated using energy stable
summation-by-parts operators [1, 2] for the model network. The reduced order
bases (ROBs) for multiple cables of identical lengths are assembled into a
network in block form. The block structure of the ROB allows a flexible
structure-preserving model reduction approach with an independent
approximation in each cable; energy stability and accuracy properties follow
from this block structure. Computation of the time-varying reduced variables
in the gating equations at every time $t$ is costly, because it scales with
the dimension of the FOM. A nonnegative variant of the discrete empirical
interpolation method (NNDEIM) is developed in [2] to preserve the structure
and energy stability properties of the equations.
The capability of the greedy-based approach to generate accurate predictions
in large-scale neuronal networks is demonstrated for a system with more than
$15{,}000$ degrees of freedom (dofs). The state-variable ROB of dimension
$l=15$ built from POD modes, together with the nonnegative ROBs of dimension
$p=60$ built from NNDEIM modes, are constructed using a greedy approach to
predict the potential variation at the soma. The speedup of the simulations is
about $20$, whereas with Galerkin projection alone, without the NNDEIM, it is
only $1.3$.
## 4 Combined state and parameter reduction for dynamic causal modelling
In neuroscience, the effective connectivity between different regions of the
brain is inferred from neuroimaging data, such as EEG or fMRI recordings,
using the method of dynamic causal modeling (DCM) [7]. Effective connectivity
is parameterised in terms of coupling among unobserved brain states and
neuronal activity in different regions. In DCM the neuronal activity of the
observed brain region is represented as a SISO (single input, single output)
linear state-space system

$\dot{x}=A_{dyn}(\mu)x+B_{dyn}u$ (14)

with the parameterized connectivity matrix $A_{dyn}(\mu)$ and the external
input matrix $B_{dyn}$.
Linearization of the nonlinear DCM hemodynamic forward sub-model (the balloon
model) [7] transforms the neuronal activity into the measured BOLD (blood
oxygen level dependent) response. Linearization around the equilibrium results
in the following single input, single output (SISO) system:

$B_{obs}:=(1\;0\;0\;0)^{T},\quad C_{obs}=(0\;0\;V_{0}k_{1}\;V_{0}k_{2}),$ (15)

$\dot{z}_{i}=A_{obs}z_{i}+B_{obs}x_{i},$ (16)

$y_{i}=C_{obs}z_{i},$ (17)

$z_{0}=(0\;0\;0\;0)^{T},$ (18)

$A_{obs}:=\left(\begin{array}[]{cccc}\frac{1}{\tau_{s}}&\frac{1}{\tau_{f}}&0&0\\ 1&0&0&0\\ 0&\frac{1}{\tau_{0}E_{0}}(1-(1-E_{0})(1-\ln(1-E_{0})))&\frac{1}{\tau_{0}}&\frac{1-\alpha}{\tau_{0}\alpha}\\ 0&\frac{1}{\tau_{0}}&0&\frac{1}{\tau_{0}\alpha}\end{array}\right).$ (19)
The fMRI measurements at the $i^{th}$ brain region are reflected by the output
variables $y_{i}$. For the meaning of the variables and parameters in (15) and
(19) we refer to [10, 9]. The linearized forward sub-models are embedded into
the fMRI connectivity model
$\left(\begin{array}[]{c}\dot{x}\\ \dot{z}_{1}\\ \dot{z}_{2}\\ \vdots\\ \dot{z}_{N_{dyn}}\end{array}\right)=\left(\begin{array}[]{ccccc}A_{dyn}(\mu)&0&0&\cdots&0\\ \delta_{1,1}&A_{obs}&0&&\\ \delta_{2,1}&0&A_{obs}&&\\ \vdots&&&\ddots&\\ \delta_{1,N_{dyn}}&&&&A_{obs}\end{array}\right)\left(\begin{array}[]{c}x\\ z_{1}\\ z_{2}\\ \vdots\\ z_{N_{dyn}}\end{array}\right)+\left(\begin{array}[]{c}B_{dyn}\\ 0\\ 0\\ \vdots\\ 0\end{array}\right)v,$ (20)

$y=\left(\begin{array}[]{cc}0&\begin{array}[]{ccc}C_{obs}&&\\ &\ddots&\\ &&C_{obs}\end{array}\end{array}\right)\left(\begin{array}[]{c}x\\ z_{1}\\ z_{2}\\ \vdots\\ z_{N_{dyn}}\end{array}\right),$ (21)
where $\delta_{ij}\in\mathbb{R}^{4\times N_{dyn}}$ denotes the Kronecker
matrix with non-zero elements located at the $(i,j)$th component. The
linearized state-space forward model (20) and (21) corresponds to a multiple
input, multiple output (MIMO) system

$\dot{x}(t)=A(\mu)x(t)+Bu(t),\qquad y(t)=Cx(t),$ (22)

where $x\in\mathbb{R}^{N}$ is the internal state, $u\in\mathbb{R}^{J}$ the
external input, $y\in\mathbb{R}^{O}$ the observed output, and $\mu$ the
parameters describing different conditions.
For a large number $M:=N^{2}$ of parameters, the computational cost of
inferring the parameters and states is very high. In [10, 8] a combined state
and parameter model order reduction is developed for parameter estimation and
identification, to extract effective connectivity. The inversion procedure
consists of two phases, the offline and the online phase. In the offline
phase, the underlying parameterized model is reduced jointly in states and
parameters. In the online phase, the reduced order model's parameters are
estimated to fit the observed experimental data. Through the state and
parameter reduction, the computational cost of the offline phase is reduced.
The simultaneous reduction of state and parameter space is based on Galerkin
projections with the orthogonal matrices $V\in\mathbb{R}^{N\times n}$ for the
state and $P\in\mathbb{R}^{M\times m}$ for the parameters. The reduced model
is of lower order $n\ll N$, $m\ll M$ than the original full order model. The
reduced states $x_{r}(t)\in\mathbb{R}^{n}$ and the reduced parameters
$\mu_{r}\in\mathbb{R}^{m}$ satisfy

$\dot{x}_{r}(t)=A_{r}(\mu_{r})x_{r}(t)+B_{r}u(t),\qquad y_{r}(t)=C_{r}x_{r}(t)$ (23)

with the reduced initial condition $x_{r,0}=V^{T}x_{0}$ and the reduced
components

$\mu_{r}=P^{T}\mu\in\mathbb{R}^{m},\quad A_{r}(\mu_{r})=V^{T}A(P\mu_{r})V\in\mathbb{R}^{n\times n},\quad B_{r}=V^{T}B\in\mathbb{R}^{n\times J},\quad C_{r}=CV\in\mathbb{R}^{O\times n}.$
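In code, the offline projection step amounts to a few matrix products. The
sketch below is an illustration assuming the parameter dependence is available
as a function `A_of_mu`; the names are hypothetical, not from [10, 8]:

```python
# Sketch of the combined state/parameter Galerkin projection in (23).
import numpy as np

def reduce_system(A_of_mu, B, C, V, P, mu_r):
    """Assemble the reduced system for reduced parameters mu_r."""
    mu = P @ mu_r                   # lift reduced parameters back to R^M
    A_r = V.T @ A_of_mu(mu) @ V     # reduced connectivity, (n, n)
    B_r = V.T @ B                   # reduced input matrix, (n, J)
    C_r = C @ V                     # reduced output matrix, (O, n)
    return A_r, B_r, C_r
```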
In the online phase, the optimization-based inverse problem is combined with
the reduction of the state and parameter space. The inversion is based on the
generalized data-driven optimization approach of [23] to construct the ROMs,
enhanced with a Monte-Carlo method to speed up the computations. The state
projection $V\in\mathbb{R}^{N\times n}$ and the parameter projection
$P\in\mathbb{R}^{M\times m}$ are determined iteratively by a greedy algorithm,
maximizing the error between the high-fidelity original and the
low-dimensional reduced model in a Bayesian setting.

Numerical experiments with the DCM model of [23] show that high-dimensional
neuronal network systems can be inverted efficiently thanks to the short
offline durations. In the offline phase, the Monte-Carlo enhanced methods
require more than an order of magnitude less offline time than the original
and data-misfit enhanced methods. In the online phase, the reduced order model
yields a speedup of about an order of magnitude compared to the full-order
inversion. The output error of the data-misfit enhanced method is close to
that of the full order method. The output errors of the Monte-Carlo variant
decrease with increasing number of simulations but do not reach the output
error of the full order system. The source code is available in MATLAB [8].
## 5 Dynamic mode decomposition
Dynamic mode decomposition (DMD) is a data-driven, equation-free ROM technique
[20]. It was initially developed to reduce the high-dimensional dynamic data
obtained from experiments and simulations in fluid mechanics into a small
number of coupled spatial-temporal modes [29, 30]. DMD was applied to explore
spatial-temporal patterns in large-scale neuronal recordings in [3]. DMD can
be interpreted as a combination of the discrete Fourier transform (DFT) in
time and principal component analysis (PCA) [15] in space. Both PCA and
independent component analysis (ICA) [13] are static techniques, which perform
poorly in reproducing dynamic data, such as recordings of neural activity.
The data are taken from electrocorticography (ECoG) recordings: voltages from
$n$ channels of an electrode array, sampled every $\Delta t$. The measurements
at snapshot $k$ are arranged into the column vector ${\mathbf{x}}_{k}$. The
$m$ snapshots in time are collected into two data matrices, shifted in time by
$\Delta t$:
${\mathbf{X}}=\left[\begin{array}[]{cccc}|&|&&|\\\
{\mathbf{x}}_{1}&{\mathbf{x}}_{2}&\cdots&{\mathbf{x}}_{m-1}\\\
|&|&&|\end{array}\right],\quad{\mathbf{X}}^{\prime}=\left[\begin{array}[]{cccc}|&|&&|\\\
{\mathbf{x}}_{2}&{\mathbf{x}}_{3}&\cdots&{\mathbf{x}}_{m}\\\
|&|&&|\end{array}\right]$ (24)
These matrices are assumed to be related linearly in time
${\mathbf{X}}^{\prime}={\mathbf{A}}{\mathbf{X}}.$ (25)
The DMD of the data matrix pair ${\mathbf{X}}$ and ${\mathbf{X}}^{\prime}$ is
given by the eigendecomposition of ${\mathbf{A}}$, computed from the singular
value decomposition (SVD) of the data matrix,
${\mathbf{X}}={\mathbf{U}}{\mathbf{\Sigma}}{\mathbf{V}}^{*}$, via the
pseudoinverse
${\mathbf{A}}\approx{\mathbf{X}}^{\prime}{\mathbf{X}}^{\dagger}\equiv{\mathbf{X}}^{\prime}{\mathbf{V}}{\mathbf{\Sigma}}^{-1}{\mathbf{U}}^{*}$.
The spatio-temporal modes are computed by the exact DMD algorithm [32].
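A compact version of that algorithm reads as follows (the standard exact DMD
of [32], not the ECoG code of [3]; the truncation rank `r` is an illustrative
knob):

```python
# Exact DMD [32]: eigenvalues and modes of the best-fit linear operator A
# relating the snapshot matrices X and Xp = X'.
import numpy as np

def exact_dmd(X, Xp, r=None):
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Sinv = np.diag(1.0 / s)
    Atilde = U.conj().T @ Xp @ Vh.conj().T @ Sinv  # projected operator
    evals, W = np.linalg.eig(Atilde)
    Phi = Xp @ Vh.conj().T @ Sinv @ W              # exact DMD modes
    return evals, Phi
```

Continuous-time frequencies and growth rates then follow from
$\omega=\ln\lambda/\Delta t$ for each eigenvalue $\lambda$.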
Because DMD does not contain an explicit spatial relationship between
neighboring measurements, traveling waves occurring in neuronal networks
cannot be captured well with a few coherent modes. DMD was also used as a
windowed technique, with a temporal window size constrained by a lower bound,
as for the discrete Fourier transform (DFT). In contrast to fluid dynamics,
where $n\gg m$, in neuroscience the electrode arrays have tens of channels
$n$, while the windowed data contain $m$ snapshots per second, so that $n<m$.
The number of singular values of ${\mathbf{X}}$ is at most $\min(n,m-1)$,
which restricts the maximum number of DMD modes and eigenvalues to $n$ and
limits the dynamics that can be captured over the $m$ snapshots. The rank
mismatch is resolved by appending $h-1$ time-shifted versions of the data
matrices to the snapshot measurements. The augmented data matrix
${\mathbf{X}}_{aug}$ is given as

${\mathbf{X}}_{aug}=\left[\begin{array}[]{cccc}|&|&&|\\ {\mathbf{x}}_{1}&{\mathbf{x}}_{2}&\cdots&{\mathbf{x}}_{m-h}\\ |&|&&|\\ |&|&&|\\ {\mathbf{x}}_{2}&{\mathbf{x}}_{3}&\cdots&{\mathbf{x}}_{m-h+1}\\ |&|&&|\\ &&\vdots&\\ |&|&&|\\ {\mathbf{x}}_{h}&{\mathbf{x}}_{h+1}&\cdots&{\mathbf{x}}_{m-1}\\ |&|&&|\\ \end{array}\right].$ (26)
The augmented matrices ${\mathbf{X}}_{aug}$ and
${\mathbf{X}}^{\prime}_{aug}$ are Hankel matrices, constant along the skew
diagonals, as in the Eigensystem Realization Algorithm (ERA) [14]. The number
of stacks $h$ is chosen such that $hn>2m$. A measure for determining the
optimal number of stacks $h$ is the approximation error

$E=\frac{||{\mathbf{X}}-\hat{\mathbf{X}}||_{F}}{||{\mathbf{X}}||_{F}},$

where $||\cdot||_{F}$ is the Frobenius norm. The approximation error $E$
decreases with increasing number of stacks $h$ and reaches a plateau, beyond
which the DMD accuracy does not increase significantly.
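A sketch of the augmentation step (26) and of the error check just described
(illustrative code of my own, not from [3]):

```python
# Build the h-fold Hankel-augmented snapshot pair from a data matrix X (n, m),
# and the relative Frobenius reconstruction error E used to choose h.
import numpy as np

def hankel_pair(X, h):
    n, m = X.shape
    aug = np.vstack([X[:, k:m - h + k + 1] for k in range(h)])  # (h*n, m-h+1)
    return aug[:, :-1], aug[:, 1:]                              # X_aug, X'_aug

def rel_error(X, Xhat):
    return np.linalg.norm(X - Xhat) / np.linalg.norm(X)         # Frobenius norm
```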
DMD is applied in [3] as an automated approach to reliably detect and analyze
the spatial localization and the frequencies of sleep spindle networks from
human ECoG recordings. A MATLAB implementation is available at
github.com/bwbrunton/dmd-neuro/.
## 6 Reduced order modeling of biophysical neuronal networks
Recently, reduced order models for ODEs of the form

$\dot{x}(t)=A(t)x(t)+f(x(t))+Bu(t)$ (27)

have been constructed using POD and DEIM to investigate the input-output
behavior of neuronal networks in the brain [22, 21], where $x(t)$ denotes the
state and $u(t)$ the input variables.
The model in [22] is based on the chemical reactions of molecules in synapses,
the intercellular information transfer points of neurons. The signaling
pathways in striatal synaptic plasticity are modeled in [19]. This model
describes how certain molecules, which are a prerequisite for learning in the
brain, act in synapses. The stoichiometric equations obey the law of mass
action, which leads to a deterministic system of $44$ ODEs of the form (27).
The state $x(t)$ of the control system (27) is a collection of ions,
molecules, and proteins that act in neuronal synapses. The linear part of (27)
is sparse and the nonlinearities are quadratic. The time-dependent stimulus
$u(t)$ consists of molecules that are important for neuronal excitability and
plasticity: calcium and glutamate.
In [21], a nonlinear biophysical network model is considered, describing the
synchronized population bursting behavior of heterogeneous pyramidal neurons
in the brain [27]. Neurons communicate by changing their membrane voltage to
create action potentials (spikes), which propagate from cell to cell. Spiking
is the fundamental method of sensory information processing in the brain, and
synchronized spiking is an emergent property of biological neuronal networks.
The ODE system (27) in [21] has as state $x(t)$ a collection of $50$ neurons,
each modeled with $10$ ODEs, leading to a system of ODEs of total dimension
$500$. Each cell is modeled with Hodgkin-Huxley equations, where each cell has
only two compartments (dendrite and soma) and these compartments have
different ion channels. The state variables $x(t)$ include the voltages of the
somatic and dendritic compartments, the dendritic calcium concentration, and
synaptic and ion channel gating variables, and the input $u(t)$ is an injected
current. Additionally, the soma compartment voltages are coupled to the
dendritic compartments of randomly chosen cells. This networking of the output
of cells as input to other cells is key for producing synchronized population
behavior. The nonlinearities are of Hodgkin-Huxley type, i.e. exponential
functions as well as cubic and quartic polynomials.
In [22], POD+DEIM was applied to the data-driven biological model (27) of
plasticity in the brain. The POD-DEIM ROMs reduce the simulation time
significantly, and the error between the original and reduced order solutions
can be tuned by adjusting the number of POD and DEIM bases independently. When
the ROMs are trained on a matching time interval of $10000$ seconds, accurate
results are obtained. However, generalizing the reduced model to longer time
intervals is challenging, which is characteristic of all nonlinear models.
In [21], the network model (27) is reduced with localized DEIM (LDEIM) [24],
discrete adaptive POD (DAPOD) [33, 34], and adaptive DEIM (ADEIM) [25]. DEIM
and its variants are used here in combination with POD. The ROMs require a
large number of POD and DEIM basis vectors to accurately reproduce the
input-output behavior. In this model, every cell is heterogeneous in its
parameters and there are also jump/reset conditions, factors that pose
additional challenges to the reduced order methods. Nevertheless, the ROMs in
[21] were able to replicate the emergent synchronized population activity of
the original network model. DAPOD and ADEIM perform best in preserving the
spiking activity of the original network model. ADEIM is too slow and does not
allow low enough dimensions to offset the computational costs of online
adaptivity. DAPOD is able to find a lower-dimensional POD basis online than
the other methods find offline, but its runtime is close to that of the
original model.
## References
* [1] D. Amsallem and J. Nordström. High-order accurate difference schemes for the Hodgkin–Huxley equations. Journal of Computational Physics, 252:573 – 590, 2013.
* [2] D. Amsallem and J. Nordström. Energy stable model reduction of neurons by nonnegative discrete empirical interpolation. SIAM Journal on Scientific Computing, 38(2):B297–B326, 2016.
* [3] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz. Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods, 258:1 – 15, 2016.
* [4] L. Buesing, J. H. Macke, and M. Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1682–1690. Curran Associates, Inc., 2012.
* [5] S. Chaturantabut and D. Sorensen. Nonlinear model reduction via discrete empirical interpolation. SIAM Journal on Scientific Computing, 32(5):2737–2764, 2010.
* [6] John P Cunningham and Byron M Yu. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500–1509, 2014.
* [7] K.J. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273 – 1302, 2003.
  * [8] C. Himpe. optmor - optimization-based model order reduction (version 1.2), 2015.
* [9] C. Himpe. Combined State and Parameter Reduction: For Nonlinear Systems with an Application in Neuroscience. Internationaler Fachverlag für Wissenschaft & Praxis, 2016.
* [10] C. Himpe and M. Ohlberger. Data-driven combined state and parameter reduction for inverse problems. Advances in Computational Mathematics, 41(5):1343–1364, 2015.
* [11] M. Hines. Efficient computation of branched nerve equations. Int J Biomed Comput., 15(1):69–76, 1984.
* [12] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Bulletin of Mathematical Biology, 52(1):25–71, Jan 1990.
* [13] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4):411 – 430, 2000.
  * [14] J. N. Juang and R. S. Pappa. An eigensystem realization algorithm for modal parameter identification and model reduction. Journal of Guidance, Control, and Dynamics, 8(5):620–627, 1985.
* [15] I. T. Jolliffe. Principal component analysis. Springer Series in Statistics. Springer-Verlag, New York, 2005.
* [16] A. R. Kellems, S. Chaturantabut, D. C. Sorensen, and S. J. Cox. Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience, 28(3):477–494, 2010.
* [17] A. R. Kellems, D. Roos, N. Xiao, and S. J. Cox. Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience, 27(2):161, 2009.
* [18] C. Kemere, G. Santhanam, B. M. Yu, A. Afshar, S. I. Ryu, T. H. Meng, and K. V. Shenoy. Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology, 100(4):2441–2452, 2008.
* [19] B. Kim, SL. Hawes, F. Gillani, LJ. Wallace, and KT. Blackwell. Signaling pathways involved in striatal synaptic plasticity are sensitive to temporal pattern and exhibit spatial specificity. PLoS Comput Biol, 9(3):e1002953, 2013.
* [20] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor. Dynamic mode decomposition: Data-driven modeling of complex systems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2016.
* [21] M. Lehtimäki, L. Paunonen, and M.-L. Linne. Projection-based order reduction of a nonlinear biophysical neuronal network model. In 2019 IEEE 58th Conference on Decision and Control (CDC), pages 1–6, 2019.
* [22] M. Lehtimäki, L. Paunonen, S. Pohjolainen, and M.-L. Linne. Order reduction for a signaling pathway model of neuronal synaptic plasticity. IFAC-PapersOnLine, 50(1):7687 – 7692, 2017. 20th IFAC World Congress.
* [23] C. Lieberman, K. Willcox, and O. Ghattas. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing, 32(5):2523–2542, 2010.
* [24] B. Peherstorfer, D. Butnaru, K. Willcox, and H.-J. Bungartz. Localized discrete empirical interpolation method. SIAM J. Sci. Comput., 36(1):A168–A192, 2014.
* [25] B. Peherstorfer and K. Willcox. Online adaptive model reduction for nonlinear systems via low-rank updates. SIAM J. Sci. Comput., 37(4):A2123–A2150, 2015.
* [26] B. Petreska, B. M Yu, J. P Cunningham, S. Gopal, Stephen I. Y., K. V. Shenoy, and M. Sahani. Dynamical segmentation of single trials from population neural data. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 756–764. Curran Associates, Inc., 2011.
  * [27] P. F. Pinsky and J. Rinzel. Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience, 1(1):39–60, 1994.
* [28] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
* [29] C. W. Rowley, I. Mezić, S. Bahheri, P. Schlatter, and D. S. Hennigson. Spectral analysis of nonlinear flows. Journal of Fluid Mechanics, 641:115–127, 2009.
* [30] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics, 656:5–28, 2010.
* [31] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
* [32] J. H. Tu, C. W. Rowley., D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 1(2158-2491_2014_2_391):391, 2014.
* [33] M. Yang and A. Armaou. Dissipative distributed parameter systems on-line reduction and control using DEIM/APOD combination. In 2018 Annual American Control Conference (ACC), pages 2557–2562, 2018.
* [34] M. Yang and A. Armaou. Revisiting APOD accuracy for nonlinear control of transport reaction processes: A spatially discrete approach. Chemical Engineering Science, 181:146 – 158, 2018.
* [35] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology, 102(1):614–635, 2009.
# Illuminating the dark ages: Cosmic backgrounds from accretion onto primordial black hole dark matter
G. Hasinger
###### Abstract
The recent interpretation of cold dark matter as the sum of contributions of
different mass Primordial Black Hole (PBH) families [1] could explain a number
of so far unsolved astrophysical mysteries. Here I assume a realistic
$10^{-8}$–$10^{10}$ M⊙ PBH mass distribution providing the bulk of the dark
matter, consistent with all observational constraints. I estimate the
contribution of baryon accretion onto this PBH population to various cosmic
background radiations, concentrating first on the cross-correlation signal
between the Cosmic X–ray and the Cosmic infrared background fluctuations
discovered in deep Chandra and Spitzer surveys. I assume Bondi capture and
advection dominated disk accretion with reasonable parameters like baryon
density and effective relative velocity between baryons and PBH, as well as
appropriate accretion and radiation efficiencies, and integrate these over the
PBH mass spectrum and cosmic time. The prediction of the PBH contribution to
the X–ray background is indeed consistent with the residual X–ray background
signal and the X–ray/infrared fluctuations. The predicted flux peaks at
redshifts z$\approx$17–30, consistent with other constraints requiring the
signal to come from such high redshifts. The PBH contribution to the 2–5
$\mu$m cosmic infrared background fluctuations is only about 1%, so that these
likely come from star formation processes in regions associated with the PBH.
I discuss a number of other phenomena, which could be significantly affected
by the PBH accretion. Magnetic fields are an essential ingredient in the Bondi
capture process, and I argue that the PBH can play an important role in
amplifying magnetic seed fields in the early universe and maintaining them
until the galactic dynamo processes set in. Next I study the contribution of
the assumed PBH population to the re-ionization history of the universe and
find that they do not conflict with the stringent ionization limits set by the
most recent Planck measurements. X–ray heating from the PBH population can
provide a contribution to the entropy floor observed in groups of galaxies.
The tantalizing redshifted 21-cm absorption line feature observed by EDGES
could well be connected to the radio emission contributed by PBH to the cosmic
background radiation. Finally, the number of intermediate-mass black holes and
the diffuse X–ray emission in the Galactic Center region are not violated by
the assumed PBH dark matter, on the contrary, some of the discrete sources
resolved in the deepest Chandra observations of the Galactic Ridge could
indeed be accreting PBH.
## 1 Introduction
Recent years saw a revival of the idea originally put forward by S. Hawking
[2] that Primordial Black Holes (PBH) could make up the so far elusive Dark
Matter. LIGO's first detection of gravitational waves from merging binary
black holes of approximately equal masses in the range 10–30 M⊙ [3, 4] led to
the suggestion that these could be a signature of stellar mass PBH dark matter
[5, 6, 7] in a mass window not yet excluded by other astrophysical
constraints. A recent review of the rich literature constraining the possible
contributions of PBH to the dark matter is given, e.g., in [8].
In a recently published theoretical prediction [1, 9], PBH are created in the
QCD phase transitions (around 100 MeV) of different particle families freezing
out of the primordial quark-gluon plasma within the first two seconds after
the inflationary phase. When W+/-, Z bosons, baryons and pions are created,
and e+e- pairs annihilate, they leave an imprint in the form of a significant
reduction of the sound speed at the corresponding phase transitions, allowing
regions of high curvature to collapse and form PBH [see also 10]. The typical
mass scale of these PBH is defined by the size of the horizon at the time of
the corresponding phase transition. In this model four distinct populations of
PBH in a wide mass range are formed: planetary mass black holes at the W+/-, Z
transition, PBH of around the Chandrasekhar mass when the baryons (protons and
neutrons) are formed from 3 quarks, PBH with masses of order 30 M⊙
(corresponding to the LIGO black holes) when pions are formed from two quarks,
and finally supermassive black holes (SMBH) at the e+e- annihilation [see also
11]. Another remarkable aspect of this theory is that the gravitational energy
released at the PBH collapse locally reheats regions (hot spots) around the
black holes to the electroweak transition scale (around 100 GeV), where chiral
sphaleron selection effects can introduce the matter/antimatter asymmetry. The
PBH in this picture would therefore also be responsible for the baryogenesis
and would fix the ratio of dark matter to baryons. Clustering of the PBH in a
very wide mass distribution could alleviate some of the more stringent
observational constraints on the allowed contribution of PBH to the dark
matter [7, 12]. The interpretation of cold dark matter as the sum of
contributions of different mass PBH families could explain a number of so far
unsolved mysteries, such as the massive seed black holes required to create
the supermassive black holes in the earliest QSOs [13], the ubiquitous massive
LIGO/VIRGO binary black holes [e.g. 6], or even the putative "Planet X" PBH in
our own Solar System [14].
The most abundant family of PBH should be around the Chandrasekhar mass (1.4
M⊙). This prediction may already have been vindicated by the recent OGLE/GAIA
discovery of a sizeable population of putative black holes in the mass range
1–10 M⊙ [15]. The microlensing survey OGLE has detected $\sim$60 long-duration
microlensing events. About 20 of these have GAIA DR2 parallax distances of a
few kpc, which break the microlensing mass–distance degeneracy and allow the
determination of masses in the few solar mass range, implying that these
objects are most likely black holes, since stars at those distances would be
directly visible by OGLE.
Important fingerprints of a population of PBH may be hidden in the Cosmic
infrared and X–ray background radiation (see [16] for a comprehensive review).
Indeed, [6] argues that the near-infrared Cosmic background (CIB)
anisotropies detected in deep Spitzer [17, 18, 19, 20] and Akari [21] images,
which cannot be accounted for by known galaxy populations [22], could be
connected to PBH. Similar fluctuations were discovered in the Cosmic X–ray
background (CXB) observed in a deep Chandra survey, which are correlated with
the CIB anisotropies in the same field [23]. Later studies of wider/deeper
fields covered by both Chandra and Spitzer [24, 25, 26] have substantially
improved the detection significance of the observed signal. The X–ray
fluctuations contribute about 20% to the CIB signal, indicating that black
hole accretion should be responsible for such highly efficient X–ray emission.
Similar studies of deep fields observed with the Hubble Space Telescope in the
optical range do not show such a cross-correlation signal down to $m_{AB}\sim$28
[see 16]. The angular scale of the fluctuation power spectra of the CIB and
CXB reach values >1000", much larger than expected for the known galaxy
populations [27]. All of these findings can be understood, if the fluctuation
signal comes from a high-redshift (z$\gtrsim$12) population of black holes.
The spectral shape of the CXB fluctuations determined from a combination of
the deepest/widest fields [26] can be fit either with a very high redshift
population of obscured black holes, or with completely unobscured black hole
accretion. Original models [28] invoked highly obscured Direct Collapse Black
Holes formed in metal-free halos at z>12 to explain the observed CIB and CXB
signal. However, accreting massive black holes have recently been firmly ruled
out as the source of these fluctuations [29], because they would require an
unfeasible amount of black hole accretion at z>6, locking up a larger amount
of mass in massive black holes at high redshift than is consistent with the
known black hole mass function at z=0. These authors also ruled out local
diffuse emission as the source of the X–ray fluctuations. The CXB has been
largely resolved into discrete sources in deep X–ray images, either directly
[see 30, 31], or by cross-correlating with the deepest Hubble galaxy
catalogues [32, 33]. However,
[32] show that some marginally significant diffuse CXB still remains after
accounting for all discrete contributions. This is consistent with the
independent determination of [34]. The residual unresolved flux is about 3
times larger than the X-ray flux associated with the above CXB/CIB
fluctuations.
Given the difficulties in explaining the CIB/CXB correlation with known
classes of sources, and motivated by the notion that the dark matter could be
dominated by an extended mass distribution of PBH, I constructed a toy model
to explore the potential contribution to the cosmic backgrounds by the
accretion of baryons throughout cosmic history onto such a population of early
black holes. Assuming a combination of Bondi-Hoyle-Lyttleton quasi-spherical
capture at large distances from the PBH, and advection-dominated disk
accretion flows (ADAF) in the vicinity of the central object, I can explain
the observed residual CXB flux and the CXB/CIB crosscorrelation with minimal
tuning of the input parameters, and find a maximum contribution to the
extragalactic background light in the redshift range 15<z<30\. I further
estimate that this accretion onto PBH can produce enough flux to significantly
contribute to the pre-ionization of the intergalactic medium with UV photons
by a redshift z$\gtrsim$15 and to the pre-heating of the baryons with X–ray
photons, observed as an "entropy floor" in the X–ray emission of galaxy
groups.
In section 2 the assumed PBH mass distribution is introduced and contrasted
with recent observational limits on the PBH contribution to the dark matter.
The basic ingredients of the toy model for the accretion onto PBH are
presented in section 3. The assumed radiation mechanism and efficiency is
discussed in section 4. The contribution of the PBH emission to the different
bands is compared with the observational constraints in section 5. Other
potential diagnostics of this putative dark matter black hole population are
discussed in section 6, and conclusions are presented in section 7. Throughout
this work a $\Lambda$CDM cosmology with $\Omega_{M}$=0.315,
$\Omega_{\Lambda}$=0.685, and $H_{0}$=67.4 km s$^{-1}$ Mpc$^{-1}$ [35] is used. These
parameters define the baryon density $\Omega_{bar}$=0.049, the dark matter
density $\Omega_{DM}$=0.264, and the critical mass density of the universe
$\rho_{crit}$=1.26$\times 10^{20}M_{\odot}~{}{\rm Gpc}^{-3}$. All logarithms
in this paper are taken to the base 10.
## 2 The assumed PBH mass distribution
The theoretical predictions in [1, 9, 11, 36] yield a broad distribution of
PBH masses with a number of peaks corresponding to the particle families
freezing out from the Big Bang. Depending on the spectral index $n_{s}$ of the
primordial curvature fluctuation power spectrum, the PBH mass distribution has
a different overall slope. [36] find consistency of these predictions with a
number of recent observational limits on the PBH contribution to the dark
matter, but there is a tension of their models with the Cosmic Microwave
Background (CMB) constraints from accretion at large PBH masses [37, 38].
Recent limits from gravitational lensing of type Ia supernovae on a maximum
contribution of stellar-mass compact objects to the dark matter of around 35%
[39], and from the LIGO O1 gravitational wave merger rate of black holes in
the mass range 10–300 M⊙ [40], are also in tension with these models. An
additional important constraint comes from a comparison of the predicted PBH
fraction with the measured local mass function of supermassive black holes
(SMBH) in the centers of nearby galaxies. Integrating the local SMBH mass
function of [41] (see figure 1) in the range $10^{6}$–$10^{10}$ M⊙ yields a
local SMBH mass density of $\rho_{SMBH}$=6.3$\times 10^{5}$ M⊙ Mpc$^{-3}$,
corresponding to a dark matter fraction of $f_{SMBH}$=1.89$\times 10^{-5}$, which is
about a factor of 10–100 lower than the $f_{PBH}$ predictions in [1, 36].
Figure 1: The PBH mass spectrum (thick red line) assumed for this work
(García-Bellido, 2020, priv. comm.), compared to a number of observational
constraints. Microlensing limits from SNe [39], EROS [42], and the Subaru M31
survey [43] are shown as solid, dashed and dotted green lines, respectively.
LIGO limits from gravitational merger event rates are shown as blue solid line
for subsolar masses [44], and as blue dashed line for 10–300 M⊙ [40]. The CMB
accretion limits from [37] are shown as orange dashed line. Multiwavelength
limits from the Galactic Center [45] are shown in magenta for X-ray (solid)
and radio (dashed) observations. Finally, the local SMBH mass function [41] is
shown as black line at $10^{6-10}$ M⊙.
For these reasons, García-Bellido et al. (2020 in prep.) are revising their
model parameters in order to predict a steeper PBH mass function at large
$M_{PBH}$ and shared one of their new models, shown as red curve in figure 1.
Here a value of $n_{s}$=0.987 is assumed for the spectral index of the primordial
fluctuation power spectrum, as well as a running curvature of $dn_{s}$=$-$0.0006.
The integral of this PBH distribution over the whole mass range yields $f_{PBH}$=1.
On the other hand, the distribution yields only $\sim$40% of the dark matter
in the peak mass range [0.1,10] M⊙, and is thus fully consistent with the
microlensing constraints in figure 1. In the mass range of the LIGO black hole
binaries it predicts just the right amount of dark matter to explain the
gravitational wave merger rates, and in the SMBH range it is consistent with
the local black hole mass function (taking into account the accretion onto
supermassive PBH over cosmic time producing the bulk of the X-ray background
[46]). Apart from small sections, the new PBH mass function is thus fully
consistent with the most recent observational constraints.
## 3 Baryon accretion onto the PBH
In the following I use the PBH mass spectrum presented in section 2 to
calculate the accretion of baryons onto PBH over cosmic time, and to predict
the electromagnetic emission from this process. As we will see, for most of
the cosmic history these black holes move at supersonic speeds among the
baryons and will therefore undergo Bondi-Hoyle-Lyttleton quasi-spherical
capture [47, 48, 49, 50]. In the Bondi-Hoyle picture of a black hole moving
supersonically through a homogeneous gas, the capture happens in the wake of
the moving object. Behind the object, material moves in from a wide cone, and
needs to lose angular momentum before it can fall towards the black hole. The
gas is in principle collisionless, so that only the magnetic field trapped in
the plasma allows particles to lose angular momentum and start to behave like
a fluid. This gas forms the accretion flow, in which it is adiabatically
heated. The accreting gas is ionized and embedded in the magnetic field. Any
plasma drawn in by the gravitational field will carry along the magnetic
field. Shvartsman [51] argues that in the black hole tail, where the matter
flow stops, the gravitational and magnetic energy densities become nearly
equal. This equipartition is preserved in the infalling flow and thus the
magnetic field grows towards the black hole. Just as the heat ultimately has
to be radiated away, the magnetic field needs a way to dissipate energy
on its way inward. [52] discuss that the most likely dissipation mechanism for
the magnetic field is reconnection of field lines in narrow current sheets,
similar to the processes we observe in solar flares and active galactic
nuclei. Magnetic reconnection causes the acceleration and non-thermal heating
of a small fraction of the infalling electrons. In parallel, decoupled
magnetic field lines can carry some of the amplified magnetic field outward
and eject plasma [52].
An important question is, whether the accretion flow is spherically symmetric
close to the black hole, or whether an accretion disk is formed. Originally
most researchers assumed spherical accretion for PBH [e.g. 53, 54, 38].
However, [37] argues that the accreted angular momentum is large enough that
an accretion disk is formed, at least close to the black hole. According to
these authors, the granularity of the PBH distribution and the formation of
PBH binaries at the scale of the Bondi radius will imprint density and
velocity gradients into the relative distribution of baryons and PBH, such
that ultimately an accretion disk and an advection-dominated accretion flow
(ADAF) will form [55]. The formation of an ADAF disk significantly reduces the
accretion rate and the radiative efficiency [56], compared to spherical
accretion. But to first order the Bondi-Hoyle-Lyttleton mechanism can be used
to estimate the accretion rate $\dot{M}$ onto the PBH [37, 8].
Bondi [49] discusses two different approximations to the spherical gas
accretion problem, (i) the velocity-limited case, where the motion of the
accreting object through the gas is dominant and an accretion column is formed
in the wake of the moving object, and (ii) the temperature-limited case, where
the sound speed of the gas is dominant and a spherical accretion flow forms.
In the velocity-limited case (i) the mass accretion rate is given as
$\dot{M}=2.5\pi\rho(GM)^{2}v_{rel}^{-3},$ (3.1)
where $\rho$ is the gas density, $M$ is the PBH mass, and $v_{rel}$ is the
relative velocity between object and gas. In the temperature-limited case (ii)
with negligible relative velocity, the thermal velocity of the gas particles
is dominant and the corresponding accretion rate is given by
$\dot{M}=2.5\pi\rho(GM)^{2}c_{s}^{-3},$ (3.2)
where $c_{s}$ is the sound speed. For intermediate cases, [49] introduces an
effective velocity
$v_{eff}=\sqrt{v_{rel}^{2}+c_{s}^{2}}$ (3.3)
and the corresponding mass accretion rate becomes
$\dot{M}=2\lambda\pi\rho(GM)^{2}v_{eff}^{-3},$ (3.4)
where the so-called accretion eigenvalue $\lambda$ is a fudge factor of
order unity, dependent on non-gravitational aspects of the problem, such as
the gas equation of state or outflows from feedback effects. Different authors
have discussed this parameter for the particular application of gas accretion
onto PBH in the early universe. [53] find values of $1.12>\lambda>10^{-3}$,
depending e.g. on the PBH mass. For masses of order 1 M⊙ they find
$\lambda=1.12$. [5] discriminate between isothermal and adiabatic gas with
accretion eigenvalues of $\lambda$=1.12, and 0.12, respectively. In this paper
I assume an eigenvalue $\lambda$=0.05. The motivation for this choice is
discussed in section 4, while section 5 and 6 show that this choice fits the
observational constraints quite well.
Figure 2: Left: Baryon temperature as a function of redshift. Right: Mean
relative velocity $\langle v_{rel}\rangle$ between dark matter and baryons,
sound speed $c_{s}$ and the effective velocity $v_{eff}$ (eq. 3.8) as a
function of redshift.
Let us first look at the thermal history and thus the sound speed of the gas
over cosmic history. A nice summary is given in figure 15 of [57]. Despite
having decoupled from the CMB at z$\approx$1089, the gas temperature continues
to follow the temperature evolution T$\propto$(1+z) of the background photons
due to Compton scattering off residual ionized electrons from the
recombination era. Below redshifts z$\approx$200 the residual ionization in
the gas is low enough that it decouples from the background radiation and
cools adiabatically following the relation T$\propto$(1+z)$^{2}$. When the first
objects form and reionization starts around z$\lesssim$20, the gas is heated
up to temperatures $\sim 10^{4}$ K. The details of re-ionization are still
uncertain and will be discussed below. I have deliberately chosen a redshift
of z$\approx$20 for re-ionization to become dominant, with full ionization
occurring around z$\approx$7. Finally, at z<3, when the bulk of the cosmic
baryons are falling into increasingly larger dark matter halos and become
virialized, they are further heated up to form the warm/hot intergalactic
medium at temperatures $10^{5-7}$ K [58]. Using figure 2b of that paper I
estimate average temperatures for the IGM of 5$\times 10^{4}$, 1.5$\times
10^{5}$, and 8$\times 10^{5}$ K at z=2, 1, 0, respectively. The baryon
temperature as a function of redshift assumed in this work is shown in figure
2 (left). The sound speed of the gas is given by
$c_{s}=\sqrt{\frac{\gamma kT}{\mu m_{H}}},$ (3.5)
where $\gamma$=5/3 for an ideal monoatomic gas, $\mu$=1.22 is the mean
molecular weight including a helium mass fraction of 0.24, $m_{H}$ is the mass
of the hydrogen atom, and $T$ is the temperature of the baryons as a function
of cosmic history discussed above [59]. The sound speed as a function of
redshift is the dotted curve shown in figure 2 (right).
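As a quick numerical check of eq. (3.5), here is a minimal Python sketch; the physical constants are standard cgs values, and the example temperatures are illustrative choices meant to bracket the regimes in figure 2 (left).

```python
# Minimal sketch of eq. (3.5): sound speed of the gas.
# gamma = 5/3 and mu = 1.22 are the values quoted in the text;
# the example temperatures are illustrative assumptions.
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant [erg/K]
M_H = 1.6735575e-24  # mass of the hydrogen atom [g]

def sound_speed_kms(T, gamma=5.0 / 3.0, mu=1.22):
    """Sound speed in km/s for a gas temperature T in Kelvin."""
    return np.sqrt(gamma * K_B * T / (mu * M_H)) / 1.0e5

for T in (10.0, 550.0, 1.0e4):   # adiabatic era, z ~ 200, re-ionized gas
    print(T, sound_speed_kms(T))
```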
I now discuss the relative velocity $v_{rel}$ between the dark matter PBH and
the baryons throughout cosmic history. In the radiation-dominated phase of the
universe at z>1089, the dark matter is already hierarchically clustering under
the influence of its own gravity. The sound speed of the photon-baryon fluid
is very high, of order one third of the velocity of light, and thus the normal
matter undergoes baryonic acoustic oscillations [60, 61]. This leads to a
spatial separation between baryons and dark matter and thus to a Gaussian
distribution of relative velocities with an average around $\langle
v_{rel}\rangle$$\approx$30 km/s [see 59, 62]. At z$\approx$1089, when
electrons and protons combine and the universe becomes transparent, the sound
speed of the gas dramatically drops to $\sim$6 km/s. The dark matter PBH
kinematically decouple from the baryons and their relative velocities become
highly supersonic. In the linear growth phase of the universe, at scales
larger than the gas Jeans-length, the dark matter and the baryons fall in the
same gravitational potentials of the cosmic web and thus their relative
velocity decreases with the cosmic expansion:
$\langle v_{rel}\rangle_{linear}\approx 30~{}{\frac{1+z}{1000}}~{}{\rm
km~{}s}^{-1}.$ (3.6)
This relation is shown as the right branch of the dashed line in figure 2
(right), above redshifts $z\gtrsim 20$. From this figure it becomes apparent,
that between recombination and re-ionization the PBH move with highly
supersonic, but decreasing velocities through the gas, due to the decaying
sound waves. As we will see below, in this branch of the velocity curve the
contribution of PBH to the cosmic backgrounds has a maximum. At lower
redshifts, at scales smaller than the gas Jeans-length, the hierarchical
clustering becomes non-linear and baryons falling into growing dark matter
halos are virialized. As a consequence, the velocity dispersion between dark
matter and gas increases again towards lower redshifts, scaling as
$M_{Halo}^{1/3}$, where $M_{Halo}$ is the mass of the dark matter halo
becoming non-linear. I used two different methods to estimate the average
virial velocity for redshifts z$\lesssim$20. First, the Millennium Simulation
run described in [63] gives the mass function of dark matter halos with halo
masses $M_{Halo}$>$10^{10}M_{\odot}$ for five different redshifts between z=10
and z=0. I extrapolated these simulated mass functions to lower masses
($M_{Halo}>10^{3}M_{\odot}$) using the empirical universal halo mass
function shape found through simulations by [64]. For every mass bin I
determined the virial velocity according to the calibration of the velocity
dispersion as a function of halo mass described in [65], and then calculated
the average for each epoch. These virial velocities are shown as crosses in
figure 2 (right). The extrapolation to halo masses as small as
$M_{Halo}>10^{3}M_{\odot}$ is rather uncertain, both for the mass function and
the velocity dispersion, because the cosmological simulations do not have a
fine enough mass resolution at this scale. Also, the velocity dispersion
relevant for Bondi capture onto PBH is determined by the smallest mass scales
becoming non-linear at any redshift. A second possibility to calculate the
relative velocities in the non-linear phase is therefore to determine the
velocity dispersion directly from the dark matter power spectrum and integrate
this over the smallest non-linear scales. This calculation has been performed
by M. Bartelmann (2020, priv. comm.), adopting the normalization of the
primordial power spectrum of $\sigma_{8}$=0.8. The relative velocity in the
non-linear regime can be approximated by
$\langle v_{rel}\rangle_{nonlinear}\approx 620~{}(1+z)^{-2.3}~{}{\rm
km~{}s}^{-1},$ (3.7)
and is shown as the left branch ($z\lesssim 20$) of the dashed line in figure
2 (right). At z=2 the cluster velocity dispersion agrees with this estimate,
but it systematically overestimates the small-scale velocity dispersion at
larger redshifts.
Since we are interested in the total contribution of PBH to the
electromagnetic radiation of the universe, we have to average over the whole
Gaussian distribution of relative velocities. The Bondi accretion rate is
proportional to $v_{rel}^{-3}$ (see above), and therefore smaller velocities
dominate. For this particular case [38] propose to replace the quadratic
average of relative velocity and sound speed in Bondi’s formula (3.3) above
with their harmonic mean:
$v_{eff}=\sqrt{\langle v_{rel}\rangle~{}c_{s}}.$ (3.8)
This is the assumption I adopt here, and the resulting effective velocity
$v_{eff}$ is shown as solid red curve in figure 2 (right). With equation (3.8)
the accretion rate becomes
$\dot{M}=2\lambda\pi\rho(GM)^{2}~{}(\langle v_{rel}\rangle~{}c_{s})^{-3/2}.$
(3.9)
It is interesting to note that in the range 20<z<200 both relative velocity
and sound speed decrease linearly with (1+z). Therefore the mass accretion
rate is expected to be constant in this era. It is important to understand
that the redshift, at which both the sound speed and the relative velocity of
the gas turn around due to re-ionization and virialization, respectively, and
rapidly increase towards lower redshift, is crucial for our analysis. The
redshift, where the minimum velocity occurs, ultimately determines the maximum
flux contribution of PBH accretion to the cosmic backgrounds.
The calculation of the Bondi accretion rate in equation (3.9) requires the
density $\rho$ as a function of redshift. With $\Omega_{bar}$=0.049 and
$\rho$=n$\cdot$mH, where $n$ is the number density of particles, I find
$n=250~{}\left(\frac{1+z}{1000}\right)^{3}~{}{\rm cm}^{-3}.$ (3.10)
I define $\dot{m}$ as the normalized mass accretion rate
$\dot{m}=\dot{M}/\dot{M}_{Edd}$, with the Eddington accretion rate
$\dot{M}_{Edd}$=1.44$\times 10^{17}\,M/M_{\odot}$ g s$^{-1}$. Then I can rewrite
equation (3.9) in normalized quantities:
$\dot{m}=\lambda\cdot
0.321\left(\frac{1+z}{1000}\right)^{3}~{}\left(\frac{M}{M_{\odot}}\right)\left(\frac{v_{eff}}{1~{}{\rm
km~{}s}^{-1}}\right)^{-3}$ (3.11)
With a very broad PBH mass spectrum, including intermediate-mass and
supermassive black holes ($M_{PBH}$>1000 M⊙), it is important to include the
effective viscosity due to the Hubble expansion in the early universe [53].
The Bondi radius determines the amount of mass captured by the PBH:
$r_{B}={\frac{G~{}M}{v_{eff}^{2}}}\approx 1.34\cdot
10^{16}\left(\frac{M}{M_{\odot}}\right)\left(\frac{v_{eff}}{1~{}{\rm
km~{}s}^{-1}}\right)^{-2}~{\rm cm}.$ (3.12)
This is shown for two different PBH masses in figure 8 (left). The
characteristic time scale for accretion is the Bondi crossing time
$t_{cr}=r_{B}/v_{eff}$, which can be compared to the Hubble time $t_{H}$ at
the corresponding redshift. If $t_{cr}<t_{H}$ there will be stationary
accretion, while for Bondi crossing times larger than the Hubble time the
accretion is suppressed. For every redshift we can calculate a critical PBH
mass $M_{cr}$, below which the steady-state Bondi assumption can be applied. For
redshifts z=1000, 200, 20 this critical mass corresponds to
$log(M_{cr}/M_{\odot})$=5.3, 4.8, 3.4, respectively. At redshifts below z=20
$M_{cr}$ rapidly increases to values above $10^{6}$ M⊙. For PBH masses close to
and above $M_{cr}$ the Bondi accretion rate can be scaled by the Hubble viscosity loss
given in the dashed curve in figure 3 (left) of [53].
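The crossing-time criterion can be written down compactly. The sketch below combines eq. (3.12) with a matter-dominated approximation of the Hubble time (an assumption introduced here, since the text does not spell this step out); with $v_{eff}\approx$3.9 km/s at z=200 it reproduces a critical mass near $log(M_{cr}/M_{\odot})\approx$4.8.

```python
# Sketch: Bondi radius (eq. 3.12) and the crossing-time test t_cr < t_H.
# The matter-dominated Hubble time below is an approximation of this step.
import numpy as np

def bondi_radius_cm(M_sun, v_eff_kms):
    return 1.34e16 * M_sun * v_eff_kms**-2          # eq. (3.12)

def hubble_time_s(z, H0_kms_per_Mpc=67.4, Omega_M=0.315):
    H0 = H0_kms_per_Mpc / 3.0857e19                 # convert to 1/s
    return (2.0 / 3.0) / (H0 * np.sqrt(Omega_M) * (1.0 + z)**1.5)

def steady_accretion(M_sun, v_eff_kms, z):
    """True if the Bondi crossing time is shorter than the Hubble time."""
    t_cr = bondi_radius_cm(M_sun, v_eff_kms) / (v_eff_kms * 1.0e5)
    return t_cr < hubble_time_s(z)

print(steady_accretion(1.0e4, 3.9, 200))   # True: below M_cr, steady flow
print(steady_accretion(1.0e6, 3.9, 200))   # False: accretion suppressed
```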
Inserting $v_{eff}$ from equation (3.8) and figure 2 (right) into equation
(3.11), assuming an accretion eigenvalue $\lambda$=0.05 and applying the above
Hubble viscosity correction, I can finally calculate the normalized accretion
rate as a function of redshift and PBH mass. The results are shown in figure 3
(left). For PBH masses smaller than $\sim$1000 M⊙ the normalized accretion
rate is roughly constant in the redshift range 20<z<200 due to the fact that
the density and velocity dependence on redshift in equation (3.9) roughly
cancel out (see also the lower panel of figure 4 in [38]). At z<20 $\dot{m}$
drops dramatically because of the effective velocity increase. PBH masses
larger than $\sim 10^{4}$ M⊙ reach accretion rates close to the Eddington limit
at z$\gtrsim$100, but are significantly affected by the Hubble viscosity at
z$\gtrsim$20. For all PBH masses the accretion rate is small enough that the
growth of the PBH population can be neglected over cosmic time (PBH with
masses in the range $10^{5-7}$ M⊙ accrete about 0.5–2% of their mass until z>20).
Figure 3: Left: Normalized accretion rate onto PBH with masses in the range
0.1–$10^{7}$ M⊙ as a function of redshift. Right: Radiative efficiencies derived
from the accretion rates, assuming the hot accretion flow model of [56] with a
viscous heating parameter $\delta$=0.5.
Figure 4: Spectra of the hot disk accretion flow (ADAF) from [55] with a
viscous heating parameter $\delta$=0.5, divided by the normalized accretion
rate. Left: accretion onto a 10 M⊙ black hole for different accretion rates,
as indicated. Right: same for an accretion rate of $log(\dot{m})$=-1.6 but
different black hole masses (as indicated).
## 4 Accretion spectrum and radiative efficiency
For the accretion flow and the electromagnetic emission mechanism I follow
[37, 8] and assume the formation of an accretion disk. Accretion rates close
to the Eddington limit will lead to the formation of a standard Shakura-
Sunyaev disk [66], which has a canonical radiative efficiency $\eta\approx
0.1$. For much lower accretion rates $\dot{m}$$\ll$1 an advection-dominated
hot accretion flow [55] is expected, with a significantly lower radiation
efficiency [56], roughly scaling according to $\eta\propto\dot{m}$. Figure 4
shows hot accretion flow spectra from [55] with a viscous heating parameter
$\delta$=0.5 for black holes, normalized by Eddington luminosity and mass
accretion rate. The left graph shows radiation from a 10 M⊙ black hole at
different mass accretion rates. The right graph shows the spectrum from black
holes with different masses in the range 10–$10^{9}$ M⊙ and a mass accretion
rate $log(\dot{m})$=–1.6.
It is important to understand that for advection-dominated accretion flows
not all the matter entering the Bondi radius will actually reach the black
hole. This is due to feedback mechanisms acting on the accreted gas, e.g.
producing outflows or jets. The advection dominated flow models of [56, 55]
therefore find a radial dependence of mass accretion rate $\dot{M}\propto
R^{\alpha}$, typically with $\alpha\sim 0.4$. Within a radius of about 10 $R_{S}$,
where $R_{S}=2GM/c^{2}$ is the Schwarzschild radius, the accretion flow more
closely follows the standard Shakura-Sunyaev description of a constant
accretion rate with radius down to the last stable orbit ($\sim 3R_{S}$). In
terms of the classical Bondi description of quasi-spherical capture, the loss
of accreted matter can be associated with the accretion eigenvalue:
$\lambda\approx\left(\frac{10R_{S}}{R_{D}}\right)^{\alpha},$ (4.1)
where $R_{D}$ is the outer radius of the accretion disk formed. For
$\alpha$=0.4, the value of $\lambda$=0.05 chosen for the analysis in this
paper therefore corresponds to an outer disk radius of $R_{D}\sim 2\times 10^{4}$
$R_{S}$, about 8 orders of magnitude smaller than the Bondi radius. In this
picture the accretion flow on large scales follows the Bondi quasi-spherical
flow for most of the radial distance, until the advection-dominated accretion
disk is formed.
The radiative efficiency for the ADAF spectra in figure 4 is the integral over
these curves and has been calculated through numerical simulations by [56].
For this work I use a digitized version of the highest efficiency curve in
their figure 1, with a viscous heating parameter $\delta$=0.5 (note that the
definition of $\dot{m}$ used by these authors and in the analysis presented
here differs by a factor of 10). A maximum radiative efficiency of
$\eta\sim$0.08 is achieved for log($\dot{m}$)>–1.6. We can now calculate the
radiative efficiency for every mass and redshift bin from the normalized
accretion rate in figure 3 (left). The result is shown in figure 3 (right). It
turns out that above redshifts z$\gtrsim$20 and black hole masses $M_{PBH}$>100 M⊙,
which dominate the contribution to the extragalactic background light, the
radiative efficiencies are relatively large (typically >3%).
Figure 5: Density-weighted bolometric luminosity of single PBH as a function
of mass for different redshifts indicated (left), and as a function of
redshift for different mass bins indicated (right).
Figure 6: Density-weighted bolometric flux of single PBH as a function of mass
for different redshifts indicated (left), and as a function of redshift for
different mass bins indicated (right).
We now have the ingredients to calculate the bolometric luminosity and flux
expected from the baryon accretion onto the assumed PBH mass spectrum over
cosmic time. For every black hole of mass $M_{PBH}$ I calculate the expected
bolometric luminosity $L_{bol}=\dot{m}~\eta~L_{Edd}$, where
$L_{Edd}$=1.26$\times 10^{38}~M_{PBH}/M_{\odot}$ erg/s is the Eddington
luminosity, and the normalized mass accretion rate $\dot{m}$ as well as the
radiation efficiency $\eta$ are taken from the data in figure 3. In every mass
bin, the relative number density of PBH compared to those of 1 M⊙ is
$n_{PBH}=f_{PBH}/M_{PBH}$, where $f_{PBH}$ is the PBH mass function from
figure 1. For every mass and redshift bin I thus multiply the bolometric
luminosity with this relative number density in order to obtain the
density-weighted luminosity $\langle L_{bol}\rangle^{*}$ for an equivalent PBH
of 1 M⊙. This quantity is shown in figure 5 as a function of PBH mass (left)
and redshift (right). It shows that the largest contribution to the PBH
luminosity over cosmic time comes from PBH in the mass range
$M_{PBH}=10^{3-7}$ M⊙ at redshifts z>100. The Chandrasekhar PBH mass peak is
subdominant in this representation. The total PBH luminosity deposited in the
baryonic gas at high redshifts is important for the pre-ionization and X–ray
heating of the intergalactic medium discussed in section 6.
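In code, the density weighting amounts to a few lines; the efficiency law used below is a toy cap-and-slope reading of the [56] curve ($\eta\propto\dot{m}$ with a ceiling of 0.08 above log($\dot{m}$) $\approx$ -1.6), standing in for the digitized curve actually used.

```python
# Sketch: density-weighted bolometric luminosity of a PBH mass bin.
# The efficiency law is a toy stand-in for the digitized [56] curve.
L_EDD_PER_MSUN = 1.26e38            # Eddington luminosity [erg/s per M_sun]

def efficiency(m_dot):
    return min(0.08, 3.2 * m_dot)   # cap reached near log(m_dot) = -1.6

def L_bol(m_dot, M_sun):
    """L_bol = m_dot * eta * L_Edd, as defined in the text."""
    return m_dot * efficiency(m_dot) * L_EDD_PER_MSUN * M_sun

def L_bol_weighted(m_dot, M_sun, f_PBH):
    """Weight by n_PBH = f_PBH / M_PBH to get the 1 M_sun equivalent."""
    return L_bol(m_dot, M_sun) * f_PBH / M_sun
```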
To calculate the contribution of PBH accretion to the extragalactic background
light we need to convert the density-weighted luminosities in Figure 5 to
bolometric fluxes using the luminosity distance $D_{L}$ at the corresponding
redshift: $\langle F_{bol}\rangle^{*}$=$\langle
L_{bol}\rangle^{*}$/(4$\pi~{}D_{L}^{2}$). This quantity is shown in figure 6
as a function of PBH mass (left) and redshift (right). It shows that the
largest contribution to the extragalactic surface brightness is produced at a
redshift z$\approx$20 from PBH in the mass range $M_{PBH}=10^{2-5}$ M⊙, and a
similar contribution from the Chandrasekhar mass peak. SMBH at
$M_{PBH}\sim 10^{6.5}$ M⊙ have a notable contribution around z$\sim$10.
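The flux conversion itself is a one-liner once a cosmology is fixed; the sketch below assumes astropy's FlatLambdaCDM with the Planck-like parameters quoted in the introduction.

```python
# Sketch: <F_bol>* = <L_bol>* / (4 pi D_L^2) with the adopted cosmology.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315)

def F_bol(L_bol_erg_s, z):
    """Bolometric flux [erg cm^-2 s^-1] of a source at redshift z."""
    D_L = cosmo.luminosity_distance(z).to(u.cm).value
    return L_bol_erg_s / (4.0 * np.pi * D_L**2)

print(F_bol(1.0e30, 20.0))   # hypothetical 1e30 erg/s source at z = 20
```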
## 5 The contribution of PBH to the extragalactic background light
To calculate the surface brightness per redshift shell in a particular
observed frequency band [$\nu_{1}$;$\nu_{2}$] of the electromagnetic spectrum,
I take into account the spectral shape and the fraction of the radiation
emitted in the rest frame frequency band [(1+z)$\nu_{1}$;(1+z)$\nu_{2}$].
The exact spectral shape is not so important for this derivation; it is mainly
used to calculate the bolometric corrections, i.e. the fraction of the total
luminosity falling into the various frequency bands as a function of redshift.
The ADAF spectra in figure 4, in particular those at high $\dot{m}$ values,
can be approximated by power laws with an exponential cutoff at $\sim$200 keV.
Following [37] and [8], I assume a power law slope of –1 (corresponding to a
flat line in figure 4). Below a critical frequency $\nu_{c}$ the power law
spectrum is cut off by synchrotron self-absorption into a power law with a
steep slope of approximately +1.86. As can be seen in figure 4 (right),
$\nu_{c}$ depends on $M_{PBH}$ and can be approximated by
log($\nu_{c}$)$\approx$14.82–0.4 log($M_{PBH}/M_{\odot}$). The bolometric corrections are
then obtained by integrating the analytic normalized spectra over the observed
frequency bands. For the 2–5$\mu$m band we have to consider in addition the
Lyman-$\alpha$ break, which produces a sharp cutoff at z$\gtrsim$30 (see e.g.
[28, 67]). These bolometric corrections are shown in figure 7 (left) for the
2–5$\mu$m NIR band, the 0.5–2 keV and the 2–10 keV X–ray bands, respectively.
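A numerical version of this band correction could look as follows; the doubly broken power law with the 200 keV cutoff mirrors the description above, while the frequency grid and the neglect of the Lyman-$\alpha$ break are simplifying assumptions of this sketch.

```python
# Sketch: bolometric band correction for the toy ADAF spectrum described in
# the text (L_nu ~ nu^-1 above nu_c, ~ nu^+1.86 below, 200 keV cutoff).
# The Lyman-alpha break is ignored here for simplicity.
import numpy as np

KEV_IN_HZ = 2.418e17

def band_fraction(nu1_obs, nu2_obs, z, M_sun):
    """Fraction of L_bol falling into the observed band [nu1, nu2]."""
    nu_c = 10.0**(14.82 - 0.4 * np.log10(M_sun))   # self-absorption break
    nu = np.logspace(10.0, 21.0, 5000)
    L_nu = np.where(nu > nu_c, nu_c / nu, (nu / nu_c)**1.86)
    L_nu = L_nu * np.exp(-nu / (200.0 * KEV_IN_HZ))
    # the observed band maps to the rest frame band [(1+z)nu1, (1+z)nu2]
    in_band = (nu >= nu1_obs * (1.0 + z)) & (nu <= nu2_obs * (1.0 + z))
    return np.trapz(L_nu[in_band], nu[in_band]) / np.trapz(L_nu, nu)

# Example: 0.5-2 keV observed band for a 100 M_sun PBH at z = 20
print(band_fraction(0.5 * KEV_IN_HZ, 2.0 * KEV_IN_HZ, 20.0, 100.0))
```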
To predict the surface brightness of all PBH across cosmic time in these
observed frequency bands, the total flux per PBH in figure 6 (right) has to be
multiplied with the bolometric correction and the PBH surface density in a
particular redshift shell. Using the critical mass density of the universe
$\rho_{crit}$=1.26$\times 10^{20}M_{\odot}~{}{\rm Gpc}^{-3}$ and the Dark
Matter density $\Omega_{DM}$=0.264, as well as the reference mass 1 M⊙, a
comoving PBH space density of 3.32$\times 10^{19}~{\rm Gpc}^{-3}$ is obtained,
corresponding to a proper density of $n_{PBH}=3.32\times 10^{19}(1+z)^{3}~{\rm
Gpc}^{-3}$. For every redshift shell [z+$\Delta$z] the PBH space density is
multiplied with the comoving volume of the shell [V(z+$\Delta$z)–V(z)] and
divided by the full-sky solid angle of 4$\pi$ sr ($\approx$41253 deg$^{2}$) to
obtain the number of PBH per deg$^{2}$. Figure 7 (right) shows the derived
surface brightness as a function of redshift (per $\Delta$z=1 interval) for
the three spectral bands considered here. The emission in all three bands
peaks around z$\approx$20 with a FWHM of $\Delta$z$\approx$[-3;+6].
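The per-shell source counts can be cross-checked with astropy's comoving volumes, as in the sketch below; it assumes the comoving density derived above and full-sky isotropy.

```python
# Sketch: equivalent 1-M_sun PBH per square degree in a shell [z, z+dz],
# combining the comoving density from the text with astropy shell volumes.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315)
N_COMOVING = 3.32e19                 # 1-M_sun-equivalent PBH per Gpc^3
FULL_SKY_DEG2 = 4.0 * np.pi * (180.0 / np.pi)**2   # ~41253 deg^2

def pbh_per_deg2(z, dz=1.0):
    shell = (cosmo.comoving_volume(z + dz)
             - cosmo.comoving_volume(z)).to(u.Gpc**3).value
    return N_COMOVING * shell / FULL_SKY_DEG2

print(pbh_per_deg2(20.0))
```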
Figure 7: Left: The bolometric correction, i.e. the fraction of the total
luminosity falling into the respective observed frequency band as a function
of redshift, for the 2–5$\mu$m NIR band, as well as the 0.5–2 and 2–10 keV
X–ray bands. Right: Predicted surface brightness of the PBH in the same
observed bands as a function of redshift (per $\Delta$z=1).
The curves in figure 7 (right) can now be integrated to predict the total PBH
contribution to the extragalactic background light as
SB$_{2-5\mu m}\approx 10^{-13}$, SB$_{0.5-2 keV}\approx$1.9$\times 10^{-13}$,
and SB$_{2-10 keV}\approx$1.3$\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$
deg$^{-2}$, respectively. The minimum amount of X–ray surface brightness
necessary to explain the CXB/CIB cross-correlation signal observed by [23] in
the 0.5–2 keV band has been discussed by [29]. This is $9\times 10^{-14}$ erg
cm$^{-2}$ s$^{-1}$ deg$^{-2}$, corresponding to roughly 1% of the total CXB
signal in this band. The 0.5–2 keV PBH contribution predicted for an accretion
eigenvalue of $\lambda$=0.05 in equation (3.11) is thus about a factor of 2
larger than the observed CXB fluctuation signal, which could well be
consistent, given the coherence between the CXB and CIB signals. As discussed
above, there is a marginally significant diffuse CXB remaining after
accounting for all discrete source contributions [31, 34]. Extrapolated into
the X–ray bands considered here, this residual flux corresponds to
$\approx$(7$\pm$3) and $\approx$(9$\pm$20)$\times 10^{-13}$ erg cm$^{-2}$
s$^{-1}$ deg$^{-2}$ in the 0.5–2 keV and 2–10 keV band, respectively. Assuming
the $\lambda$=0.05 value, the predicted PBH contribution is therefore well
below the upper limit (15–25%) of any unresolved component left in the CXB.
The main result of this paper is therefore that the assumed PBH population for
the dark matter can indeed explain the X–ray fluctuation signal, with a Bondi
accretion eigenvalue of $\lambda$=0.05.
The flux measured in the 2–5$\mu$m CIB fluctuations at angular scales >100" is
about 1 nW m$^{-2}$ sr$^{-1}$ [68], or 3$\times 10^{-10}$ erg cm$^{-2}$
s$^{-1}$. The cross-correlated CIB/CXB fluctuations contribute about 10% to
the total CIB fluctuations [23], i.e. 3$\times 10^{-11}$ erg cm$^{-2}$
s$^{-1}$. Therefore the
predicted PBH contribution to these CIB fluctuations is only about 0.5% for
$\lambda$=0.05. It is argued in [6] that PBH in the early universe could
amplify the cosmic power spectrum at small spatial scales (see below).
Together with the pre-ionization of the intergalactic medium discussed below,
the PBH can therefore significantly increase the associated star formation.
The NIR emission in this picture would then be dominated by early star
formation associated with PBH instead of direct PBH emission.
## 6 Discussion
### 6.1 Linear versus post-linear growth
In this simplified treatment I only consider the linear evolution of the power
spectrum above the virialization redshift around z$\approx$20 (see figure 2
right). On sufficiently large scales the initial power spectrum has been very
precisely determined as nearly scale invariant with overdensities of $10^{-4}$
[35], and the PBH density field is expected to follow the standard adiabatic
perturbations. On small scales the power spectrum is only poorly constrained
and could be significantly amplified by the discrete nature of the PBH
population itself [6, 69, 70]. Poisson variations in the density of PBH will
introduce non-linear growth of density fluctuations and the corresponding
velocity dispersion already well before the virialization redshift z$\sim$20
discussed above. However, based on numerical simulations, [70] conclude that the
nonlinear velocity perturbations introduced by >20 M⊙ PBH are too small to
dominate the relative velocities between baryons and PBH at z$\gtrsim$100 [see
also 71]. Nevertheless, non-linear effects definitely become more important at
lower redshifts (see above) and could effectively reduce the Bondi capture
rate.
### 6.2 Magnetic fields in the early universe
The accretion mechanism assumed in the Bondi capture model only works if
there is a rough equipartition between the kinetic energy and magnetic fields
in the accreted gas, as it is the case in the turbulent interstellar medium of
our Galaxy. It is therefore justified to ask, whether this mechanism can also
work at high redshifts, where the existence and magnitude of magnetic fields
is still unclear. Magnetic fields are present at almost every scale of the low
redshift universe, from stars and planets to galaxies and clusters of galaxies
and possibly even in the intergalactic medium in voids of the cosmic web, as
well as in high-redshift galaxies. [72] and [73] review the observations and
possible origin of magnetic fields. There is a surprising similarity between
the relatively strong magnetic fields measured in our own Galaxy (0.3–0.4 nT)
and other nearby galaxies ($\sim$1 nT) with magnetic fields measured in
clusters of galaxies (0.1–1 nT), as well as in high redshift galaxies ($\sim$1
nT), when the universe was only about 1/3 of its current age. There are even
indications of magnetic fields of order $\gtrsim 10^{-20}$ T in cosmic voids,
derived from the gamma ray emission of blazars [74].
One can come to the conclusion that the origin of cosmic magnetism on the
largest scales of galaxies, galaxy clusters and the general intergalactic
medium is still an open problem [75]. It is usually assumed that primordial or
cosmic seed fields are amplified over time through the galactic dynamo effect
to produce the rather strong fields observed in local galaxies. In this
picture it is, however, unclear how similar fields can be created in such
different settings (e.g. clusters) and different cosmic times (high-redshift
galaxies). An interesting possibility is therefore that cosmic magnetic fields
could be remnants from the early universe, or created in a process without
galactic dynamos. Assuming equipartition, the energy density in the CMB
photons would correspond to a magnetic field of about 0.3 nT. Magnetic fields
of $10^{-20}$ T, as observed in cosmic voids today, would only require a
minute fraction of $10^{-10}$ of this energy density in the early universe to
be channeled into magnetic fields.
Figure 8: Left: The Bondi radius for a $10^{4}$ M⊙ (thin blue) and 1 M⊙ (thick
blue) PBH compared to the proton (red) and electron (green) Larmor radius,
assuming a magnetic field of B=$10^{-20}$ T, as observed in local galaxy
voids. Right: Baryon ionization/heating fraction $\chi_{e}$ as a function of
redshift. The thin dash-dotted line shows the residual ionization left over
from the radiation dominated era [76]. The red curve shows the ionization
fraction from UV photons produced by accreting PBH. The blue curve shows the
corresponding heating fraction by >1 keV X–ray photons. The thick dashed black
line shows one of the models consistent with the Planck satellite data [35]
(see text). The green hatched area shows the range of high–redshift
ionization fractions considered in [16].
Here I argue that PBH could play a role in amplifying early magnetic seed
fields and sustaining them until the epoch of galaxy formation. I compare the
Bondi radius in eq. (3.12) and figure 8 (left) with the Larmor radius
$r_{L}={\frac{m~{}v_{\bot}}{|q|~{}B}},$ (6.1)
which determines the gyro motion of particles moving through a magnetic field.
Here $m$ is the mass of the particle (either proton or electron), $v_{\bot}$
is the velocity component of the particle perpendicular to the magnetic field,
$|q|$ is the absolute electric charge of the particle, and $B$ is the magnetic
field. Assuming a seed field of $B=10^{-20}$ T and approximating the velocity
with the sound speed $v_{\bot}\approx c_{s}$ yields the gyro radius for
both protons and electrons. The proton gyro radius is about a factor of 2000
larger than the electron gyro radius.
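This comparison follows directly from eqs. (3.12) and (6.1); the sketch below works in Gaussian units and approximates $v_{\bot}$ by a few km/s, as assumed in the text.

```python
# Sketch: Larmor radii (eq. 6.1; Gaussian units, r_L = m v c / (q B))
# versus the Bondi radius (eq. 3.12) for a seed field of 1e-20 T.
Q_E = 4.8032e-10                    # elementary charge [esu]
M_P, M_E = 1.6726e-24, 9.1094e-28   # proton and electron mass [g]
C = 2.9979e10                       # speed of light [cm/s]

def larmor_radius_cm(m_g, v_kms, B_gauss):
    return m_g * (v_kms * 1.0e5) * C / (Q_E * B_gauss)

B = 1.0e-20 * 1.0e4   # 1e-20 T expressed in Gauss
v = 3.0               # v_perp ~ c_s [km/s], as approximated in the text
print("proton   r_L [cm]:", larmor_radius_cm(M_P, v, B))   # ~3e17
print("electron r_L [cm]:", larmor_radius_cm(M_E, v, B))   # ~2e14
print("Bondi r_B [cm], 1 Msun:", 1.34e16 * 1.0 * v**-2)    # ~1.5e15
```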
Figure 8 (left) shows the Bondi radius as well as the proton and electron
Larmor radii as a function of redshift. If the gyro radius is smaller than the
Bondi radius, the respective particle is easily accreted by the PBH. If,
however, the gyro radius is larger than the Bondi radius, the particle will
first not be easily accreted, but rather spiral around the PBH. From figure 8
(left) we see that, for our assumed magnetic field strength, at redshifts
z$\gtrsim 70$ and PBH masses in the range $M_{PBH}\approx$0.3–500 M⊙ the
proton Larmor radius is larger than the Bondi radius, while the electron
Larmor radius is smaller than the Bondi radius. There is still a substantial
fraction of residual electrons and protons/helium ions from the era before
recombination (see the dash-dotted curve in figure 8 (right), from [76]).
These electrons have therefore no problem being accreted, while for certain
PBH masses protons
resist the accretion. This will create a net electric current, which in turn
will increase the average magnetic field strength until the proton gyro radius
becomes smaller than the Bondi radius. This way the PBH can amplify the
average magnetic field. The supersonic motion between baryon gas and PBH
discussed above is expected to be coherent over large scales (of the order of
Mpc) and can therefore induce large-scale ordered magnetic fields. A further
magnetic field amplification occurs, as discussed above, in the accretion
funnel, when magnetic fields are dissipated through reconnection and ejected
with the plasma. In a sense, the ubiquitous PBH can possibly create their own
magnetic field and distribute it throughout the universe. It is, however,
plausible to assume that magnetic fields in the early universe should be
smaller than today, and that the fraction of ionized baryons is smaller. This
could also explain the rather small Bondi accretion eigenvalue required to
match the observations.
### 6.3 Re-Ionization
Next I turn to the contribution of PBH accretion to the re-ionization and re-
heating history of the universe. At z$\approx$1089, when the photons decoupled
from the baryons and formed the CMB radiation, the universe became
predominantly neutral. Afterwards the universe entered a long period of
“darkness”, in which the residual ionization left over from the primordial
photon-baryon fluid diminished (see figure 8 right), the background photons
cooled down, and any higher-frequency emission was quickly absorbed in the
atomic gas. In the model described here the first sources to illuminate the
“dark ages” would be the PBH accreting from the surrounding gas. Their
ultraviolet emission, above the Hydrogen ionization energy of 13.6 eV, would
start to re-ionize small regions around each PBH. However, in the beginning
the density of the surrounding gas is still so high that the ionized matter
quickly recombines. As long as the re-combination time is much shorter than
the Hubble time at the corresponding epoch, UV photons from the PBH cannot
penetrate the surrounding medium, but instead produce an ionized bubble
growing with time. In this type of ionization equilibrium the number of
ionizing photons $N_{ion}$ required to overcome recombination is given as the
ratio between the Hubble time $t_{H}(z)$ and the recombination time
$t_{rec}(z)$ at the particular epoch, and can be derived from equations (2)
and (3) in [77] as
$N_{ion}=t_{H}/t_{rec}=\max\left[1,0.59~{}\left(\frac{1+z}{7}\right)^{1.5}\right].$ (6.2)
At a redshift z=1000, $N_{ion}$ is about 1000, and reaches a level of unity at
z$\lesssim$10 for the assumed set of parameters. For this calculation I ignore
clumping of the ionized gas. In reality the effective clumping factor is
relatively large for reionization at high redshift because the ionizing
sources are more numerous in the filaments of the cosmic web, but must also
ionize a correspondingly larger fraction of dense gas in the filaments, and
thus ionization is slowed down. At lower redshift, when molecular gas and
stars have already formed, not all UV photons will escape the dense regions.
The effective escape fraction is one of the largest uncertainties in our
current understanding of re-ionization. For simplicity, I assume an escape
fraction $f_{esc}$=0.1 for UV photons, and $f_{esc}$=1 for X–ray photons,
independent of redshift.
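Equation (6.2) and the division by $N_{ion}$ can be sketched in a few lines; the photon budget itself, which comes from integrating the ADAF spectra above 13.6 eV, is not repeated here.

```python
# Sketch of eq. (6.2) and the escape-corrected ionization step.
def N_ion(z):
    """Ionizing photons needed per lasting ionization, eq. (6.2)."""
    return max(1.0, 0.59 * ((1.0 + z) / 7.0)**1.5)

def net_ionizations(n_photons, z, f_esc=0.1):
    """Escaped UV photons that produce lasting ionizations."""
    return n_photons * f_esc / N_ion(z)

print(N_ion(1000.0))   # ~1000 near recombination
print(N_ion(10.0))     # ~1 by z ~ 10
```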
To calculate the history of pre-ionization by PBH I integrate the above
normalized ADAF model for frequencies log($\nu$)>15.52 Hz, corresponding to
the hydrogen ionization energy of 13.6 eV. To calculate the number of ionizing
photons per PBH of reference mass 1 M⊙ I take the density-weighted luminosity
$\langle L_{bol}\rangle^{*}$ from figure 5 (right). To determine the average
space density of ionizing photons I multiply with the average comoving space
density of PBH (assuming the reference mass 1 M⊙):
$n_{PBH}=1.06\times 10^{-54}~{}\left(\frac{1+z}{1000}\right)^{3}~{}{\rm
cm}^{-3},$ (6.3)
and with the escape fraction $f_{esc}$, and finally divide by $N_{ion}$ from
eq. (6.2) and the average density of baryons given in equation (3.10) to
determine the ionization rate of baryons over cosmic time.
The red curve in figure 8 (right) shows the cumulative ionization fraction
$\chi_{e}$ as a function of redshift for the accretion eigenvalue
$\lambda$=0.05. A maximum cumulative ionization fraction of $\sim$2.8% is
reached at a redshift z$\approx$10. This can be compared to one of the recent
models determined from the Planck satellite data [35]. Here the 1$\sigma$
upper percentile of the FlexKnot model in their figure 45, which is consistent
with the ionization optical depth determined from the most recent Planck data,
is shown as dashed curve. A high-redshift contribution to the ionization
history of the universe has also been discussed by [78] and [16]. The range of
$\chi_{e}$ values assumed in the latter work is shown as green hatched region
in figure 8 (right). For the choice of $\lambda$=0.05, the UV emission from
the PBH population assumed in the toy model is therefore fully consistent with
the observational constraints from Planck.
### 6.4 X–ray heating
The role of X–ray heating in shaping the early universe has been discussed by
[79]. Compared to UV photons, X–ray photons have a much larger mean free path
and can therefore ionize regions far away from the source. In addition, most
of the X–ray energy gets deposited into heating up the gas. In order to
estimate the amount of X–ray heating of the gas I applied the same mechanism
as for the UV photons above, but integrating the above ADAF model for
frequencies log($\nu$)>17.68 Hz, corresponding to 2 keV. I assume an escape
fraction of $f_{esc}$=1 and $N_{ion}$=1. The blue curve in figure 8 (right)
shows the cumulative 2 keV heating fraction per baryon as a function of
redshift for the assumed accretion eigenvalue of $\lambda$=0.05. The maximum
cumulative heating fraction is $\sim$1.6%. X–rays from PBH therefore have only
a small contribution to the pre-ionization of the universe as a whole, but can
be responsible for a part of the pre-heating of gas observed in the "entropy
floor" of groups of galaxies. [80] reviewed the energetics of groups and
clusters of galaxies, which cannot be reproduced by simple models, where the
gas density is proportional to dark matter density. [81] and [82] argued that
the gas must have been pre-heated before falling into the cluster potential.
X–ray observations of groups of galaxies with ROSAT by [83] confirmed the need
for a non-gravitational entropy injection in the group gas. These authors
coined the term "entropy floor", which amounts to an energy of about 2 keV per
baryon injected into the group gas. The pre-heating of the gas by PBH, albeit
only contributing to a small fraction of the total baryon content of the
universe, could have played an important role in heating the high-density
regions, which first formed groups and clusters.
### 6.5 Cosmological 21-cm signal
Figure 9: Density-weighted 1.4 GHz (observed) luminosity of a single PBH as a
function of mass for different redshifts indicated.
The red-shifted 21-cm line can provide important new constraints on the
physical processes in the early universe [see e.g. 84, 8]. The Experiment to
Detect the Global EoR Signature (EDGES) has measured a strong, sky-averaged
21-cm absorption line profile after subtracting the Galactic synchrotron
emission [85]. The signal is centered at a frequency around 78 MHz and covers
a broad range in redshift z=15–20. The signal may be due to ultraviolet light
from first objects in the universe altering the emission of the 21-cm line by
lowering the spin temperature of neutral hydrogen relative to the CMB.
However, the signal is about three times larger than that expected from the
standard $\Lambda$CDM cosmology, which led some authors to suggest new dark
matter physics [e.g. 86]. Instead of new physics, an increased 21-cm radio
background contribution above the CMB at the epoch around z=15–20 could also
explain the EDGES signal. Indeed, [87] estimate the additional 21-cm radio
background from accretion onto putative radio-loud intermediate-mass black
holes (IMBH) forming in first molecular cooling halos at redshifts z=16–30.
This could be sufficient to explain the EDGES feature; however, it requires
extreme assumptions about the radio loudness of the IMBH population. Instead
of assuming an interpretation in terms of mini-QSOs from IMBH grown through
accretion, I estimate here whether PBH accretion could have a significant
contribution to the EDGES signal. A full treatment of this effect for the PBH
toy model is beyond the scope of this paper, but similar to the treatment of
the PBH contribution to the CXB and CIB derived in section 5, I can estimate
the PBH contribution to the observed low-frequency cosmic radio background,
and thus to the EDGES signal.
The balloon-borne double-nulled Absolute Radiometer for Cosmology,
Astrophysics and Diffuse Emission (ARCADE2) instrument has measured the
absolute temperature of the sky at frequencies 3, 8, 10, 30, and 90 GHz, and
detected a significant excess over the CMB blackbody spectrum at a temperature
of 2.731 K [88]. Combining the ARCADE2 measurements with lower frequency data
from the literature, the excess brightness temperature can be characterized as
a power law $T_{R}=1.19~(\nu/1~{\rm GHz})^{-2.62}$ K, which translates into a
radio spectrum with a slope of $-$0.62 and a normalization of 3$\times
10^{-22}$ W m$^{-2}$ sr$^{-1}$ at 1.4 GHz. This cosmic radio synchrotron
background is substantially
larger than that expected from an extrapolation of the observed radio counts
[89], and thus presents a major new challenge in astrophysics. [90] found that
the global 21cm signal can be significantly amplified by an excess background
radiation compared to the standard $\Lambda$CDM models, especially in
absorption. Assuming that only 10% of the radio synchrotron background
originates at large redshifts, they predict a 21cm feature almost an order of
magnitude stronger than that expected purely from the CMB. Interpolating
between their calculations for 0.1% and 10% excess background, I find that an
excess high-redshift radiation field of about 5% of the observed radio
synchrotron background is sufficient to explain the EDGES findings.
In order to calculate the expected PBH contribution to the radio background I
assume that each black hole has a radio emission following the fundamental
plane relation between X-ray luminosity, radio luminosity and black hole mass
found by [91]. I use the parameterization for radio-quiet AGN from [92]:
log($L_{R}$)=0.85 log($L_{X}$)+0.12 log($M_{PBH}$), where $L_{R}$ is the
1.4 GHz radio luminosity in units of $10^{40}$ erg/s, $L_{X}$ is the 0.1–2.4
keV X–ray luminosity in units of $10^{44}$ erg/s, and $M_{PBH}$ is the PBH
mass in units of $10^{8}$ M⊙. The X–ray luminosity
is calculated from the bolometric luminosity shown in figure 5 (right).
Assuming the ADAF radiation spectrum above, the fraction of the bolometric
luminosity radiated in the 0.1-2.4 keV band is 0.23. For the radio spectrum I
assume a power law with spectral index -0.62. This means that the bolometric
correction is $\propto$(1+z)0.38. The radio luminosities derived this way as a
function of PBH mass and redshift are shown in figure 9. Multiplying these
luminosities with the PBH density over cosmic time, converting into observed
fluxes and integrating over mass and redshift I obtain a contribution of
radio-quiet PBH to the observed radio background of $\sim$3$\times$10-25 W m-2
sr-1 at 1.4 GHz, i.e. a fraction of 0.1% of the observed synchrotron radio
background. Most of this additional radiation field is accumulated at
redshifts z$\gtrsim$20\. Following [90], this excess radio flux would increase
the depth of the 21cm absorption line only by about 30%. If, however, some
fraction of the PBH would be radio-loud (e.g. 5% with 1000 times higher
fluxes), like observed in the AGN population, the 5% excess high-redshift
radio background flux necessary to explain the EDGES feature could be easily
achieved by PBH.
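For reference, the fundamental-plane step used above reads as follows in code; the example luminosity and mass are hypothetical inputs, since the actual values come from figure 5.

```python
# Sketch: radio-quiet fundamental plane of [92] as used in the text,
# log(L_R/1e40) = 0.85 log(L_X/1e44) + 0.12 log(M/1e8).
import numpy as np

def log_LR_erg_s(log_LX_erg_s, M_sun):
    """log10 of the 1.4 GHz radio luminosity in erg/s."""
    return (40.0 + 0.85 * (log_LX_erg_s - 44.0)
            + 0.12 * (np.log10(M_sun) - 8.0))

# Hypothetical example: a 1e4 M_sun PBH radiating L_X = 1e36 erg/s
print(log_LR_erg_s(36.0, 1.0e4))   # ~32.7, i.e. L_R ~ 5e32 erg/s
```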
### 6.6 Primordial Black Holes in the Galactic Center
Next I discuss some observational effects of the putative PBH population
residing in the Galactic Center region. First, assuming a Milky Way dark
matter halo of $\sim 10^{12}M_{\odot}$, the PBH mass spectrum from section
2 (figure 1) indeed predicts about one supermassive PBH with a mass $\gtrsim
10^{6.5}M_{\odot}$, consistent with the Sgr A∗ SMBH in the center of our
Galaxy [93]. To estimate the density of dark matter and baryons in the
Galactic bulge region itself, I refer to dynamical models of the Milky Way's
center, using the density of red clump giant stars measured in infrared
photometric surveys, as well as kinematic radial velocity measurements of
M-giant stars in the Galactic bulge/bar constructed in [94]. From N–body
simulations of stellar populations for barred spiral discs in different dark
matter halos these authors were able to determine with high precision the mass
in a volume of ($\pm 2.2\times\pm 1.4\times\pm 1.2$ kpc$^{3}$) centered on the
Galactic Bulge/Bar. The total mass is (1.84$\pm$0.07)$\times 10^{10}$ M⊙.
Depending on the assumed model, about 9–30% consists of dark matter, i.e.
1.7–5.5$\times 10^{9}$ M⊙. Applying the above PBH mass spectrum, we thus
expect 5–10 intermediate-mass PBH with $M_{PBH}>10^{4}$ M⊙ in the Galactic
bulge region, but zero with $M_{PBH}>10^{5}$ M⊙.
Recent high-resolution observations of high-velocity compact clouds (HVCC) in
the central molecular zone of our Milky Way with the Atacama Large
Millimeter/submillimeter Array (ALMA) have indeed identified five promising
IMBH candidates, wandering through the patchy ISM in the Galactic Center [see
95]. The most compelling case is HCN–0.044–0.009, which shows two dense
molecular gas streams in Keplerian orbits around a dark object with a mass
$M_{IMBH}$=(3.2$\pm$0.6)$\times 10^{4}$ M⊙ [96]. The semimajor axes of these
Keplerian streams are around 2 and 5$\times 10^{17}$ cm. Another interesting
case is the infrared and radio object IRS13E, a star cluster close to the
Galactic Center potentially hosting an IMBH [97]. ALMA observations identified
a rotating, ionized gas ring around IRS13E [98], with an orbit radius of
6$\times 10^{15}$ cm and a rotation velocity of $\sim$230 km/s. This is thus
another promising IMBH candidate, with a mass of $M_{IMBH}$=2.4$\times
10^{4}$ M⊙.
Two of the five IMBH candidate sources in [95] are possibly associated with
X–ray sources detected in the deep Chandra images of the Galactic Center [99].
IRS13E has the X–ray counterpart CXOGC 174539.7-290029 with an X–ray
luminosity $L_{2-10 keV}\approx$3$\times 10^{30}$ erg/s, and CO–0.31+0.11 has
the possible X–ray counterpart CXOGC 174426.3-290816 with an X–ray luminosity
$L_{2-10 keV}\approx$4$\times 10^{29}$ erg/s. The other three sources have
X–ray upper limits in the range of several $10^{30}$ erg/s. Assuming a
bolometric correction factor of 1/30 for the 2–10 keV range, the combination
of the mass accretion eigenvalue $\lambda$ and the radiative efficiency
$\eta$ therefore has to be extremely small, of the order of 3$\times
10^{-11}$. This is about a factor of 100 lower than the 2$\times
10^{-9}~L_{Edd}$ derived for the Galactic Center black hole Sgr A∗ [55]. Even
assuming a very low efficiency ADAF model, a steady-state
accretion solution is unlikely for these objects. The solution of this puzzle
may come from the fact that the velocity and density gradients of the gas in
the Galactic Center region are so large that the angular momentum forces any
accreted matter into Keplerian motion well outside the classical Bondi radius
[see 37]. Indeed, the orbital periods and lifetimes of the Keplerian streams
around HVCCs are in the range $10^{4-5}$ years, and thus accretion is expected
to be highly variable on very long time scales. Another possibility to halt
accretion for a considerable time is the feedback created by outflows during
efficient accretion events. Numerical simulations of the gas dynamics in the
center of the Galaxy [100] show that the outflows significantly perturb the
gas dynamics near the Bondi radius and thus substantially reduce the capture
rate. The net result of both these effects would be a highly variable, low
duty cycle bursty accretion onto the IMBH and SMBH in the Galactic Center,
consistent with the extremely low accretion efficiencies observed. The
accretion limits for black holes in the mass range $M_{\rm PBH}=20$–100 M⊙ derived from
deep Chandra and radio observations of the Galactic Center [45], are already
shown in figure 1 to be consistent with the assumed PBH mass spectrum. Recent
NuSTAR observations of the Galactic Center, including the effects of gas
turbulence and the uncertainties related to the dark matter density profile
even further weaken these constraints [101]. At any rate, the assumed PBH mass
distribution of section 2 is fully consistent with the observational
constraints for all PBH masses >20 M⊙ in the Galactic Center.
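The quoted limit follows from a one-line estimate, $\lambda\eta\approx L_{\rm bol}/L_{\rm Edd}$ with $L_{\rm Edd}=1.26\times 10^{38}\,(M/{\rm M}_{\odot})$ erg/s; a minimal sketch (the choice of the IRS13E candidate values is illustrative):

```python
L_X = 3e30               # erg/s, 2-10 keV luminosity of CXOGC 174539.7-290029
M = 2.4e4                # Msun, IRS13E IMBH candidate mass
L_bol = 30.0 * L_X       # bolometric correction factor of 1/30
L_edd = 1.26e38 * M      # erg/s, Eddington luminosity
print(f"lambda * eta ~ {L_bol / L_edd:.0e}")   # ~3e-11
```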
Finally, I check the PBH predictions for lower masses against the Galactic
ridge X–ray emission (GRXE), an unresolved X–ray glow at energies above a few
keV discovered almost 40 years ago and found to be coincident with the
Galactic disk. The GRXE in the 2–10 keV band has a background-corrected
surface brightness of (7.1$\pm$0.5)$\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ deg$^{-2}$, which
was largely resolved into discrete sources [102], with the brightest source
having an X–ray luminosity of about $10^{32}$ erg s$^{-1}$, and the minimum detectable
luminosity around $10^{30}$ erg s$^{-1}$. The integrated emission has a strong iron line
from hot plasma at 6.7 keV, and the authors interpret the X–ray emission as
coming from a large population of cataclysmic variables and coronally active
stars. Using the mass determination in the Galactic bulge/bar above I find
that the average baryon density in this region is in the range 17–22 cm$^{-3}$.
However, most of these baryons are locked up in stars. In order to estimate
the physical conditions of the gas in the Galactic Bulge/Bar region I follow
[103]. According to these authors, there are four phases of the interstellar
medium in the Galactic center region: (1) a cold molecular phase in Giant
Molecular Clouds with temperatures around 50 K and gas densities $n=10^{3.5-4}$
cm$^{-3}$ covering a volume fraction around 1%; (2) a warm molecular phase with
temperatures around 150 K and gas density $n=10^{2.5}$ cm$^{-3}$, covering a volume
fraction of $\sim$10%; (3) an atomic phase with temperatures around 500–1000 K
and density $\sim$10 cm$^{-3}$, covering a volume fraction around 70%; and (4)
ionized gas with temperatures $10^{4-8}$ K and an unknown density. Depending on the
temperature of the interstellar medium, the sound speeds are in the range
$c_{s}$=1–100 km/s. The stellar velocity dispersion in the central region of
our Galaxy is in the range 100–140 km/s [104], while the dark matter velocity
dispersion is about 110 km/s [105]. In the spirit of the discussion leading up
to equation 3.9 and figure 2 (right) above, I assume an effective velocity for
Bondi accretion $v_{eff}\approx$50 km/s and $\lambda$=0.1. As shown in figures
5 and 6, the PBH emissivity for the assumed mass spectrum is typically
dominated by objects with $M_{\rm PBH}>100$ M⊙, which have already been discussed above. Indeed,
calculating the Bondi accretion rates and radiative efficiencies for objects
with $M_{\rm PBH}<100$ M⊙ for the four ISM phases in the Galactic Center, I obtain
negligible PBH contributions to the total GRXE brightness. Some individual
$M_{\rm PBH}\sim 100$ M⊙ objects in high-density regions could in principle have X–ray
luminosities up to $L_{\rm 2-10\,keV}=10^{33}$ erg/s, more luminous than the brightest
X–ray source detected in the Galactic Ridge survey [102], but taking into
account the strong variability and small duty cycle expected for this class of
objects, their absence in the surveys is understandable. Some of the fainter
unidentified sources in the current deep X–ray surveys could indeed be
accreting PBH and future large X–ray observatories like ATHENA [106] or LYNX
[107] should be able to identify more. See also [108] for future searches in
the IR and sub-mm region.
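To make the negligible-GRXE estimate above concrete, the sketch below evaluates the standard Bondi rate, $\dot{M}=4\pi\lambda(GM)^{2}\rho/v_{\rm eff}^{3}$ (assumed here to correspond to equation 3.9), in Eddington units for a 100 M⊙ PBH in the four ISM phases; the ionized-phase density and the cold-phase midpoint are assumptions, and the ADAF efficiency suppression is not modelled:

```python
from math import pi

G, MSUN, M_P, C = 6.674e-8, 1.989e33, 1.673e-24, 3e10   # cgs constants
lam, v_eff = 0.1, 50e5         # Bondi eigenvalue and effective velocity
M = 100.0 * MSUN               # a 100 Msun PBH
phases = {                     # number densities n in cm^-3, after [103]
    "cold molecular": 10**3.75,    # midpoint of n = 10^(3.5-4), assumed
    "warm molecular": 10**2.5,
    "atomic": 10.0,
    "ionized": 1.0,                # density unknown in [103]; assumed
}
mdot_edd = 1.26e38 * 100 / (0.1 * C**2)    # g/s, Eddington rate for eta = 0.1
for name, n in phases.items():
    mdot = 4 * pi * lam * (G * M) ** 2 * n * M_P / v_eff**3   # Bondi rate
    print(f"{name:15s} mdot/mdot_Edd ~ {mdot / mdot_edd:.0e}")
```

Even in the densest phase the resulting rates stay below $\sim 10^{-4}$ of Eddington, so with any ADAF-like efficiency the predicted X–ray output is far below the resolved GRXE source population.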
## 7 Conclusions and Outlook
The interpretation of cold dark matter as the sum of contributions of
different mass PBH families [1] could explain a number of so far unsolved
mysteries, like e.g. the massive seed black holes required to create the
supermassive black holes in the earliest QSOs [13], the ubiquitous massive
LIGO/VIRGO binary black holes [e.g. 6], or even the putative "Planet
X" PBH in our own Solar System [14]. The most abundant family of PBH should be
around the Chandrasekhar mass (1.4 M⊙). This prediction may already have been
vindicated by the recent OGLE/GAIA discovery of a sizeable population of
putative black holes in the mass range 1-10 M⊙ [15]. Here I estimate the
contribution of baryon accretion onto the overall PBH population to various
cosmic background radiations, concentrating first on the cross-correlation
signal between the CXB and the CIB fluctuations discovered in deep Chandra and
Spitzer surveys [23]. Assuming Bondi capture and advection-dominated disk
accretion with reasonable parameters like baryon density and the effective
relative velocity between baryons and PBH over cosmic time, as well as
appropriate accretion and radiation efficiencies, I indeed predict a
contribution of PBH consistent with the residual X–ray fluctuation signal.
This signal peaks at redshifts z$\approx$17–30. The PBH contribution to the
2–5 $\mu$m CIB fluctuations, however, is only about 1%, so that these would
have to come from star formation processes associated with the PBH.
I discuss a number of other phenomena, which could be significantly affected
by the PBH accretion. Magnetic fields are an essential ingredient in the Bondi
accretion process, and I argue that the PBH can play an important role in
amplifying magnetic seed fields in the early universe and maintaining them
until the galactic dynamo processes set in. Next I study the contribution of
the assumed PBH population to the re-ionization history of the universe and
find that they do not conflict with the stringent ionization limits set by the
most recent Planck measurements [35]. X–ray heating from the PBH population
can provide a contribution to the entropy floor observed in groups of galaxies
[83]. The tantalizing redshifted 21-cm absorption line feature observed by
EDGES [85] could well be connected to the radio emission contributed by PBH to
the cosmic background radiation. Finally, the PBH dark matter does not violate
the constraints from the number of IMBH and the diffuse X–ray emission in the
Galactic Center region; on the contrary, some of the discrete sources in the
resolved GRXE could be accreting PBH.
It is obvious that our simple PBH toy model for the dark matter requires
significantly more work to turn it into quantitative predictions. Real
magnetohydrodynamic simulations of the whole PBH mass spectrum including their
own hierarchical clustering would be required to obtain the full history of
their contribution to the cosmic backgrounds. The exciting EDGES discovery
definitely requires a full-blown analysis of the radio contribution of PBH to
the cosmic background. Future X–ray observations with eROSITA and ATHENA,
infrared wide field surveys with Euclid and WFIRST, and microlensing
observations with WFIRST will provide important additional diagnostics in this
exciting and dramatically developing PBH field (see [109, 110]).
## Acknowledgments
I am thankful to Juan García-Bellido for sharing a digital copy of the new
running spectral index PBH mass distribution model in figure 1 in advance of
publication, as well as many very useful discussions about PBH. I am indebted
to Matthias Bartelmann for computing the small-scale non-linear relative
velocity dispersion (figure 2 right) and providing very valuable comments and
corrections to the manuscript. I would like to thank Sergey Karpov for very
helpful discussions and inputs about their spherical accretion model. I would
also like to thank my colleagues Nico Cappelluti, Sasha Kashlinsky and
Alexander Knebe for very helpful discussions and contributions. Finally, I
thank an anonymous referee for pointing out a substantial flaw in the first
version of the paper, which has been corrected here and led to significant
improvements. Throughout this work I made use of Ned Wright’s cosmology
calculator [111] and the NASA Astrophysics Data System (ADS), operated by the
Smithsonian Astrophysical Observatory under NASA Cooperative Agreement
NNX16AC86A.
## References
* [1] J. García-Bellido, _Primordial black holes and the origin of the matter–antimatter asymmetry_ , _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_ 377 (2019) 91.
* [2] S. Hawking, _Gravitationally collapsed objects of very low mass_ , _Monthly Notices of the RAS_ 152 (1971) 75.
* [3] B. P. Abbott, R. Abbott, T. D. Abbott, M. R. Abernathy, F. Acernese, K. Ackley et al., _Binary Black Hole Mergers in the First Advanced LIGO Observing Run_ , _Physical Review X_ 6 (2016) 041015 [1606.04856].
* [4] B. P. Abbott, R. Abbott, T. D. Abbott, S. Abraham, F. Acernese, K. Ackley et al., _Binary Black Hole Population Properties Inferred from the First and Second Observing Runs of Advanced LIGO and Advanced Virgo_ , _Astrophysical Journal, Letters_ 882 (2019) L24 [1811.12940].
* [5] S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D. Kovetz et al., _Did LIGO Detect Dark Matter?_ , _Physical Review Letters_ 116 (2016) 201301 [1603.00464].
* [6] A. Kashlinsky, _LIGO Gravitational Wave Detection, Primordial Black Holes, and the Near-IR Cosmic Infrared Background Anisotropies_ , _Astrophysical Journal, Letters_ 823 (2016) L25 [1605.04023].
* [7] S. Clesse and J. García-Bellido, _The clustering of massive Primordial Black Holes as Dark Matter: Measuring their mass distribution with advanced LIGO_ , _Physics of the Dark Universe_ 15 (2017) 142 [1603.05234].
* [8] O. Mena, S. Palomares-Ruiz, P. Villanueva-Domingo and S. J. Witte, _Constraining the primordial black hole abundance with 21-cm cosmology_ , _Physical Review D_ 100 (2019) 043540 [1906.07735].
* [9] J. García-Bellido, B. Carr and S. Clesse, _A common origin for baryons and dark matter_ , _arXiv e-prints_ (2019) arXiv:1904.11482 [1904.11482].
* [10] C. T. Byrnes, M. Hindmarsh, S. Young and M. R. S. Hawkins, _Primordial black holes with an accurate QCD equation of state_ , _Journal of Cosmology and Astroparticle Physics_ 2018 (2018) 041 [1801.06138].
* [11] J. García-Bellido, _Massive Primordial Black Holes as Dark Matter and their detection with Gravitational Waves_ , in _Journal of Physics Conference Series_ , vol. 840 of _Journal of Physics Conference Series_ , p. 012032, May, 2017, DOI [1702.08275].
* [12] K. M. Belotsky, V. I. Dokuchaev, Y. N. Eroshenko, E. A. Esipova, M. Y. Khlopov, L. A. Khromykh et al., _Clusters of Primordial Black Holes_ , _European Physical Journal C_ 79 (2019) 246 [1807.06590].
* [13] Y. Li, L. Hernquist, B. Robertson, T. J. Cox, P. F. Hopkins, V. Springel et al., _Formation of z ~6 Quasars from Hierarchical Galaxy Mergers_, _Astrophysical Journal_ 665 (2007) 187 [astro-ph/0608190].
* [14] J. Scholtz and J. Unwin, _What if Planet 9 is a Primordial Black Hole?_ , _arXiv e-prints_ (2019) arXiv:1909.11090 [1909.11090].
* [15] Ł. Wyrzykowski and I. Mandel, _Constraining the masses of microlensing black holes and the mass gap with Gaia DR2_ , _arXiv e-prints_ (2019) arXiv:1904.07789 [1904.07789].
* [16] A. Kashlinsky, R. G. Arendt, F. Atrio-Barandela, N. Cappelluti, A. Ferrara and G. Hasinger, _Looking at cosmic near-infrared background radiation anisotropies_ , _Reviews of Modern Physics_ 90 (2018) 025006 [1802.07774].
* [17] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _Tracing the first stars with fluctuations of the cosmic infrared background_ , _Nature_ 438 (2005) 45 [astro-ph/0511105].
* [18] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _New Measurements of Cosmic Infrared Background Fluctuations from Early Epochs_ , _Astrophysical Journal, Letters_ 654 (2007) L5.
* [19] R. G. Arendt, A. Kashlinsky, S. H. Moseley and J. Mather, _Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis_ , _Astrophysical Journal, Supplement_ 186 (2010) 10 [0909.3816].
* [20] A. Kashlinsky, R. G. Arendt, M. L. N. Ashby, G. G. Fazio, J. Mather and S. H. Moseley, _New Measurements of the Cosmic Infrared Background Fluctuations in Deep Spitzer/IRAC Survey Data and Their Cosmological Implications_ , _Astrophysical Journal_ 753 (2012) 63 [1201.5617].
* [21] T. Matsumoto, H. J. Seo, W. S. Jeong, H. M. Lee, S. Matsuura, H. Matsuhara et al., _AKARI Observation of the Fluctuation of the Near-infrared Background_ , _Astrophysical Journal_ 742 (2011) 124 [1010.0491].
* [22] K. Helgason, M. Ricotti and A. Kashlinsky, _Reconstructing the Near-infrared Background Fluctuations from Known Galaxy Populations Using Multiband Measurements of Luminosity Functions_ , _Astrophysical Journal_ 752 (2012) 113 [1201.4398].
* [23] N. Cappelluti, A. Kashlinsky, R. G. Arendt, A. Comastri, G. G. Fazio, A. Finoguenov et al., _Cross-correlating Cosmic Infrared and X-Ray Background Fluctuations: Evidence of Significant Black Hole Populations among the CIB Sources_ , _Astrophysical Journal_ 769 (2013) 68 [1210.5302].
* [24] N. Cappelluti, R. Arendt, A. Kashlinsky, Y. Li, G. Hasinger, K. Helgason et al., _Probing Large-scale Coherence between Spitzer IR and Chandra X-Ray Source-subtracted Cosmic Backgrounds_ , _Astrophysical Journal, Letters_ 847 (2017) L11 [1709.02824].
* [25] Y. Li, N. Cappelluti, R. G. Arendt, G. Hasinger, A. Kashlinsky and K. Helgason, _The SPLASH and Chandra COSMOS Legacy Survey: The Cross-power between Near-infrared and X-Ray Background Fluctuations_ , _Astrophysical Journal_ 864 (2018) 141 [1807.10304].
* [26] Y. Li, N. Cappelluti, G. Hasinger, R. G. Arendt, A. Kashlinsky and F. Pacucci, _Spectral Properties of Populations Behind the Coherence in Spitzer Near-infrared and Chandra X-Ray Backgrounds_ , _Astrophysical Journal_ 883 (2019) 64 [1908.02293].
* [27] K. Helgason, N. Cappelluti, G. Hasinger, A. Kashlinsky and M. Ricotti, _The Contribution of z <~6 Sources to the Spatial Coherence in the Unresolved Cosmic Near-infrared and X-Ray Backgrounds_, _Astrophysical Journal_ 785 (2014) 38 [1311.1254].
* [28] B. Yue, A. Ferrara, R. Salvaterra, Y. Xu and X. Chen, _Infrared background signatures of the first black holes_ , _Monthly Notices of the RAS_ 433 (2013) 1556 [1305.5177].
* [29] A. Ricarte, F. Pacucci, N. Cappelluti and P. Natarajan, _The clustering of undetected high-redshift black holes and their signatures in cosmic backgrounds_ , _Monthly Notices of the RAS_ 489 (2019) 1006 [1907.03675].
* [30] W. N. Brandt and G. Hasinger, _Deep Extragalactic X-Ray Surveys_ , _Annual Review of Astron and Astrophys_ 43 (2005) 827 [astro-ph/0501058].
* [31] R. C. Hickox and M. Markevitch, _Absolute Measurement of the Unresolved Cosmic X-Ray Background in the 0.5-8 keV Band with Chandra_ , _Astrophysical Journal_ 645 (2006) 95 [astro-ph/0512542].
* [32] R. C. Hickox and M. Markevitch, _Resolving the Unresolved Cosmic X-Ray Background in the Chandra Deep Fields_ , _Astrophysical Journal, Letters_ 661 (2007) L117 [astro-ph/0702556].
* [33] L. L. Cowie, A. J. Barger and G. Hasinger, _The Faintest X-Ray Sources from z = 0 TO 8_ , _Astrophysical Journal_ 748 (2012) 50 [1110.3326].
* [34] N. Cappelluti, Y. Li, A. Ricarte, B. Agarwal, V. Allevato, T. Tasnim Ananna et al., _The Chandra COSMOS Legacy Survey: Energy Spectrum of the Cosmic X-Ray Background and Constraints on Undetected Populations_ , _Astrophysical Journal_ 837 (2017) 19 [1702.01660].
* [35] Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi et al., _Planck 2018 results. VI. Cosmological parameters_ , _arXiv e-prints_ (2018) arXiv:1807.06209 [1807.06209].
* [36] B. Carr, S. Clesse, J. Garcia-Bellido and F. Kuhnel, _Cosmic Conundra Explained by Thermal History and Primordial Black Holes_ , _arXiv e-prints_ (2019) arXiv:1906.08217 [1906.08217].
* [37] V. Poulin, P. D. Serpico, F. Calore, S. Clesse and K. Kohri, _CMB bounds on disk-accreting massive primordial black holes_ , _Physical Review D_ 96 (2017) 083524 [1707.04206].
* [38] Y. Ali-Haïmoud and M. Kamionkowski, _Cosmic microwave background limits on accreting primordial black holes_ , _Physical Review D_ 95 (2017) 043534 [1612.05644].
* [39] M. Zumalacárregui and U. Seljak, _Limits on Stellar-Mass Compact Objects as Dark Matter from Gravitational Lensing of Type Ia Supernovae_ , _Physical Review Letters_ 121 (2018) 141101 [1712.02240].
* [40] Y. Ali-Haïmoud, E. D. Kovetz and M. Kamionkowski, _Merger rate of primordial black-hole binaries_ , _Physical Review D_ 96 (2017) 123523 [1709.06576].
* [41] F. Shankar, _Black hole demography: from scaling relations to models_ , _Classical and Quantum Gravity_ 30 (2013) 244001 [1307.3289].
* [42] P. Tisserand, L. Le Guillou, C. Afonso, J. N. Albert, J. Andersen, R. Ansari et al., _Limits on the Macho content of the Galactic Halo from the EROS-2 Survey of the Magellanic Clouds_ , _Astronomy and Astrophysics_ 469 (2007) 387 [astro-ph/0607207].
* [43] H. Niikura, M. Takada, N. Yasuda, R. H. Lupton, T. Sumi, S. More et al., _Microlensing constraints on primordial black holes with Subaru/HSC Andromeda observations_ , _Nature Astronomy_ 3 (2019) 524 [1701.02151].
* [44] B. P. Abbott, R. Abbott, T. D. Abbott, S. Abraham, F. Acernese, K. Ackley et al., _Search for Subsolar Mass Ultracompact Binaries in Advanced LIGO’s Second Observing Run_ , _Physical Review Letters_ 123 (2019) 161102.
* [45] J. Manshanden, D. Gaggero, G. Bertone, R. M. T. Connors and M. Ricotti, _Multi-wavelength astronomical searches for primordial black holes_ , _Journal of Cosmology and Astroparticle Physics_ 2019 (2019) 026 [1812.07967].
* [46] A. Comastri, R. Gilli, A. Marconi, G. Risaliti and M. Salvati, _Mass without radiation: Heavily obscured AGNs, the X-ray background, and the black hole mass density_ , _Astronomy and Astrophysics_ 574 (2015) L10 [1501.03620].
* [47] F. Hoyle and R. A. Lyttleton, _The effect of interstellar matter on climatic variation_ , _Proceedings of the Cambridge Philosophical Society_ 35 (1939) 405.
* [48] H. Bondi and F. Hoyle, _On the mechanism of accretion by stars_ , _Monthly Notices of the RAS_ 104 (1944) 273.
* [49] H. Bondi, _On spherically symmetrical accretion_ , _Monthly Notices of the RAS_ 112 (1952) 195.
* [50] R. Edgar, _A review of Bondi-Hoyle-Lyttleton accretion_ , _New Astronomy Review_ 48 (2004) 843 [astro-ph/0406166].
* [51] V. F. Shvartsman, _Halos around “Black Holes”._ , _Soviet Astronomy_ 15 (1971) 377.
* [52] G. M. Beskin and S. V. Karpov, _Low-rate accretion onto isolated stellar-mass black holes_ , _Astronomy and Astrophysics_ 440 (2005) 223 [astro-ph/0403649].
* [53] M. Ricotti, _Bondi Accretion in the Early Universe_ , _Astrophysical Journal_ 662 (2007) 53 [0706.0864].
* [54] M. Ricotti, J. P. Ostriker and K. J. Mack, _Effect of Primordial Black Holes on the Cosmic Microwave Background and Cosmological Parameter Estimates_ , _Astrophysical Journal_ 680 (2008) 829 [0709.0524].
* [55] F. Yuan and R. Narayan, _Hot Accretion Flows Around Black Holes_ , _Annual Review of Astron and Astrophys_ 52 (2014) 529 [1401.0586].
* [56] F.-G. Xie and F. Yuan, _Radiative efficiency of hot accretion flows_ , _Monthly Notices of the RAS_ 427 (2012) 1580 [1207.3113].
* [57] S. Zaroubi, _The Epoch of Reionization_ , vol. 396 of _Astrophysics and Space Science Library_ , p. 45. 2013. DOI 10.1007/978-3-642-32362-1.
* [58] R. Cen and J. P. Ostriker, _Where Are the Baryons?_ , _Astrophysical Journal_ 514 (1999) 1 [astro-ph/9806281].
* [59] D. Tseliakhovich and C. Hirata, _Relative velocity of dark matter and baryonic fluids and the formation of the first structures_ , _Physical Review D_ 82 (2010) 083520 [1005.2416].
* [60] R. A. Sunyaev and Y. B. Zeldovich, _Small-Scale Fluctuations of Relic Radiation_ , _Astrophysics and Space Science_ 7 (1970) 3.
* [61] P. J. E. Peebles and J. T. Yu, _Primeval Adiabatic Perturbation in an Expanding Universe_ , _Astrophysical Journal_ 162 (1970) 815.
* [62] A. Fialkov, _Supersonic relative velocity between dark matter and baryons: A review_ , _International Journal of Modern Physics D_ 23 (2014) 1430017 [1407.2274].
* [63] V. Springel, S. D. M. White, A. Jenkins, C. S. Frenk, N. Yoshida, L. Gao et al., _Simulations of the formation, evolution and clustering of galaxies and quasars_ , _Nature_ 435 (2005) 629 [astro-ph/0504097].
* [64] W. A. Watson, I. T. Iliev, A. D’Aloisio, A. Knebe, P. R. Shapiro and G. Yepes, _The halo mass function through the cosmic ages_ , _Monthly Notices of the RAS_ 433 (2013) 1230 [1212.0095].
* [65] E. Munari, A. Biviano, S. Borgani, G. Murante and D. Fabjan, _The relation between velocity dispersion and mass in simulated clusters of galaxies: dependence on the tracer and the baryonic physics_ , _Monthly Notices of the RAS_ 430 (2013) 2638 [1301.1682].
* [66] N. I. Shakura and R. A. Sunyaev, _Reprint of 1973A &A….24..337S. Black holes in binary systems. Observational appearance._, _Astronomy and Astrophysics_ 500 (1973) 33.
* [67] M. R. Santos, V. Bromm and M. Kamionkowski, _The contribution of the first stars to the cosmic infrared background_ , _Monthly Notices of the RAS_ 336 (2002) 1082 [astro-ph/0111467].
* [68] A. Kashlinsky, R. G. Arendt, J. Mather and S. H. Moseley, _On the Nature of the Sources of the Cosmic Infrared Background_ , _Astrophysical Journal, Letters_ 654 (2007) L1 [astro-ph/0612447].
* [69] B. Carr and J. Silk, _Primordial black holes as generators of cosmic structures_ , _Monthly Notices of the RAS_ 478 (2018) 3756 [1801.00672].
* [70] D. Inman and Y. Ali-Haïmoud, _Early structure formation in primordial black hole cosmologies_ , _Physical Review D_ 100 (2019) 083528 [1907.08129].
* [71] G. Hütsi, M. Raidal and H. Veermäe, _Small-scale structure of primordial black hole dark matter and its implications for accretion_ , _Physical Review D_ 100 (2019) 083016 [1907.06533].
* [72] D. Grasso and H. R. Rubinstein, _Magnetic fields in the early Universe_ , _Physics Reports_ 348 (2001) 163 [astro-ph/0009061].
* [73] K. Subramanian, _The origin, evolution and signatures of primordial magnetic fields_ , _Reports on Progress in Physics_ 79 (2016) 076901 [1504.02311].
* [74] K. Takahashi, M. Mori, K. Ichiki, S. Inoue and H. Takami, _Lower Bounds on Magnetic Fields in Intergalactic Voids from Long-term GeV-TeV Light Curves of the Blazar Mrk 421_ , _Astrophysical Journal, Letters_ 771 (2013) L42 [1303.3069].
* [75] K. Subramanian, _Magnetic Fields in the Universe_ , _arXiv e-prints_ (2018) arXiv:1809.03543 [1809.03543].
* [76] M. Bruscoli, A. Ferrara and E. Scannapieco, _How is the reionization epoch defined?_ , _Monthly Notices of the RAS_ 330 (2002) L43 [astro-ph/0201094].
* [77] M. Ricotti and J. P. Ostriker, _Reionization, chemical enrichment and seed black holes from the first stars: is Population III important?_ , _Monthly Notices of the RAS_ 350 (2004) 539 [astro-ph/0310331].
* [78] C. Heinrich and W. Hu, _Does Planck 2015 polarization data favor high redshift reionization?_ , _Physical Review D_ 98 (2018) 063514 [1802.00791].
* [79] A. Mesinger, A. Ferrara and D. S. Spiegel, _Signatures of X-rays in the early Universe_ , _Monthly Notices of the RAS_ 431 (2013) 621 [1210.7319].
* [80] S. Dos Santos and O. Doré, _Competition between shocks and entropy floor: Unifying groups and clusters of galaxies_ , _Astronomy and Astrophysics_ 383 (2002) 450 [astro-ph/0106456].
* [81] N. Kaiser, _Evolution of Clusters of Galaxies_ , _Astrophysical Journal_ 383 (1991) 104.
* [82] A. E. Evrard and J. P. Henry, _Expectations for X-Ray Cluster Observations by the ROSAT Satellite_ , _Astrophysical Journal_ 383 (1991) 95.
* [83] T. J. Ponman, D. B. Cannon and J. F. Navarro, _The thermal imprint of galaxy formation on X-ray clusters_ , _Nature_ 397 (1999) 135 [astro-ph/9810359].
* [84] A. Fialkov and A. Loeb, _Precise Measurement of the Reionization Optical Depth from the Global 21 cm Signal Accounting for Cosmic Heating_ , _Astrophysical Journal_ 821 (2016) 59 [1601.03058].
* [85] J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen and N. Mahesh, _An absorption profile centred at 78 megahertz in the sky-averaged spectrum_ , _Nature_ 555 (2018) 67 [1810.05912].
* [86] A. Fialkov, R. Barkana and A. Cohen, _Constraining Baryon-Dark-Matter Scattering with the Cosmic Dawn 21-cm Signal_ , _Physical Review Letters_ 121 (2018) 011101 [1802.10577].
* [87] A. Ewall-Wice, T. C. Chang, J. Lazio, O. Doré, M. Seiffert and R. A. Monsalve, _Modeling the Radio Background from the First Black Holes at Cosmic Dawn: Implications for the 21 cm Absorption Amplitude_ , _Astrophysical Journal_ 868 (2018) 63 [1803.01815].
* [88] D. J. Fixsen, A. Kogut, S. Levin, M. Limon, P. Lubin, P. Mirel et al., _ARCADE 2 Measurement of the Absolute Sky Brightness at 3-90 GHz_ , _Astrophysical Journal_ 734 (2011) 5 [0901.0555].
* [89] J. Singal, J. Haider, M. Ajello, D. R. Ballantyne, E. Bunn, J. Condon et al., _The Radio Synchrotron Background: Conference Summary and Report_ , _Publications of the ASP_ 130 (2018) 036001 [1711.09979].
* [90] C. Feng and G. Holder, _Enhanced Global Signal of Neutral Hydrogen Due to Excess Radiation at Cosmic Dawn_ , _Astrophysical Journal, Letters_ 858 (2018) L17 [1802.07432].
* [91] A. Merloni, S. Heinz and T. di Matteo, _A Fundamental Plane of black hole activity_ , _Monthly Notices of the RAS_ 345 (2003) 1057 [astro-ph/0305261].
* [92] R. Wang, X.-B. Wu and M.-Z. Kong, _The Black Hole Fundamental Plane from a Uniform Sample of Radio and X-Ray-emitting Broad-Line AGNs_ , _Astrophysical Journal_ 645 (2006) 890 [astro-ph/0603514].
* [93] R. Genzel, F. Eisenhauer and S. Gillessen, _The Galactic Center massive black hole and nuclear star cluster_ , _Reviews of Modern Physics_ 82 (2010) 3121 [1006.0064].
* [94] M. Portail, C. Wegg, O. Gerhard and I. Martinez-Valpuesta, _Made-to-measure models of the Galactic box/peanut bulge: stellar and total mass in the bulge region_ , _Monthly Notices of the RAS_ 448 (2015) 713 [1502.00633].
* [95] S. Takekawa, T. Oka, Y. Iwata, S. Tsujimoto and M. Nomura, _The Fifth Candidate for an Intermediate-mass Black Hole in the Galactic Center_ , _Astrophysical Journal_ 890 (2020) 167 [2002.05173].
* [96] S. Takekawa, T. Oka, Y. Iwata, S. Tsujimoto and M. Nomura, _Indication of Another Intermediate-mass Black Hole in the Galactic Center_ , _Astrophysical Journal, Letters_ 871 (2019) L1 [1812.10733].
* [97] R. Schödel, A. Eckart, C. Iserlohe, R. Genzel and T. Ott, _A Black Hole in the Galactic Center Complex IRS 13E?_ , _Astrophysical Journal Letters_ 625 (2005) L111 [astro-ph/0504474].
* [98] M. Tsuboi, Y. Kitamura, T. Tsutsumi, R. Miyawaki, M. Miyoshi and A. Miyazaki, _Rotating ionized gas ring around the Galactic center IRS13E3_ , _Publications of the Astronomical Society Japan_ 71 (2019) 105 [1907.12311].
* [99] M. P. Muno, F. E. Bauer, F. K. Baganoff, R. M. Bandyopadhyay, G. C. Bower, W. N. Brandt et al., _A Catalog of X-Ray Point Sources from Two Megaseconds of Chandra Observations of the Galactic Center_ , _Astrophysical Journal, Supplement_ 181 (2009) 110 [0809.1105].
* [100] J. Cuadra, S. Nayakshin and Q. D. Wang, _The role of feedback in accretion on low-luminosity AGN: Sgr A* case study_ , _Monthly Notices of the RAS_ 450 (2015) 277 [1503.02745].
* [101] A. Hektor, G. Hütsi and M. Raidal, _Constraints on primordial black hole dark matter from Galactic center X-ray observations_ , _Astronomy and Astrophysics_ 618 (2018) A139 [1805.06513].
* [102] M. Revnivtsev, S. Sazonov, E. Churazov, W. Forman, A. Vikhlinin and R. Sunyaev, _Discrete sources as the origin of the Galactic X-ray ridge emission_ , _Nature_ 458 (2009) 1142 [0904.4649].
* [103] K. Ferrière, W. Gillard and P. Jean, _Spatial distribution of interstellar gas in the innermost 3 kpc of our galaxy_ , _Astronomy and Astrophysics_ 467 (2007) 611 [astro-ph/0702532].
* [104] E. Valenti, M. Zoccali, A. Mucciarelli, O. A. Gonzalez, F. Surot, D. Minniti et al., _The central velocity dispersion of the Milky Way bulge_ , _Astronomy and Astrophysics_ 616 (2018) A83 [1805.00275].
* [105] P. M. W. Kalberla, J. Kerp and U. Haud, _The Velocity Dispersion of Galactic Dark Matter_ , vol. 276 of _Astronomical Society of the Pacific Conference Series_ , p. 453. 2002.
* [106] K. Nandra, D. Barret, X. Barcons, A. Fabian, J.-W. den Herder, L. Piro et al., _The Hot and Energetic Universe: A White Paper presenting the science theme motivating the Athena+ mission_ , _arXiv e-prints_ (2013) arXiv:1306.2307 [1306.2307].
* [107] D. A. Schwartz, A. Vikhlinin, H. Tananbaum, M. Freeman, G. Tremblay, E. D. Schwartz et al., _The Lynx X-ray Observatory: revealing the invisible universe_ , in _Proc. SPIE_ , vol. 11118 of _Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series_ , p. 111180K, Sept., 2019, DOI.
* [108] P. B. Ivanov, V. N. Lukash, S. V. Pilipenko and M. S. Pshirkov, _Search for isolated Galactic Centre stellar mass black holes in the IR and sub-mm range_ , _Monthly Notices of the RAS_ 489 (2019) 2038 [1905.04923].
* [109] A. Kashlinsky, Y. Ali-Haïmoud, S. Clesse, J. Garcia-Bellido, L. Amendola, L. Wyrzykowski et al., _Electromagnetic probes of primordial black holes as dark matter_ , _Bulletin of the AAS_ 51 (2019) 51 [1903.04424].
* [110] A. Kashlinsky, R. G. Arendt, N. Cappelluti, A. Finoguenov, G. Hasinger, K. Helgason et al., _Probing the Cross-power of Unresolved Cosmic Infrared and X-Ray Backgrounds with Upcoming Space Missions_ , _Astrophysical Journal, Letters_ 871 (2019) L6 [1812.01535].
* [111] E. L. Wright, _A Cosmology Calculator for the World Wide Web_ , _Publications of the ASP_ 118 (2006) 1711 [astro-ph/0609593].
Department of Future Technologies, University of Turku,
Turku, Finland
# Predicting the Amount of GDPR Fines
Jukka Ruohonen Kalle Hjerppe
{juanruo<EMAIL_ADDRESS>
###### Abstract
The General Data Protection Regulation (GDPR) was enforced in 2018. After this
enforcement, many fines have already been imposed by national data protection
authorities in the European Union (EU). This paper examines the individual
GDPR articles referenced in the enforcement decisions, as well as predicts the
amount of enforcement fines with available meta-data and text mining features
extracted from the enforcement decision documents. According to the results,
articles related to the general principles, lawfulness, and information
security have been the most frequently referenced ones. Although the amount of
fines imposed vary across the articles referenced, these three particular
articles do not stand out. Furthermore, good predictions are attainable even
with simple machine learning techniques for regression analysis. Basic meta-
data (such as the articles referenced and the country of origin) yields
slightly better performance compared to the text mining features.
###### Keywords:
Text mining · Legal mining · Data protection · Law enforcement
## 1 Introduction
Data protection has a long history in the EU. In particular, the GDPR repealed
the earlier Directive 95/46/EC. Although this directive laid down much of the
legal groundwork for EU-wide data protection and privacy, its national
adaptations, legal interpretations, and enforcement varied both across the
member states and different EU institutions [10]. In short: it was a paper
tiger. In contrast, Regulation (EU) 2016/679, the GDPR, is a regulation; it is
binding throughout the EU with only a minimal space for national adaptations.
In practice, only a few Articles (A) in the GDPR provide some but limited room
for national maneuvering; these include A6 with respect to relaxation in terms
of other legal obligations or public interests, A9 in terms of sensitive data,
and A10 regarding criminal matters. Thus, in general, this particular
legislation should be interpreted and enforced uniformly through the European
Union by national data protection authorities whose formal powers are defined
in A58. In practice, however, already the resources and thus the actual power
for enforcement vary across the member states [1, 7]. Coupled with a lack of
previous research on the enforcement of the GDPR, this variance provides a
motivation for the present work to examine the recent enforcement fines
imposed according to the conditions specified in A83. In addition, the work is
motivated by a tangential question; is it also possible to predict these fines
by machine learning methods?
To answer the question, the paper uses meta-data and text mining features
extracted from the decision documents released by the national authorities. As
such, only black-box predictions are sought; the goal is not to make any legal
interpretations whatsoever. Nevertheless, the answer provided still
establishes a solid contribution—especially when considering that the paper is
presumably the very first to even examine the GDPR fines. As is discussed in
Section 2, the black-box approach also places the paper into a specific branch
of existing research dealing with legal documents. This section also refines
the question into two more specific research questions. Afterwards, the
structure is straightforward: the dataset and methods are elaborated in
Sections 3 and 4, results are presented in Section 5, and conclusions follow
in Section 6. As will be noted in the final section, there are also some
lessons that should not be learned from this work.
## 2 Background
Legal mining—in lack of a better term—has emerged in recent years as a
promising but at times highly contested interdisciplinary field that uses
machine learning techniques to analyze various aspects related to law [8].
Although the concrete application domains vary, case law and court cases are
the prime examples already because these constitute the traditional kernel of
legal scholarship. Within this kernel, existing machine learning applications
range from the classification of judges’ ideological positions [12], which may
be illegal in some European countries [3], to the prediction of decisions of
the European Court of Human Rights [16, 17]. These examples convey the
traditional functions of applied machine learning; exploratory data mining and
the prediction of the future.
There is also another closely related application domain. Again in lack of a
better term, data extraction could be a label for this domain: by exploiting
the nature of law as an art of persuasion [8], the domain uses distinct
information retrieval techniques to extract and quantify textual data from
legal documents into structured collections with a predefined logic and
semantics [2, 24, 28]. To gain a hint about the extraction, one might consider
a legal document to contain some facts, rights, obligations, and prohibitions,
statements and modalities about these, and so forth. Although the two
application domains are complementary in many respects, the underlying
rationales exhibit some notable differences.
Oftentimes, the legal mining domain is motivated by a traditional rationale
for empirical social science research: to better understand trends and
patterns in lawmaking and law enforcement; to contrast these with legal
philosophies and theories; and so forth. This rationale extends to public
administration: machine learning may ease the systematic archiving of legal
documents and the finding of relevant documents, and, therefore, it may also
reduce administrative costs [4]. These administrative aspects reflect the goal
of building “systems that assist in decision-making”, whereas the predictive
legal mining applications seek to build “systems that make decision” [21].
Although the data extraction domain can be motivated by the same
administrative rationale, providing data to predictive systems is seldom the
intention behind the extraction. Instead, there is a further rationale in this
domain: to extract requirements for software and systems in order to comply
with the laws from which a given extraction is done [24]. Driven by the
genuine interest to facilitate collaboration between lawyers and engineers in
order to build law-compliant software and systems [26], this rationale has
been particularly prevalent in the contexts of data protection and privacy.
For instance, previous work has been done to extract requirements from the
Health Insurance Portability and Accountability Act in the United States [2].
Against this backdrop, it is no real surprise that data extraction has been
applied also for laws enacted in the EU. While there is previous work for
identifying requirements from the GDPR manually [13], there indeed exists also
more systematic data extraction approaches [25]. However, neither domain has
addressed the enforcement of this EU-wide regulation. In fact, a reasonably
comprehensive literature search indicates no previous empirical research on
the GDPR’s enforcement. Given this pronounced gap in the existing literature,
this paper sets to examine the following two Questions (Q) regarding the
enforcement fines:
$\textmd{Q}_{1}$: (i) Which GDPR articles have been most often referenced in
the recent enforcement cases, and (ii) do the enforcement fines vary across
these articles?
$\textmd{Q}_{2}$: How well the recent GDPR fines can be predicted in terms of
basic available (i) meta-data and (ii) textual traits derived from the
enforcement decisions?
These two questions place the present work into the legal mining domain. Also
the underlying rationales are transferable. For instance, an answer to
$\textmd{Q}_{1}$ helps to understand which aspects of the GDPR have been
actively enforced during the early roll out of the regulation. Also
$\textmd{Q}_{2}$ carries a practical motivation: by knowing whether the
penalties are predictable by machine learning techniques, a starting point is
available for providing further insights in different practical scenarios.
These scenarios range from the automated archival of enforcement decisions and
the designation of preventive measures to litigation preparations. However, it
is important to remark that the GDPR’s enforcement is done by national data
protection authorities. Although the focus on public administration is
maintained nevertheless, documents about the enforcement decisions reached by
these authorities should not be strictly equated to law-like legal documents.
This point provides an impetus to move forward by elaborating the dataset
used.
## 3 Data
The dataset is based on a GDPR enforcement tracker that archives the fines and
penalties imposed by the European data protection authorities [5]. This
tracker is maintained by an international law firm for archiving many of the
known enforcement cases. Each case is accompanied by meta-data supplied by the
firm as well as a link to the corresponding decision from a national
authority. In addition to potentially missing cases due to the lack of
publicly available information, the archival material is unfortunately
incomplete in many respects. The reason originates from the incoherent
reporting practices of the European data protection authorities. Therefore,
all cases were obtained from the tracker, but the following four steps were
followed to construct a sample for the analysis:
1. 1.
To maintain coherence between $\textmd{Q}_{1}$ and $\textmd{Q}_{2}$, only
those cases were included that had both meta-data and links to the decisions
available. In terms of the former, some cases lacked meta-data about the fines
imposed, the particular GDPR articles referenced in the decisions, and even
links to the decisions.
2. 2.
To increase the quality of the sample, only those cases were included that
were accompanied with more or less formal documents supplied on the official
websites of the data protection authorities. Thus, those cases are excluded
whose archival material is based on online media articles, excerpts collected
from annual reports released by the authorities, and related informal sources.
3. 3.
If two or more cases were referenced with the same decision, only one decision
document was included but the associated meta-data was unified into a single
case by merging the articles references and totaling the fines imposed.
4. 4.
All national decisions written in languages other than English were translated
to English with Google Translate. In general, such machine translation is
necessary due to the EU-wide focus of the forthcoming empirical analysis.
Given these restrictions, the sample amounts to about 72% of all cases
archived to the tracker at the time of data collection. Even with these
precautions, it should be stressed that the quality of the sample is hardly
optimal. While the accuracy of the meta-data supplied by the firm is taken for
granted, there are also some issues with the quality of the publicly available
decisions. The authorities in some countries (e.g., Hungary and Spain) have
released highly detailed and rigorous documents about their decisions, while
some other authorities (e.g., in Germany) have opted for short press releases.
Although most of the documents were supplied in the portable document format
(PDF) and informally signed by the authorities, it should be thus stressed
that the data quality is not consistent across the European countries
observed. In addition, it is worth remarking the detail that scanned PDF
documents (as used, e.g., in Portugal) had to be excluded due to the automatic
data processing. While these data quality issues underline the paper’s
exploratory approach, these carry also political and administrative
ramifications that are briefly discussed later on in Section 6.
## 4 Methods
Descriptive statistics and regression analysis are used for answering the
two questions asked. In terms of Question $\textmd{Q}_{1}$, dummy variables
for the GDPR articles referenced are simply regressed against the logarithm of
the fines imposed by using the conventional analysis-of-variance (ANOVA). As
many of the cases reference multiple articles, it should be remarked that
these dummy variables are not so-called fixed effects. The methods for
answering the second Question $\textmd{Q}_{2}$ require a more thorough
elaboration. In addition to (i) the GDPR articles, the meta-data aspects
include dummy variables for the following features: (ii) the year of a given
enforcement case; (iii) the country in which the given fine was imposed; and
(iv) the sector of the violating organization. The last feature was
constructed manually by using five categories: individuals, public sector
(including associations), telecommunications, private sector (excluding
telecommunications), and unknown sector due to the lack of meta-data supplied
in the enforcement tracker. In total, these features amount to $49$ dummy
variables.
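For concreteness, a minimal Python sketch of the log-linear dummy-variable model used for $\textmd{Q}_{1}$ is given below, assuming pandas and statsmodels; column names such as 'fine' and 'A5' are illustrative, not those of the actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_anova(cases: pd.DataFrame):
    """cases: one row per enforcement case, with a numeric 'fine' column and
    0/1 columns 'A5', 'A6', ... marking the articles referenced (several can
    be 1 at once, which is why these dummies are not fixed effects)."""
    dummies = cases.filter(regex=r"^A\d+$").astype(float)
    X = sm.add_constant(dummies)       # intercept plus article dummies
    y = np.log(cases["fine"])          # logarithm of the fine imposed
    return sm.OLS(y, X).fit()          # the log-linear ANOVA model
```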
The textual aspects for $\textmd{Q}_{2}$ are derived from the translated
decisions. Seven steps were used for pre-processing: (a) all translated
decision documents were lower-cased and (b) tokenized according to white space
and punctuation characters; (c) only alphabetical tokens recognized as English
words were included; (d) common and custom stopwords were excluded; (e) tokens
with lengths less than three characters or more than twenty characters were
excluded; (f) all tokens were lemmatized into their common English dictionary
forms; and, finally, (g) those lemmatized tokens were excluded that occurred
in the whole decision corpus fewer than three times. A common natural
language processing library [22] was used for this processing together with a
common English dictionary [20]. In addition to the stopwords supplied in the
library, the twelve most frequent tokens were used as custom excluded
stopwords: data, article, personal, protection, processing, company,
authority, regulation, information, case, art, and page. After this pre-
processing, the token-based term frequency (TF) and term frequency inverse
document frequency (TF-IDF) were calculated from the whole corpus constructed
(for the exact formulas used see, e.g., [23]). These common information
retrieval statistics are used for evaluating the other part in
$\textmd{Q}_{2}$. In general, TF-IDF is often preferred as it penalizes
frequently occurring terms.
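The following is a minimal Python sketch of these seven steps and the TF-IDF features, assuming NLTK (with its punkt, words, stopwords, and wordnet resources installed) and scikit-learn as stand-ins for the unnamed libraries; details such as the exact tokenizer may therefore differ from the pipeline actually used.

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords, words
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

CUSTOM = {"data", "article", "personal", "protection", "processing", "company",
          "authority", "regulation", "information", "case", "art", "page"}

def preprocess(doc, english, stops, lemmatizer):
    tokens = nltk.word_tokenize(doc.lower())                       # (a), (b)
    tokens = [t for t in tokens if t.isalpha() and t in english]   # (c)
    tokens = [t for t in tokens if t not in stops]                 # (d)
    tokens = [t for t in tokens if 3 <= len(t) <= 20]              # (e)
    return [lemmatizer.lemmatize(t) for t in tokens]               # (f)

def build_features(raw_docs):
    english = {w.lower() for w in words.words()}
    stops = set(stopwords.words("english")) | CUSTOM
    lemmatizer = WordNetLemmatizer()
    docs = [preprocess(d, english, stops, lemmatizer) for d in raw_docs]
    freq = Counter(t for doc in docs for t in doc)
    docs = [[t for t in doc if freq[t] >= 3] for doc in docs]      # (g)
    # Tokens are passed through as-is; CountVectorizer can be used
    # analogously for the plain TF variant.
    tfidf = TfidfVectorizer(analyzer=lambda doc: doc)
    return tfidf.fit_transform(docs), tfidf
```

Passing pre-tokenized documents through a callable analyzer preserves the corpus-level frequency filter of step (g), which a document-frequency threshold such as min_df would not reproduce exactly.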
Sparsity is the biggest issue for prediction. There are only $154$
observations but already the meta-data amounts to $49$ independent
variables—and the TF and TF-IDF each to $4189$ independent variables.
Fortunately, the problem is not uncommon, and well-known solutions exist for
addressing it. Genomics is a good example about the application domains
riddled with the problem; within this domain, it is not uncommon to operate
with datasets containing a few thousand observations and tens of thousands of
predictors [6]. Dimension reduction is the generic solution in this domain and
other domains with similar problems. Thus, three common dimension reduction
methods for regression analysis are used: principal component regression
(PCR), partial least squares (PLS), and ridge regression (for a concise
overview of these methods see, e.g., [11]). In essence, PCR uses uncorrelated
linear combinations as the independent variables; PLS is otherwise similar but
also the dependent variable is used for constructing the combinations. Ridge
regression is based on a different principle: the dimensionality is reduced by
shrinking some of the regression coefficients to zero. In general, all three
methods are known to yield relatively similar results in applied work.
In terms of practical computation, the number of components for the PCR and
PLS models, and the shrinkage parameter for the ridge regression, are optimized
during the training while the results are reported with respect to a test set
containing 20% of the enforcement cases. Centering (but not scaling) is used
prior to the training with a $5$-fold cross-validation. Computation is carried
out with the caret package [14] in conjunction with the pls [18] and foba [30]
packages. Although root-mean-square errors (RMSEs) are used for optimizing the
training, the results are summarized with mean absolute errors (MAEs) due to
their straightforward interpretability. These are defined as the arithmetic
means of the absolute differences between the observed and predicted fines in
the test set.
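Since the original computation is done with R packages, the following scikit-learn sketch is only a Python analogue of the setup just described (80/20 split, 5-fold cross-validation, RMSE-based tuning, MAE reporting); tuning grids and other defaults will differ from those of caret:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

def evaluate(X, y, seed=0):
    """X: feature matrix (meta-data and/or TF/TF-IDF); y: log-scaled fines."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    models = {
        # PCA and PLS center the data by default; no scaling is applied.
        "pcr": (Pipeline([("pca", PCA()), ("ols", LinearRegression())]),
                {"pca__n_components": list(range(1, 11))}),
        "pls": (PLSRegression(scale=False),
                {"n_components": list(range(1, 11))}),
        "ridge": (Ridge(), {"alpha": np.logspace(-3, 3, 13)}),
    }
    maes = {}
    for name, (model, grid) in models.items():
        search = GridSearchCV(model, grid, cv=5,
                              scoring="neg_root_mean_squared_error")
        search.fit(X_tr, y_tr)            # RMSE-optimized, 5-fold CV
        maes[name] = mean_absolute_error(y_te, search.predict(X_te))
    return maes                           # test-set MAEs on the log scale
```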
## 5 Results
The GDPR fines imposed vary greatly. As can be seen from Fig. 1, a range from
about $e^{6}$ euros to $e^{12}$ euros capture the majority of the enforcement
fines observed. This range amounts roughly from about four hundred to $163$
thousand euros. That said, the distribution has a fairly long tail; also a few
large, multi-million euro fines are present in the sample. Therefore, the
sample cannot be considered biased even though the restrictions discussed in
Section 3 exclude some of the largest enforcement cases, including the
announcements about the intention to fine the British Airways and Marriott
International by the Information Commissioner’s Office in the United Kingdom.
Although these two excluded cases are—at least at the time of
writing—preliminary announcements, they are still illuminating in the sense
that both were about large-scale data breaches.
Figure 1: Enforcement Fines in the Sample
However, the GDPR’s corresponding A32 for information security has not been
the most frequently referenced article in the recent enforcement cases.
Instead, A5 and A6, which address the general principles and lawfulness of
personal data processing, have clearly been the most referenced individual
articles, as can be seen from Fig. 2. These two articles account for as much
as 87% of all $252$ references made in the $154$ enforcement cases. More than
six references have been made to A13 (informing obligations to data subjects),
A15 (right to access), A21 (right to object), and A17 (right to erasure).
These references indicate that enforcement has been active also with respect
to the rights granted by the GDPR for individual data subjects. Furthermore,
less frequent references have been made in the decisions to numerous other
articles. These include the obligations to designate data protection officers
(A37), conduct impact assessments (A35), and consult supervisory authorities
(A36), to name three examples. While the principles, lawfulness, and
information security account for the majority, the less frequent but still
visible references to more specific articles hint that the regulation’s whole
scope is slowly being enforced by the European authorities.
Figure 2: Referenced GDPR Articles in the Enforcement Cases
Turning to the second part of $\textmd{Q}_{1}$, the regression coefficients
from the log-linear ANOVA model are visualized in Fig. 3 (the intercept is
present in the model but not shown in the figure, and A36 is omitted as the
single reference made to the article corresponds with the single reference
made to A35 in the same decision; the dummy variable for A35 thus captures the
effect of both articles). As can be seen, the confidence intervals (CIs) are
quite wide for the articles referenced only infrequently, and only six
coefficients are statistically significant at the conventional threshold.
Thus, some care is required for interpretation.
Figure 3: Enforcement Fines Across Articles (logarithm, ANOVA, 95% CIs)
When looking at the coefficients with relatively tight CIs, it is evident that
variation is present but the magnitude of this variation is not substantial.
Most of the coefficients remain in the range $[-5,5]$. However, together all
the references do yield a decent model; an $F$-test is statistically
significant and the coefficient of determination is large ($R^{2}\simeq
0.44$). To put aside the statistical insignificance, it is also interesting to
observe that some of the coefficients have negative signs, meaning that some
references indicate smaller fines compared to the average. Among these are the
conditions for consent (A7), sensitive data (A9), transparency (A12), and
informing (A13), as well as the already noted right to access (A15), proper
notifications about data breaches (A33), and the powers granted for the
supervisory authorities (A58). Finally, the coefficient ($1.52$) for the
information security article (A32) is statistically significant but does not
stand out in terms of magnitude. When compared to cases without a reference
to this article, the fines imposed in cases referencing A32 have been higher
by a factor of roughly $\exp(1.52)\simeq 4.6$ on average.
Figure 4: Prediction Performance (logarithm, MAEs)
Figure 5: Observed and Predicted Values in the Test Set
The results regarding $\textmd{Q}_{2}$ are summarized in Fig. 4 (the MAEs for
the training refer to the best cross-validated models). Three noteworthy
observations can be drawn from this summary. First and foremost, the
prediction performance is generally decent: the best-performing cases all
yield MAEs roughly between $1.3$ and $1.5$ for the log-transformed fines.
These average prediction errors seem also reasonable when taking a closer look
at the actual predictions—except for the outlying large fines. Take Fig. 5 as
a brief example; the figure displays the observed fines and the predicted
fines based on the PLS and ridge regression estimators for the first meta-data
model. Even though most of the predicted observations are fairly close to the
observed fines, the test set also contains one five million euro fine that is
quite severely underestimated by both regression estimators. The
underestimations amount to over $246$ thousand euros. Though, when a magnitude
is measured in millions, it is a matter of interpretation whether an error
measured in hundreds of thousands is large, small, or something else.
Second, there are some interesting differences between the regression
estimators. In particular, PLS and ridge regression exhibit relatively large
differences between training and testing. The explanation relates to the RMSE-
based optimization during training. For instance, PCR was estimated with only
one component for the first meta-data model and three components for the
remaining three models, whereas two components were picked for all four PLS
models.
Last but not least, the smallest MAE for the test set is outputted by ridge
regression using only the $49$ meta-data variables. The second and third
models containing the TF and TF-IDF variables both perform worse. Furthermore,
the fourth model, which contains the meta-data and TF-IDF variables, indicates
that the text mining features tend to slightly weaken the predictions. It is
also worth remarking that some redundancy is present among the meta-data
variables; comparable performance is obtained with only $17$ meta-data
variables that are left after prior pre-processing with the caret’s
nearZeroVar function. All this said, the overall interpretation should be less
explicit when considering the practical motivation for $\textmd{Q}_{2}$ noted
in Section 2. If only the decision documents are available without any prior
work to manually construct the meta-data from these, even the simple text
mining features could be used for black-box predictions.
## 6 Conclusion
This paper explored two questions. The answers to these can be summarized as
follows. First: regarding $\textmd{Q}_{1}$, the articles related to the
general principles (A5), lawfulness (A6), and information security (A32) have
been most frequently referenced by the national data protection authorities
during the early enforcement period observed in this paper. Although also the
enforcement fines vary across the various GDPR articles referenced in the
authorities’ decisions, the effects of these three articles do not stand out
in particular. A good corollary question for further work would be to examine
the future evolution of these references; a hypothesis is that the
regulation’s enforcement is slowly moving from the principles and lawfulness
conditions to more specific elements. Then: regarding $\textmd{Q}_{2}$, it is
possible to obtain decent predictions even with standard machine learning
techniques for regression analysis. Basic meta-data (i.e., articles
referenced, year of enforcement, country of origin, and industry sector) seems
to provide slightly better predictive performance compared to basic text
mining features (i.e., TF and TF-IDF) extracted from the decision documents.
Yet, even the text mining features seem sufficient for blind black-box
predictions. There are also many potential ways to improve the predictions
reported, including those related to regression analysis (such as using specific
sparse-PLS estimators) and text mining (such as using word embeddings). Data
mining techniques (such as topic modeling) could be used also for better
understanding the nuances behind the decisions. An alternative path forward
would be to extend the specific data extraction approaches discussed in
Section 2 to the enforcement decisions. However, the motivation to move
forward is undermined by practical problems. As was remarked in Section 3,
already the quality of data is a problem of its own.
Recently, the enforcement of the GDPR has been fiercely criticized by some
public authorities and pundits alike. The reasons are many: a lack of
transparency and cooperation between national data protection authorities,
diverging legal interpretations, cultural conflicts, the so-called “one-stop-
shop” system, old-fashioned information systems and poor data exchange
practices, and so on and so forth [27]. The data collection used for the
present work testifies on behalf of the criticism: the decision documents
released by the national authorities have varied wildly in terms of quality
and rigor. Some national authorities have even hidden their decisions from
public scrutiny. A paradox is present: although A15 grants a right for data
subjects to access their personal data, the same subjects may need to exercise
their separate freedom of information rights to obtain cues about decisions
reached by national authorities. Four legs good, two legs bad.
Finally, it is necessary to briefly point out the bigger issues affecting the
legal mining and data extraction domains—and, therefore, also the present
work. For one thing, the practical usefulness of legal expert systems has been
questioned for a long time. The artificial intelligence hype has not silenced
the criticism [15]. Like with the “code is law” notion, which has never
existed in reality [19], there are also many philosophical counterarguments
against the legal mining and data extraction domains [8, 9, 21]. It is
problematic at best to codify the methodology of a scholarly discipline into
rigid schemas in order to nurse the methodological requirements of another
discipline; legal reasoning is distinct from other types of reasoning
exercised in empirical sciences; and so forth. Law is not code. But code is
increasingly used to predict law enforcement decisions. The legal mining
domain, in particular, is frequently involved with a motivation to build “a
system that could predict judicial decisions automatically” but with a
provision that there is “no intention of creating a system that could replace
judges” [17]. Such system-building leads to another delicate paradox. Namely,
the GDPR and related laws (such as Directive 2016/680 for data protection in
criminal matters) were also designed to provide certain guards against legal
mining and the resulting automated decision-making involving human beings
[29]. This paper is not immune to criticism originating from this fundamental
paradox. If it is seen as undesirable to build systems for making law
enforcement decisions, it should be also seen as undesirable to build systems
for automatically fining companies.
### Acknowledgements
This research was funded by the Strategic Research Council at the Academy of
Finland (grant no. 327391).
## References
* [1] Bennett, C.J., Raab, C.D.: Revisiting the Governance of Privacy: Contemporary Policy Instruments in Global Perspective. Regulation & Governance (Published online in September) (2018)
* [2] Breaux, T.D., Vail, M.W., Anton, A.I.: Towards Regulatory Compliance: Extracting Rights and Obligations to Align Requirements with Regulations. In: Proceedings of the 14th IEEE International Requirements Engineering Conference (RE 2006). pp. 49–58. IEEE, Minneapolis (2006)
* [3] Calomme, C.: Why Open Legal Data and Analytics Are Not Without Risks (2020), Centre for IT & IP Law (CiTiP) Blog, KU Leuven, available online in April: https://www.law.kuleuven.be/citip/blog/why-open-legal-data-and-analytics-are-not-without-risks/
* [4] Chhatwal, R., Huber-Fliflet, N., Keeling, R., Zhang, J., Zhao, H.: Empirical Evaluations of Active Learning Strategies in Legal Document Review. In: Proceedings of the IEEE International Conference on Big Data (Big Data 2017). pp. 1428–1437. IEEE, Boston (2017)
* [5] CMS Law.Tax: GDPR Enforcement Tracker (2020), Data obtained in 24 February from: https://enforcementtracker.com/
* [6] Colombani, C., Croiseau, P., Fritz, S., Guillaume, F., Legarra, A., Ducrocq, V., Robert-Granié, C.: A Comparison of Partial Least Squares (PLS) and Sparse PLS Regressions in Genomic Selection in French Dairy Cattle. Journal of Dairy Science 95(4), 2120–2131 (2012)
* [7] Custers, B., Dechesne, F., Sears, A.M., Tani, T., van der Hof, S.: A Comparison of Data Protection Legislation and Policies Across the EU. Computer Law & Security Review 34(2), 234–243 (2018)
* [8] Dyevre, A., Wijtvliet, W., Lampach, N.: The Future of European Legal Scholarship: Empirical Jurisprudence. Maastricht Journal of European and Comparative Law 26(3), 348–371 (2019)
* [9] Franklin, J.: Discussion Paper: How Much of Commonsense and Legal Reasoning is Formalizable? A Review of Conceptual Obstacles. Law, Probability and Risk 11(2–3), 225–245 (2012)
* [10] Fuster, G.G.: The Emergence of Personal Data Protection as a Fundamental Right of the EU. Springer, Cham (2014)
* [11] Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York (2011)
* [12] Hausladen, C.I., Schubert, M.H., Ash, E.: Text Classification of Ideological Direction in Judicial Opinions. International Review of Law and Economics 62, 105903 (2020)
* [13] Hjerppe, K., Ruohonen, J., Leppänen, V.: The General Data Protection Regulation: Requirements, Architectures, and Constraints. In: Proceedings of the 27th IEEE International Requirements Engineering Conference (RE 2019). pp. 265–275. IEEE, Jeju Island (2019)
* [14] Kuhn, M., et al.: caret: Classification and Regression Training (2020), R package version 6.0-85, available online in February: https://cran.r-project.org/web/packages/caret/
* [15] Leith, P.: The Rise and Fall of the Legal Expert System. International Review of Law, Computers & Technology 30(3), 94–106 (2016)
* [16] Liu, Z., Chen, H.: A Predictive Performance Comparison of Machine Learning Models for Judicial Cases. In: Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI 2017). pp. 1–6. IEEE, Honolulu (2017)
* [17] Medvedeva, M., Vols, M., Wieling, M.: Using Machine Learning to Predict Decisions of the European Court of Human Rights. Artificial Intelligence and Law (Published online in June), 1–30 (2019)
* [18] Mevik, B.H., Wehrens, R.: The pls Package: Principal Component and Partial Least Squares Regression in R. Journal of Statistical Software 18(2), 1–23 (2007)
* [19] Mueller, M., Badiei, F.: Requiem for a Dream: On Advancing Human Rights via Internet Architecture. Policy and Internet 11(1), 61–83 (2019)
* [20] Németh, L., Hendricks, K., McNamara, C., et al.: Hunspell (2020), Version 1.7.0, available online in February https://github.com/hunspell/hunspell
* [21] Nissan, E.: Computer Tools and Techniques for Lawyers and the Judiciary. Cybernetics and Systems 49(4), 201–233 (2018)
* [22] The Natural Language Toolkit (NLTK): Version 3.4.5 (2019), available online in January 2020: http://www.nltk.org
* [23] Ruohonen, J., Leppänen, V.: Toward Validation of Textual Information Retrieval Techniques for Software Weaknesses. In: Elloumi, M., Granitzer, M., Hameurlain, A., Seifert, C., Stein, B., Tjoa, A.M., Wagner, R. (eds.) Proceedings of the 29th International Conference on Database and Expert Systems Applications (DEXA 2018), Communications in Computer and Information Science (Volume 903). pp. 265–277. Springer, Regensburg (2018)
* [24] Sleimi, A., Ceci, M., Sannier, N., Sabetzadeh, M., Briand, L., Dann, J.: A Query System for Extracting Requirements-Related Information from Legal Texts. In: Proceedings of the IEEE 27th International Requirements Engineering Conference (RE 2019). pp. 319–329. IEEE, Jeju Island (2019)
* [25] Tamburri, D.A.: Design Principles for the General Data Protection Regulation (GDPR): A Formal Concept Analysis and Its Evaluation. Information Systems 91, 101469 (2020)
* [26] van Dijk, N., Tanas, A., Rommetveit, K., Raab, C.: Right Engineering? The Redesign of Privacy and Personal Data Protection. International Review of Law, Computers & Technology 32(2–3), 230–256 (2018)
* [27] Vinocur, N.: ‘We Have a Huge Problem’: European Tech Regulator Despairs Over Lack of Enforcement: The World’s Toughest Privacy Law Proves Toothless in the Eyes of Many Critics (2019), Politico. Available online in February 2020: https://www.politico.com/news/2019/12/27/europe-gdpr-technology-regulation-089605
* [28] Wagh, R.S., Anand, D.: Legal Document Similarity: A Multi-Criteria Decision-Making Perspective. PeerJ Computer Science 6, e262 (2020)
* [29] Završnik, A.: Criminal Justice, Artificial Intelligence Systems, and Human Rights. ERA Forum 20, 567–583 (2020)
* [30] Zhang, T.: foba: Greedy Variable Selection (2008), R package version 0.1, available online in February: https://cran.r-project.org/web/packages/foba/
# Delay-Adaptive Learning in Generalized Linear Contextual Bandits
Jose Blanchet (Department of Management Science and Engineering, Stanford
University, USA; Email: <EMAIL_ADDRESS>), Renyuan Xu (Mathematical Institute,
University of Oxford, UK; Email: <EMAIL_ADDRESS>), Zhengyuan Zhou (Stern School
of Business, New York University, USA; Email: <EMAIL_ADDRESS>)
###### Abstract
In this paper, we consider online learning in generalized linear contextual
bandits where rewards are not immediately observed. Instead, rewards are
available to the decision maker only after some delay, which is unknown and
stochastic. We study the performance of two well-known algorithms adapted to
this delayed setting: one based on upper confidence bounds, and the other
based on Thompson sampling. We describe modifications on how these two
algorithms should be adapted to handle delays and give regret
characterizations for both algorithms. Our results contribute to the broad
landscape of contextual bandits literature by establishing that both
algorithms can be made to be robust to delays, thereby helping clarify and
reaffirm the empirical success of these two algorithms, which are widely
deployed in modern recommendation engines.
## 1 Introduction
The growing availability of user-specific data has welcomed the exciting era
of personalized recommendation, a paradigm that uncovers the heterogeneity
across individuals and provides tailored service decisions that lead to
improved outcomes. Such heterogeneity is ubiquitous across a variety of
application domains (including online advertising, medical treatment
assignment, product/news recommendation ([LCLS2010],
[BCN2012], [chapelle2014], [bastani2015online], [SBF2017])) and manifests itself
as different individuals responding differently to the recommended items.
Rising to this opportunity, contextual bandits ([besbes2009dynamic,
rigollet2010nonparametric, goldenshluger2011note, hsu2014taming,
agrawal2016efficient]) have emerged to be the predominant mathematical
formalism that provides an elegant and powerful formulation: its three core
components, the features (representing individual characteristics), the
actions (representing the recommendation), and the rewards (representing the
observed feedback), capture the salient aspects of the problem and provide
fertile ground for developing algorithms that balance exploring and exploiting
users’ heterogeneity.
As such, the last decade has witnessed extensive research efforts in
developing effective and efficient contextual bandits algorithms. In
particular, two types of algorithms–upper confidence bounds (UCB) based
algorithms ([LCLS2010, FCGS2010, chu2011contextual, JBNW2017, LLZ2017]) and
Thompson sampling (TS) based algorithms ([AG2013a, AG2013b, RV2014,
russo2016information, agrawal2017thompson])–stand out from this flourishing
and fruitful line of work: their theoretical guarantees have been analyzed in
many settings, often yielding (near-)optimal regret bounds; their empirical
performance have been thoroughly validated, often providing insights into
their practical efficacy (including the consensus that TS based algorithms,
although sometimes suffering from intensive computation for posterior updates,
are generally more effective than their UCB counterparts, whose performance
can be sensitive to hyper-parameter tuning). To a large extent, these two
family of algorithms have been widely deployed in many modern recommendation
engines.
However, a key assumption therein–both the algorithm design and their
analyses–is that the reward is immediately available after an action is taken.
Although useful as a first-step abstraction, this is a stringent requirement
that is rarely satisfied in practice, particularly in large-scale systems
where the time-scale of a single recommendation is significantly smaller than
the time-scale of a user’s feedback. For instance, in E-commerce, a
recommendation is typically made by the engine in milliseconds, whereas a
user’s response time (i.e. to buy a product or conversion) is typically much
larger, ranging from hours to days, sometimes even to weeks. Indeed, a
thorough empirical study in [chapelle2014] found that more than 10% of the
conversions in Criteo (a real-time bidding company) were at least 2 weeks old.
Furthermore, [chapelle2014] found that the delay distribution in the
company’s data closely follows an exponential distribution, and hence has
markedly heavier tails than the sub-Gaussian delays commonly assumed in the
bandits literature. Similarly, in clinical trials, it is infeasible to
immediately observe and hence take into account the medical outcome after
applying a treatment to a patient–collecting medical feedback can be a time-
consuming and often random process; and in general, it is common to have
applied trial treatments to a large number of patients, with individual
medical outcomes only available much later at different, random points in
time. In both the E-commerce ([KCW2001, chapelle2014]) and the clinical trials
cases ([CC2011]), a random and often significantly delayed reward is present.
Further, such delays empirically often follow a heavy-tailed distribution, and
hence can a priori have a substantially negative impact on the learning
performance. Consequently, to understand the impact of delays, adjustments in
classical formulations must be made, both at the algorithmic level and at the
analysis level.
### 1.1 Related Work
In the past five years or so, the problem of learning on bandits with delays
has received increasing attention and has been studied in several different
settings in the existing literature, where most of the efforts have
concentrated on the multi-armed bandits setting, including both the stochastic
multi-armed bandits and the adversarial multi-armed bandits.
For stochastic multi-armed bandits with delays, [JGS2013] show a regret bound
$O(\log T+\mathbb{E}[\tau]+\sqrt{\log T\mathbb{E}[\tau]})$ where
$\mathbb{E}[\tau]$ is the mean of the iid delays. [DKVB2014] consider Gaussian
Process bandits with a bounded stochastic delay. [MLBP2015] follow the work of
[JGS2013] and propose a queue-based multi-armed bandit algorithm to handle
delays. [PASG2017] match the same regret bound as in [JGS2013] when feedback
is not only delayed but also anonymous.
For adversarial multi-armed bandits with delays, [NAGS2010] establish the
regret bound of $\mathbb{E}[R_{T}]\leq
O(\tau_{\text{const}})\times\mathbb{E}[R^{\prime}_{T}(\frac{T}{\tau_{\text{const}}})]$
for Markov decision process, where $\tau_{\text{const}}$ is the constant delay
and $R^{\prime}_{T}$ is the regret without delays. [CGM2019] consider
adversarial bandits with fixed constant delays on the network graph, with a
minimax regret of the order $\tilde{O}(\sqrt{(K+\tau_{\text{const}})T})$, where
$K$ is the number of arms. Another related line of work to adversarial multi-
armed bandits is adversarial learning with full information, where the rewards
for all arms are observed. Different variants of this problems in the delayed
setting have been studied by [WO2002], [mesterharm2005], [QK2015] and
[GST2016].
On the other hand, learning in contextual bandits with delays are much less
explored. [JGS2013] consider learning on adversarial contextual bandits with
delays and establish an expected regret bound
$\mathbb{E}\left[R_{T}\right]\leq(1+\mathbb{E}[M_{T}^{*}])\times\mathbb{E}\left[R^{\prime}_{T}\left(\frac{T}{1+\mathbb{E}[M_{T}^{*}]}\right)\right]$
by using a black-box algorithm, where $M_{T}^{*}$ is the running maximum
number of delays up to round $T$. [DHKKLRZ2011] consider stochastic contextual
bandits with a fixed constant delay. The reward model they consider is general
(i.e. not necessarily parametric); however, they require the policy class to
be finite. In particular, they obtain the regret bound $O(\sqrt{K\log
N}(\tau_{\text{const}}+\sqrt{T}))$, where $N$ is the number of policies and
$\tau_{\text{const}}$ is again the fixed constant delay.
Finally, we also note that there is a growing literature on offline contextual
bandits (for a highly incomplete list, see [dudik2011doubly,
swaminathan2015batch, athey2017efficient, zhou2018offline, kitagawa2018should,
off-policy-evaluation-slate-recommendation, deep-learning-logged-bandit-
feedback]). This is a setting where all the data has been collected upfront
and a policy needs to be learned from this batch data at once. Although
sharing the same primitives (contexts, actions and rewards), this problem has
important differences from the online setting. In particular, the exploration
part is missing in this problem and a separate set of challenges exist in the
offline case. In this setting, delays would have no impact since all the
rewards will have been collected at the end (except perhaps at the tail of the
batch).
### 1.2 Our Contributions
In this paper, we consider learning on generalized linear (stochastic)
contextual bandits with stochastic unbounded delays. Our contributions are
two-fold. First, we design two delay-adaptive algorithms for generalized
linear contextual bandits, one based on UCB, the other based on TS. We refer
to the two variants as Delayed UCB (DUCB, as given in Algorithm 1) and Delayed
TS (DTS, as given in Algorithm 2) respectively. DUCB requires a carefully
designed delay-adaptive confidence parameter, which depends on how many
rewards are missing up to the current time step. In contrast, DTS is a
straightforward adaptation that incorporates the delayed rewards as they
become available.
Second, we give regret characterizations of both DUCB and DTS under (1)
independent stochastic, unbounded delays that can have heavy tails, (2)
unbounded Markov delays that can have near-heavy tails (tails that are
arbitrarily close to exponential tails), and (3) unbounded delays with any
dependency structure that have light (sub-Gaussian) tails. In particular, as a
special case of our results, when the delays are iid with mean $\mu_{I}$, we
have a high-probability regret bound of
$\tilde{O}\left(\left(\sigma_{G}\sqrt{d}+\mu_{I}d+d\right)\sqrt{T}\right)$ on
DUCB, where $\sigma_{G}$ is a parameter characterizing the tail bound of the
delays and $d$ is the feature dimension. For comparison, the state-of-the-art
regret bound of UCB on generalized linear contextual bandits without delays is
$\tilde{O}\left(d\sqrt{T}\right)$ ([FCGS2010, LLZ2017]). For DTS, we have the
Bayesian regret bound of
$\tilde{O}\left(\left(\sigma_{G}\sqrt{d}+\mu_{I}\sqrt{d}+d\right)\sqrt{T}\right)$.
For comparison, the state-of-the-art Bayesian regret bound of TS on
generalized linear contextual bandits without delays is
$\tilde{O}\left(d\sqrt{T}\right)$ ([RV2014, russo2016information]). The regret
bounds we have obtained highlight the dependence on the delays in two ways:
one is how much delay is present on average, the other is how heavy the tail
of the distribution is. Both factors contribute to the degradation of the
regret bounds: that the average delay enlarges regret is intuitive; that the
tail influences regret is because a more likely large delay (at the far right
end of a tail) can delay the learning for that context significantly,
particularly in the early stages when the decision maker is unsure about what
the underlying parameter is.
To the best of our knowledge, these regret bounds provide the first
theoretical characterizations in generalized linear contextual bandits with
large delays. Our results contribute to the broad landscape of contextual
bandits literature by establishing that both algorithms are robust to delays,
thereby helping clarify and reaffirm the empirical success of these two
algorithms, which are widely deployed in modern recommendation engines.
Some of the initial results have appeared in the conference version
[zhou2019]. Our work here provides a comprehensive treatment of learning in
generalized linear contextual bandits with large delays that incorporates
substantially more in-depth inquiries on several fronts. First, we consider
the heavier-tailed delays that include exponential distributions whereas
[zhou2019] only dealt with light-tailed delays that are either sub-Gaussian or
have ($1+q$)-th moment (for some $q>0$). This relaxation is important both
from an empirical standpoint and from a theoretical standpoint. Empirically,
as mentioned earlier, the field study in [chapelle2014] found that the delay
distribution from the company’s data follows the exponential distribution
closely, rather than a sub-Gaussian distribution that is commonly assumed in
the bandits literature. Theoretically, establishing guarantees in this larger-
delay regime requires us to develop a new (and arguably more elegant) argument
from that in [zhou2019], which is not applicable here. We explain the
technical difficulty in more detail in Section 3.3. Second, the sole focus of
[zhou2019] is on adapting and analyzing UCB-based algorithms. However, as
mentioned earlier, it is known that Thompson sampling often achieves superior
empirical performance, despite the fact that their theoretical bounds (when no
delays are present) may not match exactly those of the UCB algorithms.
Furthermore, TS-based algorithms do not suffer from hyper-parameter tuning and
can effectively incorporate prior information, and can therefore significantly
outperform (when priors are available and correct). Consequently, in this paper, in
addition to adapting and analyzing the UCB-based algorithms, we also discuss
(in Section 4) the adaptation of TS-based algorithms in the delayed feedback
setting and obtain regret bounds that characterize the corresponding
performance. Finally, we move beyond the regime of the independent delay
setting studied in [zhou2019], and instead consider (in Section 5) the much
more general and realistic history-dependent delays setting. We give regret
bounds of both UCB-based algorithms and TS-based algorithms, under both the
Markov delays assumption and the general stationary delays assumption. We also
highlight, in this unified presentation, the comparison of the various regret
bounds as the assumptions on the delays get progressively weakened.
## 2 Problem Setup
In this section, we describe the formulation for learning in generalized
linear contextual bandits (GLCB) in the presence of delays. We start by
reviewing the basics of generalized linear contextual bandits, followed by a
description of the delay model. Before proceeding, we first fix some notation.
For a vector $x\in\mathbb{R}^{d}$, we use $\|x\|$ to denote its $l_{2}$-norm
and $x^{\prime}$ its transpose.
$\mathbb{B}^{d}:=\\{x\in\mathbb{R}^{d}:\|x\|\leq 1\\}$ is the unit ball
centered at the origin. The weighted $l_{2}$-norm associated with a positive-
definite matrix $A$ is defined by $\|x\|_{A}:=\sqrt{x^{\prime}Ax}$. The
minimum and maximum singular values of a matrix $A$ are written as
$\lambda_{\min}(A)$ and $\|A\|$ respectively. For two symmetric matrices $A$
and $B$ of the same dimensions, $A\succeq B$ means that $A-B$ is positive semi-
definite. For a real-valued function $f$, we use $\dot{f}$ and $\ddot{f}$ to
denote its first and second derivatives. Finally, $[n]:=\\{1,2,\cdots,n\\}$.
### 2.1 Generalized Linear Contextual Bandits
#### Decision procedure.
We consider the generalized linear contextual bandits problem with $K$
actions. At each round $t$, the agent observes a context consisting of a set
of $K$ feature vectors $x_{t}:=\\{x_{t,a}\in\mathbb{R}^{d}|a\in[K]\\}$, which
is drawn iid from an unknown distribution $\gamma$ with $\|x_{t,a}\|\leq 1$.
Each feature vector $x_{t,a}$ is associated with an unknown stochastic reward
$y_{t,a}\in[0,1]$. If the agent selects action $a_{t}$, the associated reward
$y_{t,a_{t}}\in[0,1]$ is obtained. In the standard contextual bandits setting,
the reward is immediately observed after the decision is made, and the observed
reward can be utilized to make decisions in subsequent rounds.
Although it is generally understood in the contextual bandits literature, for
completeness, here we briefly discuss the meaning of the above quantities, as
well as where they come from. In general, at each round $t$, an individual
characterized by $v_{t}$ (a list of characteristics associated with that
individual) is drawn from a population and becomes available. When the
decision maker decides to apply action $a_{t}$ (one of the available $K$
actions) to this individual, then a reward $y_{t}(v_{t},a_{t})$ is obtained:
this reward can depend stochastically on both the individual characteristics
$v_{t}$ and the selected action $a_{t}$. However, in practice, for both
modelling and computational reasons, one often first featurizes the individual
characteristics and the actions. In particular, with sufficient generality,
one assumes $\mathbf{E}[y_{t}(v_{t},a_{t})\mid
v_{t},a_{t}]=g_{\theta}(\phi(v_{t},a_{t}))$, where $g_{\theta}(\cdot)$ is the
parametrized mean reward function and $\phi(v_{t},a_{t})$ extracts the
features from the given raw individual characteristics $v_{t}$ and action
$a_{t}$. In the above formulation, as is standard in the contextual bandits
literature, we assume the feature map $\phi(\cdot)$ is known and given and
$x_{t,a}=\phi(v_{t},a)$. If $v_{t}$ is already a vector in Euclidean space,
then a common choice for the feature extractor is
$\phi(v_{t},a)=[\mathbf{0},\dots,\mathbf{0},v_{t},\mathbf{0},\dots,\mathbf{0}]$:
that is, a $Kd$-dimensional vector with all zeros except at the $a$-th block.
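As a small illustration, this block featurization can be written as follows (a sketch; the helper name `phi` and the sample values are ours, and actions are 0-indexed here):

```python
import numpy as np

def phi(v, a, K):
    """Block featurization phi(v, a): a Kd-dimensional vector equal to v on
    the a-th block and zero elsewhere."""
    d = len(v)
    x = np.zeros(K * d)
    x[a * d:(a + 1) * d] = v
    return x

v_t = np.array([0.2, -0.5, 0.1])  # hypothetical individual characteristics
print(phi(v_t, a=2, K=4))         # feature vector x_{t,a} for action a = 2
```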
#### Relationship between reward $Y$ and context $X$.
In terms of the relationship between $y_{t,a}$ and $x_{t,a}$, we follow the
standard generalized linear contextual bandits literature ([FCGS2010,
LLZ2017]). Define $\mathcal{H}^{0}_{t}=\\{(s,x_{s},a_{s},y_{s,a_{s}}),s\leq
t-1\\}\cup\\{x_{t}\\}$ as the information available at the beginning of round
$t$. The agent maximizes the cumulative expected rewards over $T$ rounds with
information $\mathcal{H}^{0}_{t}$ at each round $t$ ($t\geq 1$). Suppose the
agent takes action $a_{t}$ at round $t$. Denote $X_{t}=x_{t,a_{t}}$ and
$Y_{t}=y_{t,a_{t}}$; we assume the conditional distribution of $Y_{t}$
given $X_{t}$ is from the exponential family. Its density is therefore given
by
$\displaystyle\mathbb{P}_{\theta^{*}}(Y_{t}|X_{t})=\exp\left(\frac{Y_{t}X_{t}^{\prime}\theta^{*}-m(X_{t}^{\prime}\theta^{*})}{h(\eta)}+A(Y_{t},\eta)\right).$
(1)
Here, $\theta^{*}$ is an unknown parameter vector under the frequentist setting;
$\eta\in\mathbb{R}^{+}$ is a given parameter; $A$, $m$ and $h$ are three
normalization functions mapping from $\mathbb{R}$ to $\mathbb{R}$.
For exponential families, $m$ is infinitely differentiable,
$\dot{m}(X^{\prime}\theta^{*})=\mathbb{E}[Y|X]$, and
$\ddot{m}(X^{\prime}\theta^{*})=\mathbb{V}(Y|X)$. Denote
$g(X^{\prime}\theta^{*})=\mathbb{E}[Y|X]$ , one can easily verify that
$g(x^{\prime}\theta)=x^{\prime}\theta$ for linear model,
$g(x^{\prime}\theta)=\frac{1}{1+\exp(-x^{\prime}\theta)}$ for logistic model
and $g(x^{\prime}\theta)=\exp(x^{\prime}\theta)$ for Poisson model. In the
generalized linear model (GLM) literature ([NW1972, McCullagh2018]), $g$ is
often referred to as the inverse link function.
Note that (1) can be rewritten as the GLCB form,
$\displaystyle Y_{t}=g(X_{t}^{\prime}\theta^{*})+\epsilon_{t},$ (2)
where $\\{\epsilon_{t},t\in[T]\\}$ are independent zero-mean noise,
$\mathcal{H}^{0}_{t}$-measurable with
$\mathbb{E}[\epsilon_{t}|{\mathcal{H}^{0}_{t}}]=0$. Data generated from (1)
automatically satisfies the sub-Gaussian condition:
$\displaystyle\mathbb{E}\left[\exp({\lambda\epsilon_{t}})|{\mathcal{H}^{0}_{t}}\right]\leq\exp\left({\frac{\lambda^{2}\hat{\sigma}^{2}}{2}}\right).$
(3)
Throughout the paper, we denote $\hat{\sigma}>0$ as the sub-Gaussian parameter
of the noise $\epsilon_{t}$.
###### Remark 1
In this paper, we focus on the GLM with exponential family (1). In general,
one can work with model (2) under the sub-Gaussian assumption (3). Our
analysis will still hold by considering maximum quasi-likelihood estimator for
(2). See more explanations in Section 3.1.
### 2.2 The Delay Model
Unlike the traditional setting where each reward is immediately observed, here
we consider the case where stochastic and unbounded delays are present in
revealing the rewards. Let $T$ be the number of total rounds. At round $t$,
after the agent takes action $a_{t}$, the reward $y_{t,a_{t}}$ may not be
available immediately. Instead, it will be observed at the end of round
$t+D_{t}$, where $D_{t}$ is the delay at time $t$. We assume $D_{t}$ is a non-
negative random variable which is independent of $\\{D_{s}\\}_{s\leq t-1}$ and
$\\{x_{s},y_{s,a_{s}},a_{s}\\}_{s\leq t}$. First, we define the available
information for the agent at each round.
#### Information structure under delays.
At any round $t$, if $D_{s}+s\leq t-1$ (i.e., the reward generated in round $s$
is available at the beginning of round $t$), then we call
$(s,x_{s},y_{s,a_{s}},a_{s})$ the complete information tuple at round $t$. If
$D_{s}+s\geq t$, we call $(s,x_{s},a_{s})$ the incomplete information tuple at
the beginning of round $t$. Define
$\mathcal{H}_{t}=\left\\{(s,x_{s},y_{s,a_{s}},a_{s})\,\,|\,\,s+D_{s}\leq
t-1\right\\}\cup\left\\{(s,x_{s},a_{s})\,\,|\,\,s\leq t-1,s+D_{s}\geq
t\right\\}\cup\left\\{x_{t}\right\\},$
then $\mathcal{H}_{t}$ is the information (filtration) available at the
beginning of round $t$ for the agent to choose action $a_{t}$. In other words,
$\mathcal{H}_{t}$ contains all the incomplete and complete information tuples
up to round $t-1$ and the context vector $x_{t}$ at round $t$.
Moreover define
$\displaystyle\mathcal{F}_{t}=\\{(s,x_{s},a_{s},y_{s,a_{s}})\,\,|\,\,s+D_{s}\leq
t\\}.$ (4)
Then $\mathcal{F}_{t}$ contains all the complete information tuples
$(s,x_{s},a_{s},y_{s,a_{s}})$ up to the end of round $t$. Denote
$\mathcal{I}_{t}=\mathcal{F}_{t}-\mathcal{F}_{t-1}$; then $\mathcal{I}_{t}$ is
the set of new complete information tuples revealed at the end of round $t$.
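A minimal simulation sketch of this bookkeeping follows: it maintains the incomplete tuples, the revealed timestamps, and the count of missing rewards (the quantity $G_{t}$ formalized in Section 3.2) incrementally. The delay sampler is a placeholder supplied by the caller.

```python
import numpy as np

def simulate_information(T, sample_delay, rng):
    """Track the information structure: `pending` maps timestamps of
    incomplete tuples to their reveal round s + D_s; `complete` collects
    timestamps whose reward has been revealed; G[t] counts the rewards
    still missing at the beginning of round t."""
    pending, complete = {}, []
    G = np.zeros(T + 1, dtype=int)
    for t in range(1, T + 1):
        G[t] = len(pending)                  # incomplete tuples when acting at t
        pending[t] = t + sample_delay(rng)   # act; reward arrives after D_t rounds
        for s in [s for s, r in pending.items() if r <= t]:
            complete.append(s)               # revealed at the end of round t
            del pending[s]
    return G, complete

rng = np.random.default_rng(0)
G, _ = simulate_information(200, lambda rng: int(rng.geometric(0.2)), rng)
print("max_t G_t =", G.max())
```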
#### Performance criterion.
Under the frequentist setting, assume there exists an unknown true parameter
$\theta^{*}\in\mathbb{R}^{d}$. The agent’s strategy can be evaluated by
comparing her rewards to the best reward. To do so, define the optimal action
at round $t$ by $a_{t}^{*}=\arg\max_{a\in[K]}g(x_{t,a}^{\prime}\theta^{*})$.
Then, the agent’s total regret of following strategy $\pi$ can be expressed as
follows
$R_{T}(\pi):=\sum_{t=1}^{T}\left(g\left(x_{t,a^{*}_{t}}^{\prime}\theta^{*}\right)-g\left(x_{t,a_{t}}^{\prime}\theta^{*}\right)\right),$
where $a_{t}\sim\pi_{t}$ and policy $\pi_{t}$ maps $\mathcal{H}_{t}$ to the
probability simplex
$\Delta^{K}:=\\{(p_{1},\cdots,p_{K})\,\,|\,\,\sum_{i=1}^{K}p_{i}=1,p_{i}\geq
0\\}$. Note that $R_{T}(\pi)$ is in general a random variable due to the
possible randomness in $\pi$.
#### Assumptions.
Throughout the paper, we make the following assumption on the distribution
$\gamma$ and the function $g$, which is standard in the generalized linear
bandit literature ([FCGS2010, LLZ2017, JBNW2017]).
###### Assumption 1 (GLCB)
* •
$\lambda_{\min}(\mathbb{E}[\frac{1}{K}\sum_{a\in[K]}x_{t,a}x_{t,a}^{\prime}])\geq\sigma_{0}^{2}$
for all $t\in[T]$.
* •
$\kappa:=\inf_{\\{\|x\|\leq 1,\|\theta-\theta^{*}\|\leq
1\\}}\dot{g}(x^{\prime}\theta)>0$.
* •
$g$ is twice differentiable. $\dot{g}$ and $\ddot{g}$ are upper bounded by
$L_{g}$ and $M_{g}$, respectively.
In addition, we assume the delay sequence $\\{D_{t}\\}_{t=1}^{T}$ satisfies
the following assumption.
###### Assumption 2 (Delay)
Assume $\\{D_{t}\\}_{t=1}^{T}$ are independent non-negative random variables
with tail-envelope distribution $(\xi,\mu,M)$. That is, there exists a
constant $M>0$ and a distribution $\xi$ with mean $\mu<\infty$ such that for
any $m\geq M$ and $t\in[T]$,
$\mathbb{P}(D_{t}\geq m)\leq\mathbb{P}(D\geq m),$
where $D\sim\xi$. Furthermore, assume there exists $q\geq 0$ such that
$\mathbb{P}(D-\mu\geq x)\leq\exp\left(\frac{-x^{1+q}}{2\sigma^{2}}\right),$
where $\mathbb{E}[D]=\mu$.
Assumption 2 includes the most common delay patterns in real-world
applications. $D$ is sub-Gaussian when $q=1$, and $D$ has an exponential tail
when $q=0$. When the $D_{t}$’s are iid, the following condition guarantees
Assumption 2:
$\mathbb{P}(D_{t}-\mathbb{E}[D_{t}]\geq
x)\leq\exp\left(\frac{-x^{1+q}}{2\tilde{\sigma}^{2}}\right),$
with some $\tilde{\sigma}>0$ and $q\geq 0$. We summarize the parameter
definition in Table LABEL:tab:parameters. (See Section LABEL:app:table.)
Note that with Assumption 2, we do not need to assume all delays have
identical distributions, as long as they are independent over time. Since
there exists an envelope distribution $\xi$ uniformly dominating the tail
probability of all delays, we can get a handle on the tail of all the delay
distributions. This can be viewed as the regularity condition on the delays.
## 3 Delayed Upper Confidence Bound (DUCB) for GLCB
In this section, we propose a UCB-type algorithm for GLCB that adapts to the
delay information online. Let us first introduce the maximum likelihood
estimator we adopt and then state the main algorithm.
### 3.1 Maximum Likelihood Estimators (MLEs).
Denote $T_{t}=\\{s:s\leq t-1,D_{s}+s\leq t-1\\}$ as the set containing
timestamps with complete information tuples at the beginning of round $t$. We
use data with timestamps in $T_{t}$ to construct the MLE. Suppose we have
independent samples of $\\{Y_{s}:s\in T_{t}\\}$ condition on $\\{X_{s}:s\in
T_{t}\\}$. The log-likelihood function of $\theta$ under (1) is
$\displaystyle\log l\left(\theta\,\,|\,\,T_{t}\right)$ $\displaystyle=$
$\displaystyle\sum_{s\in
T_{t}}\left[\frac{Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)}{v(\eta)}+B(Y_{s},\eta)\right]$
$\displaystyle=$ $\displaystyle\frac{1}{v(\eta)}\sum_{s\in
T_{t}}\left[Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)\right]+\text{constant}.$
Therefore, the MLE can be defined as
$\hat{\theta}_{t}\in\arg\max_{\theta\in\Theta}\sum_{s\in
T_{t}}\left[Y_{s}X_{s}^{\prime}\theta-m(X_{s}^{\prime}\theta)\right].$
Since $m$ is differentiable with $\ddot{m}\geq 0$, the MLE can be written as
the solution of the following equation
$\displaystyle\sum_{s\in T_{t}}(Y_{s}-g(X_{s}^{\prime}\theta))X_{s}=0,$ (5)
which is the estimator we use in Step 4 of Algorithm 1.
Note that the general GLCB, a semi-parametric version of the GLM, is obtained
by assuming only that $\mathbb{E}[Y|X]=g(X^{\prime}\theta^{*})$ (see (2))
without further assumptions on the conditional distribution of $Y$ given $X$.
In this case, the estimator obtained by solving (5) is referred to as the
maximum quasi-likelihood estimator. It is well-documented that this estimator
is consistent under very general assumptions as long as the minimum eigenvalue
of $\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ tends to infinity as
$t\rightarrow\infty$ ([CHY1999, FCGS2010]).
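For concreteness, here is a small sketch that solves the score equation (5) by Newton's method for the logistic link. Newton/IRLS iterations are a standard way to compute this MLE, though the implementation details below are ours.

```python
import numpy as np

def glm_mle(X, y, g, dg, n_iter=25, ridge=1e-6):
    """Solve sum_s (y_s - g(x_s' theta)) x_s = 0 over the complete data
    (rows of X are the contexts with timestamps in T_t). The tiny ridge
    term only stabilizes the Newton step numerically."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        z = X @ theta
        score = X.T @ (y - g(z))                            # left side of (5)
        H = (X * dg(z)[:, None]).T @ X + ridge * np.eye(d)  # minus its Jacobian
        theta += np.linalg.solve(H, score)                  # Newton step
    return theta

g  = lambda z: 1.0 / (1.0 + np.exp(-z))   # logistic inverse link
dg = lambda z: g(z) * (1.0 - g(z))

rng = np.random.default_rng(1)
theta_star = np.array([0.8, -0.5])
X = rng.uniform(-1, 1, size=(500, 2))
y = rng.binomial(1, g(X @ theta_star)).astype(float)
print(glm_mle(X, y, g, dg))   # should be close to theta_star
```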
### 3.2 Algorithm: DUCB-GLCB
Denote $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\\{s+D_{s}\geq t\\}$ as the number of
missing rewards when the agent is making a prediction at round $t$. Further
denote $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ as the matrix consisting of
the feature information with timestamps in $T_{t}$, and
$V_{t}=\sum_{s=1}^{t-1}X_{s}X_{s}^{\prime}$ as the matrix consisting of all
available features at the end of round $t-1$. Then the main algorithm is
defined as follows.
Algorithm 1 DUCB-GLCB
1: Input: the total rounds $T$ , model parameters $d$ and $\kappa$, and tuning
parameters $\tau$ and $\delta$.
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$, set
$V_{\tau+1}=\sum_{i=1}^{\tau}X_{s}X_{s}^{\prime}$,
$T_{\tau+1}:=\\{s\,:\,s\leq\tau,s+D_{s}\leq\tau\\}$,
$G_{\tau+1}=\tau-|T_{\tau+1}|$ and $W_{\tau+1}=\sum_{s\in
T_{\tau+1}}X_{s}X_{s}^{\prime}$
3: for $t=\tau+1,\tau+2,\cdots,T$ do
4: Update Statistics: calculate the MLE $\hat{\theta}_{t}$ by solving
$\sum_{s\in T_{t}}(Y_{s}-g(X_{s}^{\prime}\theta))X_{s}=0$
5: Update Parameter:
$\beta_{t}=\frac{\hat{\sigma}}{\kappa}\sqrt{\frac{d}{2}\log\left(1+\frac{2(t-G_{t})}{d}\right)+\log(\frac{1}{\delta})}+{\sqrt{G_{t}}}$
6: Select Action: choose
$a_{t}=\arg\max_{a\in[K]}\left(x_{t,a}^{\prime}\hat{\theta}_{t}+\beta_{t}\|x_{t,a}\|_{V_{t}^{-1}}\right)$
7: Update Observations: $X_{t}\leftarrow x_{t,a_{t}}$, $V_{t+1}\leftarrow
V_{t}+X_{t}X_{t}^{\prime}$ and $T_{t+1}\leftarrow
T_{t}\cup\\{s\,:\,s+D_{s}=t\\}$, $G_{t+1}=t-|T_{t+1}|$, and
$W_{t+1}=W_{t}+\sum_{s:s+D_{s}=t}X_{s}X_{s}^{\prime}$
8: end for
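The following is a compact Python sketch of Algorithm 1. The environment interfaces (`contexts`, `pull`, `sample_delay`) and the `mle` solver (e.g. a Newton solver such as the one sketched above) are placeholders supplied by the caller, so this is a skeleton illustrating the bookkeeping and the adaptive $\beta_{t}$, not a reference implementation.

```python
import numpy as np

def ducb_glcb(T, K, d, contexts, pull, sample_delay, mle,
              kappa, sigma_hat, delta, tau, rng):
    """Skeleton of DUCB-GLCB. contexts[t-1] is a (K, d) array of feature
    vectors for round t; pull(t, a) returns the (delayed) reward; mle(X, y)
    solves equation (5) on the complete data."""
    V = 1e-8 * np.eye(d)          # V_t, slightly regularized for invertibility
    pending, Xs, ys, actions = {}, [], [], []
    for t in range(1, T + 1):
        # reveal rewards whose delay elapsed by the end of round t - 1
        for s in [s for s, (r, _, _) in pending.items() if r <= t - 1]:
            _, x, yv = pending.pop(s)
            Xs.append(x); ys.append(yv)
        G_t = len(pending)        # number of missing rewards
        x_t = contexts[t - 1]
        if t <= tau or len(Xs) < d:              # exploration period
            a = int(rng.integers(K))
        else:
            theta = mle(np.array(Xs), np.array(ys))
            beta = (sigma_hat / kappa) * np.sqrt(
                0.5 * d * np.log(1.0 + 2.0 * (t - G_t) / d) + np.log(1.0 / delta)
            ) + np.sqrt(G_t)
            Vinv = np.linalg.inv(V)
            width = np.sqrt(np.einsum("kd,de,ke->k", x_t, Vinv, x_t))
            a = int(np.argmax(x_t @ theta + beta * width))
        X_sel = x_t[a]
        V += np.outer(X_sel, X_sel)
        pending[t] = (t + sample_delay(rng), X_sel, pull(t, a))
        actions.append(a)
    return actions
```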
###### Remark 2 (Comparison to UCB-GLM Algorithm in [LLZ2017])
We make several adjustments to the UCB-GLM Algorithm in [LLZ2017]. First, in
step 4 (statistics update), we only use data with timestamps in $T_{t}$ to
calculate the MLE; including data whose reward has not yet arrived would bias
the estimate. Second, the parameter $\beta_{t}$ used for selecting the action
in step 6 is updated adaptively at each round in step 5, whereas in [LLZ2017]
the corresponding parameter is constant over time. Moreover, in step 6, we
choose to use $V_{t}$ to normalize the context vector $x_{t,a}$ instead of
$W_{t}$.
### 3.3 Preliminary Analysis
Denote $G_{t}^{*}=\max_{1\leq s\leq t}G_{s}$ as the running maximum number of
missing rewards up to round $t$. The properties of $G_{t}$ and $G^{*}_{t}$ are
the key to analyzing the regret bounds for both the UCB and Thompson sampling
algorithms. We next characterize the tail behavior of $G_{t}$ and $G^{*}_{t}$.
###### Proposition 1 (Properties of $G_{t}$ and $G_{t}^{\star}$)
Assume Assumption 2. Denote $\sigma_{G}=\sigma\sqrt{2+q}$. Then,
1. 1.
$G_{t}$ is sub-Gaussian. Moreover, for all $t\geq 1$, with probability
$1-\delta$
$\displaystyle{G}_{t}\leq
2(\mu+M)+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)}+2\sigma_{G}^{2}\log
C_{3}+1,$ (6)
where $C_{3}=2\sigma^{2}+1$.
2. 2.
With probability $1-\delta$,
$\displaystyle{G}_{T}^{*}$ $\displaystyle\leq$ $\displaystyle
2(\mu+M)+\sigma_{G}\sqrt{2\log T}+2\sigma_{G}^{2}\log C_{3}$ (7)
$\displaystyle+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)+2\log
C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}}+1,$
where $G_{T}^{*}=\max_{1\leq s\leq T}G_{s}$.
3. 3.
Define $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ where $X_{t}$ is drawn
iid. from some distribution $\gamma$ with support in the unit ball
$\mathbb{B}_{d}$. Furthermore, let $\Sigma:=\mathbb{E}[X_{t}X_{t}^{\prime}]$
be the second moment matrix, and $B$ and $\delta>0$ be two positive constants.
Then there exist positive, universal constants $C_{1}$ and $C_{2}$ such that
$\lambda_{\min}(W_{t})\geq B$ with probability at least $1-2\delta$, as long
as
$\displaystyle
t\geq\left(\frac{C_{1}\sqrt{d}+C_{2}\sqrt{\log(\frac{1}{\delta})}}{\lambda_{\min}(\Sigma)}\right)^{2}+\frac{2B}{\lambda_{\min}(\Sigma)}+2(\mu+M)+\sigma_{G}\sqrt{2\log\left(\frac{1}{\delta}\right)}+2\sigma_{G}^{2}\log
C_{3}+1.$ (8)
A special case of Proposition 1-1 is when the $D_{i}$’s are iid and $q=0$. Now
assume the $D_{i}\sim D$ are iid with exponentially decaying tails:
$\displaystyle\mathbb{P}(D-\mu_{I}\geq
t)\leq\exp(-\frac{t}{2\sigma_{I}^{2}}),$ (9)
and $\mu_{I}=\mathbb{E}D$. Then with probability $1-\delta$, we have
$\displaystyle G_{t}-\mu_{I}\leq
2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$
(10)
At a high level, the proof utilizes the fact that, with high probability,
there will be a lot of zero terms in the summation
$G_{t}=\sum_{s=1}^{t-1}\mathbb{I}(s+D_{s}\geq t)$ when $t$ is large. This is
done by designing a sequence of stopping times for the successes. We highlight
the idea by showing result (10) for the special case when $D_{t}$’s are iid
and $q=0$. The full version of the proof is deferred to Appendix LABEL:proof.
###### Sketch of the proof..
Define $V=\sum_{i=1}^{\infty}\mathbb{I}(D_{i}-\mu_{I}\geq i)$, where the
$D_{i}\sim D$ are iid and satisfy (9).
Now let us define the following sequence of stopping times (with $T(0)=0$): for
$k\geq 1$,
$T(k)=\inf\\{t>T(k-1):D_{t}-\mu_{I}\geq t\\},$
where $T(k)$ is the time of the $k^{\text{th}}$ success. Therefore,
$\begin{aligned}
\mathbb{P}(V\geq j)&=\mathbb{P}(T(1)<\infty,T(2)<\infty,\cdots,T(j-1)<\infty,T(j)<\infty)&&(11)\\
&=\Pi_{k=1}^{j}\mathbb{P}\left(T(k)<\infty\,|\,T(i)<\infty\,\,\text{for}\,\,i\leq k-1\right)&&\\
&=\Pi_{k=2}^{j}\mathbb{P}\left(T(k)<\infty\,|\,T(k-1)<\infty\right)\mathbb{P}\left(T(1)<\infty\right)&&(12)\\
&\leq\Pi_{k=1}^{j}\left(\sum_{i=k}^{\infty}\exp\left(-\frac{i}{2\sigma_{I}^{2}}\right)\right)&&(13)\\
&\leq\Pi_{k=1}^{j}\left(2\sigma_{I}^{2}\exp\left(-\frac{k-1}{2\sigma_{I}^{2}}\right)\right)&&(14)\\
&=(2\sigma_{I}^{2})^{j}\exp\left(-\frac{(j-1)j}{4\sigma_{I}^{2}}\right)&&(15)
\end{aligned}$
(11) holds by the tower property. (12) holds since the event
$\\{T(k)<\infty\,|\,T(k-1)<\infty\\}$ is equivalent to the event
$\\{T(k)<\infty\,|\,T(j)<\infty\,\,\text{for}\,\,j\leq k-1\\}$. Conditional on
$T(k-1)<\infty$, we have
$\mathbb{P}\left(T(k)<\infty\,|\,T(k-1)<\infty\right)\leq\mathbb{P}(\cup_{j\geq
k}\\{D_{j}-\mu_{I}\geq
j\\})\leq\sum_{i=k}^{\infty}\exp\left(-\frac{i}{2\sigma_{I}^{2}}\right)$, where
the last inequality holds by the union bound. Therefore (13) holds. Finally,
(14) holds by bounding each geometric sum via integral comparison, and (15)
follows by direct computation.
Given (15), $V$ is sub-Gaussian, and with probability $1-\delta$,
$V\leq
2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$
Similarly, we can show that, for any $t\geq 1$, $G_{t}$ is sub-Gaussian. With
probability $1-\delta$, we have
$G_{t}-\mu_{I}\leq
2\sigma_{I}\sqrt{\log\left(\frac{1}{\delta}\right)}+1+4\sigma_{I}^{2}\log(2\sigma_{I}^{2}).$
∎
Note that $G_{t}$ is sub-Gaussian even when $D$ has a near-heavy-tail
distribution ($q\in[0,1)$).
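A quick Monte Carlo illustration of this point for iid delays with an exponential tail ($q=0$); the delay model and all parameters below are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_rep, sigma_I = 500, 2000, 1.5
G_T = np.empty(n_rep)
for r in range(n_rep):
    s = np.arange(1, T)                                  # rounds 1, ..., T-1
    D = np.ceil(rng.exponential(scale=2 * sigma_I**2, size=T - 1))
    G_T[r] = np.sum(s + D >= T)                          # G_T = #{s : s + D_s >= T}
print("mean G_T = %.2f, 99.9th percentile = %.1f"
      % (G_T.mean(), np.quantile(G_T, 0.999)))
```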
###### Remark 3
The proof of Proposition 1 is simple but essential. It fully utilizes the
property that the sequence in $V$ has a lot of zero terms (with high
probability). In particular, one will not be able to fully obtain the result
by using the standard approach of directly working at the level of “the sum of
sub-Gaussians is sub-Gaussian” and thereafter analyzing the sum of sub-Gaussian
constants, which is the method used in [zhou2019]. In order to drive this
point home, we provide an approach in this direction using the Hoeffding bound
(Theorem LABEL:thm9); see Appendix LABEL:further_G. With such an approach, one
can only handle the case $q>0$, which excludes the most difficult scenario
with exponential delays. With the Hoeffding bound, the sub-Gaussian parameter
for $V$ is of the form $\sigma=\sqrt{\sum_{i=1}^{\infty}\sigma_{i}^{2}}$,
where $\sigma_{i}$ is the sub-Gaussian parameter of the indicator
$\mathbb{I}(D_{i}-\mu_{I}\geq i)$. Intuitively speaking, the Hoeffding bound
does not take into account the sparsity of the sequence, and therefore the
argument cannot reach the limiting case $q=0$.
### 3.4 Regret Bounds
###### Theorem 1
Assume Assumptions 1-2. Fix any $\delta$. There exists a universal constant
$C:=C(C_{1},C_{2},M,\mu,\sigma_{0},\hat{\sigma},\,\sigma,\kappa)>0,$
such that if we run DUCB-GLCB with
$\tau:=C\left(d+\log(\frac{1}{\delta})\right)$ and
$\beta_{t}=\frac{\hat{\sigma}}{\kappa}\sqrt{\frac{d}{2}\log\left(1+\frac{2(t-G_{t})}{d}\right)+\log(\frac{1}{\delta})}+\sqrt{G_{t}}$,
then, with probability at least $1-5\delta$, the regret of the algorithm is
upper bounded by
$\begin{aligned}
R_{T}\leq{}&\tau+L_{g}\left[4\sqrt{\mu+M}\sqrt{Td\log\left(\frac{T}{d}\right)}+2^{7/4}\sqrt{\sigma_{G}}(\log T)^{1/4}\sqrt{d\log\left(\frac{T}{d}\right)T}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right.\\
&\left.\quad+2\sqrt{2Td\log\left(\frac{T}{d}\right)}\left(\sqrt{\sigma_{G}}\left(2\log\left(\frac{1}{\delta}\right)+2\log C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}\right)^{1/4}+\sqrt{1+2\sigma_{G}^{2}\log C_{3}}\right)\right]
\end{aligned}$ (16)
For parameter definition, we refer to Table LABEL:tab:parameters in Section
LABEL:app:table.
The proof of Theorem 1 consists of three steps. The first step is to construct
a confidence ball associated with the adaptive parameter $\beta_{t}$ and show
that the true parameter falls into the confidence ball with high probability.
The second step is to upper bound the normalized context sequence
$\sum_{t=\tau+1}^{\tau+n}\|X_{t}\|_{V_{t}^{-1}}$. The last step is to
utilize the properties of $G_{t}$ and $G^{*}_{t}$ proved in Proposition 1. The
details are deferred to Appendix LABEL:proof.
Given the high-probability bound in Theorem 1, one can show the expected
regret bound without much additional work.
###### Corollary 1 (Expected regret)
Assume Assumptions 1-2. The expected regret is bounded by
$\displaystyle\mathbb{E}[R_{T}]={O\left(d\sqrt{T}\log(T)+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu+M}+\sigma_{G})\sqrt{Td\log\left({T}\right)}\right).}$
(17)
Given the result in (16), (17) holds by choosing $\delta=\frac{1}{T}$ and
using the fact that $R_{T}\leq T$.
The highest-order term $O(d\sqrt{T}\log(T))$ does not depend on delays. This
result is in line with the non-contextual stochastic bandit literature
([JGS2013]). Delay impacts the expected regret bound in two ways. First, the
sub-Gaussian parameter $\sigma_{G}$ and the mean-related parameter ${\mu+M}$
appear in the second-highest-order term. Second, the sub-Gaussian parameter
${\sigma_{G}}$ appears in the third-order term. Note that here we include the
log factors when ranking the highest-order term, the second-highest-order
term, and so on. If we exclude the log terms, then both delay parameters
impact the regret bound multiplicatively.
### 3.5 Tighter Regret Bounds for Special Cases
When the sequence $\\{D_{s}\\}_{s=1}^{T}$ satisfies some specific assumptions,
we are able to provide tighter high probability bounds on the regret.
###### Proposition 2
Given Assumptions 1-2, we have the following results.
1. 1.
If there exists a constant $D_{\max}>0$ such that $\mathbb{P}(D_{s}\leq
D_{\max})=1$ for all $s\in[T]$. Fix $\delta$. There exists a universal
constant $C>0$ such that by taking
$\tau=D_{\max}+C(d+\log(\frac{1}{\delta}))$, with probability $1-3\delta$, the
regret of the algorithm is upper bounded by
$\displaystyle
R_{T}\leq\tau+L_{g}\left(2{\sqrt{D_{\max}}}\sqrt{2Td\log\left(\frac{T}{d}\right)}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right).$
(18)
2. 2.
Assume $D_{1},\cdots,D_{T}$ are iid non-negative random variables with mean
$\mu_{I}$. There exists $C>0$ such that by taking
$\tau:=C\left(d+\log(\frac{1}{\delta})\right)$, with probability $1-5\delta$,
the regret of the algorithm is upper bounded by
$\begin{aligned}
R_{T}\leq{}&\tau+L_{g}\left[4\sqrt{\mu_{I}}\sqrt{Td\log\left(\frac{T}{d}\right)}+2^{7/4}\sqrt{\sigma_{G}}(\log T)^{1/4}\sqrt{d\log\left(\frac{T}{d}\right)T}+\frac{2d\hat{\sigma}}{\kappa}\log\left(\frac{T}{d\delta}\right)\sqrt{T}\right.\\
&\left.\quad+2\sqrt{2Td\log\left(\frac{T}{d}\right)}\left(\sqrt{\sigma_{G}}\left(2\log\left(\frac{1}{\delta}\right)+2\log C_{3}\sigma_{G}\sqrt{2\log T}+2\log C_{3}\right)^{1/4}+\sqrt{1+2\sigma_{G}^{2}\log C_{3}}\right)\right]
\end{aligned}$
When the delays $\\{D_{s}\\}_{s=1}^{T}$ are bounded by $D_{\max}$, the delay
parameter $D_{\max}$ only appears in the term $\sqrt{Td\log{{T}}}$ and does not
affect the highest-order term $d\log(\frac{T}{d\delta})\sqrt{T}$. Compared to
(17), there is no regret term of order $O(\sqrt{Td}\left(\log(T)\right)^{3/4})$
in (18). This is because a smaller bound can be used on the right-hand side of
(8) when delays are bounded. When delays are iid, $\mu+M$ is replaced by
$\mu_{I}$, the common expectation of all the random delays.
We refer to Appendix LABEL:proof for the proof of Proposition 2.
## 4 Delayed Thompson Sampling (DTS) for GLCB
In Section 3, under the frequentist setup, we assume there exists a true
parameter $\theta^{*}$ and use UCB to encourage exploration and construct the
confidence interval for $\theta^{*}$. On the contrary, posterior sampling does
not make use of upper confidence bounds to encourage exploration and instead
relies on randomization. In this section, we operate in the Bayesian decision
making setting and assume the decision maker is equipped with a prior
distribution on $\theta^{*}$. In this setting, the standard performance metric
is Bayesian regret, defined as follows:
$R^{B}_{T}(\pi)=\mathbb{E}_{\theta^{*},x}[R_{T}(\pi,\theta^{*})]=\sum_{t=1}^{T}\mathbb{E}_{\theta^{*},x}\left[g\left(x_{t,a^{*}_{t}(\theta^{*})}^{\prime}\theta^{*}\right)-g\left(x_{t,a_{t}}^{\prime}\theta^{*}\right)\right],$
where $a_{t}\sim\pi_{t}$. Next, we present the Thompson sampling algorithm
when adapted to the delayed setting. Algorithm 2 provides a formal
description.
Algorithm 2 DTS-GLCB
1: Input: the total rounds $T$, tuning parameter $\tau$, prior $Q_{0}$
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$
3: Update information: $\mathcal{F}_{\tau}$ according to (4).
4: if $\mathcal{I}_{\tau}=\emptyset$ then
5: $Q_{1}(\theta)=Q_{0}(\theta)$
6: else
7: $Q_{1}(\theta)\propto
Q_{0}(\theta|\mathcal{F}_{\tau})\Pi_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{\tau}}\mathbb{P}(y_{s,a_{s}}|\theta,x_{s,a_{s}})$
8: end if
9: for $t=1,2,\cdots,T-\tau$ do
10: Sample Model: $\hat{\theta}_{t+\tau}\sim Q_{t}$
11: Select Action: $\bar{a}_{t+\tau}\in\arg\max_{a\in[K]}\left\langle
x_{t+\tau,a},\hat{\theta}_{t+\tau}\right\rangle$
12: Update information: $\mathcal{F}_{t+\tau}$ according to (4). Define
$\mathcal{I}_{t+\tau}:=\mathcal{F}_{t+\tau}-\mathcal{F}_{t+\tau-1}$ as the new
information at round $t+\tau$
13: if $\mathcal{I}_{t+\tau}=\emptyset$ then
14: $Q_{t+1}(\theta)=Q_{t}(\theta)$
15: else
16: $Q_{t+1}(\theta)\propto
Q_{t}(\theta|\mathcal{F}_{\tau+t})\Pi_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}\mathbb{P}(y_{s,a_{s}}|\theta,x_{s,a_{s}})$
17: end if
18: end for
###### Remark 4
Note that in Algorithm 2, there is an exploration period of length $\tau$. The
posterior distribution employed at round $\tau+1$ is conditioned on
observations made over the first $\tau$ rounds. Another point to note is
that Algorithm 2 is kept at an abstract level. The exact computation depends
on the prior chosen and the exponential family. Note that every exponential
family has a conjugate prior ([DY1979]), which admits efficient posterior
updates. Section 4.1 provides a concrete example on linear contextual bandits,
which is a simple special case. We use this special case to illustrate how one
can perform efficient incremental updates in the presence of delays.
### 4.1 Delayed Thompson Sampling For Linear Contextual Bandits
When $g(x)=x$ and $m(x)=\frac{x^{2}}{2}$, (1) reduces to
$\displaystyle\mathbb{P}(Y|X)=\exp\left(\frac{YX^{\prime}\theta^{*}-(X^{\prime}\theta^{*})^{2}/2}{h(\eta)}+A(Y,\eta)\right).$
(19)
Recall, from Bayes’ theorem, the posterior distribution is equal to the
product of the likelihood function $\theta\rightarrow\mathbb{P}(y|\theta)$ and
prior $\mathbb{P}(\theta)$, normalized by the probability of the data
$\mathbb{P}(y)$:
$\mathbb{P}(\theta|y)=\frac{\mathbb{P}(y|\theta)\mathbb{P}(\theta)}{\int\mathbb{P}(y|\theta^{\prime})\mathbb{P}(\theta^{\prime})d\theta^{\prime}}$
Different choices of the prior distribution $\mathbb{P}(\theta)$ may make the
integral more or less difficult to calculate. Moreover, the product
$\mathbb{P}(y|\theta)\mathbb{P}(\theta)$ may take one form or another. But for
certain choices of the prior, the posterior will have the same form as the
prior, with possibly different parameter values. Such a choice is a conjugate
prior. The conjugate prior, giving a closed-form expression for the posterior,
makes Thompson sampling efficient to update. Further notice that every
exponential family has a conjugate prior ([DY1979]).
Now we consider the normal conjugate prior for the linear model (19). Let
$B_{t}=aI_{d}+\sum_{s\in T_{t}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$ and
$\theta_{t}=B_{t}^{-1}\left(\sum_{s\in T_{t}}x_{s,a_{s}}y_{s,a_{s}}\right)$.
Given the linear model (19), suppose $Y|X$ is Gaussian,
$\mathcal{N}(X^{\prime}{\theta},v^{2})$. If the prior for $\theta$ at round
$t$ is given by $\mathcal{N}(\theta_{t},v^{2}B_{t}^{-1})$, then it is easy to
verify that the posterior distribution at round $t+1$ is
$\mathcal{N}(\theta_{t+1},v^{2}B_{t+1}^{-1})$. Then Algorithm 2 becomes
Algorithm 3 DTS-LCB
1: Input: the total rounds $T$, constants $v>0$ and $a\geq 0$, tuning
parameter $\tau$, conjugate prior $\mathcal{N}(\theta_{0},v^{2}B_{0}^{-1})$
with $\theta_{0}=0$ and $B_{0}=aI_{d}$, $f_{0}=0$
2: Initialization: randomly choose $a_{t}\in[K]$ for $t\in[\tau]$
3: Update information: $\mathcal{F}_{\tau}$ according to (4).
4: if $\mathcal{I}_{\tau}=\emptyset$ then
5: $\theta_{1}=\theta_{0}$, $B_{1}=B_{0}$
6: else
7:
$B_{1}=B_{0}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{\tau}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$,
$f_{1}=\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{\tau}}x_{s,a_{s}}y_{s,a_{s}}$,
and $\theta_{1}=B_{1}^{-1}f_{1}$
8: end if
9: for $t=1,2,\cdots,T-\tau$ do
10: Sample Model:
$\hat{\theta}_{t+\tau}\sim\mathcal{N}(\theta_{t},v^{2}B_{t}^{-1})$
11: Select Action: $\bar{a}_{t+\tau}\in\arg\max_{a\in[K]}\left\langle
x_{t+\tau,a},\hat{\theta}_{t+\tau}\right\rangle$
12: Update information: $\mathcal{F}_{t+\tau}$ according to (4). Define
$\mathcal{I}_{t+\tau}:=\mathcal{F}_{t+\tau}-\mathcal{F}_{t+\tau-1}$ as the new
information at round $t+\tau$
13: if $\mathcal{I}_{t+\tau}=\emptyset$ then
14: $B_{t+1}=B_{t}$
15: $\theta_{t+1}=\theta_{t}$
16: else
17:
$B_{t+1}=B_{t}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}x_{s,a_{s}}x_{s,a_{s}}^{\prime}$, $f_{t+1}=f_{t}+\sum_{(s,x_{s},a_{s},y_{s,a_{s}})\in\mathcal{I}_{t+\tau}}x_{s,a_{s}}y_{s,a_{s}}$,
and $\theta_{t+1}=B_{t+1}^{-1}f_{t+1}$
18: end if
19: end for
Note that the update (line 17) is in incremental form, which is practically
efficient.
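A minimal sketch of this incremental update (lines 10 and 13-18 of Algorithm 3) follows, with one illustrative revealed tuple; the function name and sample values are ours.

```python
import numpy as np

def dts_lcb_update(B, f, new_tuples):
    """Fold the newly revealed (x, y) pairs from I_{t+tau} into the
    sufficient statistics (B, f) and return the posterior mean theta."""
    for x, y in new_tuples:
        B = B + np.outer(x, x)
        f = f + y * x
    return B, f, np.linalg.solve(B, f)

d, a, v = 3, 1.0, 0.5
B, f = a * np.eye(d), np.zeros(d)          # conjugate prior N(0, v^2 B^{-1})
rng = np.random.default_rng(2)
B, f, theta = dts_lcb_update(B, f, [(rng.uniform(-1, 1, d), 0.7)])
theta_hat = rng.multivariate_normal(theta, v**2 * np.linalg.inv(B))  # line 10
print(theta_hat)
```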
### 4.2 Regret Bounds
Denote $\pi_{\tau}^{\text{PS}}$ as the posterior sampling policy described in
Algorithm 2 with an exploration period $\tau$. We have the following result.
###### Theorem 2
Assume Assumptions 1-2. There exists a universal constant
$C:=C(C_{1},C_{2},M,\mu,\sigma_{0},\sigma_{G},\,\sigma,\kappa)>0$, such that
if we run exploration with $\tau:=C\left(d+\log(\frac{1}{\delta})\right)$,
$\displaystyle R^{B}_{T}(\pi_{\tau}^{\text{PS}})={O\left(d\log
T\sqrt{T}+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu+M}+\sigma_{G})\sqrt{dT\log\left(T\right)}\right).}$
(20)
For parameter definition, we refer to Table LABEL:tab:parameters in Section
LABEL:app:table.
We follow the steps in [RV2014] to prove the Bayesian regret bound in Theorem
2. The idea is as follows. We first decompose the Bayesian regret and the UCB
regret and build a connection between them. We then provide the Bayesian
regret bound by utilizing a sequence of upper confidence bounds. We defer the
details to Appendix LABEL:proof.
When $\\{D_{s}\\}_{s=1}^{T}$ satisfies some specific assumptions, we are able
to provide tighter Bayesian regret bounds.
###### Corollary 2
Assume Assumptions 1-2. We have the following results:
1. 1.
If there exists a constant $D_{\max}>0$ such that $\mathbb{P}(D_{s}\leq
D_{\max})=1$ for all $s\in[T]$. Then,
$R^{B}_{T}(\pi_{\tau}^{\text{PS}})=O\left(d\log
T\sqrt{T}+{\sqrt{D_{\max}}}\sqrt{dT\log T}\right).$
2. 2.
Assume $D_{1},\cdots,D_{T}$ are iid non-negative random variables with mean
$\mu_{I}$. Then,
$R^{B}_{T}(\pi_{\tau}^{\text{PS}})={O\left(d\log
T\sqrt{T}+\sqrt{\sigma_{G}}\sqrt{Td}(\log(T))^{3/4}+(\sqrt{\mu_{I}}+\sigma_{G})\sqrt{dT\log\left(T\right)}\right).}$
We defer the proof of Corollary 2 to Appendix LABEL:proof. The results in
Theorem 2 and Corollary 2 are comparable to the results in Section 3.
## 5 Extensions: History-dependent Delays
In the previous sections, we analyzed the regret bounds for both DUCB-GLCB
and DTS-GLCB when delays are independent. In practice, such an independence
assumption may not hold, and current delays may depend on historical delays. In
this section, we explore two types of dependency structures for the delays. In
Section 5.1, we discuss Markov delays whose stationary distribution is
near-heavy-tailed. In Section LABEL:sec:random_delay, we discuss delays with
arbitrary dependency structures but under a stronger assumption on the
stationary distribution, namely that it is lighter than sub-Gaussian.
### 5.1 Markov Delays
###### Assumption 3 (Markov Delay)
Let $\\{D_{t}\\}_{t=1}^{T}$ be a stationary Markov chain on the general state
space $\mathcal{X}=\mathbb{N}^{+}$ with invariant distribution $\pi$. Given
$D\sim\pi$ with $\mu_{M}=\mathbb{E}[D]$, we further assume that
$\mathbb{P}(D-\mu_{M}\geq
x)\leq\exp\left(\frac{-x^{1+q}}{2\sigma_{M}^{2}}\right),$
for some $q>0$ and $\sigma_{M}>0$.
Under Assumption 3, the stationary distribution $\pi$ can have the near-heavy-
tail property when $q$ is small.
Recall that $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\\{s+D_{s}\geq t\\}$ is the
number of missing rewards and $G_{t}^{*}=\max_{1\leq s\leq t}G_{s}$ is the
running maximum number of missing rewards. Under Assumption 3, $G_{t}$ and
$G_{t}^{*}$ have the following properties, and again these are the key to
analyzing the regret bounds for both DUCB and DTS.
###### Proposition 3 (Properties of $G_{t}$ and $G_{t}^{\star}$ under Markov
delays)
Assume Assumption 3 and $l_{2}$-spectral gap $1-\lambda\in(0,1]$. Then,
1. 1.
For any $0<\delta<1$ and any $t$ we have, with probability at least
$1-\delta$,
$\displaystyle G_{t}-\mu_{M}\leq
A_{2}(\lambda)\log\left(\frac{1}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{1}{\delta}\right)},$
(21)
where $A_{1}(\lambda)=\frac{1+\lambda}{1-\lambda}$ and
$A_{2}(\lambda)=\frac{1}{3}\mathbb{I}(\lambda=0)+\frac{5}{1-\lambda}\mathbb{I}(\lambda>0)$.
2. 2.
With probability at least $1-\delta$,
$\displaystyle
G_{T}^{*}\leq\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)},$
(22)
where $G_{T}^{*}=\max_{1\leq t\leq T}G_{t}$.
3. 3.
Define $W_{t}=\sum_{s\in T_{t}}X_{s}X_{s}^{\prime}$ where the $X_{t}$ are drawn
i.i.d. from some distribution $\gamma$ with support in the unit ball
$\mathbb{B}_{d}$. Furthermore, let $\Sigma:=\mathbb{E}[X_{t}X_{t}^{\prime}]$
be the second moment matrix, and $B$ and $\delta>0$ be two positive constants.
Then there exist positive, universal constants $C_{1}$ and $C_{2}$ such that
$\lambda_{\min}(W_{t})\geq B$ with probability at least $1-2\delta$, as long
as
$\displaystyle
t\geq\left(\frac{C_{1}\sqrt{d}+C_{2}\sqrt{\log(\frac{1}{\delta})}}{\lambda_{\min}(\Sigma)}\right)^{2}+\frac{2B}{\lambda_{\min}(\Sigma)}+\mu_{M}+A_{2}(\lambda)\log\left(\frac{1}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{1}{\delta}\right)}.$
(23)
Here $1-\lambda$ is the $l_{2}$-spectral gap of the transition probability; we
refer to [JSF2018, Section 2.2] for the formal concepts and the definition of
the $l_{2}$-spectral gap. Proposition 3-1 is proved by utilizing Bernstein’s
inequality for general Markov chains ([JSF2018, Theorem 1.1]), and Proposition
3-2 is proved by applying a union bound.
###### Proof.
Proof of Proposition 3. Recall $G_{t}=\sum_{s=1}^{t-1}\mathbb{I}\\{D_{s}\geq
t-s\\}$. Define $f_{s}(D_{s})=\mathbb{I}\\{D_{s}\geq t-s\\}-p_{s}$ with
$p_{s}=\mathbb{P}(D_{s}\geq t-s)$. Then $\mathbb{E}[f_{s}(D_{s})]=0$,
$\mathbb{V}[f_{s}(D_{s})]=p_{s}(1-p_{s})\leq p_{s}$, and
$\sum_{s=1}^{t-1}\mathbb{V}[f_{s}(D_{s})]\leq\sum_{s=1}^{t-1}p_{s}<\mu_{M}$.
From [JSF2018, Theorem 1.1], we have
$\displaystyle\mathbb{P}\left(\sum_{s=1}^{t-1}f_{s}(D_{s})>x\right)\leq\exp\left(-\frac{x^{2}}{2(A_{1}(\lambda)\mu_{M}+A_{2}(\lambda)x)}\right).$
(24)
Note that the right-hand side in (24) is independent of $t$. Technically
speaking, this is because the sum of the variances
$\sum_{s=1}^{t-1}\mathbb{V}[f_{s}(D_{s})]$ is upper bounded by $\mu_{M}$, which
is independent of $t$. Therefore, Property 1 in Proposition 3 holds for any
$t\geq 1$.
Property 2 holds by the union bound and Property 1,
$\displaystyle\mathbb{P}\left(\max_{1\leq t\leq
T}G_{t}>\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}\right)$
$\displaystyle\leq\sum_{t=1}^{T}\mathbb{P}\left(G_{t}>\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}\right)\leq
T\times\frac{\delta}{T}.$
Therefore, the following holds with probability no smaller than $1-\delta$:
$\max_{1\leq t\leq
T}G_{t}<\mu_{M}+A_{2}(\lambda)\log\left(\frac{T}{\delta}\right)+\sqrt{2A_{1}(\lambda)\mu_{M}\log\left(\frac{T}{\delta}\right)}.$
∎
# Uncovering the Data-Related Limits of Human Reasoning Research:
An Analysis based on Recommender Systems
Nicolas Riesterer, Cognitive Computation Lab, University of Freiburg, email:
<EMAIL_ADDRESS>; Daniel Brand, Cognitive Computation Lab, University
of Freiburg, email: <EMAIL_ADDRESS>; Marco Ragni, Cognitive
Computation Lab, University of Freiburg, email: <EMAIL_ADDRESS>
###### Abstract
Understanding the fundamentals of human reasoning is central to the
development of any system built to closely interact with humans. Cognitive
science pursues the goal of modeling human-like intelligence from a theory-
driven perspective with a strong focus on explainability. Syllogistic
reasoning as one of the core domains of human reasoning research has seen a
surge of computational models being developed over the last years. However,
recent analyses of models’ predictive performances revealed a stagnation in
improvement. We believe that most of the problems encountered in cognitive
science are not due to the specific models that have been developed but can be
traced back to the peculiarities of behavioral data instead.
Therefore, we investigate potential data-related reasons for the problems in
human reasoning research by comparing model performances on human and
artificially generated datasets. In particular, we apply collaborative
filtering recommenders to investigate the adversarial effects of
inconsistencies and noise in data and illustrate the potential for data-driven
methods in a field of research predominantly concerned with gaining high-level
theoretical insight into a domain.
Our work (i) provides insight into the levels of noise to be expected from
human responses in reasoning data, (ii) uncovers evidence for an upper-bound
of performance that is close to being reached urging for an extension of the
modeling task, and (iii) introduces the tools and presents initial results to
pioneer a new paradigm for investigating and modeling reasoning focusing on
predicting responses for individual human reasoners.
## 1 Introduction
The goal of human-level AI is currently approached from two directions:
solving tasks with a performance similar to or even exceeding the one of
humans [3], and understanding human cognition to a level that allows for an
application to real-world problems [18]. While the first direction has seen
major progress mainly fueled by the development of high-performant data-driven
methods over the course of the last years, the second lags behind.
Gaining insight into the processes underlying human cognition is the core
focus of cognitive science, the inter-disciplinary research area at the
junction of artificial intelligence, cognitive psychology, and neuroscience.
Currently, this field of research is focused mainly on the psychological
questions related to cognition. As a consequence, there is a distinct lack of
readily available computational models developed for use in real-world
applications such as human-like assistant systems. In this article we propose
the use of methods from information retrieval to perform data analyses for
investigating the remaining potential in modeling human cognition. In
particular, we apply models from the family of collaborative filtering
recommendation systems (for introduction see [15, 12]) to re-evaluate the
theory-focused state of the art and illustrate the potential of a more data-
driven approach to modeling human reasoning in one of its core domains:
syllogistic reasoning.
Syllogisms are one of the core domains of human reasoning research. They are
concerned with categorical assertions of the form “All A are B; All B are C”
consisting of two premises featuring a quantifier out of “All”, “Some”, “Some
not”, and “No”, and three terms, A, B, and C, two of which are uniquely tied
to their respective premise. Depending on the arrangement of terms, the
syllogism is said to be in one of four figures (_A-B;B-C_, _B-A;C-B_,
_B-A;B-C_, _A-B;C-B_).
When presented with syllogistic problems, the goal is to determine the
logically valid conclusion out of the nine possibilities constructed by
relating the two end terms of the premises via one of the four quantifiers
(eight options), or to respond with “No Valid Conclusion” (NVC) if nothing
else can be concluded. Featuring 64 distinct problems with nine conclusion
options, the domain is well-defined, small enough to gain interpretable
insight, but more detailed than most of its alternatives such as conditional
reasoning (“If it rains, then the street is wet; It rains”).
The domain of human syllogistic reasoning has seen an increase of interest in
modeling over the last years. A meta-analysis compiled a list consisting of
twelve accounts trying to provide explanations for the behavior of humans
which differs drastically from formal logics [5]. However, since cognitive
science follows a strongly theory-driven perspective on modeling, the focus of
interest often rests on analyzing and comparing specific properties of models
instead of their general predictive performance. Recent work identified a lack
of predictive accuracy of cognitive models which raises concerns about their
general expressiveness [13].
In this article, we briefly analyze the predictive accuracy of the state of
the art in modeling human syllogistic reasoning and compare the results with
data-driven models. In particular, we apply collaborative filtering-based
recommender systems which exhibit properties making them promising tools for
cognitive research. We leverage these properties to test structural
assumptions about the syllogistic domain to analyze the data’s information
content and the impact of noise on model performance. Finally, the
implications for modeling reasoning and working with human data in general are
discussed and ideas for improving the cognitive modeling problem are proposed.
## 2 Related Work
Computational modeling has become one of the prime choices for formalizing
knowledge and understanding about a domain of interest. By implementing
intuition and assumptions into computationally tractable models, competing
theories can be evaluated, progress in the understanding of a domain can be
monitored, and finally, real-world applications can be solved [9].
The field of syllogistic reasoning has seen a rise of computational models.
From initially only verbally described abstract theories [16], a recent meta-
analysis compiled a list of twelve theoretical accounts for syllogistic
reasoning, seven of which could be specified via tables relating syllogistic
problems with sets of possible conclusions [5]. While these prediction tables
are still far from fully specified implementations of the theoretical
foundations, they can serve as a starting point for conducting model
evaluation and comparison. The authors of the meta-analysis used the
prediction data in order to determine strengths and weaknesses of the
competing approaches when compared to dichotomized human response data via
classification metrics (hits, misses, and false alarms). They found that while
the approaches all exhibit distinct properties with respect to predictive
precision, no single model could be determined as an overall winner.
A recent analysis focusing on combining individual models’ strengths while
avoiding their weaknesses took the evaluation of models one step further by
avoiding the data aggregation step and focusing on the performance obtained
from querying models for individual response predictions instead [13]. Their
work revealed substantial lack of predictive performance of state-of-the-art
models for syllogistic reasoning. Simultaneously, the authors demonstrated
that data-driven modeling, in the form of a predictor portfolio, could be applied
successfully to increase the predictive accuracy on the task.
Information systems and machine learning as the fields concerned with data-
driven model construction and optimization have seen an astonishing increase
in popularity over the last years. Parts of this success have been due to an
integration of features related to personalities of individuals [10]. Still,
even though they share methods such as clustering, principal component
analyses or mixed models, they have yet to enter the domain of cognitive
research. Collaborative filtering as one of the default methods in the field
of recommender systems has been successfully applied to model human reasoning
before [6]. What makes this kind of memory-based collaborative filtering
approaches promising for cognitive research in general is their high
predictive capabilities paired with the similarity to the core assumption of
cognitive science, that groups of people share similar reasoning patterns.
Since recommendations are extracted from similarities between different
features of the data or the users themselves, they allow both for an analysis
of the data underlying the recommendation process, and an analysis of high-
level theoretical assumptions which can be integrated directly into the
model’s algorithmic structure (e.g., the integration of user personality [10,
7, 4]).
The following sections contrast the models from cognitive science with
collaborative filtering-based approaches in a general benchmarking setting for
syllogistic reasoning based on predictive accuracy.
## 3 Benchmarking Syllogistic Models
Figure 1: Overview over the model evaluation procedure. The benchmark selects
a task which is fed to the model in order to obtain a prediction (black
arrows). Simultaneously, by being based on experimental data, it simulates
querying a human for a response (red arrows). After obtaining the model
prediction, the true response is revealed to the model in an adaption step.
The true (human) and model conclusions are collected and ultimately evaluated
in terms of predictive accuracy.
To gain an overview over the state-of-the-art’s performance in the prediction
task, we performed a benchmark analysis using data obtained from an online
experiment conducted on Amazon Mechanical Turk consisting of $139$ reasoners
who responded to all $64$ syllogisms. Evaluations were computed relying on
leave-one-out crossvalidation, i.e., by testing one reasoner and supplying the
remaining $138$ as training data.
The model evaluation procedure is inspired by a live prediction scenario where
model predictions are retrieved simultaneously with the human reasoner selecting
a conclusion. This is illustrated by Figure 1. In particular, our benchmark
simulates this experiment by passing the tasks to a model generating
predictions (black arrows). After a prediction is obtained, the model is
supplied with the true response obtained from the human reasoner (red arrows).
This allows models to perform an adaptation to an individual’s reasoning
processes. Predictions and true responses are collected and finally compared
to compute the predictive accuracy as the average number of hits.
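To make the procedure concrete, the following is a minimal Python sketch of the leave-one-out loop; the `fit`/`predict`/`adapt` interface is a hypothetical stand-in, not an API from the literature.

```python
import numpy as np

def evaluate(model, train_data, test_reasoner):
    """Leave-one-out evaluation of one reasoner with the
    predict-then-adapt loop sketched in Figure 1."""
    model.fit(train_data)                      # e.g., the remaining 138 reasoners
    hits = []
    for task, true_response in test_reasoner:  # all 64 syllogisms
        prediction = model.predict(task)       # predict before seeing the answer
        model.adapt(task, true_response)       # then reveal the human response
        hits.append(prediction == true_response)
    return np.mean(hits)                       # accuracy = average number of hits
```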
We included the cognitive models (matching, atmosphere, probability heuristics
model, PHM; mental models theory, MMT; PSYCOP, conversion, verbal models)
supplied with the meta-analysis on syllogistic reasoning by extracting the
prediction tables [5]. Additionally, we included two baseline models, _Random_
and _MFA_. Random represents a lower bound of predictive performance defined
by the strategy that always picks a random response out of the nine options.
MFA denotes the most-frequent answer strategy which generates predictions by
responding with the conclusion most frequently occurring in the training data.
Finally, we included two variants of memory-based collaborative filtering. The
user-based variant (UBCF) generates its prediction based on the responses of
other users weighted by the similarity computed as the number of matching
responses. The item-based variant (IBCF) compiles an item x item matrix
$\mathbf{M}$ of corresponding responses (i.e., who responded with $x$ to
syllogism $A$ also responded with $y$ to $B$) and a user vector $\mathbf{u}$
consisting of the user’s previous responses. The prediction is generated by
selecting the highest-rated response for a syllogism from the result of the
matrix-vector multiplication $\mathbf{M}\times\mathbf{u}$.
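As an illustration, here is a minimal sketch of the item-based prediction step (the binary response encoding and variable names are our assumptions): items are (syllogism, response) pairs, and the scores $\mathbf{M}\times\mathbf{u}$ are restricted to the nine conclusion options of the queried task.

```python
import numpy as np

def ibcf_predict(M, u, option_idx):
    """Item-based collaborative filtering prediction.

    M          : (n_items x n_items) co-occurrence counts, where an item
                 is a (syllogism, response) pair
    u          : binary vector marking the user's previous responses
    option_idx : indices of the nine conclusion options of the queried task
    """
    option_idx = np.asarray(option_idx)
    scores = M @ u                      # matrix-vector multiplication M x u
    return option_idx[np.argmax(scores[option_idx])]  # highest-rated option
```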
Figure 2: Accuracies of models for human syllogistic reasoning. The plot
includes cognitive models based on prediction tables reported by a recent
meta-analysis by Khemlani & Johnson-Laird (2012; Probability Heuristics Model,
PHM; Mental Models Theory, MMT; Matching, Atmosphere, PSYCOP, Conversion,
VerbalModels), baseline models (Most Frequent Answer, MFA; Random), as well as
user-based collaborative filtering (UBCF) and item-based collaborative
filtering (IBCF).
Figure 2 depicts the result of the benchmark analysis. The image highlights
the difference between cognitive models and the recommenders. This is not too
surprising since most cognitive models were not introduced with predictive
performance in mind. They were originally based on some statistical effect
(e.g., illicit conversion, a bias towards misinterpreting the direction of the
input premises [1]) or a high-level cognitive theory (e.g., PSYCOP which
assumes that reasoning is the result of interactions between different mental
rules [14]) and are analyzed with respect to their qualities in reproducing
aggregate effects of data. Still, the gap between cognitive models and data-
driven approaches calls for a re-thinking of the goals of cognitive science.
If the high-level insight cannot be integrated into successful models, their
analysis is of limited use for advancing the understanding of human cognition.
When observing the plot, special emphasis should be placed on MFA, the
baseline model responding with the most-frequent answer of the training
dataset. In terms of data-driven approaches, the MFA represents an upper bound
of performance for models which do not take inter-individual differences into
consideration. Since the cognitive models we considered for our analysis lack
computational mechansisms for handling differences between reasoners, they are
not expected to score higher than the $45\%$ achieved by MFA. In general,
models can only hope to score higher if they rely on an active adaption to
information about an individual’s reasoning processes such as previous
responses or other personality traits known to influence cognition such as
working memory capacity [17].
Being defined on an explicit database of information, collaborative filtering
is an ideal tool for data analysis and modeling. They allow researchers to
directly incorporate knowledge about the domain into the recommendation
process and thereby to directly evaluate the value of findings in rigorous
modeling scenarios. However, since this transformation of abstract findings is
out of scope for this article and remains a challenge for future research, we
do not focus on proposing an optimal recommender. We rather intend to
highlight the method’s potential for future research in the domain by
illustrating the levels of performance standard domain-agnostic
implementations can achieve.
Our benchmark shows that even domain-agnostic recommenders outperform
cognitive models. Still, they do not manage to significantly surpass MFA. This
could mean (i) that these models fail to recognize the reasoning strategies
underlying the data, or (ii) that human reasoning is too irregular, i.e., too
prone to uncontrollable noise for the approaches to succeed. In the following
section we analyze artificially generated data in order to gain further
information about the reasons behind the limited predictive performance of
syllogistic models.
## 4 Simulation Analysis
A core assumption of cognitive science is that reasoning is the result of
different processes [2]. Depending on the individual state of the reasoner
(e.g., previous experience or concentration), thorough inferences based on the
rules underlying formal logics can be conducted or simple heuristic rules can
be applied to reach a conclusion. Consequently, when assessing reasoning data,
it is usually assumed that the data at hand is the result of multiple
interleaved strategies which need to be disentangled in order to allow for an
interpretable analysis.
### 4.1 Entropy Analysis
High information content in data is essential for the success of data-driven
methods. If the data consists mostly of random effects with little structure,
patterns cannot be recognized to base future predictions on.
A common measure of information is the Shannon entropy $S$:
$S=-\sum_{i}p_{i}\log_{2}p_{i}$
Entropy can be understood as a measure of unpredictability of a state defined
via the probabilities $p_{i}$. In the case of syllogisms, entropy has
previously been applied to quantify the difficulty of the 64 problems [5].
Higher entropy results from a more uniform spread of probability mass over the
nine conclusion options and thus serves as an indicator of a more difficult
task.
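Computed from the empirical response distribution of a single task, this amounts to a few lines of Python (a minimal sketch; the string encoding of conclusions is an assumption):

```python
import numpy as np

def task_entropy(responses):
    """Shannon entropy of the empirical conclusion distribution of one
    syllogistic task; `responses` is a list of conclusion labels
    (e.g., "Aac", "NVC") given by the participants."""
    _, counts = np.unique(responses, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))
```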
Figure 3: Relationship between syllogistic problems of varying entropy and
model performances. Dotted lines represent interpolations between the data
points.
Figure 3 depicts the entropies of syllogistic problems with corresponding
model performances. It shows a distinct gap in performance between the
recommenders (IBCF, UBCF) and the remaining models. For low entropies, the
recommenders are able to leverage the information encoded in the data
resulting in high predictive accuracies. For higher entropies they are unable
to maintain their initial distance to the cognitive models which are much more
stable overall.
Entropy in reasoning data can originate from (i) inconsistencies in the
response behavior of individual human reasoners or (ii) interactions between
independent reasoning strategies. The former point is a general issue of
psychological and cognitive research since human participants are prone to
lose attention due to boredom or fatigue. As a result, inconsistent and even
conflicting data of single individuals can emerge [11]. Especially for
collaborative filtering-based models this introduces substantial problems
since users might not even be useful predictors for themselves. The latter
point is a core challenge of cognitive science. Since reasoners differ with
respect to their levels of education and experience with the task [8],
recorded datasets are likely to be the result of a large number of individual
strategies. For modeling purposes, the implications of both points differ
greatly. Since inconsistencies due to lack of attention lead to behavior
similar to guessing, it is unlikely for models to capture these effects by
relying on behavioral data alone. Interactions between different strategies,
on the other hand, are much more likely to be disentangled given additional
insight into the domains and inter-individual differences between reasoners.
Unfortunately, though, with the limited features currently contained in
reasoning datasets, i.e., the responses, it is impossible to safely attribute
the entropy of the data to either point. In the following sections, we
therefore focus on collaborative filtering to shed light on the general
capabilities of data-driven models in trying to uncover additional information
about the problems of the domain.
### 4.2 Strategy Simulation
Even though data-driven recommenders are able to achieve higher accuracies
when compared to cognitive models, they are still far from perfectly
predicting an individual reasoner. To investigate the remaining potential in
the syllogistic domain, we need to gain an understanding of potential issues
with the data.
This second analysis considers artificial data with controlled levels of
noise. Four of the cognitive models from the literature (Atmosphere, Matching,
First-Order Logic, Conversion) were implemented and assigned to one of the
four figures, respectively. By permuting the model-figure assignment and
generating the corresponding response data we obtain $256$ artificial
reasoners featuring interleaving strategies. The informativeness of this data
is reduced by additionally introducing varying levels of random noise obtained
from replacing conclusions with a random choice out of the nine conclusion
options. With increasing levels of noise, the data should be less accessible
for data-driven models.
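The generation procedure can be sketched as follows; the strategy functions are trivial placeholders for the four implemented models, the conclusion labels follow the common syllogistic shorthand, and enumerating assignments with repetition yields the $4^{4}=256$ artificial reasoners.

```python
import itertools
import random

# Trivial placeholder strategies standing in for the four implemented models;
# the real functions map a syllogistic task to a conclusion.
def atmosphere(task): return "Aac"
def matching(task): return "Iac"
def first_order_logic(task): return "NVC"
def conversion(task): return "Eac"

MODELS = [atmosphere, matching, first_order_logic, conversion]
CONCLUSIONS = ["Aac", "Aca", "Iac", "Ica", "Eac", "Eca", "Oac", "Oca", "NVC"]

def generate_reasoners(tasks, figures, noise_level, rng=random.Random(0)):
    """Enumerate all 4^4 = 256 model-figure assignments; for each, answer
    every task with the strategy assigned to its figure, replacing a
    noise_level fraction of responses with a random conclusion."""
    reasoners = []
    for assignment in itertools.product(MODELS, repeat=4):
        responses = {}
        for task, figure in zip(tasks, figures):
            response = assignment[figure - 1](task)
            if rng.random() < noise_level:  # inject noise
                response = rng.choice(CONCLUSIONS)
            responses[task] = response
        reasoners.append(responses)
    return reasoners
```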
Figure 4: Strategy reconstruction performance of models based on artificial
reasoning data with different levels of noise added by replacing a certain
proportion of responses with a random choice from the nine conclusion options.
The left and right images contrast performance with the raw noise proportions
and entropies, respectively.
Figure 4 depicts the performance of the baseline and data-driven models on the
artificial data. The left image plots the different noise levels against
predictive accuracies. It shows that a decrease in response consistency has
drastic effects on the models’ capabilities to correctly predict responses due
to the lack of information contained in the training data. The nearly linear
relationship between the levels of noise and performance suggests that the
models are stable in performance given the amount of reconstructable
information. Consequently, they allow for a data-analytic assessment of
“noise” in the data they are supplied with. In the case of syllogistic
reasoning this means that close to $50\%$ of the data would effectively be
indistinguishable from random noise. Explanations for this could be numerous
ranging from too little data with respect to the number of possible reasoning
strategies, over a lack of descriptive features, to guessing-like behavior,
i.e. strategy-less decision-making on the side of study participants. The
right image of Figure 4 presents a different perspective on the impact of
noisy data by computing corresponding entropies. Again, it shows that entropy
is tightly linked to predictive accuracies.
By comparison with Figure 3, some interesting conclusions can be drawn. In
general, IBCF scores lower on the artificial data than on human data. Since
IBCF is based on item-item dependencies, it is unable to directly exploit
structural patterns of the data. It bases its predictions solely on
information about “reasoners responding x to problem A also respond with y to
problem B”. Higher performance on the human data therefore suggests the
existence of preferential clusters of reasoners which exhibit similar response
behavior. Since the artificial data does not feature such groups but puts more
focus on the structural information by evenly distributing the permutations of
model-figure combinations, IBCF is at a disadvantage. While we cannot formally
attribute the entropy observed in the human data to inconsistencies due to
random noise, or varying overlap between distinct reasoning strategies, the
properties of IBCF suggest the existence of key responses or groups exhibiting
similar response patterns in the data which allow the method to perform some
form of clustering to boost its accuracy. This can be interpreted as soft
evidence for the second hypothesis, that the current problem with modeling
syllogistic reasoning stems from the fact that features allowing for a
disentanglement of strategies are scarce.
A possibility to overcome these problems for the short-term progress of the
field is by explicitly integrating assumptions about the structural properties
of the data into models. If accurate enough, they should be able to boost
models’ capabilities to disentangle the overlapping strategies and allow for a
general improvement of performance. Additionally, the converse is true: if
high-level theoretical assumptions lead to a significant improvement of the
predictor, the theory is on the right track.
### 4.3 Potential for Better Predictions
It appears as if a lack of information preventing the identification of
strategies limits the potential of modeling in the domain of syllogistic
reasoning. In general, there are two options to tackle this problem: improving
models and extending the problem domain.
There exist many possibilities to increase the predictive capabilities of
models. On the one hand, additional features known for influencing reasoning
patterns such as education [8] or working memory [17] can be integrated into
the data to boost a model’s ability to identify patterns. On the other hand,
the model can be supplied with background information about the problem
domain. Since cognitive science has a history of in-depth data analysis there
is a lot of potential for integrating theoretical findings into models. We
propose the use of collaborative filtering as an accessible tool for cognitive
scientists to transform abstract insight into testable models.
Figure 5 illustrates the potential of recommenders for insight-driven research
by contrasting item-based collaborative filtering (IBCF) and user-based
collaborative filtering (UBCF) with variants of them tuned to the structure of
the artificially generated data, i.e., the observation that syllogisms of the
same figure rely on the same inference mechanism. The plot highlights that
this additional information about the data is able to push both IBCF and UBCF
far beyond their initial performance. Especially for IBCF, the explicit
integration of the structural foundation of the data lifts its performance to
the same levels of UBCF. The gap between the domain-agnostic and tuned
variants remains clearly visible even for high levels of noise. Even though
explicit information about the structure of human data can only be
approximated from theoretical insight into the domain, this shows that
recommenders would be a useful tool for assessing the quality of assumptions.
The second option to improve modeling of human reasoning is to extend the
domain in question. If information about individuals is accumulated even
across the borders of reasoning domains, models have more data to recognize
descriptive patterns in. Additionally, it is possible to include distinctive
background information about individuals such as personality traits. This
approach has proven to boost performance in recommendation scenarios before
and is likely to generalize to the reasoning domain [4, 7]. However, since the
extension of the domain is out of scope for this work, we leave this idea open
for future research.
For research on human reasoning this final analysis shows that there exist
data-driven methods which benefit from the integration of the kind of
information that is usually uncovered in cognitive science and psychology. By
integrating correlative insight into these kinds of models, the value of the
findings can be directly assessed in benchmarking evaluations. Paired with
more informative problem domains obtained from a unification of multiple
domains of reasoning, or the addition of personality features about individual
reasoners, data-driven and theory-driven research can collaborate to overcome
the distance between the current state of the art and the goal of human-level
AI.
## 5 Conclusion
Cognitive models for human syllogistic reasoning achieve unsatisfying
accuracies when applied in a prediction setting. While the reasons for this
could be numerous, it is interesting to see that data-driven recommenders
based on collaborative filtering do not perform substantially better on an
absolute scale. This raises concerns about the data foundation of reasoning
research which is usually composed solely of reasoning problems along with the
corresponding human responses.
Our results obtained from comparison with artificially generated data suggest
that data-driven models are unable to identify and successfully exploit
patterns in the structure of human reasoning datasets when, in theory, they
should be able to. The two most likely explanations for this are noise in form
of inconsistencies in the response behavior of humans, or a lack of
distinctive features preventing data-driven approaches to identify the
patterns required for successful predictions.
Figure 5: Comparison of item-based collaborative filtering (IBCF) and user-
based collaborative filtering (UBCF) variants on artificially generated
reasoning data. Fit-versions denote implementations where structural
properties of the artificial data were actively integrated.
In order to advance the predictive performance to levels which are relevant
for applications in the era of human-level AI, reasoning research needs to
address its current shortcomings. Potential solutions include the improvement
of models by a better integration of domain-specific insight as well as an
active consideration of inter-individual differences, and the extension of the
task for example by including other domains of reasoning, recording more
comprehensive datasets, and leaving behind the current focus on data
aggregation.
For integrating insight into models, we propose collaborative filtering
recommenders as a general-purpose research method. On a technical level, they
are easy to implement and understand, and outperform the current state of the
art even in their domain-agnostic form. By integrating additional information
about the domain (even if just on the level of correlations by weighting the
dependencies between different features of the data), they allow for a
transformation of abstract hypotheses into testable assumptions for modeling.
Consequently, recommenders exhibit useful properties with respect to
comprehensibility, especially in contrast to other methods from machine
learning such as neural networks. As an example, they can naturally be applied
to clustering contexts where stereotypical users are sought after.
Generally, we see a need for an increased focus on predictive accuracies for
individual reasoners to allow more comprehensive benchmarking, to allow for a
more accessible interpretation of the results, and ultimately to enable the
models for real-world application. To facilitate this shift in perspective for
other researchers, we released the tools driving our predictive analysis as a
general benchmarking
framework (https://github.com/CognitiveComputationLab/ccobra). Only if the
different disciplines of cognitive science find together to compete in
modeling on unified informative domains using expressive and standardized
metrics such as predictive performance, will human reasoning enter a level of
progress relevant for human-level AI applications.
This paper was supported by DFG grants RA 1934/3-1, RA 1934/2-1 and RA
1934/4-1 to MR.
## References
* [1] Loren J Chapman and Jean P Chapman, ‘Atmosphere effect re-examined.’, Journal of Experimental Psychology, 58(3), 220, (1959).
* [2] Jonathan St.B.T. Evans, ‘Dual-process theories of reasoning: Contemporary issues and developmental applications’, Developmental Review, 31(2-3), 86–102, (2011).
* [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, ‘Delving deep into rectifiers: Surpassing human-level performance on imagenet classification’, in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pp. 1026–1034, Washington, DC, USA, (2015). IEEE Computer Society.
* [4] Rong Hu and Pearl Pu, ‘Enhancing collaborative filtering systems with personality information’, in Proceedings of the Fifth ACM Conference on Recommender Systems, RecSys ’11, pp. 197–204, New York, NY, USA, (2011). ACM.
* [5] Sangeet Khemlani and P. N. Johnson-Laird, ‘Theories of the syllogism: A meta-analysis.’, Psychological Bulletin, 138(3), 427–457, (2012).
* [6] Ilir Kola and Marco Ragni, ‘Predict the individual reasoner: A new approach’, in Lecture Notes in Computer Science, 401–414, Springer International Publishing, (2018).
* [7] Orestis Nalmpantis and Christos Tjortjis, ‘The 50/50 recommender: A method incorporating personality into movie recommender systems’, in Engineering Applications of Neural Networks, 498–507, Springer International Publishing, (2017).
* [8] M. F. Nehrke, ‘Age, sex, and educational differences in syllogistic reasoning’, Journal of Gerontology, 27(4), 466–470, (1972).
* [9] Allen Newell, ‘You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium’, in Visual Information Processing, 283–308, Elsevier, (1973).
* [10] David M. Pennock, Eric Horvitz, Steve Lawrence, and C. Lee Giles, ‘Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach’, in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, UAI’00, pp. 473–480, San Francisco, CA, USA, (2000). Morgan Kaufmann Publishers Inc.
* [11] Marco Ragni, Nicolas Riesterer, Sangeet Khemlani, and Phil Johnson-Laird, ‘Individuals become more logical without feedback’, in Proceedings of the 40th Annual Conference of the Cognitive Science Society, eds., Tim Rogers, Marina Rau, Jerry Zhu, and Chuck Kalish, pp. 1584–1589, Austin, TX, (2018). Cognitive Science Society.
* [12] Francesco Ricci, Lior Rokach, and Bracha Shapira, ‘Introduction to recommender systems handbook’, in Recommender Systems Handbook, 1–35, Springer US, (2010).
* [13] Nicolas Riesterer, Daniel Brand, and Marco Ragni, ‘The predictive power of heuristic portfolios in human syllogistic reasoning’, in Proceedings of the 41st German Conference on AI, eds., Frank Trollmann and Anni-Yasmin Turhan, pp. 415–421, Berlin, Germany, (2018). Springer.
* [14] Lance J Rips, The psychology of proof: Deductive reasoning in human thinking, Mit Press, 1994.
* [15] Badrul Munir Sarwar, George Karypis, Joseph A Konstan, John Riedl, et al., ‘Item-based collaborative filtering recommendation algorithms.’, WWW, 1, 285–295, (2001).
* [16] Ron Sun, ‘Theoretical status of computational cognitive modeling’, Cognitive Systems Research, 10(2), 124–140, (2009).
* [17] Heinz-Martin Süß, Klaus Oberauer, Werner W Wittmann, Oliver Wilhelm, and Ralf Schulze, ‘Working-memory capacity explains reasoning ability—and a little bit more’, Intelligence, 30(3), 261–288, (2002).
* [18] Ingo J. Timm, Steffen Staab, Michael Siebers, Claudia Schon, Ute Schmid, Kai Sauerwald, Lukas Reuter, Marco Ragni, Claudia Niederée, Heiko Maus, Gabriele Kern-Isberner, Christian Jilek, Paulina Friemann, Thomas Eiter, Andreas Dengel, Hannah Dames, Tanja Bock, Jan Ole Berndt, and Christoph Beierle, ‘Intentional forgetting in artificial intelligence systems: Perspectives and challenges’, in Lecture Notes in Computer Science, 357–365, Springer International Publishing, (2018).
# In Situ Network and Application Performance Measurement on Android Devices
and the Imperfections
Mohammad A. Hoque, University of Helsinki, Finland, <EMAIL_ADDRESS>;
Ashwin Rao, University of Helsinki, Finland, <EMAIL_ADDRESS>; and
Sasu Tarkoma, University of Helsinki, Finland, <EMAIL_ADDRESS>
###### Abstract.
Understanding network and application performance is essential for debugging,
improving user experience, and performance comparison. Meanwhile, modern
mobile systems are optimized for energy-efficient computation and
communications that may limit the performance of network and applications. In
recent years, several tools have emerged that analyze network performance of
mobile applications in situ with the help of the VPN service. There is a
limited understanding of how these measurement tools and system optimizations
affect the network and application performance. In this study, we first
demonstrate that mobile systems employ energy-aware system hardware tuning,
which affects application performance and network throughput. We next show
that the VPN-based application performance measurement tools, such as Lumen,
PrivacyGuard, and Video Optimizer, can yield ambiguous network performance
measurements and degrade the application performance. Our findings suggest
that sound application and network performance measurement on Android devices
requires a good understanding of the device, networks, measurement tools, and
applications.
## 1\. Introduction
In situ Internet traffic measurement tools, such as Video Optimizer (VoP)
(Qian:2011:PRU, ), Lumen (Razaghpanah:2017, ), PrivacyGuard (PvG)
(Song:2015:PVP, ), and MopEye (Wu:2017:MOM:3154690, ), are essential for
debugging, improving user experience, and performance comparison of mobile
applications. The alternative is rooting the device and using _tcpdump_ for
offline analysis.
The above traffic measurement tools shed light on the network and application
performance. However, they may also contribute to imperfect and ambiguous
results, as we might measure something which we do not intend to measure.
Studying the sources of these imperfections is vital to calibrate the
measurement procedures and to improve the tools. At present, there is a
limited understanding of the impact of in situ mobile Internet traffic
measurement tools and how device hardware optimization affects the network and
application performance.
In this work, we quantify the performance impact of system hardware
optimization and also evaluate the impact of VoP, Lumen, and PvG on network
performance metrics, and application traffic. We focus on these three
applications, as they exemplify state-of-the-art traffic measurement and
analysis tools. These tools have similar designs and use the Android VPN
interface. However, they do not route the traffic to a remote VPN server. VoP
(vop, ), formerly known as ARO (Qian:2011:PRU, ), is a popular open-source
tool for collecting traffic from mobile devices without rooting the device,
and it also enables various diagnosis and optimization of applications,
network, CPU and GPU (vop, ) through offline analysis. In contrast, Lumen and
PvG are two online traffic analysis tool helping users to find privacy leaking
incidents. Lumen also provides insights on the TLS usage of mobile
applications (Razaghpanah:2017, ), the CDN usage by mobile applications
(8485872, ), and the DNS (Almeida2017DissectingDS, ). MopEye is another
similar application. It is currently unavailable in the Google Play Store and
also in popular source code hosting websites, such as GitHub.
This article investigates the imperfections in traffic measurements on Android
devices due to system optimization and in situ traffic measurement tools. We
demonstrate that sound Internet traffic measurement requires a thorough
understanding of the device, tools, and applications. Note that we do not aim
to establish whether a particular tool is the best or worst. Our key
observations are as follows.
_(1)_ Mobile systems employ CPU and WiFi transmit power optimization triggered
by the battery level. We observe that the CPU optimization techniques, such as
CPU hot-plugging and dynamic frequency scaling, mostly affect network I/O,
while WiFi optimization, i.e., dynamic modulation scheme, affects the uplink
throughput. These optimizations deteriorate application performance and
network throughput. Charging the device, when the battery level is below 20%,
does not improve the network performance. Therefore, one must be aware of the
adaptive performance characteristics of mobile devices while conducting
experiments (Section 2).
_(2)_ Although it is expected that VPN-based tools would provide degraded
network performance as the packets spend more time on the device
(Qian:2011:PRU, ; Razaghpanah:2017, ; Song:2015:PVP, ), we may estimate
ambiguous latency and throughput in the presence of the VPN-based tools. For
example, in the presence of PvG, SpeedCheck (speedcheck, ) estimates on-device
latency instead of the network latency. Similarly, VoP doubles the uplink
throughput estimates. The sources of these ambiguities are the implementation
of the measurement tools, as we present in Section 3. VoP also delays the
outgoing traffic, and PvG delays the incoming traffic. Therefore, to avoid
such pitfalls in network and application performance measurements, one must
have a good understanding of these applications and tools.
_(3)_ Furthermore, all these VPN-based applications fail to apply the
application intended optimization through socket options and thus affect the
application performance, as we demonstrate for the outgoing TCP traffic in
Section 3.
Finally, we summarize the sources of the above ambiguous or imperfect
measurement results (Section 4).
Figure 1. Impact of battery level. We consider two battery level (L) ranges,
L$\leq$20% & L$>$20%, on Nexus 6 over WiFi (W) and LTE (4G).
## 2\. Impact of System Optimization
Android devices may come with advanced CPU governors that save energy by hot
plugging and unplugging of CPU cores, as supported by modern Linux kernels
(cpuhotplug, ). Apart from workload characteristics, the devices may also
consider the status of the battery to employ the CPU cores. We look into the
impact of such off-the-shelf system optimization on network latency and
throughput on Nexus 6.
During our measurements with Nexus 6, we have found that two of the four cores
remain offline when the battery discharges to below 20%, and the active cores
operate at the maximum frequency of 1.73 GHz. When the battery level is above
20%, all the four cores become active, and their maximum operating frequency
increases to 2.65 GHz. Therefore, the battery level also prompts dynamic
frequency scaling.
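Both effects can be observed directly from sysfs; the following minimal Python sketch (run in a shell on the device, e.g., via `adb shell`, and assuming the standard Linux cpufreq layout) reports which cores are online and their maximum scaling frequency:

```python
import glob

def cpu_state():
    """Report which cores are online and their maximum scaling frequency,
    read from sysfs on the device."""
    state = {}
    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
        core = path.rsplit("/", 1)[-1]
        try:
            with open(path + "/online") as f:
                up = f.read().strip() == "1"
        except FileNotFoundError:  # cpu0 usually cannot be unplugged
            up = True
        freq_ghz = None
        if up:
            with open(path + "/cpufreq/scaling_max_freq") as f:
                freq_ghz = int(f.read()) / 1e6  # sysfs reports kHz
        state[core] = (up, freq_ghz)
    return state
```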
We performed the following measurements to quantify the impact of this
optimization on the network traffic characteristics. Specifically, we used
SpeedCheck (speedcheck, ) (paid) and measured the latency and throughput on
Nexus 6 (Android 7.0) when the battery levels were above 20% and below 20%. We
performed the measurements using both WiFi and LTE. Each of the above four
scenarios was repeated ten times, and the results are presented in Figure 1.
Figure 1 shows that while hot unplugging of CPU cores on Android has a
negligible impact on the latency, its impact on throughput is significant.
The availability of additional CPU cores, when the battery level is above 20%,
improves the I/O performance across the two access technologies, WiFi and LTE.
Furthermore, WiFi uplink throughput improves almost four times when the
battery level is above 20% compared to when it is below 20%. Closer
inspection of the MAC-layer frames revealed that the WiFi radio of the Nexus 6
switches from _802.11ac_ to _802.11g_ mode when the battery level drops below
20%. These performance-limiting optimizations also affected the device
responsiveness for various applications, such as browsing and streaming. This
also implies that modern Android devices adapt the physical layer mechanisms
similar to iOS devices
(https://www.forbes.com/sites/ewanspence/2017/12/20/apple-iphone-kill-switch-ios-degrade-cripple-performance-battery/)
to avoid unexpected shutdown of the devices (8720247, ) and to improve battery
life.
Figure 2. Impact of battery level on LTE modulation scheme. These snapshots
are from a single uplink and downlink throughput measurement.
Figure 1 also depicts that the downloading speed of SpeedCheck over LTE
doubles when the battery level is higher than 20%. Similar to WiFi, we further
looked into the physical layer modulation scheme used by the mobile device in
the LTE network. We rooted Nexus 6 and installed Network Signal Guru
(netsiguru, ) that samples LTE physical layer parameters after every 500 ms.
Figure 2 shows that the modulation schemes were always 16QAM (Quadrature
Amplitude Modulation) and 64QAM for uplink and downlink, respectively, during
the throughput measurements. The other attributes in the figure are discussed
in section 6. Nexus 6 employs three optimization techniques, triggered by the
battery level, which affect the network and application performance. Charging
the device when the battery level is below 20% improves neither the throughput
on WiFi or LTE nor the application performance. The optimizations
may vary from device to device.
## 3\. Impact of Measurement Tools
Figure 3. The system components of VoP, Lumen, and PvG for Android. The newly
created sockets are protected so that the forwarder-generated packets do not
loop back into the VPN interface.
Figure 4. Impact on LTE network latency and throughput. We used SpeedCheck and
SpeedTest on Nexus 6 in the presence of Lumen (Lum.), VoP, PvG, and Baseline,
i.e., without any localhost VPN.
### 3.1. In-situ Traffic Measurement Tools
The forwarder and the packet inspector are two components of the VPN-based in
situ traffic measurement tools exemplified by VoP, Lumen, and PvG, as shown in
Figure 3.
The primary role of the forwarder is to forward (i) the packets received from
Android applications to the Internet, and (ii) the packets received from the
Internet to the Android applications. The forwarder also copies those packets
to the inspection queue to isolate traffic analysis from the path of the
packet.
The forwarder essentially creates a new TCP socket on seeing a TCP SYN packet
from the VPN interface. The forwarder in Lumen and VoP establish a socket
connection with the remote server using connect() API before sending SYN-ACK
to the application. PvG, on the other hand, establishes socket connection
after replying with SYN-ACK. Later, we demonstrate how these implementations
affect network performance measurements. The forwarder creates a new UDP
socket when it detects a new UDP flow. These newly created sockets are
protected so that packets from the newly created flows do not loop the _tun_
interface (vpnprot, ).
A packet inspector is responsible for inspecting the packets in its queue. In
the case of Lumen and PvG, the packet inspector performs the privacy analysis
on the packets, whereas the VoP’s inspector sends packets to the desktop
application.
In the later sections, we quantify the impact of VoP, Lumen, and PvG on (a)
the network performance, and (b) the network characteristics of applications.
### 3.2. Addressing Biases
We took the following steps to ensure that the measurement results presented
in the upcoming sections are not the artifacts of misconfigured tools and the
measurement setup.
(i) Battery level. For the upcoming measurements, we ensured that the devices
had more than 80% charge. This is because mobile devices might restrict
resources based on the battery level, as we have shown in section 2.
(ii) Throughput throttling. VoP also offers to throttle downlink and uplink
traffic. All the measurements in this paper were conducted without any
throughput throttling.
(iii) Software Auto Update. During the experiments, application and the auto
system updates were disabled on mobile devices.
(iv) Advertisements. We purchased the ad-free subscriptions of SpeedCheck
and SpeedTest to avoid any biases caused by the free versions.
(a) Baseline
(b) VoP
(c) Lumen
Figure 5. Inter-packets gaps of the VoIP applications. Baseline refers to the
measurements without any localhost VPN.
### 3.3. Impact on Network Performance
This section explores the network performance using SpeedCheck (speedcheck, )
and SpeedTest (speedtest, ). These two applications work as traffic load
generators, both without any VPN-based tools and in the presence of the listed
VPN applications. The scenario without any VPN gives the baseline performance.
SpeedCheck connects to its servers in Germany, and SpeedTest connects to
servers in the LTE operator network within a few kilometers of the mobile
device. The measurements were repeated ten times.
_(1) Latency._ Figure 4 (left) compares the network latency reported by two
applications in the presence of the VPN-based tools. From the _tcpdump_
traces, we have identified that SpeedTest uses 10-12 requests/responses of a few
bytes (less than 100 bytes) over a TCP connection to estimate the latency.
SpeedTest estimates the baseline latency of 16-18 ms. This is expected, as the
server was located at the operator’s network. It experiences 3-5 ms additional
latency in the presence of Lumen and PvG, whereas VoP increases the latency by
three-fold. This is due to the energy optimization strategy adopted by VoP,
which we discuss in the upcoming sections.
In contrast, SpeedCheck reports the median baseline network latency of about
45 ms. From the corresponding _tcpdump_ traces, we have identified 10 empty
and consecutive TCP flows (without any data exchange) for each latency
measurements. These flows suggest that SpeedCheck uses TCP connect() API to
measure the latency. Both VoP and Lumen increase the median latency
significantly. We speculate that these two take more time to set up new TCP
flows. However, SpeedCheck underestimates the latency in the presence of PvG,
which is the consequence of the sending SYN-ACK by the PvG forwarder before
the connection is established with the remote server, as discussed in Section
3.1.
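The connect()-based method is straightforward to reproduce; here is a minimal Python sketch (host, port, and repetition count are our assumptions) that times the TCP three-way handshake over fresh connections:

```python
import socket
import time

def connect_latency(host, port, n=10):
    """Estimate latency as the TCP three-way-handshake time over n fresh
    connections, mirroring the empty flows seen in the SpeedCheck traces."""
    samples_ms = []
    for _ in range(n):
        t0 = time.monotonic()
        s = socket.create_connection((host, port), timeout=5)
        samples_ms.append((time.monotonic() - t0) * 1000)  # handshake time, ms
        s.close()
    return samples_ms
```

With a localhost forwarder that replies SYN-ACK before its own upstream connect() completes, as PvG does, such a measurement returns as soon as the on-device forwarder answers, which explains the underestimated latency.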
_(2) Uplink Throughput._ Figure 4 (center) depicts that SpeedTest estimates
higher uplink baseline throughput, as the server is in the LTE operator
network. It uses multiple parallel TCP connections to estimate the throughput.
Both Lumen and PvG reduce the throughput of SpeedTest/SpeedCheck by half
compared to the baseline measurements. However, Lumen severely affects the
uplink throughput measurements of the SpeedCheck. It uses a single TCP
connection and sends a large amount of data. From an exception in the debug
log, we characterized that Lumen’s forwarder cannot handle such volume.
Interestingly, VoP doubles the uplink throughput of both applications.
_(3) Downlink Throughput._ Figure 4 (right) demonstrates that SpeedTest
measures downlink throughput similar to the baseline in the presence of the VPN
tools. Lumen yields the highest throughput measurements with SpeedCheck.
However, VoP and PvG degrade the throughput of SpeedCheck significantly.
The typical network measurement tools, such as SpeedCheck and SpeedTest, can
have different methods to estimate the latency and throughput. While their
baseline estimates are reasonable, their estimates vary according to the
implementation of the VPN tools.
### 3.4. Impact on Realtime Application (UDP)
In this section, we investigate the traffic from three realtime applications;
IMO, WhatsApp, and Skype. The versions of the apps used are presented in Table
1. While these applications fall into the broad category of messaging
applications, their varying traffic characteristics help us to study the
impact of the design of VoP and Lumen. We could not use these applications in
the presence of PvG in several trials. We used a rooted Nexus 6 (Android 7.0)
and a non-rooted LG G5 (Android 8.0) for these measurements.
These apps exchange bi-directional encrypted UDP traffic. The conversations
were two minutes long over LTE, and we ran 3 iterations in each of the
following scenarios. We investigate their inter-packet gaps and bitrates. As
the baseline, we initiated conversations between Nexus 6 and LG G5 using these
apps without VoP or Lumen and captured traffic using _tcpdump_ on Nexus 6. We
then repeated the experiments with VoP running on Nexus 6 and collected
traffic from VoP. Finally, we used Lumen. Since Lumen does not store traffic,
we captured traffic with _tcpdump_ on Nexus 6.
Application | Baseline (in/out) | VoP (in/out) | Lumen (in/out)
---|---|---|---
WhatsApp (v2.18) | 21/24 kbps | 23/16 kbps | 20/22 kbps
IMO (v9.8) | 14/15 kbps | 14/13 kbps | 13/14 kbps
Skype (v8.41) | 60/50 kbps | 55/44 kbps | 48/44 kbps
Table 1. Average bitrates of UDP traffic flows from VoIP applications.
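The inter-packet gaps discussed below can be extracted from such captures with a few lines of Python; a minimal sketch using scapy (the capture file name and the flow's source port are our assumptions):

```python
import numpy as np
from scapy.all import UDP, rdpcap

def inter_packet_gaps_ms(pcap_file, sport):
    """Inter-packet gaps (in ms) of the outgoing UDP packets of one flow,
    identified here by its source port."""
    times = [float(p.time) for p in rdpcap(pcap_file)
             if UDP in p and p[UDP].sport == sport]
    return np.diff(times) * 1000.0
```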
_Baseline Results._ Figure 5(a) shows that IMO has the highest inter-packet
gaps, and Skype packets have the smallest gaps. These apps also have distinct
data rates with Skype having the highest data rate, as shown in Table 1.
_Impact of VoP._ Compared to the baseline packet-gaps in Figure 5(a), VoP
significantly alters the inter-packet gaps of outgoing UDP packets, as shown
in Figure 5(b). Most of the outgoing packets across all applications have an
inter-packet gap of about 100 ms. In contrast, the incoming packets had
distributions similar to the baseline. This delay is similar to the latency
measurements with VoP discussed earlier. Table 1 shows that the outgoing data
rates of Skype and Whatsapp reduce significantly, which we speculate to be a
consequence of the delays introduced by VoP.
_Impact of Lumen._ Figure 5(c) shows that with Lumen the inter-packet gaps of
the outgoing packets are similar to the baseline measurements. Besides, the
applications experience similar bitrates to the baseline and when using Lumen
as shown in Table 1.
### 3.5. Impact on Realtime Application (TCP)
We used Periscope (v1.24) to study the impact of VoP and Lumen on realtime TCP
flows. Periscope’s live broadcast did not work in the presence of PvG.
We broadcast with Periscope over LTE across three different scenarios and
captured traffic on Nexus 6 using _tcpdump_ for the baseline and Lumen scenarios.
Similar to our observations for UDP traffic, we observed a 100 ms inter-packet
gap, as shown in Figure 6 (left). From the distribution of packet sizes in
Figure 6 (right) (collected by VoP), we notice that more than 70% of the packets
captured by VoP are larger than 1500 bytes. From the traffic traces, we have
identified that VoP creates packets of up to 65549 bytes for Periscope and for
the uplink throughput measurement flows from SpeedCheck.
Figure 6. Properties of uplink Periscope TCP flows.
From the source code on GitHub, we have identified that the VoP forwarder
uses a maximum segment size of 65535 bytes for TCP flows. It accumulates
traffic from the client application, and the segments reach the maximum size
very quickly with very high bitrate traffic. This also explains how VoP yields
higher uplink throughput measurements, as presented in Section 3.3.
Nevertheless, these massive TCP segments are eventually fragmented once
written to the socket. Lumen has a negligible impact on the packets.
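The accumulation behaviour can be sketched as follows; this is our own
reconstruction of the mechanism, not VoP's code, and MAX_SEGMENT and
flush_segment() are hypothetical names.

#include <stddef.h>
#include <string.h>

#define MAX_SEGMENT 65535   /* maximum segment size used by the forwarder */

/* Hypothetical stub: write one accumulated segment to the protected socket. */
static void flush_segment(const unsigned char *seg, size_t len) { (void)seg; (void)len; }

static unsigned char segment[MAX_SEGMENT];
static size_t seg_len = 0;

/* Sketch: client payload is appended to a pending segment, which is flushed
 * only when full. With high-bitrate uplink traffic the buffer fills quickly,
 * so most segments approach the maximum size and are fragmented only later,
 * once written to the socket. */
void on_client_payload(const unsigned char *payload, size_t len) {
    while (len > 0) {
        size_t room = MAX_SEGMENT - seg_len;
        size_t take = len < room ? len : room;
        memcpy(segment + seg_len, payload, take);
        seg_len += take;
        payload += take;
        len -= take;
        if (seg_len == MAX_SEGMENT) {
            flush_segment(segment, seg_len);
            seg_len = 0;
        }
    }
}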
### 3.6. Analysis with Socket Options
In this section, we investigate the performance of the VPN-based tools in
processing flows that set the TCP_NODELAY socket option (which disables
Nagle's algorithm) on Nexus 6. We specifically look into this option, as it
has a direct impact on the local delay and thus affects the performance of web
browsing and other realtime applications on mobile devices, such as live
broadcasting and crypto/stock exchange applications. We developed a separate
traffic generating application that creates two blocking TCP sockets, one with
Nagle's algorithm enabled and one with it disabled (see Listings 1 and 2 in
Section 7). The application sends 1300 bytes of data over LTE every 20 ms to a
remote server at the university campus. The application also receives data
from the remote server every 20 ms in separate TCP sessions.
Figure 7. Distributions of the outgoing packet gaps observed at the network
interface. Figure 8. Distributions of incoming packet gaps observed at the
network interface and application.
_Performance of VPN-based Tools._ Figure 7(a) compares the outgoing inter-
packet gaps of the application flows with Nagle's algorithm enabled and
disabled. When Nagle's algorithm is enabled, more than 70% of the packets sent
from the application have more than 20 ms delays at the network layer. In the
presence of VPN applications, disabling Nagle's algorithm in the application
does not improve the delay compared to the baseline (Figure 7(b)).
Interestingly, VoP's packet gap reduces, as it receives packets from the local
TCP/IP stack without delay. From the traffic traces, we have identified that
these VPN-based tools do not disable Nagle's algorithm while establishing
socket connections.
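Propagating the option is straightforward once the forwarder decides a flow
should not be delayed. The sketch below shows one way to open the outbound
socket with TCP_NODELAY set before connect(); it is a minimal illustration,
not code from any of the tools, since a forwarder cannot read the option off
the application's own socket and would have to infer it per flow.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: create the forwarder's outbound TCP socket with Nagle's algorithm
 * disabled. Setting the option before any data is written ensures small
 * writes are not held back waiting for acknowledgements. */
int make_nodelay_socket(const struct sockaddr_in *server) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    int val = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
    if (connect(fd, (const struct sockaddr *)server, sizeof(*server)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}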
Figure 8 shows the performance of the VPN applications for incoming traffic.
The application receives data at almost the same gaps observed at the network
interface. However, in the presence of PvG, the application receives 40% of
the packets late. The packet-gap patterns suggest that PvG uses a fixed
interval to read the VPN interface, similar to VoP.
The investigations in this section reveal that the VPN-based tools do not set
the TCP/IP socket options as intended by the user applications. Consequently,
they can misguide developers and degrade application performance. For example,
SpeedTest disables Nagle's algorithm by setting the TCP_NODELAY socket option
so that it can send tiny packets to measure network latency. Findings in this
section explain the higher latency experienced by SpeedTest in Section 3.3.
## 4\. Sources of Imperfection
Mobile system optimizations affect downlink and uplink throughput, whereas the
VPN-based tools mostly affect the uplink throughput and latency, i.e., they
mostly affect the outgoing traffic. In this section, we summarize the sources
of these measurement imperfections.
_Energy-Aware Optimization._ Energy-aware system optimization can affect the
network performance by limiting the network I/O and by applying adaptive
modulation schemes. Therefore, it is wise to perform such measurements when
the battery is fully charged. VoP, Lumen, and PvG rely on different sleeping
techniques to optimize their energy usage. The additional latency introduced
by VoP on outgoing packets is an artifact of using a fixed sleep interval of
100 ms in the main VPN thread. This delay further contributes to large
outgoing packets for higher-bitrate uplink traffic, and to the energy consumed
in fragmenting them. PvG also introduces a fixed delay for the incoming
traffic. These delays affect not only the quality of the measurements but also
the quality of experience when using other user applications.
_Forwarder._ In situ VPN-based measurement tools are middleboxes that tap the
packets using the VPN interface. These applications, therefore, implement a
forwarder, which primarily consists of three threads: the main VPN thread and
two socket reader/writer threads. The reader/writer threads continuously
iterate through a list of live sockets, which contributes to the delays. The
forwarder also implements a flow state machine for each flow and
constructs/de-constructs the packets. The implementation of the forwarder
affects the latency and throughput measurements. We have also shown that the
characteristics of the newly created flows and their packet headers might not
be the same as those generated by the applications. The reason is that the
socket options must be set before the connection establishment.
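The structure described above can be summarised in a schematic C sketch; the
flow_t record and the thread body below are generic illustrations of such a
forwarder, not the implementation of VoP, Lumen or PvG.

#include <pthread.h>
#include <stddef.h>

/* Hypothetical per-flow record kept by the forwarder's state machine. */
typedef struct flow {
    int sockfd;          /* protected socket towards the real server */
    int state;           /* e.g. connecting, established, closing */
    struct flow *next;
} flow_t;

static flow_t *live_flows;   /* list of live sockets */
static pthread_mutex_t flows_mu = PTHREAD_MUTEX_INITIALIZER;

/* Sketch of a reader thread: it repeatedly walks the whole list of live
 * sockets. The time spent reaching a given socket in each pass is one of
 * the delay sources discussed above. */
static void *socket_reader(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&flows_mu);
        for (flow_t *f = live_flows; f != NULL; f = f->next) {
            /* non-blocking read from f->sockfd, re-build an IP packet and
             * write it to the VPN interface (omitted) */
        }
        pthread_mutex_unlock(&flows_mu);
    }
    return NULL;
}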
## 5\. Conclusions
In this preliminary work, we investigated the challenges in measuring network
performance in the presence of system optimizations and state-of-the-art
application performance measurement tools on Android devices. System
optimizations limit the performance of the hardware components and thus the
applications, which in turn leads to confusing measurement results. It can be
argued that VoP is mostly for developers, and therefore, incurring higher
delays should not be a problem. Similarly, frequent massive content uploading
is rare, and 3-4 ms of additional latency is acceptable. Nevertheless, these
imperfections can significantly affect the outcome of traffic measurement
studies. An acceptable latency also depends on the application type. A user
can benefit significantly from a 1 ms latency improvement in financial and
other realtime applications. Therefore, there is still room for
improvement in such tools. For instance, VoP and PvG can follow Lumen’s
adaptive sleeping algorithm for reducing the gaps in the outgoing and incoming
packets, respectively. All of them can adopt some default socket options to
mitigate the performance issues with the outgoing TCP traffic. Along with the
measurement tools, it is necessary to understand the presence of various
system optimization techniques which may affect network performance.
## References
* [1] SPEEDCHECK - Speed Test. https://play.google.com/store/apps/details?id=org.speedspot.speedanalytics. [Online; accessed 7-August-2019].
* [2] Speedtest by Ookla. https://play.google.com/store/apps/details?id=org.zwanoo.android.speedtest.gworld. [Online; accessed 11-August-2019].
* [3] VPN - Android Developers. https://developer.android.com/guide/topics/connectivity/vpn. [Online; accessed 23-January-2019].
* [4] AT&T Video Optimizer. https://developer.att.com/video-optimizer, 2019. [Online; accessed 7-August-2019].
* [5] Network Signal Guru. https://play.google.com/store/apps/details?id=com.qtrun.QuickTest, 2019.
* [6] Mario Almeida, Alessandro Finamore, Diego Perino, Narseo Vallina-Rodriguez, and Matteo Varvello. Dissecting DNS Stakeholders in Mobile Networks. In Proceedings of CoNEXT ’17, pages 28–34. ACM, 2017.
* [7] Mohammad Kawser, Nafiz Imtiaz Bin Hamid, Md Nayeemul Hasan, M Shah Alam, and M Musfiqur Rahman. Downlink SNR to CQI mapping for different multiple antenna techniques in LTE. International Journal of Information and Electronics Engineering, 2:756–760, 09 2012.
* [8] The kernel development community. CPU hotplug in the Kernel. https://www.kernel.org/doc/html/latest/core-api/cpu_hotplug.html, 2019. [Online; accessed 11-September-2019].
* [9] F. Michelinakis, H. Doroud, A. Razaghpanah, A. Lutu, N. Vallina-Rodriguez, P. Gill, and J. Widmer. The Cloud that Runs the Mobile Internet: A Measurement Study of Mobile Cloud Services. In IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, pages 1619–1627, April 2018.
* [10] Feng Qian, Zhaoguang Wang, Alexandre Gerber, Zhuoqing Mao, Subhabrata Sen, and Oliver Spatscheck. Profiling Resource Usage for Mobile Applications: A Cross-layer Approach. In Proceedings of MobiSys’11, pages 321–334. ACM, 2011.
* [11] Abbas Razaghpanah, Arian Akhavan Niaki, Narseo Vallina-Rodriguez, Srikanth Sundaresan, Johanna Amann, and Phillipa Gill. Studying TLS Usage in Android Apps. In Proceedings of CoNEXT ’17, pages 350–362. ACM, 2017.
* [12] Yihang Song and Urs Hengartner. Privacyguard: A vpn-based platform to detect information leakage on android devices. In Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, SPSM ’15, pages 15–26, New York, NY, USA, 2015. ACM.
* [13] Y. Sun, L. Kong, H. Abbas Khan, and M. G. Pecht. Li-ion battery reliability – a case study of the apple iphone®. IEEE Access, 7:71131–71141, 2019.
* [14] Daoyuan Wu, Rocky K. C. Chang, Weichao Li, Eric K. T. Cheng, and Debin Gao. MopEye: Opportunistic Monitoring of Per-app Mobile Network Performance. In Proceedings of USENIX ATC ’17, pages 445–457. USENIX Association, 2017.
* [15] Jim Zyren. Overview of the 3GPP long term evolution physical layer. 01 2007.
## 6\. LTE Radio Resource Allocation
In LTE networks, the Physical Resource Block (RB) is the unit of radio
resource allocation. With 5 MHz bandwidth, there are 25 RBs. An RB contains 12
sub-carriers in the frequency domain. Each RB can have either $7\times 12$ or
$14\times 12$ resource elements (REs), where 7 and 14 are the numbers of
symbols in the time domain over 0.5 and 1 ms, respectively, using the normal
cyclic prefix (CP) [15].
The number of bits an RB can carry depends on the channel quality indicator
(CQI) notification from the UE. Essentially, each CQI maps to a modulation and
coding scheme according to Table 2. The CQI indicates not only the channel
quality but also whether the device is capable of receiving data with a
particular modulation and coding scheme.
The equations to compute the number of bits an RB can hold for a certain CQI,
and the number of RBs required by an eNB to transmit a packet, can be
expressed as follows.
(1) $RB_{bits}=RE_{bits}\times n\times t_{s}=C_{CQI}\times M_{bits}\times
n\times t_{s}$
In equation (1), $M_{bits}$ is the number of bits per symbol of the modulation
scheme, $n$ is the number of usable REs, and $t_{s}$ is the duration of the
time slot (0.5 or 1 ms).
(2) $RB_{n}=(PacketSize_{bits}+RLC_{bits}+MAC_{bits})/RE_{bits}$
CQI | Modulation | Real Bits ($N_{m}$) | $C_{CQI}=N_{m}/1024$
---|---|---|---
1 | QPSK | 78 | 0.0762
2 | QPSK | 120 | 0.1171
3 | QPSK | 193 | 0.1884
4 | QPSK | 308 | 0.3
5 | QPSK | 449 | 0.4384
6 | QPSK | 602 | 0.5879
7 | 16QAM | 378 | 0.3691
8 | 16QAM | 490 | 0.4785
9 | 16QAM | 616 | 0.6015
10 | 64QAM | 466 | 0.4550
11 | 64QAM | 567 | 0.5537
12 | 64QAM | 666 | 0.6503
13 | 64QAM | 772 | 0.7539
14 | 64QAM | 873 | 0.8525
15 | 64QAM | 948 | 0.9258
Table 2. Channel Quality Index (CQI), Modulation Scheme, and Coding Rate
mapping [7].
Figure 9 shows the usage of the modulation scheme and the number of resource
blocks for a large file download on Nexus 6 with CQI 11. LTE supports QPSK,
16QAM, and 64QAM, i.e., each RE can carry a maximum of 2, 4, or 6 bits,
respectively. Let us consider an RB duration of 1 ms ($t_{s}$) with 168 REs,
of which typically 120 REs ($n$) are available for carrying data traffic. For
CQI 11, the modulation scheme is 64QAM and the effective code rate is
$C_{CQI}=N_{m}/1024=0.55$. Therefore, an RE can hold only
$RE_{bits}=C_{CQI}\times M_{bits}=0.55\times 6\approx 3.32$ bits, and an RB
can hold $n\times RE_{bits}\approx 398$ bits.
Figure 9. LTE throughput and other network parameter observed on a mobile
device using Network Signaling Guru [5].
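The computation can be reproduced directly from the CQI 11 row of Table 2. The
short C program below is a worked check, with $n=120$ usable REs assumed as in
the text.

#include <stdio.h>

int main(void) {
    /* CQI 11 row of Table 2: 64QAM (6 bits per symbol), N_m = 567 */
    double M_bits  = 6.0;
    double C_cqi   = 567.0 / 1024.0;   /* effective code rate, ~0.5537 */
    int    n       = 120;              /* usable REs per RB (assumed) */

    double RE_bits = C_cqi * M_bits;   /* ~3.32 bits per RE */
    double RB_bits = n * RE_bits;      /* ~398 bits per RB */

    printf("RE_bits = %.2f, RB_bits = %.2f\n", RE_bits, RB_bits);
    return 0;
}

Running it prints RE_bits = 3.32 and RB_bits = 398.67, matching the roughly
398 bits per RB quoted above.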
The number of RBs required for a packet in a downlink can be computed using
equation 2 by considering the additional bits for RLC and MAC headers.
However, the network may not allocate the RBs according to the CQI. It may
have other complex resource scheduling algorithms, as it has to deal with
various types of traffic and users. The number of uplink RBs may also vary.
## 7\. Application
int val = 1;
// Disable Nagle's algorithm; must be done before data is written
setsockopt(sockfd, SOL_TCP, TCP_NODELAY, &val, sizeof(val));
if (connect(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0)
    LOGE("[***Server Connect Error***]");
for (int i = 0; i < 5000; i++) {
    usleep(20000);                  /* send every 20 ms */
    char *data = rand_string(1300); /* 1300-byte random payload */
    gettimeofday(&tv, NULL);
    times[i] = (tv.tv_sec * 1000000LL + tv.tv_usec) / 1000;
    n = write(sockfd, data, 1300);
    if (n < 0) {
        LOGE("Error sendto %s", strerror(errno));
        break;
    }
}
Listing 1: TCP sending code with/without Nagle's algorithm.
int BUFSIZE = 4096;
if (connect(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0)
    LOGE("[***Server Connect Error***]");
while (true) {
    bzero(buf, BUFSIZE);
    n = read(sockfd, buf, BUFSIZE); /* blocking read */
    if (n > 0) {
        gettimeofday(&tvo, NULL);
        times[i] = (tvo.tv_sec * 1000000LL + tvo.tv_usec) / 1000;
        i = i + 1;
    } else
        break;
    if (i == 5000)
        break;
}
Listing 2: TCP receiving code.
|
2024-09-04T02:54:59.119667 | 2020-03-11T11:40:55 | 2003.05227 | {
"authors": "Oscar Rodriguez de Rivera, Antonio L\\'opez-Qu\\'ilez, Marta Blangiardo\n and Martyna Wasilewska",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26159",
"submitter": "Oscar Rodriguez De Rivera Ortega",
"url": "https://arxiv.org/abs/2003.05227"
} | arxiv-papers |
A spatio-temporal model to understand forest fires causality in Europe
Oscar Rodriguez de Rivera1,*, Antonio López-Quílez2, Marta Blangiardo3,
Martyna Wasilewska1
1 Statistical Ecology @ Kent, National Centre for Statistical Ecology. School
of Mathematics, Statistics and Actuarial Science, University of Kent.
Canterbury, UK.
2 Dept. Estadística i Investigació Operativa; Universitat de València.
Valencia, Spain.
3 Faculty of Medicine, School of Public Health; Imperial College London.
London, UK.
*<EMAIL_ADDRESS>
###### Abstract
Forest fires are the outcome of a complex interaction between environmental
factors, topography and socioeconomic factors (Bedia _et al_ , 2014).
Therefore, understanding their causality and achieving early prediction are
crucial for controlling such phenomena and saving lives.
The aim of this study is to build a spatio-temporal model to understand the
causality of forest fires in Europe, at NUTS2 level between 2012 and 2016,
using environmental and socioeconomic variables. We have considered a disease
mapping approach, commonly used in small area studies to assess the spatial
pattern and to identify areas characterised by unusually high or low relative
risk.
_Keywords_ Hierarchical Bayesian models $\cdot$ disease mapping $\cdot$
integrated nested Laplace approximation $\cdot$ forest fires $\cdot$ causality
$\cdot$ spatio-temporal model
## 1 Introduction
Nowadays, wildfires have become one of the most significant disturbances
worldwide (Flannigan _et al_ , 2009; Pechony and Shindell, 2010; Pausas _et
al_ , 2012; Cardil _et al_ , 2014; Boer _et al_ , 2017; Molina _et al_ ,
2018). The combination of a longer drought period and a higher woody biomass
and flammability of dominant species creates an environment conducive to fire
spread (Piñol _et al_ , 1998; Millán _et al_ , 2005). Furthermore, vegetation
pattern changes with the abandonment of traditional rural activities plays a
direct role in the increase of fire severity and ecological and economic fire
impacts (Flannigan _et al_ , 2009; Chuvieco _et al_ , 2014). Fire behavior
exceeds most frequently firefighting capabilities and fire agencies have
trouble in suppressing flames while providing safety for both firefighters and
citizens (Werth _et al_ , 2016).
Many areas across the world have seen a rise in extreme fires in recent years.
Those include South America and southern and western Europe. They also include
unexpected places above the Arctic Circle, like the fires in Sweden during the
summer of 2018 (de Groot _et al_ , 2013; European Commission, 2019).
Extreme fire events, which are also referred to as “megafires”, are becoming
frequent on a global scale; recent fires in Portugal, Greece, the Amazon and
other areas confirm this fact. There is no complete agreement on the term
“megafire”, which often refers to catastrophic fire events in terms of human
casualties, economic losses or both (San-Miguel-Ayanz _et al_ , 2013b).
Climate change will reduce fuel moisture levels from present values around the
Mediterranean region and the region will become drier, increasing the weather-
driven danger of forest fires. The countries in highest danger are Spain,
Portugal, Turkey, Greece, parts of central and southern Italy, Mediterranean
France and the coastal region of the Balkans, according to recent research of
the Joint Research Centre (JRC) (De Rigo _et al_ , 2017).
Most international reports on biomass burning recognize the importance of the
human factors in fire occurrence (Food, 2007). Although fire is a natural
factor in many ecosystems, human activities play a critical role in altering
natural fire conditions, either by increasing ignitions (Leone _et al_ ,
2003), or by suppressing natural fires (Johnson _et al_ , 2001; Keeley _et al_
, 1999). Both factors are contradictory, and act mainly through the mixture of
fire policy practices, on one hand, and land uses and demographic changes on
the other. Most developed countries have maintained for several decades a fire
suppression policy, which has led to almost total fire exclusion. The long
term impact of that policy has implied an alteration of traditional fire
regimes, commonly by increasing average burn severity and size, as a result of
higher fuel accumulation (Pyne, 2001), although other authors are more
critical about the real implication of fire suppression policy (Johnson _et
al_ , 2001), or they tend to put more emphasis on the impact of climate
changes (Westerling _et al_ , 2006). For developing countries, fire is still
the most common tool for land clearing, and therefore it is strongly
associated to deforestation, especially in Tropical areas (Cochrane _et al_ ,
1999; DeFries _et al_ , 2002). The traditional use of fire in shifting
cultivation has turned in the last decades to permanent land use change, in
favour of cropland and grasslands. In addition, fire is a traditional tool to
manage permanent grasslands, which are burned annually to favour new shoots
and improve palatability (Hobbs _et al_ , 1991; Chuvieco _et al_ , 2010).
Global and local implications of changing natural fire circumstances have been
widely recognized, with major effects on air quality, greenhouse gas
emissions, soil degradation and vegetation succession (Goetz _et al_ , 2006;
Parisien _et al_ , 2006; Randerson _et al_ , 2005). The role of human
activities in changing those conditions has not been assessed at global scale.
Several local studies have identified factors that are commonly associated to
human fire ignition, such as distance to roads, forest-agricultural or forest-
urban interfaces, land use management, and social conflicts (unemployment,
rural poverty, hunting disputes, etc.) (Leone _et al_ , 2003; Martínez _et al_ ,
2009; Vega-García _et al_ , 1995). On the other hand, humans not only cause
fires, but they suffer their consequences as well. Fire is recognized as a
major natural hazard (Food, 2007), which imply severe losses of human lives,
properties and other socio-economic values (Radeloff _et al_ , 2005; Reisen
and Brown, 2006).
Fire is no longer a significant part of the traditional systems of life;
however, it remains strongly tied to human activity (Leone _et al_ , 2009).
Knowledge of the causes of forest fires and the main driving factors of
ignition is an indispensable step toward effective fire prevention (Ganteaume
_et al_ , 2013). It is widely recognized that current fire regimes are
changing as a result of environmental and climatic changes (Pausas and Keeley
2009) with increased fire frequency in several areas in the Mediterranean
Region of Europe (Rodrigues _et al_ , 2013). In Mediterranean-type ecosystems,
several studies have indicated that these changes are mainly driven by fire
suppression policies (Minnich, 1983), climate (Pausas _et al_ , 2012), and
human activities (Bal _et al_ , 2011). Human drivers mostly have a temporal
dimension, which is why an historical/temporal perspective is often required
(Zumbrunnen _et al_ , 2011; Carmona _et al_ , 2012). In Mediterranean Europe,
increases in the number of fires have been detected in some countries,
including Portugal and Spain (San-Miguel-Ayanz _et al_ , 2013a; Rodrigues _et
al_ , 2013). In addition, a recent work by Turco _et al_ (2016) suggests huge
spatial and temporal variability in fire frequency trends especially in the
case of Spain, where increasing and decreasing trends were detected depending
on the analysis period and scale. This increase in wildfire frequency and
variability, with its associated risks to the environment and society (Moreno
_et al_ , 2011, 2014), calls for a better understanding of the processes that
control wildfire activity (Massada _et al_ , 2013). In recent decades, major
efforts have been made to determine the influence of climate change on natural
hazards, and to develop models and tools to properly characterize and quantify
changes in climatic patterns. While physical processes involved in ignition
and combustion are theoretically simple, understanding the relative influence
of human factors in determining wildfire is an ongoing task (Mann _et al_ ,
2016). It is clear that human-caused fires that occur repeatedly in a given
geographical area are not simply reducible to individual personal factors, and
thus subject to pure chance. They are usually the result of a spatial pattern,
whose origin is in the interaction of environmental and socioeconomic
conditions (Koutsias _et al_ , 2015). This is particularly true in human-
dominated landscapes such as Spain, where anthropogenic ignitions surpass
natural ignitions, and humans interact to a large degree with the landscape,
changing its flammability, and act as fire initiators or suppressors. In such
cases, human influence may cause sudden changes in fire frequency, intensity,
and burned area size (Pezzatti _et al_ , 2013).
Fire is an integral component of Mediterranean ecosystems since at least the
Miocene (Dubar _et al_ , 1995). Although humans have used fires in the region
for tens of thousands of years (Goren-Inbar _et al_ , 2004), it is only in the
last 10,000 years or so that man has significantly influenced the fire regime
(Daniau
_et al_ , 2010). The use of fire as a management tool has persisted until
these days, although the second half of the past century saw a major change
and a regime shift due to abandonment of many unproductive lands (Moreno _et
al_ , 1988; Pausas _et al_ , 2012). Although fire still is a traditional
management tool in some rural areas for control of vegetation and enhancement
of pastures for cattle feed, most fires these days are no longer related to
the management of the land (San-Miguel-Ayanz _et al_ , 2012, 2013a, 2013b).
The European Mediterranean region is a highly populated area where nearly 200
Million people live in just 5 European Union countries, Portugal, Spain,
France, Italy and Greece. Population density varies but remains very high with
about 2500 inhabitants/km2 in the French Riviera (with peaks of up to 750,000
tourists per day during the summer) (Corteau, 2007) versus an average of 111
inhabitants/km2 in the region. The region is characterized by an extensive
wildland urban interface (WUI). Large urban areas have expanded into the
neighboring wildland areas, where expensive households are built. The WUI has
been further increased by the construction of second holiday homes in the
natural environment. Fire prone areas along the Mediterranean coast have been
extensively built up, reducing in some cases the availability of fuels, but
increasing largely the probability of fire ignition by human causes (Ganteaume
_et al_ , 2013). In other areas of the same region, abandonment of the rural
environment has led to low utilization of forests, which are generally of
limited productivity, and the subsequent accumulation of fuel loads (San-
Miguel-Ayanz _et al_ , 2012, 2013a, 2013b; Moreira _et al_ , 2011). The
combination of the above factors converts the European Mediterranean region into
a high fire risk area (Sebastián-López _et al_ , 2008), especially during the
summer months when low precipitations and very high temperatures favor fire
ignition and spread.
About 65,000 fires take place every year in the European region, burning, on
average, around half a million ha of forest areas (European Commission, 2011).
Approximately 85% of the total burnt area occurs in the EU Mediterranean
region (San-Miguel-Ayanz _et al_ , 2010). Although fires ignite and spread
under favorable conditions of fuel availability and low moisture conditions,
ignition is generally caused by human activities. Over 95% of the fires in
Europe are due to human causes. An analysis of fire causes shows that the most
common cause of fires is “agricultural practices”, followed by “negligence”
and “arson” (Vilar Del Hoyo _et al_ , 2009; Reus Dolz _et al_ , 2003).
Most fires in the region are small, as a fire exclusion (extinction) policy
prevails in Europe. Fires are thus extinguished as soon as possible, and only
a small percentage escapes the initial fire attack and the subsequent
firefighting operations. An enhanced international collaboration for
firefighting exists among countries in the European Mediterranean region. This
facilitates the provision of additional firefighting means to those in a given
country from the neighbouring countries in case of large fire events. The
trend of large fires, those larger than 500 ha, has been quite stable in
recent decades (San-Miguel-Ayanz _et al_ , 2010). However, among these large
fires, several fire episodes caused catastrophic damages and the loss of human
lives (San-Miguel-Ayanz _et al_ , 2013a).
A first step is to identify all the factors linked to human activity,
establishing their relative importance in space and time (Martínez _et al_ ,
2009, 2013). According to Moreno _et al_ (2014), the number of fires over the
past 50 years in Spain has increased, driven by climate and land-use changes.
However, this tendency has been recently reversed due to fire prevention and
suppression policies. This highlights the influence of changes in the role of
human activities as some of the major driving forces. For instance, changes in
population density patterns—both rural and urban—and traditional activities
have been linked to an increase in intentional fires. In this sense, several
works have previously investigated the influence of human driving factors of
wildfires in Spain. These works have explored in detail a wide range of human
variables (Martínez _et al_ , 2009; Chuvieco _et al_ , 2010) and methods.
Specifically, Generalized Linear Models (Vilar Del Hoyo _et al_ , 2009;
Martínez _et al_ , 2009; Moreno _et al_ , 2014), machine learning methods (Lee
_et al_ , 1996; Rodrigues and de la Riva, 2014), and more spatial-explicit
models like Geographically Weighted Regression (Martínez _et al_ , 2013;
Rodrigues _et al_ , 2014) have previously been employed. However, all these
approaches could be considered as stationary from a temporal point of view,
since they are based on ‘static’ fire data information summarized or
aggregated for a given time span. However, the influence of human drivers
cannot be expected to be stationary (Rodrigues _et al_ , 2016). Zumbrunnen _et
al_ (2011) stress the importance of dealing with the temporal dimension of
human drivers of wildfires. Therefore, exploring temporal changes in
socioeconomic or anthropogenic drivers of wildfire will enhance our
understanding of both current and future patterns of fire ignition, and thus
help improve suppression and prevention policies (Rodrigues _et al_ , 2016).
Disease risk mapping analyses can help to better understand the spatial
variation of the disease, and allow the identification of important public
health determinants (Moraga, 2018). Spatio-temporal disease mapping models are
a popular tool to describe the pattern of disease counts and to identify
regions with unusual incidence levels, time trends or both (Schrödle and
Held, 2011). This class of models is usually formulated within a hierarchical
Bayesian framework with a latent Gaussian model (Besag _et al_ , 1991). Several
proposals have been made including a parametric (Bernardinelli _et al_ , 2014)
and nonparametric (Knorr-Held, 2000; Lagazio _et al_ , 2003; Schmid and Held,
2004) formulation of the time trend and the respective space-time
interactions.
Areal disease data often arise when disease outcomes observed at point-level
locations are aggregated over subareas of the study region due to constraints
such as population confidentiality. Producing disease risk estimates at area
level is complicated by the fact that raw rates can be very unstable in areas
with small populations and for rare events, and also by the presence of
spatial autocorrelation that may exist due to spatially correlated risk
factors (Leroux _et al_ , 2000). Thus, generalised linear mixed models are
often used to obtain disease risk estimates, since they make it possible to
improve local estimates by accommodating spatial correlation and the effects
of explanatory variables. Bayesian inference in these models can be performed
using the Integrated Nested Laplace Approximation (INLA) approach (Rue _et
al_ , 2009), a computational alternative to the commonly used Markov chain
Monte Carlo (MCMC) methods; INLA allows running fast approximate Bayesian
inference in latent Gaussian models.
INLA is implemented in the R-INLA package for the R programming language,
which provides an easy way to fit models via the inla() function; this works
in a similar way to other model-fitting functions, such as glm() or gam()
(Palmi-Perales _et al_ , 2019).
Statistical reporting in the European Union is done according to the
Nomenclature of Units for Territorial Statistics (NUTS) system. The NUTS is a
five-level hierarchical classification based on three regional levels and two
local levels. Each member state is divided into a number of NUTS-1 regions,
which in turn are divided into a number of NUTS-2 regions and so on. There are
78 NUTS-1 regions, 210 NUTS-2 and 1093 NUTS-3 units within the current 15 EU
countries (Eurostat, 2002).
In this paper, we explore the application of these models to understand
forest fire causality using environmental and socio-economic variables. We
work with areal data, using the number of forest fires at NUTS-2 regional
level in Europe between 2012 and 2016.
## 2 Material and Methods
Our analysis covers the NUTS-2 regions of the 27 countries of the European
Union (EU-27), although not all regions have been included, due to the absence
of information (forest fires or socio-economic data) (Figure 1).
Figure 1: Study area, in grey administrative areas included in the analysis.
Our main data set comprises the number of fires in Europe at NUTS-2 level,
obtained from the European Forest Fire Information System (EFFIS) (San-Miguel-
Ayanz _et al_ , 2012). We have chosen this level because of the variables that
we are interested in analysing (socioeconomic and environmental).
Summarising the forest fires in Europe, we can see that the number of forest
fires and the area affected decreased between 2012 and 2014, reaching a
minimum in 2014, with a subsequent increase during 2015 and 2016 (Figure 2).
Figure 2: Summary of number of forest fires (left) and area affected between
2012 and 2016 (right)
The following environmental variables were obtained from the AGRI4CAST
Resources Portal: Maximum air temperature (°C); Minimum air temperature (°C);
Mean air temperature (°C); Mean daily wind speed at 10 m (m/s); Vapour
pressure (hPa); Daily precipitation (mm/day); Potential evaporation from the
water surface (mm/day); Potential evaporation from moist bare soil surface
(mm/day); Potential evapotranspiration from crop canopy (mm/day); and Total
global radiation (kJ/m2/day). For each region we use the average by year. In
Figure 3 we show the average of the different variables by year for all the
NUTS 2 regions.
Figure 3: Trend of the average over NUTS 2 regions by year of environmental
variables between 2012 and 2015. (a) Maximum air temperature; (b) Minimum air
temperature; (c) Mean air temperature; (d) Mean daily wind speed; (e) Vapour
pressure; (f) Daily precipitation; (g) Potential evaporation from the water
surface; (h) Potential evaporation from moist bare soil surface; (i) Potential
evapotranspiration from crop canopy; (j) Total global radiation.
The following socio-economic variables were obtained from Eurostat: Active
population (×1000 employed persons); Woodland (×1000 hectares of woodland in
the area); Manufactured (×1000 employed persons working in manufactured
products from woodland); Forestry (×1000 employed persons working in the
forest sector); Economic aggregates of forestry (million euro); and
Unemployment (%). In this case we have included in our model the total values
by year and region. To summarise the different variables, Figure 4 shows the
total values by year for all the variables except Unemployment, for which we
show the average over all regions by year.
Figure 4: Trend of the socioeconomic variables between 2012 and 2016. Totals
by year are shown in the following panels: (a) Active population; (c) Economic
aggregates of forestry; (d) Employed persons working in the forest sector; (e)
Employed persons working in manufactured products from woodland; and (b)
Average unemployment.
### 2.1 Spatio-Temporal model
Here we consider a disease mapping approach, commonly used in small area
studies to assess the spatial pattern of a particular outcome and to identify
areas characterised by unusually high or low relative risk (Lawson, 2013;
Pascutto _et al_ , 2000).
For the _i_ -th area and _t_ -th year, the number of forest fires $y_{it}$ is modelled as
$y_{it}\sim Poisson(\lambda_{it});\lambda_{it}=E_{it}\rho_{it}$ (1)
where $E_{it}$ is the expected number of forest fires and $\rho_{it}$ is the
rate.
We specify a log-linear model on $\rho_{it}$ and include spatial, temporal and
space-time interaction terms, the latter explaining differences in the time
trend across areas. We use the following specification:
$\log\rho_{it}=\alpha+\upsilon_{i}+\nu_{i}+\gamma_{t}+\phi_{t}+\delta_{it},$ (2)
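A standard choice for these components, consistent with Knorr-Held (2000) and
Blangiardo and Cameletti (2015), assigns an intrinsic conditional
autoregressive prior to the spatially structured effect $\upsilon_{i}$, a
first-order random walk to the temporally structured effect $\gamma_{t}$, and
exchangeable Gaussian priors to the unstructured effects $\nu_{i}$ and
$\phi_{t}$; we state it here as the conventional specification rather than as
a transcription of the fitted code, with $\mathcal{N}(i)$ and $n_{i}$ being
our own notation:
$\upsilon_{i}\mid\upsilon_{j\neq i}\sim Normal\left(\frac{1}{n_{i}}\sum_{j\in\mathcal{N}(i)}\upsilon_{j},\frac{1}{n_{i}\tau_{\upsilon}}\right),\qquad\nu_{i}\sim Normal(0,\tau_{\nu}^{-1}),$
$\gamma_{t}-\gamma_{t-1}\sim Normal(0,\tau_{\gamma}^{-1}),\qquad\phi_{t}\sim Normal(0,\tau_{\phi}^{-1}),$
where $\mathcal{N}(i)$ denotes the set of neighbouring areas of area $i$ and
$n_{i}$ its cardinality.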
There are several ways to define the interaction term: here, we assume that
the two unstructured effects $\nu_{i}$ and $\phi_{t}$ interact. We re-write
the precision matrix as the product of the scalar $\tau_{\nu}$ (or
$\tau_{\phi}$) and the so called structure matrix $\textbf{\emph{F}}_{\nu}$
(or $\textbf{\emph{F}}_{\phi}$), which identifies the neighboring structure;
here the structure matrix $\textbf{\emph{F}}_{\delta}$ can be factorised as
the Kronecker product of the structure matrix for $\nu$ and $\phi$ (Clayton,
1996):
$\textbf{\emph{F}}_{\delta}=\textbf{\emph{F}}_{\nu}\otimes\textbf{\emph{F}}_{\phi}=\textbf{\emph{I}}\otimes\textbf{\emph{I}}=\textbf{\emph{I}}$
(because both $\nu$ and $\phi$ are unstructured). Consequently, we assume no
spatial and/or temporal structure on the interaction and therefore
$\delta_{it}\sim Normal(0,\tau_{\delta})$ — see Knorr-Held (2000) for a detailed
description of other specifications.
In the model presented we assume the default specification of R-INLA for the
distribution of the hyper-parameters; therefore, log$\tau_{\upsilon}$ $\sim$
logGamma(1,0.0005) and log$\tau_{\nu}$ $\sim$ logGamma(1,0.0005). In addition
we specify a logGamma(1,0.0005) prior on the log-precision of the random walk
and of the two unstructured effects (Blangiardo and Cameletti, 2015).
To evaluate the fit of this model, we have applied the Watanabe-Akaike
information criterion (WAIC) (Watanabe, 2010). WAIC was suggested as an
appropriate alternative for estimating the out-of-sample expectation in a
fully Bayesian approach. This method starts with the computed log pointwise
posterior predictive density and then adds a correction for the effective
number of parameters to adjust for overfitting (Gelman and Shalizi, 2013).
The Watanabe-Akaike information criterion works on the predictive probability
density of the observed variables rather than on the model parameters; hence,
it can be applied to singular statistical models (i.e. models with non-
identifiable parameterization) (Li _et al_ , 2016).
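In general terms, for $S$ posterior draws $\theta^{(s)}$, the criterion can be
written as (Watanabe, 2010):
$lppd=\sum_{i,t}\log\left(\frac{1}{S}\sum_{s=1}^{S}p\left(y_{it}\mid\theta^{(s)}\right)\right),\qquad p_{WAIC}=\sum_{i,t}Var_{s}\left[\log p\left(y_{it}\mid\theta^{(s)}\right)\right],$
$WAIC=-2\left(lppd-p_{WAIC}\right),$
where $p_{WAIC}$ is the effective number of parameters; smaller WAIC values
indicate better expected out-of-sample predictive performance.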
We have used Integrated Nested Laplace Approximation (INLA) implemented in
R-INLA within the R statistical software (version 3.6.0).
## 3 Results
In this section, we show how the forest fires have evolved between 2012 and
2016.
Analysing the temporal trend, Figure 5 shows the posterior temporal trend for
forest fires in Europe. The graph shows that the number of forest fires tends
to decrease over time.
Figure 5: Global linear temporal trend for number of forest fires in Europe at
NUTS2 region level. The solid line identifies the posterior mean for
$\beta_{t}$ , while the dashed lines are the 95% credibility intervals.
Analysing the posterior distribution of forest fires (Figure 6) in Europe, we
can see that there is a “hot spot” in the west of the continent (the North of
Portugal and the Northwest of the Iberian peninsula). Also, in general, the
predicted number of forest fires is low in central Europe.
Figure 6: Map of the number of forest fires posterior distribution by region.
Comparing the different years, as noted previously, during 2014 the number of
forest fires decreased in all areas except some regions of Spain and Sicily.
In addition, analysing the number of forest fires by region, we can see that
the region with the strongest variations is the Northern region of Portugal.
Figure 7 shows a more detailed map focused on the Mediterranean countries
(France, Greece, Italy and Spain). In this case it is clear that variability
in France is almost nonexistent, with only some increase in the number of
forest fires in the Southern regions in 2016. The results from the data
available for Greece show no big changes during the period analysed. However,
Italy and Spain show more fluctuations during this period. The Southern part
of Italy shows great changes over time, starting with almost 150 forest fires
in Sicily in 2012, falling to about 30 forest fires in 2015 and increasing
again in 2016 (67 forest fires). Similarly, the Northwest region of Spain
shows several fluctuations; however, in Spain higher numbers of forest fires
affect more regions.
Figure 7: Detail of posterior distribution of forest fires in the
Mediterranean region.
As we can see in Table 1, several variables affect the number of forest
fires, but two of them have more impact (higher absolute values) than the
others. Evaporation from the water surface (EvaporationW) positively affects
the number of forest fires at region level. On the other hand, with similar
magnitude but negative sign, evapotranspiration from crop canopy (Evapotrans)
negatively affects the presence of forest fires.
Table 1: Posterior estimates summary (mean, standard deviation and 95%
credible interval) of the fixed effects and hyperparameters for the spatio-
temporal model.
Fixed effects | mean | sd | 0.025quant | 0.975quant
---|---|---|---|---
Active | 0.3045 | 0.1739 | -0.0375 | 0.6468
Aggregates | -0.2919 | 0.1568 | -0.6008 | 0.0152
Forestry | 0.6073 | 0.3321 | -0.058 | 1.2472
Manufactured | -0.9303 | 0.3944 | -1.717 | -0.1669
MaxTemperature | 0.1761 | 0.14 | -0.0999 | 0.4503
MinTemperature | 0.584 | 0.2153 | 0.1631 | 1.0083
AvgTemperature | -0.1396 | 0.4931 | -1.1094 | 0.8277
Wind | 0.5609 | 0.2512 | 0.0655 | 1.0522
Presion | -0.28 | 0.2978 | -0.8653 | 0.3046
Precipitation | -0.0224 | 0.0984 | -0.2158 | 0.1707
Evapotrans | -23.6247 | 9.8466 | -43.0572 | -4.3758
EvaporationW | 24.5911 | 10.907 | 3.2597 | 46.0829
EvaporationS | 0.9008 | 0.6832 | -0.4398 | 2.2435
Radiation | -0.2677 | 0.9264 | -2.0907 | 1.5486
Woodland | 0.7649 | 0.2249 | 0.3331 | 1.2172
Model hyperparameters | mean | sd | 0.025quant | 0.975quant
Precision for AREA_ID | 2.02E-01 | 3.85E-02 | 0.1347 | 2.85E-01
Precision for Year | 1.14E-01 | 6.61E-02 | 0.0299 | 2.80E-01
Precision for AREA_ID.YEAR | 1.59E+00 | 2.66E-01 | 1.1262 | 2.17E+00
The other variables that positively affect the number of forest fires are the
minimum air temperature (MinTemperature) and the mean daily wind speed at 10 m
(Wind). Finally, Manufactured negatively affects the number of forest fires.
A graphical representation of the estimates of the fixed effects is presented
in Figure 8. This chart presents the variables and their relationships with
forest fires. Variables whose distribution lies on the positive side
contribute to a higher number of forest fires; the opposite holds for
variables with a negative distribution. Variables whose distribution spans
both positive and negative values do not have a clear relationship with the
response.
Figure 8: Graphical representation of the fixed effects. Evaporation from the
water surface (EvaporationW) and evapotranspiration from crop canopy
(Evapotrans) were excluded in order to show more detail for the rest of the
variables.
## 4 Conclusions
We have built spatio-temporal models to predict the number of forest fires in
Europe at NUTS-2 regional level, and have shown the relationship between the
different variables and the number of forest fires by region. We have shown
that the number of forest fires depends not only on some of the variables
(fixed effects), but also that its evolution over time is affected by spatial
and temporal effects, as well as by the combination of both (Precision for
AREA_ID.YEAR).
Initially, our main objective in this project was to apply these models to
Europe with more granularity, assuming that more local information would help
to better understand the causality of forest fires. However, due to data
availability it was not possible to develop the project in that way: currently
not all the socioeconomic data are available for all the NUTS-3 regions over a
continuous time span, a characteristic necessary to carry out a spatio-
temporal analysis.
Also, several factors can act in different ways depending on the area. In our
case, variables have been used at a scale at which, in some cases, more local
information could help to understand the cause-effect relationships of forest
fires.
Analysing the models, we believe that the use of spatio-temporal models is an
advantage for understanding the different dynamics, given that the temporal
and spatio-temporal perspective is not very frequent in the analysis of forest
hazards.
Summarising, we can conclude that not only environmental factors but also
socioeconomic variables affect the causality of forest fires. However, more
data and more granularity in the analysis are needed in order to understand
this causality.
Landscapes have become more hazardous over time, since land abandonment led
to an increase in forest area. Treeless areas burned proportionally more than
treed ones (Urbieta _et al_ , 2019). Fires in southern Europe show a stronger
preference for shrublands than for forest types (Moreira _et al_ , 2011;
Oliveira _et al_ , 2014), but this may vary across locations (Moreno _et al_ ,
2011). This could be due to a change in the ignition patterns owing to shifts
in the wildland-agricultural and wildland-urban interfaces (Rodrigues _et al_ ,
2014; Modugno _et al_ , 2016). The most vulnerable landscapes were those with
a diversity of land uses, with forest-agriculture mixtures (Ortega _et al_ ,
2012). For these reasons, the inclusion of vegetation variables in the
analysis of causality needs to be studied.
Fire trends can be affected by changes in ignition cause. In European
Mediterranean countries, a minor percentage of fires are caused by lightning,
and most are caused by people. Fires from these two sources tend to occur at
different locations (Vázquez and Moreno, 1998), which could affect the
vegetation they burn and the difficulty of extinction; however, no changes
between these two sources have been observed (Ganteaume _et al_ , 2013).
Regarding people-caused fires, the majority are voluntary, followed by
negligence (Urbieta _et al_ , 2019). In recent times, negligence fires have
been increasing and voluntary ones decreasing (Ganteaume _et al_ , 2013).
Whether this is differentially affecting fire number trends is something that
needs research (Urbieta _et al_ , 2019).
Spatio-temporal models and the R-INLA package appear to offer additional
benefits beyond the traditional analyses used to understand the causes of
these hazards. The combination of a complex spatial latent field to capture
spatial processes and an underlying simple additive regression model for the
relationship between the response variables and the different factors means
that the fixed effects are potentially more straightforward to interpret
(Golding and Purse, 2016). R-INLA models are extremely flexible in their
specifications, with spatial autocorrelation and observer bias being
straightforwardly incorporated as random effects, while standard error
distributions, such as Gaussian, Poisson, Binomial, and a variety of zero-
inflated models, can be used interchangeably (Rue _et al_ , 2009).
## References
* Bal _et al_ (2011) Bal, M.C., Pelachs, A., Perez-Obiol, R., Julia, R. and Cunill, R., 2011. Fire history and human activities during the last 3300 cal yr BP in Spain’s Central Pyrenees: the case of the Estany de Burg. Palaeogeography, Palaeoclimatology, Palaeoecology, 300(1-4), pp.179-190.
* Biavetti _et al_ (2014) Biavetti I, Karetsos S, Ceglar A, Toreti A, Panagos P. 2014. European meteorological data: contribution to research, development, and policy support, Proc. SPIE 9229, Second International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2014), 922907 (12 August 2014); https://doi.org/10.1117/12.2066286
* Bedia _et al_ (2014) Bedia, J., Herrera, S., Gutiérrez, J.M., Zavala, G., Urbieta, I.R. and Moreno, J.M., 2012. Sensitivity of fire weather index to different reanalysis products in the Iberian Peninsula. Natural Hazards and Earth System Sciences, 12(3), pp.699-708.
* Bernardinelli _et al_ (2014) Bernardinelli, L., Clayton, D., Pascutto, C., Montomoli, C., Ghislandi, M. and Songini, M., 1995. Bayesian analysis of space—time variation in disease risk. Statistics in medicine, 14(21-22), pp.2433-2443.
* Besag _et al_ (1991) Besag J, York J, Mollie A. 1991. Bayesian image restoration with two applications in spatial statistics. Annals of the Institute of Statistical Mathematics 43(1):1–59.
* Blangiardo and Cameletti (2015) Blangiardo, M. and Cameletti, M., 2015. Spatial and spatio-temporal Bayesian models with R-INLA. John Wiley & Sons.
* Boer _et al_ (2017) Boer, M.M., Nolan, R.H., De Dios, V.R., Clarke, H., Price, O.F. and Bradstock, R.A., 2017. Changing weather extremes call for early warning of potential for catastrophic fire. Earth’s Future, 5(12), pp.1196-1202.
* Cardil _et al_ (2014) Cardil, A., Molina, D.M. and Kobziar, L.N., 2014. Extreme temperature days and their potential impacts on southern Europe. Natural Hazards and Earth System Sciences, 14(11), pp.3005-3014.
* Carmona _et al_ (2012) Carmona, A., González, M.E., Nahuelhual, L. and Silva, J., 2012. Spatio-temporal effects of human drivers on fire danger in Mediterranean Chile. Bosque, 33(3), pp.321-328.
* Chuvieco _et al_ (2010) Chuvieco, E., Aguado, I., Yebra, M., Nieto, H., Salas, J., Martín, M.P., Vilar, L., Martínez, J., Martín, S., Ibarra, P. and De La Riva, J., 2010. Development of a framework for fire risk assessment using remote sensing and geographic information system technologies. Ecological Modelling, 221(1), pp.46-58.
* Chuvieco _et al_ (2014) Chuvieco, E., Martínez, S., Román, M.V., Hantson, S. and Pettinari, M.L., 2014. Integration of ecological and socio-economic factors to assess global vulnerability to wildfire. Global Ecology and Biogeography, 23(2), pp.245-258.
* Cochrane _et al_ (1999) Cochrane, M.A., Alencar, A., Schulze, M.D., Souza, C.M., Nepstad, D.C., Lefebvre, P. and Davidson, E.A., 1999. Positive feedbacks in the fire dynamic of closed canopy tropical forests. Science, 284(5421), pp.1832-1835.
* Corteau (2007) Corteau, R., 2007. Report No. 117 (2007–2008) for the French Parliament Office for the Evaluation of Scientific and Technological Choices, 60p.
* Daniau _et al_ (2010) Daniau, A.L., d’Errico, F. and Goñi, M.F.S., 2010. Testing the hypothesis of fire use for ecosystem management by Neanderthal and Upper Palaeolithic modern human populations. Plos one, 5(2), p.e9157.
* de Groot _et al_ (2013) de Groot, W.J., Flannigan, M.D. and Cantin, A.S., 2013. Climate change impacts on future boreal fire regimes. Forest Ecology and Management, 294, pp.35-44.
* De Rigo _et al_ (2017) De Rigo, D., Libertà, G., Houston Durrant, T., Artés Vivancos, T. and San-Miguel-Ayanz, J., 2017. Forest fire danger extremes in Europe under climate change: variability and uncertainty. European Union: Luxembourg.
* DeFries _et al_ (2002) DeFries, R.S., Houghton, R.A., Hansen, M.C., Field, C.B., Skole, D. and Townshend, J., 2002. Carbon emissions from tropical deforestation and regrowth based on satellite observations for the 1980s and 1990s. Proceedings of the National Academy of Sciences, 99(22), pp.14256-14261.
* Dubar _et al_ (1995) Dubar, M., Ivaldi, J.P. and Thinon, M., 1995. Mio-pliocene fire sequences in the valensole basin (Southern France)-paleoclimatic and paleogeographic interpretation. Comptes Rendus De L Academie Des Sciences Serie Ii, 320(9), pp.873-879.
* European Commission (2011) European Commission, 2011. Forest Fires in Europe 2010. Official Publication of the European Communities, EUR 24910.
* European Commission (2019) EU parliament’s debate: Climate change and forest fires in Europe. Available online: https://eustafor.eu/climate-change-and-forest-fires-in-europe/ (accessed on 10 September 2019).
* Eurostat (2002) European Commission. Eurostat database. 2019. http://ec.europa.eu/eurostat/Eurostat, 2002. Main characteristics of the NUTS. Available from: http://europa.eu.int/comm/eurostat/ramon/nuts/mainchar_regions_en.html.
* Food (2007) Food, U.N., 2007. Fire management–Global assessment 2006.
* Flannigan _et al_ (2009) Flannigan, M.D., Krawchuk, M.A., de Groot, W.J., Wotton, B.M. and Gowman, L.M., 2009. Implications of changing climate for global wildland fire. International journal of wildland fire, 18(5), pp.483-507.
* Ganteaume _et al_ (2013) Ganteaume, A., Camia, A., Jappiot, M., San-Miguel-Ayanz, J., Long-Fournel, M. and Lampin, C., 2013. A review of the main driving factors of forest fire ignition over Europe. Environmental management, 51(3), pp.651-662.
* Garcia _et al_ (1995) Garcia, C.V., Woodard, P.M., Titus, S.J., Adamowicz, W.L. and Lee, B.S., 1995. A logit model for predicting the daily occurrence of human caused forest-fires. International Journal of Wildland Fire, 5(2), pp.101-111.
* Gelman and Shalizi (2013) Gelman, A. and Shalizi, C.R., 2013. Philosophy and the practice of Bayesian statistics. British Journal of Mathematical and Statistical Psychology, 66(1), pp.8-38.
* Goetz _et al_ (2006) Goetz, S.J., Fiske, G.J. and Bunn, A.G., 2006. Using satellite time-series data sets to analyze fire disturbance and forest recovery across Canada. Remote Sensing of Environment, 101(3), pp.352-365.
* Golding and Purse (2016) Golding, N. and Purse, B.V., 2016. Fast and flexible Bayesian species distribution modelling using Gaussian processes. Methods in Ecology and Evolution, 7(5), pp.598-608.
* Goren-Inbar _et al_ (2004) Goren-Inbar, N., Alperson, N., Kislev, M.E., Simchoni, O., Melamed, Y., Ben-Nun, A. and Werker, E., 2004. Evidence of hominin control of fire at Gesher Benot Yaaqov, Israel. Science, 304(5671), pp.725-727.
* Hobbs _et al_ (1991) Hobbs N.T., Schimel D.S., Owensby C.E., Ojima D.S., 1991. Fire and grazing in the tallgrass prairie – contingent effects on nitrogen budgets. Ecology 72:1374–1382.
* Johnson _et al_ (2001) Johnson, E. A., Miyanishi, K., & Bridge, S. R. J., 2001. Wildfire regime in the boreal forest and the idea of suppression and fuel buildup. Conservation Biology, 15(6), 1554-1557.
* Keeley _et al_ (1999) Keeley, J.E., Fotheringham, C.J. and Morais, M., 1999. Reexamining fire suppression impacts on brushland fire regimes. Science, 284(5421), pp.1829-1832.
* Knorr-Held (2000) Knorr-Held, L., 2000. Bayesian modelling of inseparable space-time variation in disease risk. Statistics in medicine, 19(17-18), pp.2555-2567.
|
2024-09-04T02:54:59.132455 | 2020-03-11T11:46:07 | 2003.05230 | {
"authors": "Yang Huang, Yongtao Li, Lihua Feng, Weijun Liu",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26160",
"submitter": "Yongtao Li",
"url": "https://arxiv.org/abs/2003.05230"
} | arxiv-papers | # Inequalities for generalized matrix function and inner product
Yongtao Li, College of Mathematics and Econometrics, Hunan University,
Changsha, Hunan, 410082, P.R. China<EMAIL_ADDRESS>; Yang Huang, School of
Mathematics and Statistics, Central South University, Changsha, Hunan, 410083,
P.R. China<EMAIL_ADDRESS>; Lihua Feng, School of Mathematics and Statistics,
Central South University, Changsha, Hunan, 410083, P.R. China<EMAIL_ADDRESS>;
and Weijun Liu†, School of Mathematics and Statistics, Central South
University, Changsha, Hunan, 410083, P.R. China<EMAIL_ADDRESS>
###### Abstract.
We present inequalities related to generalized matrix functions for positive
semidefinite block matrices. We introduce partial generalized matrix functions
corresponding to partial traces and then provide a unified extension of the
recent inequalities due to Choi [6], Lin [14] and Zhang et al. [19, 5]. We
demonstrate applications of a positive semidefinite $3\times 3$ block matrix,
which motivate simple alternative proofs of Dragomir’s inequality and Krein’s
inequality.
###### Key words and phrases:
Block matrices; Positive semidefinite; Generalized matrix function; Partial
traces; Partial determinants; Dragomir’s inequality; Krein’s inequality.
###### 2010 Mathematics Subject Classification:
47B65, 15B42, 15A45
## 1\. Introduction
Let $G$ be a subgroup of the symmetric group $S_{n}$ on $n$ letters and let
$\chi$ be an irreducible character of $G$. For any $n\times n$ complex matrix
$A=[a_{ij}]_{i,j=1}^{n}$, the generalized matrix function of $A$ (also known
as immanant) afforded by $G$ and $\chi$ is defined as
$\mathrm{d}_{\chi}^{G}(A):=\sum\limits_{\sigma\in
G}\chi(\sigma)\prod\limits_{i=1}^{n}a_{i\sigma(i)}.$
Some specific subgroups $G$ and characters $\chi$ lead to familiar
functionals on the matrix space. For instance, if $G=S_{n}$ and $\chi$ is the
signum function with values $\pm 1$, then the generalized matrix function
becomes the usual matrix determinant; setting $\chi(\sigma)\equiv 1$ for
each $\sigma\in G=S_{n}$ gives the permanent of the matrix; and setting
$G=\\{e\\}\subset S_{n}$ defines the product of the main diagonal entries of
the matrix (also known as the Hadamard matrix function).
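As a concrete illustration (a sketch of ours, not part of the original development), the definition can be evaluated directly for small matrices by summing over the permutations of a chosen subgroup; the helper names below are hypothetical:

```python
import itertools
import numpy as np

def perm_sign(sigma):
    # Sign of a permutation given as a tuple (sigma[i] = image of i),
    # computed from its cycle decomposition: each even-length cycle flips the sign.
    sign, seen = 1, set()
    for start in range(len(sigma)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        if length % 2 == 0:
            sign = -sign
    return sign

def d_chi(A, group, chi):
    # Generalized matrix function: sum_{sigma in G} chi(sigma) * prod_i a_{i, sigma(i)}.
    n = A.shape[0]
    return sum(chi(s) * np.prod([A[i, s[i]] for i in range(n)]) for s in group)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
S2 = list(itertools.permutations(range(2)))
print(d_chi(A, S2, perm_sign))           # determinant: 5.0
print(d_chi(A, S2, lambda s: 1))         # permanent: 7.0
print(d_chi(A, [(0, 1)], lambda s: 1))   # Hadamard function (diagonal product): 6.0
```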
Let $A$ and $B$ be $n\times n$ positive semidefinite matrices. It is easy to
prove, by a simultaneous diagonalization argument, that
$\det(A+B)\geq\det(A)+\det(B).$ (1)
There are many extensions and generalizations of (1) in the literature. For
example, a remarkable extension (e.g., [17, p. 228]) says that
$\mathrm{d}_{\chi}^{G}(A+B)\geq\mathrm{d}_{\chi}^{G}(A)+\mathrm{d}_{\chi}^{G}(B).$
(2)
Recently, Paksoy, Turkmen and Zhang [19] provided a natural extension of (2)
for triples of matrices by embedding the vectors of Gram matrices into a
“sufficiently large” inner product space and using tensor products. More
precisely, if $A,B$ and $C$ are positive semidefinite, they showed
$\mathrm{d}_{\chi}^{G}(A+B+C)+\mathrm{d}_{\chi}^{G}(C)\geq\mathrm{d}_{\chi}^{G}(A+C)+\mathrm{d}_{\chi}^{G}(B+C).$
(3)
Their approach to establish (3) is algebraic as well as combinatorial. Soon
after, Chang, Paksoy and Zhang [5, Theorem 3] presented a further improvement
of (3) by considering the tensor products of operators as words on certain
alphabets, which states that
$\displaystyle\mathrm{d}_{\chi}^{G}(A+B+C)+\mathrm{d}_{\chi}^{G}(A)+\mathrm{d}_{\chi}^{G}(B)+\mathrm{d}_{\chi}^{G}(C)$
(4)
$\displaystyle\quad\geq\mathrm{d}_{\chi}^{G}(A+B)+\mathrm{d}_{\chi}^{G}(A+C)+\mathrm{d}_{\chi}^{G}(B+C).$
We remark here that (4) is indeed an improvement of (3) since
$\displaystyle\mathrm{d}_{\chi}^{G}(A+B+C)+\mathrm{d}_{\chi}^{G}(C)-\mathrm{d}_{\chi}^{G}(A+C)-\mathrm{d}_{\chi}^{G}(B+C)$
$\displaystyle\quad\geq\mathrm{d}_{\chi}^{G}(A+B)-\mathrm{d}_{\chi}^{G}(A)-\mathrm{d}_{\chi}^{G}(B)\geq
0.$
We use the following standard notation. The set of $m\times n$ complex
matrices is denoted by $\mathbb{M}_{m\times n}$. If $m=n$, we use
$\mathbb{M}_{n}$ instead of $\mathbb{M}_{n\times n}$ and if $n=1$, we use
$\mathbb{C}^{m}$ instead of $\mathbb{M}_{m\times 1}$. The identity matrix of
$\mathbb{M}_{n}$ is denoted by $I_{n}$, or simply by $I$ if no confusion is
possible. We use $\mathbb{M}_{m}(\mathbb{M}_{n})$ for the set of $m\times m$
block matrices with each block being $n$-square. By convention, if
$X\in\mathbb{M}_{n}$ is positive semidefinite, we write $X\geq 0$. For two
Hermitian matrices $A$ and $B$ of the same size, $A\geq B$ means $A-B\geq 0$.
It is easy to verify that $\geq$ is a partial ordering on the set of Hermitian
matrices, referred to as the Löwner ordering.
On the other hand, Lin and Sra [16] gave the following extension of (1), i.e.,
if $A=[A_{ij}],B=[B_{ij}]\in\mathbb{M}_{m}(\mathbb{M}_{n})$ are block positive
semidefinite matrices, then
${\det}_{2}(A+B)\geq{\det}_{2}(A)+{\det}_{2}(B),$ (5)
where $\det_{2}(A)=[\det A_{ij}]_{i,j=1}^{m}\in\mathbb{M}_{m}$ and $\geq$
stands for the Löwner ordering.
The paper is organized as follows. In Section 2, we briefly review some basic
definitions and properties of tensor product in Multilinear Algebra Theory. In
Section 3, we extend the above-cited results (2), (3), (4) and (5) to block
positive semidefinite matrices (Theorem 3.5 and Corollary 3.6). As byproducts,
some new inequalities related to trace, determinant and permanent are also
included. In Section 4, we investigate the applications of a positive
semidefinite $3\times 3$ block matrix and provide a short proof of Dragomir’s
inequality (Theorem 4.4). In Section 5, we present a simple proof of Krein’s
inequality (Theorem 5.1), and then we also provide some new triangle
inequalities.
## 2\. Preliminaries
Before starting our results, we first review some basic definitions and
notations of Multilinear Algebra Theory [17]. Let $X\otimes Y$ denote the
Kronecker product (tensor product) of $X$ with $Y$, that is, if
$X=[x_{ij}]_{i,j=1}^{m}\in\mathbb{M}_{m}$ and $Y\in\mathbb{M}_{n}$, then
$X\otimes Y\in\mathbb{M}_{m}(\mathbb{M}_{n})$ whose $(i,j)$-block is
$x_{ij}Y$. Let $\otimes^{r}A:=A\otimes\cdots\otimes A$ denote the $r$-fold
tensor power of $A$. We denote by $\wedge^{r}A$ the $r$th antisymmetric tensor
power (or $r$th Grassmann power) of $A$, which is the same as the $r$th
multiplicative compound matrix of $A$, and denote by $\vee^{r}A$ the $r$th
symmetric tensor power of $A$; see [1, p. 18] for more details. We denote by
$e_{r}(A),s_{r}(A)$ the $r$th elementary symmetric and $r$th complete
symmetric function of the eigenvalues of $A$ (see [11, p. 54]). Trivially,
$e_{1}(A)=s_{1}(A)=\mathrm{tr}(A)$ and $e_{n}(A)=\det(A)$ for
$A\in\mathbb{M}_{n}$.
Let $V$ be an $n$-dimensional Hilbert space and $\otimes^{n}V$ be the tensor
product space of $n$ copies of $V$. Let $G$ be a subgroup of the symmetric
group $S_{n}$ and $\chi$ be an irreducible character of $G$. The symmetrizer
induced by $\chi$ on the tensor product space $\otimes^{n}V$ is defined by its
action
$S(v_{1}\otimes\cdots\otimes v_{n}):=\frac{1}{|G|}\sum\limits_{\sigma\in
G}\chi(\sigma)v_{\sigma^{-1}(1)}\otimes\cdots\otimes v_{\sigma^{-1}(n)}.$ (6)
All elements of the form (6) span a vector space, denoted by
$V_{\chi}^{n}(G)\subset\otimes^{n}V$, which is called the space of the
symmetry class of tensors associated with $G$ and $\chi$ (see [17, p. 154,
235]). It is easy to verified that $V^{n}_{\chi}(G)$ is an invariant subspace
of $\otimes^{n}V$ under the tensor operator $\otimes^{n}A$. For a linear
operator $A$ on $V$, the induced operator $K(A)$ of $A$ with respect to $G$
and $\chi$ is defined to be $K(A)=(\otimes^{n}A)\big{|}_{V^{n}_{\chi}(G)}$,
the restriction of $\otimes^{n}A$ on $V_{\chi}^{n}(G)$.
The induced operator $K(A)$ is closely related to generalized matrix function.
Let $e_{1},e_{2},\ldots,e_{n}$ be an orthonormal basis of $V$ and $P$ be a
matrix representation of the linear operator $A$ on $V$ with respect to the
basis $e_{1},\ldots,e_{n}$. Then
$\mathrm{d}_{\chi}^{G}\left(P^{T}\right)=\frac{|G|}{\mathrm{deg}(\chi)}\langle
K(A)e^{*},e^{*}\rangle,$ (7)
where $\mathrm{deg}(\chi)$ is the degree of $\chi$ and
$e^{*}:=e_{1}*e_{2}*\cdots*e_{n}$ is the decomposable symmetrized tensor of
$e_{1},\ldots,e_{n}$ (see [17, p. 227, 155]).
Now, we list some basic properties of the tensor product for later use.
###### Proposition 2.1.
(see [1, pp. 16–20]) Let $A,B$ and $C$ be $n\times n$ matrices. Then
* (1)
$\otimes^{r}(AB)=(\otimes^{r}A)(\otimes^{r}B),\wedge^{r}(AB)=(\wedge^{r}A)(\wedge^{r}B)$
and $\vee^{r}(AB)=(\vee^{r}A)(\vee^{r}B)$.
* (2)
$\mathrm{tr}(\otimes^{r}A)=(\mathrm{tr}A)^{r}:=p_{r}(A),\mathrm{tr}(\wedge^{r}A)=e_{r}(A)$
and $\mathrm{tr}(\vee^{r}A)=s_{r}(A)$.
* (3)
$\det(\otimes^{r}A)=(\det A)^{rn^{r-1}},\det(\wedge^{r}A)=(\det
A)^{{n-1\choose r-1}}$
and $\det(\vee^{r}A)=(\det A)^{\frac{r}{n}{n+r-1\choose r}}$.
Furthermore, if $A,B$ and $C$ are positive semidefinite matrices, then
* (4)
$A\otimes B,A\wedge B$ and $A\vee B$ are positive semidefinite.
* (5)
$\otimes^{r}(A+B)\geq\otimes^{r}A+\otimes^{r}B,\wedge^{r}(A+B)\geq\wedge^{r}A+\wedge^{r}B$
and $\vee^{r}(A+B)\geq\vee^{r}A+\vee^{r}B$.
Finally, we introduce the definition of partial traces, which comes from
Quantum Information Theory [20, p. 12]. Given
$A\in\mathbb{M}_{m}(\mathbb{M}_{n})$, the first partial trace (map)
$A\mapsto\mathrm{tr}_{1}(A)\in\mathbb{M}_{n}$ is defined as the adjoint map of
the imbedding map $X\mapsto I_{m}\otimes
X\in\mathbb{M}_{m}\otimes\mathbb{M}_{n}$. Correspondingly, the second partial
trace (map) $A\mapsto\mathrm{tr}_{2}(A)\in\mathbb{M}_{m}$ is defined as the
adjoint map of the imbedding map $Y\mapsto Y\otimes
I_{n}\in\mathbb{M}_{m}\otimes\mathbb{M}_{n}$. Therefore, we have
$\langle I_{m}\otimes X,A\rangle=\langle
X,\mathrm{tr}_{1}(A)\rangle,\quad\forall X\in\mathbb{M}_{n},$
and
$\langle Y\otimes I_{n},A\rangle=\langle
Y,\mathrm{tr}_{2}(A)\rangle,\quad\forall Y\in\mathbb{M}_{m}.$
Assume that $A=[A_{ij}]_{i,j=1}^{m}$ with $A_{ij}\in\mathbb{M}_{n}$; then the
explicit forms of the partial traces, given in [2, Proposition 4.3.10], are
$\mathrm{tr}_{1}{(A)}=\sum\limits_{i=1}^{m}A_{ii},\quad\mathrm{tr}_{2}{(A)}=\bigl{[}\mathrm{tr}A_{ij}\bigr{]}_{i,j=1}^{m}.$
Under the above definition, it follows that both $\mathrm{tr}_{1}(A)$ and
$\mathrm{tr}_{2}(A)$ are positive semidefinite whenever $A$ is positive
semidefinite; see, e.g., [24, p. 237].
## 3\. Partial Matrix Functions
For $A=[A_{ij}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$, suppose that
$A_{ij}=\bigl{[}a_{rs}^{ij}\bigr{]}_{r,s=1}^{n}$. Set
$G_{rs}:=\bigl{[}a_{rs}^{ij}\bigr{]}_{i,j=1}^{m}\in\mathbb{M}_{m}.$
Then one can verify that
$\mathrm{tr}_{1}(A)=\sum_{i=1}^{m}A_{ii}=\sum_{i=1}^{m}\bigl{[}a_{rs}^{ii}\bigr{]}_{r,s=1}^{n}=\left[\begin{matrix}\sum\limits_{i=1}^{m}a_{rs}^{ii}\end{matrix}\right]_{r,s=1}^{n}=\bigl{[}\mathrm{tr}\,G_{rs}\bigr{]}_{r,s=1}^{n}.$
Motivated by this relation, we next introduce the following definition.
###### Definition 3.1.
Let $\Gamma:\mathbb{M}_{p}\to\mathbb{M}_{q}$ be a matrix function. The first
and second partial matrix functions of $\Gamma$ are defined by
$\Gamma_{1}(A):=\bigl{[}\Gamma(G_{rs})\bigr{]}_{r,s=1}^{n}~{}~{}~{}\text{and}~{}~{}~{}\Gamma_{2}(A):=\bigl{[}\Gamma(A_{ij})\bigr{]}_{i,j=1}^{m}.$
Clearly, when $\Gamma=\mathrm{tr}$, this definition coincides with that of the
partial traces; when $\Gamma=\det$, it coincides with the partial
determinants, which were recently introduced by Choi [6].
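The definition is easy to realize numerically. The sketch below (our own illustration; `partial_functions` is a hypothetical helper) forms $\Gamma_{1}$ and $\Gamma_{2}$ for $\Gamma=\det$ and verifies, on a random example, the positivity established in Proposition 3.2 below:

```python
import numpy as np

def partial_functions(A, m, n, Gamma):
    # blocks[i, r, j, s] = (A_ij)_{rs}; note blocks[:, r, :, s] is the m x m matrix G_rs.
    blocks = A.reshape(m, n, m, n)
    Gamma1 = np.array([[Gamma(blocks[:, r, :, s]) for s in range(n)] for r in range(n)])
    Gamma2 = np.array([[Gamma(blocks[i, :, j, :]) for j in range(m)] for i in range(m)])
    return Gamma1, Gamma2

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T                                          # positive semidefinite; m = 3, n = 2
G1, G2 = partial_functions(A, 3, 2, np.linalg.det)   # partial determinants
print(np.linalg.eigvalsh(G1).min(), np.linalg.eigvalsh(G2).min())  # both >= 0 up to roundoff
```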
Let $A=[A_{ij}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be a positive
semidefinite block matrix. It is well known that both $\det_{2}(A)=[\det
A_{ij}]_{i,j=1}^{m}$ and $\mathrm{tr}_{2}(A)=[\mathrm{tr}A_{ij}]_{i,j=1}^{m}$
are positive semidefinite matrices; see, e.g., [24, p. 221, 237]. Subsequently,
Zhang [25, Theorem 3.1] extended the positivity to generalized matrix functions
via a generalized Cauchy-Binet formula; more precisely,
$\mathrm{d}_{\chi}^{G}{}_{2}(A)=[\mathrm{d}_{\chi}^{G}(A_{ij})]_{i,j=1}^{m}$
is also positive semidefinite.
We extend the positivity to more matrix functionals.
###### Proposition 3.2.
Let ${A}\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. If
$\Gamma$ is one of the functionals
$\mathrm{tr},\det,\mathrm{per},\mathrm{d}_{\chi}^{G},p_{r},e_{r}$ and $s_{r}$,
then $\Gamma_{1}(A)$ and $\Gamma_{2}(A)$ are positive semidefinite.
###### Proof.
Denote
$\widetilde{A}=[G_{rs}]_{r,s=1}^{n}\in\mathbb{M}_{n}(\mathbb{M}_{m})$; then
it is easy to see that $\widetilde{\widetilde{A}}=A$ and
$\Gamma_{1}(A)=\Gamma_{2}(\widetilde{A})$. Moreover, $\widetilde{A}$ and $A$
are unitarily similar; see [6, Theorem 7] for more details. Thus, we only need
to show that $\Gamma_{2}(A)$ is positive semidefinite. The argument is similar
to the approach in [25], so we omit the details. ∎
The following Lemma 3.3 plays a key role in our extension (Theorem 3.5); it
can be found in [3] or [5]. We provide a proof here for the convenience of the
reader.
###### Lemma 3.3.
Let $A,B,C$ be positive semidefinite matrices of the same size. Then for every
positive integer $r$, we have
$\displaystyle\otimes^{r}(A+B+C)+\otimes^{r}A+\otimes^{r}B+\otimes^{r}C$
$\displaystyle\quad\geq\otimes^{r}(A+B)+\otimes^{r}(A+C)+\otimes^{r}(B+C).$
The same result is true for $\wedge^{r}$ and $\vee^{r}$.
###### Proof.
The proof is by induction on $r$. The base case $r=1$ holds with equality, and
the case $r=2$ is easy to verify. Assume the required result holds for
$r=m\geq 2$, that is
$\displaystyle\otimes^{m}(A+B+C)+\otimes^{m}A+\otimes^{m}B+\otimes^{m}C$
$\displaystyle\quad\geq\otimes^{m}(A+B)+\otimes^{m}(A+C)+\otimes^{m}(B+C).$
For $r=m+1$, we get from Proposition 2.1 that
$\displaystyle\otimes^{m+1}(A+B+C)$
$\displaystyle\quad=\bigl{(}\otimes^{m}(A+B+C)\bigr{)}\otimes(A+B+C)$
$\displaystyle\quad\geq\bigl{(}\otimes^{m}(A+B)+\otimes^{m}(A+C)+\otimes^{m}(B+C)-\otimes^{m}A-\otimes^{m}B-\otimes^{m}C\bigr{)}$
$\displaystyle\quad\quad\,\otimes(A+B+C)$
$\displaystyle\quad=\otimes^{m+1}(A+B)+\otimes^{m+1}(A+C)+\otimes^{m+1}(B+C)$
$\displaystyle\quad\quad-\otimes^{m+1}A-\otimes^{m+1}B-\otimes^{m+1}C$
$\displaystyle\quad\quad+\bigl{(}\otimes^{m}(A+B)\bigr{)}\otimes
C+\bigl{(}\otimes^{m}(A+C)\bigr{)}\otimes
B+\bigl{(}\otimes^{m}(B+C)\bigr{)}\otimes A$
$\displaystyle\quad\quad-\bigl{(}\otimes^{m}A\bigr{)}\otimes(B+C)-\bigl{(}\otimes^{m}B\bigr{)}\otimes(A+C)-\bigl{(}\otimes^{m}C\bigr{)}\otimes(A+B).$
It remains to show that
$\displaystyle\bigl{(}\otimes^{m}(A+B)\bigr{)}\otimes
C+\bigl{(}\otimes^{m}(A+C)\bigr{)}\otimes
B+\bigl{(}\otimes^{m}(B+C)\bigr{)}\otimes A$
$\displaystyle\quad\geq\bigl{(}\otimes^{m}A\bigr{)}\otimes(B+C)+\bigl{(}\otimes^{m}B\bigr{)}\otimes(A+C)+\bigl{(}\otimes^{m}C\bigr{)}\otimes(A+B).$
This follows immediately by the superadditivity (5) in Proposition 2.1. ∎
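As a quick numerical sanity check (our own sketch, not part of the argument), one can test the operator inequality for, say, $r=3$ with random positive semidefinite matrices by examining the smallest eigenvalue of the difference:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T            # positive semidefinite by construction

def t3(X):
    return np.kron(np.kron(X, X), X)   # 3-fold tensor power of X

A, B, C = (rand_psd(2) for _ in range(3))
D = (t3(A + B + C) + t3(A) + t3(B) + t3(C)
     - t3(A + B) - t3(A + C) - t3(B + C))
print(np.linalg.eigvalsh(D).min())     # nonnegative up to roundoff, i.e. D >= 0
```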
We require one more lemma for our purpose.
###### Lemma 3.4.
([2, p. 93]) Let $A=[A_{ij}]_{i,j=1}^{m}\in\mathbb{M}_{m}(\mathbb{M}_{n})$.
Then $[\otimes^{r}A_{ij}]_{i,j=1}^{m}$ is a principal submatrix of
$\otimes^{r}A$ for every positive integer $r$.
Now, we present our main result, which is a unified extension of (4) and (5).
###### Theorem 3.5.
Let ${A},B,C\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. If
$\Gamma$ is one of the functionals
$\mathrm{tr},\det,\mathrm{per},\mathrm{d}_{\chi}^{G},p_{r},e_{r}$ and $s_{r}$,
then
$\displaystyle\Gamma_{1}(A+B+C)+\Gamma_{1}(A)+\Gamma_{1}(B)+\Gamma_{1}(C)$
$\displaystyle\quad\geq\Gamma_{1}(A+B)+\Gamma_{1}(A+C)+\Gamma_{1}(B+C),$
and
$\displaystyle\Gamma_{2}(A+B+C)+\Gamma_{2}(A)+\Gamma_{2}(B)+\Gamma_{2}(C)$
$\displaystyle\quad\geq\Gamma_{2}(A+B)+\Gamma_{2}(A+C)+\Gamma_{2}(B+C).$
###### Proof.
We only show that the desired result holds for $\Gamma=\mathrm{d}_{\chi}^{G}$
and $\Gamma=e_{r}$, since the other functionals can be treated similarly. It
suffices to show the second desired result, by exchanging the roles of
$\widetilde{A}$ and $A$. By Lemma 3.3, we have
$\displaystyle\otimes^{r}(A+B+C)+\otimes^{r}A+\otimes^{r}B+\otimes^{r}C$
$\displaystyle\quad\geq\otimes^{r}(A+B)+\otimes^{r}(A+C)+\otimes^{r}(B+C),$
which together with Lemma 3.4 leads to the following
$\displaystyle[\otimes^{r}(A_{ij}+B_{ij}+C_{ij})]_{i,j=1}^{m}+[\otimes^{r}A_{ij}]_{i,j=1}^{m}+[\otimes^{r}B_{ij}]_{i,j=1}^{m}+[\otimes^{r}C_{ij}]_{i,j=1}^{m}$
$\displaystyle\quad\geq[\otimes^{r}(A_{ij}+B_{ij})]_{i,j=1}^{m}+[\otimes^{r}(A_{ij}+C_{ij})]_{i,j=1}^{m}+[\otimes^{r}(B_{ij}+C_{ij})]_{i,j=1}^{m}.$
By restricting the above inequality to the symmetry class $V_{\chi}^{n}(G)$,
we get
$\displaystyle[K(A_{ij}+B_{ij}+C_{ij})]_{i,j=1}^{m}+[K(A_{ij})]_{i,j=1}^{m}+[K(B_{ij})]_{i,j=1}^{m}+[K(C_{ij})]_{i,j=1}^{m}$
$\displaystyle\quad\geq[K(A_{ij}+B_{ij})]_{i,j=1}^{m}+[K(A_{ij}+C_{ij})]_{i,j=1}^{m}+[K(B_{ij}+C_{ij})]_{i,j=1}^{m}.$
Combining this with (7), the second desired result in the case
$\Gamma=\mathrm{d}_{\chi}^{G}$ follows.
In the same way, it follows that
$\displaystyle[\wedge^{r}(A_{ij}+B_{ij}+C_{ij})]_{i,j=1}^{m}+[\wedge^{r}A_{ij}]_{i,j=1}^{m}+[\wedge^{r}B_{ij}]_{i,j=1}^{m}+[\wedge^{r}C_{ij}]_{i,j=1}^{m}$
$\displaystyle\quad\geq[\wedge^{r}(A_{ij}+B_{ij})]_{i,j=1}^{m}+[\wedge^{r}(A_{ij}+C_{ij})]_{i,j=1}^{m}+[\wedge^{r}(B_{ij}+C_{ij})]_{i,j=1}^{m}.$
Taking the trace blockwise and using Proposition 2.1 yields the second
desired result in the case $\Gamma=e_{r}$. ∎
From Theorem 3.5, one obtains the following Corollary 3.6.
###### Corollary 3.6.
Let ${A},B,C\in\mathbb{M}_{m}(\mathbb{M}_{n})$ be positive semidefinite. If
$\Gamma$ is one of the functionals
$\mathrm{tr},\det,\mathrm{per},\mathrm{d}_{\chi}^{G},p_{r},e_{r}$ and $s_{r}$,
then
$\displaystyle\Gamma_{1}(A+B+C)+\Gamma_{1}(C)\geq\Gamma_{1}(A+C)+\Gamma_{1}(B+C),$
and
$\displaystyle\Gamma_{2}(A+B+C)+\Gamma_{2}(C)\geq\Gamma_{2}(A+C)+\Gamma_{2}(B+C).$
In particular, setting $m=1$ and $\Gamma=\det$ in Theorem 3.5 and Corollary
3.6 yields the following renowned determinantal inequalities:
$\displaystyle\det(A+B+C)+\det A+\det B+\det C$
$\displaystyle\quad\geq\det(A+B)+\det(A+C)+\det(B+C),$
and
$\det(A+B+C)+\det C\geq\det(A+C)+\det(B+C).$
We remark that these two inequalities can be proved using a majorization
approach for eigenvalues, which is more elementary and quite different from
our method; we refer to [14] and [24, p. 215] for more details.
## 4\. Positivity and Dragomir’s inequality
Recently, positive semidefinite $3\times 3$ block matrices have been
extensively studied; such a partition leads to versatile and elegant
theoretical inequalities; see, e.g., [15, 9]. Assume that $X,Y,Z$ are matrices
of appropriate sizes; then it follows from Section 3 that the $3\times 3$ matrix
$\begin{bmatrix}\Gamma(X^{*}X)&\Gamma(X^{*}Y)&\Gamma(X^{*}Z)\\\
\Gamma(Y^{*}X)&\Gamma(Y^{*}Y)&\Gamma(Y^{*}Z)\\\
\Gamma(Z^{*}X)&\Gamma(Z^{*}Y)&\Gamma(Z^{*}Z)\end{bmatrix}$ (8)
is positive semidefinite whenever $\Gamma$ is taken to be the trace or the
determinant. Different sizes of the matrices in (8) yield a large number of
interesting triangle inequalities. In particular, if $X,Y,Z$ are column
vectors, say $u,v,w\in\mathbb{C}^{n}$, it is easy to see that
$\begin{bmatrix}\mathrm{Re}(u^{*}u)&\mathrm{Re}(u^{*}v)&\mathrm{Re}(u^{*}w)\\\\[2.84544pt]
\mathrm{Re}(v^{*}u)&\mathrm{Re}(v^{*}v)&\mathrm{Re}(v^{*}w)\\\\[2.84544pt]
\mathrm{Re}(w^{*}u)&\mathrm{Re}(w^{*}v)&\mathrm{Re}(w^{*}w)\end{bmatrix}$ (9)
is positive semidefinite; see [13, 4] for more applications.
In this section, we provide two results (Corollary 4.2 and Proposition 4.3)
analogous to (9) above. Based on these, we then give a short proof of
Dragomir’s inequality (Theorem 4.4). The following lemma is an exercise in
[2, p. 26]; we present a detailed proof.
###### Lemma 4.1.
Let $A=[a_{ij}]$ be a $3\times 3$ complex matrix and let
$|A|=\bigl{[}|a_{ij}|\bigr{]}$ be the matrix obtained from $A$ by taking the
absolute values of the entries of $A$. If $A$ is positive semidefinite, then
$|A|$ is positive semidefinite.
###### Proof.
We first note that the positivity of $A$ implies that all diagonal entries of
$A$ are nonnegative. If a diagonal entry of $A$ is zero then, since $A$ is
positive semidefinite, the entire corresponding row and column of $A$ vanish,
and the claim reduces to the $2\times 2$ case, where the positivity of
$\begin{bmatrix}\begin{smallmatrix}a&c\\\
\overline{c}&b\end{smallmatrix}\end{bmatrix}$ clearly implies the positivity of
$\begin{bmatrix}\\!\begin{smallmatrix}|a|&|c|\\\
|\overline{c}|&|b|\end{smallmatrix}\\!\end{bmatrix}$. Without loss of
generality, we may assume that $a_{ii}>0$ for every $i=1,2,3$. Let
$D=\mathrm{diag}\bigl{\\{}a_{11}^{-1/2},a_{22}^{-1/2},a_{33}^{-1/2}\bigr{\\}}$
and observe that $D^{*}|A|D=|D^{*}AD|$. By scaling, we further assume that
$A=\begin{bmatrix}1&a&b\\\ \overline{a}&1&c\\\
\overline{b}&\overline{c}&1\end{bmatrix}.$
Recall that $X\geq 0$ means $X$ is positive semidefinite. Our goal is to prove
$\begin{bmatrix}1&a&b\\\ \overline{a}&1&c\\\
\overline{b}&\overline{c}&1\end{bmatrix}\geq
0\Rightarrow\begin{bmatrix}1&|a|&|b|\\\ |\overline{a}|&1&|c|\\\
|\overline{b}|&|\overline{c}|&1\end{bmatrix}\geq 0.$ (10)
Assume that $a=|a|e^{i\alpha}$ and $b=|b|e^{i\beta}$, and denote
$Q=\mathrm{diag}\left\\{1,e^{-i\alpha},e^{-i\beta}\right\\}$. By a direct
computation, we obtain
$Q^{*}AQ=\begin{bmatrix}1&|a|&|b|\\\ |a|&1&ce^{i(\alpha-\beta)}\\\
|b|&\overline{c}e^{i(\beta-\alpha)}&1\end{bmatrix}.$
Since $Q^{*}AQ\geq 0$, taking the determinant leads to the following
$1+|a||b|\left(ce^{i(\alpha-\beta)}+\overline{c}e^{i(\beta-\alpha)}\right)\geq|a|^{2}+|b|^{2}+|c|^{2}.$
Note that $2|c|\geq
2\,\mathrm{Re}\left(ce^{i(\alpha-\beta)}\right)=ce^{i(\alpha-\beta)}+\overline{c}e^{i(\beta-\alpha)}$,
so that
$1+2|a||b||c|\geq|a|^{2}+|b|^{2}+|c|^{2},$
which is precisely $\det|A|\geq 0$. Combining this with $1-|a|^{2}\geq 0$,
every principal minor of $|A|$ is nonnegative, and hence $|A|\geq 0$. Thus,
the desired statement (10) follows. ∎
Remark. The converse of Lemma 4.1 is not true; moreover, the statement fails
in the $4\times 4$ case. For example, consider
$B=\begin{bmatrix}1&-1&-1\\\ -1&1&-1\\\ -1&-1&1\end{bmatrix},\quad
C=\begin{bmatrix}10&3&-2&1\\\ 3&10&0&9\\\ -2&0&10&4\\\ 1&9&4&10\end{bmatrix}.$
One can check that both $|B|$ and $C$ are positive semidefinite; however, $B$
and $|C|$ are not, since $\det B=-4$ and $\det|C|=-364$.
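These claims are quick to verify numerically, e.g., with the following sketch of ours (any spectrum routine would do):

```python
import numpy as np

B = np.array([[1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
C = np.array([[10, 3, -2, 1], [3, 10, 0, 9], [-2, 0, 10, 4], [1, 9, 4, 10]], dtype=float)
for name, M in [("B", B), ("|B|", np.abs(B)), ("C", C), ("|C|", np.abs(C))]:
    # A symmetric matrix is positive semidefinite iff its smallest eigenvalue is >= 0.
    print(name, round(np.linalg.eigvalsh(M).min(), 4))
```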
By the positivity of the Gram matrix and Lemma 4.1, we get the following
corollary.
###### Corollary 4.2.
If $u,v$ and $w$ are vectors in $\mathbb{C}^{n}$, then
$\begin{bmatrix}\bigl{|}u^{*}u\bigr{|}&\bigl{|}u^{*}v\bigr{|}&\bigl{|}u^{*}w\bigr{|}\\\\[4.26773pt]
\bigl{|}v^{*}u\bigr{|}&\bigl{|}v^{*}v\bigr{|}&\bigl{|}v^{*}w\bigr{|}\\\\[4.26773pt]
\bigl{|}{w^{*}u}\bigr{|}&\bigl{|}{w^{*}v}\bigr{|}&\bigl{|}{w^{*}w}\bigr{|}\end{bmatrix}$
is a positive semidefinite matrix.
###### Proposition 4.3.
If $u,v$ and $w$ are vectors in $\mathbb{R}^{n}$ such that $u+w=v$, then
$\begin{bmatrix}{u^{*}u}&{u^{*}v}&-{u^{*}w}\\\\[2.84544pt]
{v^{*}u}&{v^{*}v}&{v^{*}w}\\\\[2.84544pt]
-{w^{*}u}&{w^{*}v}&{w^{*}w}\end{bmatrix}$
is a positive semidefinite matrix.
###### Proof.
Choosing an orthonormal basis of $\mathrm{Span}\\{u,v,w\\}$, we may assume
that $u,v$ and $w$ are vectors in $\mathbb{R}^{3}$ forming a triangle in a
plane. We denote by $\alpha$ the angle between $u$ and $v$, by $\beta$ the
angle between $-u$ and $w$, and by $\gamma$ the angle between $-w$ and $-v$.
Since $\alpha+\beta+\gamma=\pi$, we have
$\cos^{2}\alpha+\cos^{2}\beta+\cos^{2}\gamma+2\cos\alpha\cos\beta\cos\gamma=1.$
By computing the principal minors, it follows that
$R:=\begin{bmatrix}1&\cos\alpha&\cos\beta\\\ \cos\alpha&1&\cos\gamma\\\
\cos\beta&\cos\gamma&1\end{bmatrix}$
is positive semidefinite. Set $S=\mathrm{diag}\\{\left\lVert
u\right\rVert,\left\lVert v\right\rVert,\left\lVert w\right\rVert\\}$. Then
$S^{T}RS$, which is exactly the matrix in question, is positive semidefinite.
This completes the proof. ∎
Dragomir [7] established the following inequality (Theorem 4.4) relating the
inner products of three vectors, which yields some improvements of Schwarz’s
inequality; see, e.g., [8]. We give a short proof using Corollary 4.2.
###### Theorem 4.4.
Let $u,v$ and $w$ be vectors in an inner product space. Then
$\displaystyle\left(\left\lVert u\right\rVert^{2}\left\lVert
w\right\rVert^{2}-\bigl{|}\left\langle
u,w\right\rangle\bigr{|}^{2}\right)\left(\left\lVert
w\right\rVert^{2}\left\lVert v\right\rVert^{2}-\bigl{|}\left\langle
w,v\right\rangle\bigr{|}^{2}\right)$
$\displaystyle\quad\geq\bigl{|}\left\langle u,w\right\rangle\left\langle
w,v\right\rangle-\left\langle u,v\right\rangle\left\langle
w,w\right\rangle\bigr{|}^{2}.$
###### Proof.
Without loss of generality, by scaling, we may assume that $u,v$ and $w$ are
unit vectors. We now need to prove
$\left(1-\bigl{|}\left\langle
u,w\right\rangle\bigr{|}^{2}\right)\left(1-\bigl{|}\left\langle
w,v\right\rangle\bigr{|}^{2}\right)\geq\left(\bigl{|}\left\langle
u,w\right\rangle\bigr{|}\bigl{|}\left\langle
w,v\right\rangle\bigr{|}-\bigl{|}\left\langle
u,v\right\rangle\bigr{|}\right)^{2},$
which is equivalent to showing
$1+2\bigl{|}\left\langle u,v\right\rangle\bigr{|}\bigl{|}\left\langle
v,w\right\rangle\bigr{|}\bigl{|}\left\langle
w,u\right\rangle\bigr{|}\geq\bigl{|}\left\langle
u,v\right\rangle\bigr{|}^{2}+\bigl{|}\left\langle
v,w\right\rangle\bigr{|}^{2}+\bigl{|}\left\langle
w,u\right\rangle\bigr{|}^{2}.$ (11)
By Corollary 4.2, it follows that
$\begin{bmatrix}1&\bigl{|}\left\langle
u,v\right\rangle\bigr{|}&\bigl{|}\left\langle
u,w\right\rangle\bigr{|}\\\\[4.26773pt] \bigl{|}\left\langle
v,u\right\rangle\bigr{|}&1&\bigl{|}\left\langle
v,w\right\rangle\bigr{|}\\\\[4.26773pt] \bigl{|}\left\langle
w,u\right\rangle\bigr{|}&\bigl{|}\left\langle
w,v\right\rangle\bigr{|}&1\end{bmatrix}$
is positive semidefinite. Taking the determinant of this matrix yields (11). ∎
Recently, Zhang gave the following inequality (see [25, Theorem 5.1]): if
$u,v$ and $w$ are all unit vectors in an inner product space, then
$1+2\,\mathrm{Re}\left(\left\langle u,v\right\rangle\left\langle
v,w\right\rangle\left\langle w,u\right\rangle\right)\geq\bigl{|}\left\langle
u,v\right\rangle\bigr{|}^{2}+\bigl{|}\left\langle
v,w\right\rangle\bigr{|}^{2}+\bigl{|}\left\langle
w,u\right\rangle\bigr{|}^{2}.$ (12)
Inequality (11) seems weaker than (12). Actually, it is not difficult to prove
that (11) and (12) are equivalent; we leave the details to the interested
reader.
## 5\. Some Triangle inequalities
Let $V$ be an inner product space with the inner product
$\left\langle\cdot,\cdot\right\rangle$ over the real number field $\mathbb{R}$
or the complex number field $\mathbb{C}$. For any two nonzero vectors $u,v$ in
$V$, there are two different ways to define the angle between the vectors $u$
and $v$ in terms of the inner product, namely
$\displaystyle\Phi(u,v):=\arccos\frac{\mathrm{Re}\left\langle
u,v\right\rangle}{\left\lVert u\right\rVert\left\lVert v\right\rVert},$
and
$\Psi(u,v):=\arccos\frac{\bigl{|}\left\langle
u,v\right\rangle\bigr{|}}{\left\lVert u\right\rVert\left\lVert
v\right\rVert}.$
Both definitions are frequently used in the literature, and there are various
reasons for, and advantages to, defining the angles in these ways; see, e.g.,
[13, 4, 18] for recent studies.
The angles $\Phi$ and $\Psi$ are closely related, but not equal unless
$\left\langle u,v\right\rangle$ is a nonnegative number. We can easily see
that $0\leq\Phi\leq\pi$ and $0\leq\Psi\leq\pi/2$, and $\Phi(u,v)\geq\Psi(u,v)$
for all $u,v\in V$, since $\mathrm{Re}\left\langle
u,v\right\rangle\leq\left|\left\langle u,v\right\rangle\right|$ and
$f(x)=\arccos x$ is a decreasing function in $x\in[-1,1]$. It is easy to
verify that
$\Psi(u,v)=\min\limits_{|p|=1}\Phi(pu,v)=\min\limits_{|q|=1}\Phi(u,qv)=\min\limits_{|p|=|q|=1}\Phi(pu,qv).$
(13)
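Relation (13) can be illustrated numerically by minimizing $\Phi(pu,v)$ over a fine grid of unimodular phases $p$, as in this sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def Phi(u, v):   # angle based on Re<u, v>
    return np.arccos(np.real(np.vdot(v, u)) / (np.linalg.norm(u) * np.linalg.norm(v)))

def Psi(u, v):   # angle based on |<u, v>|
    return np.arccos(abs(np.vdot(v, u)) / (np.linalg.norm(u) * np.linalg.norm(v)))

phases = np.exp(1j * np.linspace(0, 2 * np.pi, 2001))
print(np.isclose(Psi(u, v), min(Phi(p * u, v) for p in phases), atol=1e-4))  # True
```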
There are two well-known triangle inequalities for $\Phi$ and $\Psi$ in the
literature, which we state as the following Theorem 5.1.
###### Theorem 5.1.
Let $u,v$ and $w$ be vectors in an inner product space. Then
$\displaystyle\Phi(u,v)$ $\displaystyle\leq\Phi(u,w)+\Phi(w,v),$ (14)
and
$\displaystyle\Psi(u,v)$ $\displaystyle\leq\Psi(u,w)+\Psi(w,v).$ (15)
The first inequality (14) is attributed to Krein, who stated it without proof
in [12]; it was first proved by Rao [21] (see also [10, p. 56]), and the proof
boils down to the positivity of the matrix (9). We remark that (14) over the
real field can be found in [24, p. 31].
For the second one, Lin [13] observed that (15) can be deduced from (14)
because of the relation (13). It is noteworthy that either Corollary 4.2 or
Theorem 4.4 also guarantees (15). Indeed, by Theorem 4.4, we can obtain
$\displaystyle\left(\left\lVert u\right\rVert^{2}\left\lVert
w\right\rVert^{2}-\bigl{|}\left\langle
u,w\right\rangle\bigr{|}^{2}\right)^{1/2}\left(\left\lVert
w\right\rVert^{2}\left\lVert v\right\rVert^{2}-\bigl{|}\left\langle
w,v\right\rangle\bigr{|}^{2}\right)^{1/2}$
$\displaystyle\quad\geq\bigl{|}\left\langle u,w\right\rangle\left\langle
w,v\right\rangle\bigr{|}-\bigl{|}\left\langle u,v\right\rangle\left\langle
w,w\right\rangle\bigr{|}.$
Dividing by $\left\lVert u\right\rVert\left\lVert
v\right\rVert\left\lVert w\right\rVert^{2}$, we have
$\frac{\left|\left\langle u,v\right\rangle\right|}{\left\lVert u\right\rVert\left\lVert v\right\rVert}\geq\frac{\left|\left\langle u,w\right\rangle\right|}{\left\lVert u\right\rVert\left\lVert w\right\rVert}\frac{\left|\left\langle w,v\right\rangle\right|}{\left\lVert w\right\rVert\left\lVert v\right\rVert}-\sqrt{1-\frac{\left|\left\langle u,w\right\rangle\right|^{2}}{\left\lVert u\right\rVert^{2}\left\lVert w\right\rVert^{2}}}\cdot\sqrt{1-\frac{\left|\left\langle w,v\right\rangle\right|^{2}}{\left\lVert w\right\rVert^{2}\left\lVert v\right\rVert^{2}}},$
which is equivalent to
$\displaystyle\cos\Psi(u,v)$
$\displaystyle\geq\cos\Psi(u,w)\cos\Psi(w,v)-\sin\Psi(u,w)\sin\Psi(w,v)$
$\displaystyle=\cos(\Psi(u,w)+\Psi(w,v)).$
Thus, (15) follows by the decreasing property of cosine on $[0,\pi]$.
To end this paper, we present a new proof of inequalities (14) and (15), which
can be viewed as a generalization of the method in [24, p. 31], and we also
provide some new angle inequalities.
###### Proof of Theorem 5.1.
We only prove (15) here, since (14) can be proved in a similar way. Because
the desired inequality involves only the three vectors $u,v$ and $w$, we may
focus on the subspace spanned by $u,v$ and $w$, which has dimension at most
$3$. We may further choose an orthonormal basis (a unit vector in the case of
dimension one) of this subspace $\mathrm{Span}\\{u,v,w\\}$. Assume that $u,v$
and $w$ have coordinate vectors $x,y$ and $z$ under this basis, respectively.
Then the desired inequality holds if and only if it holds for complex vectors
$x,y$ and $z$ with the standard inner product
$\left\langle
x,y\right\rangle=\overline{y_{1}}x_{1}+\overline{y_{2}}x_{2}+\cdots+\overline{y}_{n}x_{n}.$
That is to say, our main goal is to show the following
$\Psi(x,y)\leq\Psi(x,z)+\Psi(z,y),\quad\forall\,x,y,z\in\mathbb{C}^{3}.$ (16)
We next prove inequality (16) in two steps. First, suppose the inner product
space is a Euclidean space (i.e., an inner product space over the field
$\mathbb{R}$). Then the problem reduces to $\mathbb{R},\mathbb{R}^{2}$ or
$\mathbb{R}^{3}$, depending on whether the dimension of
$\mathrm{Span}\\{u,v,w\\}$ is $1,2$ or $3$, respectively. In this real case,
one can draw a simple picture to get the result. Next, suppose the inner
product space is a unitary space (i.e., an inner product space over the field
$\mathbb{C}$). We now employ a technical trick. Note that the desired
inequality (16) is unchanged if we replace $x,y$ with $\omega x,\delta y$ for
any complex numbers $\omega,\delta$ satisfying $|\omega|=|\delta|=1$.
Therefore, we may further assume that both $\left\langle x,z\right\rangle$ and
$\left\langle z,y\right\rangle$ are real
numbers. Let $x=X_{1}+iX_{2},y=Y_{1}+iY_{2}$ and $z=Z_{1}+iZ_{2}$ for some
vectors $X_{i},Y_{i},Z_{i}\in\mathbb{R}^{3}(i=1,2)$ and denote by
$X=\begin{bmatrix}X_{1}\\\ X_{2}\end{bmatrix},\quad Y=\begin{bmatrix}Y_{1}\\\
Y_{2}\end{bmatrix},\quad Z=\begin{bmatrix}Z_{1}\\\ Z_{2}\end{bmatrix}.$
Note that $X,Y,Z\in\mathbb{R}^{6}$; then, by the first step for Euclidean
spaces, we get
$\Psi(X,Y)\leq\Psi(X,Z)+\Psi(Z,Y).$ (17)
Since $\left\langle x,z\right\rangle$ and $\left\langle z,y\right\rangle$ are
real numbers, we have
$\displaystyle\left\langle x,z\right\rangle$
$\displaystyle=\mathrm{Re}\left\langle
x,z\right\rangle=Z_{1}^{T}X_{1}+Z_{2}^{T}X_{2}=\left\langle X,Z\right\rangle,$
$\displaystyle\left\langle z,y\right\rangle$
$\displaystyle=\mathrm{Re}\left\langle
z,y\right\rangle=Y_{1}^{T}Z_{1}+Y_{2}^{T}Z_{2}=\left\langle Z,Y\right\rangle,$
$\displaystyle\left\langle x,y\right\rangle$
$\displaystyle=Y_{1}^{T}X_{1}+Y_{2}^{T}X_{2}+i(Y_{1}^{T}X_{2}-Y_{2}^{T}X_{1}).$
It is easy to see that $\left\lVert x\right\rVert=\left\lVert
X\right\rVert,\left\lVert y\right\rVert=\left\lVert Y\right\rVert$ and
$\left\lVert z\right\rVert=\left\lVert Z\right\rVert$. Thus,
$\Psi(x,z)=\Psi(X,Z),\quad\Psi(z,y)=\Psi(Z,Y).$ (18)
Since $f(t)=\mathrm{arccos}\,(t)$ is a decreasing function in $t\in[-1,1]$, we
get
$\Psi(x,y)=\arccos\frac{\bigl{|}\left\langle
x,y\right\rangle\bigr{|}}{\left\lVert x\right\rVert\left\lVert
y\right\rVert}\leq\arccos\frac{\bigl{|}Y_{1}^{T}X_{1}+Y_{2}^{T}X_{2}\bigr{|}}{\left\lVert
X\right\rVert\left\lVert Y\right\rVert}=\Psi(X,Y).$ (19)
Combining (17), (18) and (19), we obtain the desired inequality (16). ∎
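A direct numerical check of (15) for random complex vectors (our own illustration) reads:

```python
import numpy as np

rng = np.random.default_rng(5)
u, v, w = (rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(3))

def Psi(a, b):
    return np.arccos(abs(np.vdot(b, a)) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(Psi(u, v) <= Psi(u, w) + Psi(w, v))   # triangle inequality (15): True
```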
Using the same idea as in the proof of Theorem 5.1, one can also obtain the
following Proposition 5.2.
###### Proposition 5.2.
Let $u,v$ and $w$ be vectors in an inner product space. Then
$\displaystyle\left|\Phi(u,v)-\Phi(v,w)\right|\leq\Phi(u,w)\leq\Phi(u,v)+\Phi(v,w),$
$\displaystyle 0\leq\Phi(u,v)+\Phi(v,w)+\Phi(w,u)\leq 2\pi.$
Moreover, the above inequalities hold for $\Psi$.
The following inner product inequality is the main result of [22] and can also
be found in [24, p. 195], where it is derived as a tool for proving a trace
inequality for unitary matrices. The line of proof provided here is quite
different and simpler.
###### Corollary 5.3.
Let $u,v$ and $w$ be vectors in an inner product space over $\mathbb{C}$. Then
$\sqrt{1-\frac{\left|\left\langle u,v\right\rangle\right|^{2}}{\left\lVert
u\right\rVert^{2}\left\lVert
v\right\rVert^{2}}}\leq\sqrt{1-\frac{\left|\left\langle
u,w\right\rangle\right|^{2}}{\left\lVert u\right\rVert^{2}\left\lVert
w\right\rVert^{2}}}+\sqrt{1-\frac{\left|\left\langle
w,v\right\rangle\right|^{2}}{\left\lVert w\right\rVert^{2}\left\lVert
v\right\rVert^{2}}}.$
Moreover, the inequality still holds if we replace $|\cdot|$ with
$\mathrm{Re}\,(\cdot)$.
###### Proof.
For brevity, we denote by $\alpha,\beta,\gamma$ the angles
$\Psi(u,v),\Psi(u,w),\Psi(w,v)$ or $\Phi(u,v),\Phi(u,w),\Phi(w,v)$,
respectively. By Proposition 5.2, we have
$\displaystyle\frac{\alpha}{2}\leq\frac{\beta+\gamma}{2}\leq\pi-\frac{\alpha}{2},\quad
0\leq\frac{|\beta-\gamma|}{2}\leq\frac{\alpha}{2}\leq\frac{\pi}{2}.$
Then
$\displaystyle 0\leq\sin\frac{\alpha}{2}\leq\sin\frac{\beta+\gamma}{2},\quad
0\leq\cos\frac{\alpha}{2}\leq\cos\frac{\beta-\gamma}{2}.$
The required inequality can be written as
$\sin\alpha\leq\sin\beta+\sin\gamma$, and indeed
$\sin\alpha=2\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}\leq
2\sin\frac{\beta+\gamma}{2}\cos\frac{\beta-\gamma}{2}=\sin\beta+\sin\gamma.$
This completes the proof. ∎
## Acknowledgments
The first author would like to express sincere thanks to Professor Fuzhen
Zhang for his kind help and valuable discussion [23] before its publication,
which considerably improves the presentation of our manuscript. Finally, all
authors are grateful for valuable comments and suggestions from anonymous
reviewer. This work was supported by NSFC (Grant No. 11671402, 11871479),
Hunan Provincial Natural Science Foundation (Grant No. 2016JJ2138,
2018JJ2479) and Mathematics and Interdisciplinary Sciences Project of Central
South University.
## References
* [1] R. Bhatia, Matrix Analysis, GTM 169, Springer-Verlag, New York, 1997.
* [2] R. Bhatia, Positive Definite Matrices, Princeton University Press, Princeton, 2007.
* [3] W. Berndt, S. Sra, Hlawka-Popoviciu inequalities on positive definite tensors, Linear Algebra Appl. 486 (2015) 317–327.
* [4] D. Castano, V. E. Paksoy, F. Zhang, Angles, triangle inequalities, correlation matrices and metric-preserving and subadditive functions, Linear Algebra Appl. 491 (2016) 15–29.
* [5] H. Chang, V. E. Paksoy, F. Zhang, An inequality for tensor product of positive operators and its applications, Linear Algebra Appl. 498 (2016) 99–105.
* [6] D. Choi, Inequalities related to trace and determinant of positive semidefinite block matrices, Linear Algebra Appl. 532 (2017) 1–7.
* [7] S. S. Dragomir, Some refinements of Schwarz inequality, Simpozionul de Matematici si Aplicatii, Timisoara, Romania, 1985, pp. 13–16.
* [8] S. S. Dragomir, Improving Schwarz inequality in inner product spaces, Linear and Multilinear Algebra 67 (2) (2019) 337–347.
* [9] S. W. Drury, Positive semidefiniteness of a $3\times 3$ matrix related to partitioning, Linear Algebra Appl. 446 (2014) 369–376.
* [10] K.E. Gustafson, D.K.M. Rao, Numerical Range, Springer, New York, 1997.
* [11] R.A. Horn, C.R. Johnson, Matrix Analysis, 2nd ed., Cambridge University Press, Cambridge, 2013.
* [12] M. G. Krein, Angular localization of the spectrum of a multiplicative integral in a Hilbert space, Funct. Anal. Appl. 3 (1969) 89–90.
* [13] M. Lin, Remarks on Krein’s inequality, Math. Intelligencer 34 (1) (2012) 3–4.
* [14] M. Lin, A determinantal inequality for positive definite matrices, Electron. J. Linear Algebra 27 (2014) 821–826.
* [15] M. Lin, P. Driessche, Positive semidefinite $3\times 3$ block matrices, Electron. J. Linear Algebra 27 (2014) 827–836.
* [16] M. Lin, S. Sra, A proof of Thompson’s determinantal inequality, Math. Notes 99 (2016) 164–165.
* [17] R. Merris, Multilinear Algebra, Gordon & Breach, Amsterdam, 1997.
* [18] Z. Otachel, Inequalities for angles between subspaces with applications to Cauchy-Schwarz inequality in inner product spaces, Math. Inequal. Appl. 23 (2020) 487–495.
* [19] V. Paksoy, R. Turkmen, F. Zhang, Inequalities of generalized matrix functions via tensor products, Electron. J. Linear Algebra 27 (2014) 332–341.
* [20] D. Petz, Quantum Information Theory and Quantum Statistics. Theoretical and Mathematical Physics, Springer, Berlin, 2008.
* [21] D.K. Rao, A triangle inequality for angles in a Hilbert space, Rev. Colombiana Mat. X (1976) 95–97.
* [22] B.-Y. Wang, F. Zhang, A trace inequality for unitary matrices, Amer. Math. Monthly 101 (1994) 453–455.
* [23] F. Zhang, Matrix Gems, private communication.
* [24] F. Zhang, Matrix Theory: Basic Results and Techniques, 2nd edition, Springer, New York, 2011.
* [25] F. Zhang, Positivity of matrices with generalized matrix functions, Acta Math. Sinica 28(9) (2012) 1779–1786.
|
2024-09-04T02:54:59.142476 | 2020-03-11T11:49:17 | 2003.05232 | {
"authors": "Derek Reitz, Junxue Li, Wei Yuan, Jing Shi, and Yaroslav Tserkovnyak",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26161",
"submitter": "Derek Reitz",
"url": "https://arxiv.org/abs/2003.05232"
} | arxiv-papers | # Spin Seebeck Effect near the Antiferromagnetic Spin-Flop Transition
Derek Reitz Department of Physics and Astronomy, University of California,
Los Angeles, California 90095, USA Junxue Li Wei Yuan Jing Shi Department
of Physics and Astronomy, University of California, Riverside, California
92521, USA Yaroslav Tserkovnyak Department of Physics and Astronomy,
University of California, Los Angeles, California 90095, USA
###### Abstract
We develop a low-temperature, long-wavelength theory for the interfacial spin
Seebeck effect (SSE) in easy-axis antiferromagnets. The field-induced spin-
flop (SF) transition of Néel order is associated with a qualitative change in
SSE behavior: Below SF, there are two spin carriers with opposite magnetic
moments, with the carriers polarized along the field forming a majority magnon
band. Above SF, the low-energy, ferromagnetic-like mode has magnetic moment
opposite the field. This results in a sign change of the SSE across SF, which
agrees with recent measurements on Cr2O3/Pt and Cr2O3/Ta devices [Li et al.,
Nature 578, 70 (2020)]. In our theory, SSE is due to a Néel spin current below
SF and a magnetic spin current above SF. Using the ratio of the associated
Néel to magnetic spin-mixing conductances as a single constant fitting
parameter, we reproduce the field dependence of the experimental data and
partially the temperature dependence of the relative SSE jump across SF.
Introduction.—SSE involves transfer of spin angular momentum between a magnet
and a metal via thermal spin fluctuations at their interface. In a typical
experiment, a heat flux injected across the interface pumps a spin current
into the metal, which is then converted into a transverse electric voltage
$V_{\mathrm{SSE}}$ by spin-orbit interactions. This spin-current generation
can be broadly attributed to two sources: One is due to a thermal gradient
inside the magnet, which produces bulk magnon transport Adachi _et al._
(2010); Rezende _et al._ (2014, 2016a); Flebus _et al._ (2017); Prakash _et
al._ (2018); Luo _et al._ (2019) and results in interfacial spin
accumulation. The other is due to the interfacial temperature discontinuity,
which produces spin pumping directly Xiao _et al._ (2010).
SSE has been studied in ferromagnets Slachter _et al._ (2010); Uchida _et
al._ (2010a), ferrimagnets Miao _et al._ (2016); Geprägs _et al._ (2016);
Ohnuma _et al._ (2013), paramagnets Wu _et al._ (2015); Li _et al._ (2019);
Yamamoto _et al._ (2019), and recently in antiferromagnets Seki _et al._
(2015); Wu _et al._ (2016); Li _et al._ (2020); Rezende _et al._ (2016b);
Troncoso _et al._ (2020) as well as noncollinear magnets Flebus _et al._
(2019); Ma _et al._ (2020). The sign of $V_{\mathrm{SSE}}$ is determined by
the polarization of the spin current along the applied magnetic field and the
effective spin Hall angle of the metal detector. Fixing the spin Hall angle
and the gyromagnetic ratio, the observed sign of the underlying spin current
turns out to contain valuable information about the nature of spin order in
the magnet and its nonequilibrium transport properties.
Collinear ferromagnets (FMs) or noncollinear systems with weak ferromagnetic
order have their net spin ordering along the magnetic field, whereas the
elementary low-energy magnon excitations yield average spin polarization in
the opposite direction. We can also imagine another class of systems, whose
intrinsic excitations form spin-degenerate bands, with the degeneracy lifted
by Zeeman splitting. The majority species, polarized along the field, may then
determine the sign of the spin current, thus ending up opposite to the FM
case. In our formalism, uniaxial AFs fall in this latter, majority-species
scenario below SF, switching to the ferromagnetic-like SSE behavior above SF.
Contrary to what is argued in Ref. Hirobe _et al._ (2017), therefore, the SSE
with the sign opposite to the FM case is not a unique signature of correlated
spin liquids, but can be expected to be a rather generic low-temperature
signature of materials lacking FM order.
Theoretically, there is at present no consensus on the “correct” sign of the
SSE in antiferromagnets. Rezende et al. Rezende _et al._ (2016b) developed a
magnon transport theory for uniaxial AFs below SF and concluded it falls into
the majority-species scenario (i.e., SSE opposite to the FM case), but did not
consider the sign when comparing their theory to experiment. Yamamoto et al.
Yamamoto _et al._ (2019) used the fluctuation-dissipation theorem in a
Landau-Ginzburg theory for easy-axis AFs below SF to study the SSE around the
Néel temperature $T_{N}$, concluding that paramagnets and AFs below SF share
the same sign, but that it coincides with the FM case.
a low-temperature, long-wavelength theory for the interfacial SSE and show it
changes across SF, in agreement with recent experiments. The quantitative
aspects of the SSE over a broad range of temperatures and magnetic fields also
appear in general agreement with the data.
Spin pumping near SF transition.—In easy-axis AFs, when the Zeeman energy due
to an applied field along the easy axis exceeds the anisotropy energy, there
is a metamagnetic phase transition called spin flop (SF). Below SF (state I),
the Néel order aligns with the easy axis, and there is a small net
magnetization due to remnant longitudinal magnetic susceptibility Nordblad
_et al._ (1979); Foner (1963). Dynamically, there are two circularly-polarized
spin-wave modes with opposite handedness. When quantized, they correspond to
magnons with magnetic moment parallel or antiparallel to the order parameter,
each forming a gas (with equal and opposite chemical potentials, if driven
slightly out of equilibrium Flebus (2019)). Above SF (state II), the Néel
order reorients into the hard plane, and the spins cant giving net
magnetization along the easy axis, due to a sizeable transverse magnetic
susceptibility. There are now two distinct spin-wave modes at long
wavelengths: a ferromagnetic-like mode ($\omega\rightarrow\gamma B$ when
applied field $B\rightarrow\infty$) and a low-energy Goldstone mode associated
with the U(1)-symmetry breaking Néel orientation in the hard plane. See Fig.
1.
Figure 1: $k=0$ resonance frequencies are plotted for an easy-axis AF:
$\omega_{1}$ and $\omega_{2}$ below spin flop and $\omega_{3}$ and
$\omega_{4}$ above spin flop. $B$ is the applied magnetic field,
$B_{c}=(\gamma s)^{-1}\sqrt{K_{1}/\chi}$ is the spin-flop field (which is
about 6 Tesla for Cr2O3) according to the energy (3), and $\omega_{0}=\gamma
B_{c}$ is the gap in I. The $\omega_{1}$ mode is right-hand circularly
polarized and $\omega_{2}$ is left-hand circularly polarized in
$\delta\boldsymbol{l}$ and $\delta\boldsymbol{m}$ (however the magnitude of
$\delta\boldsymbol{m}$ is a factor $\chi K_{1}$ smaller than
$\delta\boldsymbol{l}$ below SF, so it is omitted from the Figure).
$\omega_{3}$ is linearly polarized in $\delta\boldsymbol{l}$ and
$\delta\boldsymbol{m}$ so it does not produce spin currents bey . $\omega_{4}$
is linearly polarized in $\delta\boldsymbol{l}$ and elliptically polarized in
$\delta\boldsymbol{m}$.
The spin-current density pumped across the interface consist of the Néel,
$\boldsymbol{J}_{l}$, and magnetic, $\boldsymbol{J}_{m}$, contributions:
$\boldsymbol{J}_{l}=(\hbar
g^{\uparrow\downarrow}_{l}/4\pi)\,\boldsymbol{l}\times\partial_{t}\boldsymbol{l},~{}~{}~{}\boldsymbol{J}_{m}=(\hbar
g^{\uparrow\downarrow}_{m}/4\pi)\,\boldsymbol{m}\times\partial_{t}\boldsymbol{m},$
(1)
where $g^{\uparrow\downarrow}$ is the respective (real part of the
dimensionless) interfacial spin-mixing conductance per unit area. Thermal
agitations in the metal held at temperature $T_{e}$ and in the AF at $T_{a}$
produce contributions $\boldsymbol{J}_{e}$ and $\boldsymbol{J}_{a}$ to the
spin current, respectively. The spin Seebeck coefficient $S$ can be defined as
the net spin current $J_{s}$ (projected onto the direction of the applied
field) across the interface, divided by the temperature drop $\delta
T=T_{a}-T_{e}$:
$S\equiv J_{s}/\delta T=[J_{a}(T_{a})-J_{e}(T_{e})]/\delta
T\to\partial_{T}J_{a}(T),$ (2)
in linear response, where $J_{a}=J_{l}+J_{m}$ and $J_{e}(T)=J_{a}(T)$, in
thermal equilibrium.
In this paper, we investigate the signatures of SF in the SSE. In state I,
there are two components of the Néel spin current that contribute oppositely
to the SSE. With respect to increasing field, the antiparallel (parallel) mode
decreases (increases) in frequency. The antiparallel mode thus has greater
thermal occupation at finite field, producing a net Néel spin current
antiparallel to the field Ohnuma _et al._ (2013); Rezende _et al._ (2016b).
In state II, there is only a magnetic spin current parallel to the field from
the FM-like mode. Therefore, the SSE changes sign across SF.
Spin-wave modes.—Following standard procedure Andreev and Marchenko (1980), we
construct the low-energy long-wavelength theory for AF dynamics in terms of
the Lagrangian density
$\mathcal{L}(\boldsymbol{l},\boldsymbol{m})=s\boldsymbol{m}\cdot(\boldsymbol{l}\times\partial\boldsymbol{l}/\partial
t)-E$. The energy density is given here by
$E(\boldsymbol{l},\boldsymbol{m})=A(\gradient{\boldsymbol{l}})^{2}/2+\boldsymbol{m}^{2}/2\chi-
K_{1}l_{z}^{2}/2-b\,\boldsymbol{m}\cdot\hat{\textbf{z}},$ (3)
for a bipartite easy-axis AF subjected to a collinear magnetic field. The AF
state is parametrized by directional Néel order $\boldsymbol{l}$ and
normalized spin density $\boldsymbol{m}=\mathbf{s}/s$ ($\mathbf{s}$ being the
spin density and $s\equiv\hbar S/V$, for spin $S$ and volume $V$ per site), in
a nonlinear $\sigma$ model with constraint $\boldsymbol{l}^{2}=1$ and
$\boldsymbol{l}\cdot\boldsymbol{m}=0$. We work well below the ordering
temperature $T_{N}$, retaining the lowest-order gradient term of the Néel
order with spin stiffness $A$. $\chi$ is the transverse magnetic
susceptibility, $K_{1}$ the easy-axis anisotropy, and $b\equiv\gamma sB$, in
terms of the magnetic field $B$ applied along the easy axis in the
$\hat{\textbf{z}}$ direction (where $\gamma$ is the gyromagnetic ratio, whose
sign is lumped into the value of $B$; i.e. when $\gamma<0$, our $B$ has
opposite sign to the applied field). The Euler-Lagrange equations of motion
may be extended to include dissipative forces
$\partial\mathcal{F}/\partial\dot{\boldsymbol{m}}$ and
$\partial\mathcal{F}/\partial\dot{\boldsymbol{l}}$ from the Rayleigh
dissipation functional
$\mathcal{F}=\alpha\dot{\boldsymbol{l}}^{2}/2+\widetilde{\alpha}\dot{\boldsymbol{m}}^{2}/2$,
parametrized by Gilbert damping constants $\alpha$ and $\widetilde{\alpha}$.
The ground states I and II are
$(\boldsymbol{l}_{0},\boldsymbol{m}_{0})_{\mathrm{I}}=(\hat{\textbf{z}},0)$
and
$(\boldsymbol{l}_{0},\boldsymbol{m}_{0})_{\mathrm{II}}=(\hat{\textbf{y}},\chi
b\hat{\textbf{z}})$, with the critical field $B_{c}$ marking the jump from I
to II. Spin waves are linear excitations,
$\boldsymbol{l}=\boldsymbol{l}_{0}+\delta\boldsymbol{l}$ and
$\boldsymbol{m}=\boldsymbol{m}_{0}+\delta\boldsymbol{m}$, satisfying the
equations of motion. The dispersions are
$\displaystyle\omega_{1k},\omega_{2k}=\mp\gamma B+\sqrt{(\gamma
B_{c})^{2}+(ck)^{2}},$ (4a)
$\displaystyle\omega_{3k}=ck,~{}~{}~{}\omega_{4k}=\sqrt{\gamma^{2}B^{2}-\gamma^{2}B_{c}^{2}+(ck)^{2}},$
(4b)
where $c=s^{-1}\sqrt{A/\chi}$ is the speed of the large-$k$ AF spin waves.
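As a quick numerical illustration of these dispersions (our own sketch, not part of the original analysis; the normalization by $\omega_{0}=\gamma B_{c}$ and the field grid are our choices), the following Python snippet evaluates the four $k=0$ branches of Eq. (4) and reproduces the structure of Fig. 1:

```python
import numpy as np

# k = 0 resonance branches of Eq. (4), in units of omega_0 = gamma * B_c,
# as a function of the reduced field beta = B / B_c.
beta = np.linspace(0.0, 2.0, 201)
below = beta <= 1.0  # state I  (B <= B_c)
above = beta >= 1.0  # state II (B >= B_c)

omega1 = 1.0 - beta[below]              # softens toward zero at spin flop
omega2 = 1.0 + beta[below]              # stiffens with field
omega3 = np.zeros(above.sum())          # Goldstone mode of state II
omega4 = np.sqrt(beta[above]**2 - 1.0)  # FM-like mode of state II

assert abs(omega1[-1]) < 1e-9            # the gap closes exactly at B = B_c
print(omega4[-1] / beta[above][-1])      # ~0.87 at B = 2 B_c, -> 1 as B grows
```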
The six Cartesian components of $\delta\boldsymbol{l}$ and
$\delta\boldsymbol{m}$ reduce to four independent and two slave variables,
after applying the nonlinear constraints. Correspondingly, there are four
spin-wave modes with momentum $k$, as shown in Fig. 1 (for consistency of the
gradient expansion, we require $k\ll a^{-1}$, the inverse lattice spacing).
$\omega_{1k}$ and $\omega_{2k}$ are waves with circularly precessing
$\delta\boldsymbol{l}$ and $\delta\boldsymbol{m}$ in the plane perpendicular
to $\boldsymbol{l}_{0,\mathrm{I}}$. $\omega_{3k}$ has linearly polarized
$\delta\boldsymbol{l}(t)\propto e^{i\omega_{3k}t}\hat{\textbf{x}}$ and
$\delta\boldsymbol{m}(t)\propto(\omega_{3k}/\omega_{x})e^{i(\omega_{3k}t-\pi/2)}\hat{\textbf{z}}$
bey . $\omega_{4k}$ has linearly polarized $\delta\boldsymbol{l}(t)\propto
e^{i\omega_{4k}t}\hat{\textbf{z}}$ and elliptically polarized
$\delta\boldsymbol{m}(t)\propto(\omega_{4k}/\omega_{x})e^{i\omega_{4k}t}\hat{\textbf{x}}-\chi
be^{i(\omega_{4k}t-\pi/2)}\hat{\textbf{y}}$, where $\omega_{x}\equiv 1/\chi
s$. Additional anisotropy energy $-K_{2}l_{y}^{2}/2$ within the easy plane
will slightly shift the ground states, gap $\omega_{3}$, and introduce
ellipticities in precession. When $k_{B}T\gg(\hbar/s)\sqrt{K_{2}/\chi}$,
however, these modifications are negligible k2_ .
Main results.—A thermal heat flux driven across the AF interface with a metal
is given in the bulk by $-\sigma\gradient{T}$ and at the interface by
$-\kappa\delta T$, where $\sigma$ and $\kappa$ are, respectively, the bulk and
interfacial (Kapitza) thermal conductivities. $\delta T$ here is the
temperature difference between phonons in the AF and electrons in the metal,
$\delta T=T_{p}-T_{e}$ Xiao _et al._ (2010); Adachi _et al._ (2011). The
Kapitza resistance ($\kappa^{-1}$) is large when there is poor phonon-phonon
and phonon-electron interfacial coupling. For a fixed heat flux, this results
in a larger $\delta T$, which drives the local SSE. The temperature gradient
$\gradient{T}$ inside the magnet, furthermore, generates a bulk spin current,
which flows towards the interface and contributes to the measured SSE Hoffman
_et al._ (2013). We will specialize to the limit, in which the local spin
pumping $\propto\delta T$ dominates, which corresponds to the case of an
opaque interface and/or short spin-diffusion length in the AF.
Equipped with the theory for AF dynamics, based on the Hamiltonian (3), we can
use thermodynamic fluctuation-dissipation relations in order to convert
magnetic response into thermal noise. The spin Seebeck coefficient (2) can
then be evaluated by averaging Eqs. (1) over thermal fluctuations, whose
spectral features follow the spin-wave dispersions discussed above. Carrying
out this program, we arrive at the following final results (with the details
of the derivations discussed later): Below spin flop (state I),
$S_{\mathrm{I}}=\frac{g^{\uparrow\downarrow}_{l}\hbar^{2}}{2\pi\chi
s^{2}}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{\omega_{2k}\partial_{T}n_{\rm
BE}(\omega_{2k})-\omega_{1k}\partial_{T}n_{\rm
BE}(\omega_{1k})}{\omega_{1k}+\omega_{2k}},$ (5)
and above spin flop (state II),
$S_{\mathrm{II}}=\frac{g^{\uparrow\downarrow}_{m}\hbar^{2}\chi\gamma
B}{2\pi}\int\frac{d^{3}k}{(2\pi)^{3}}\omega_{4k}\partial_{T}n_{\rm
BE}(\omega_{4k}),$ (6)
where $n_{\rm BE}(\omega)=(e^{\hbar\omega/k_{B}T}-1)^{-1}$ is the Bose-Einstein distribution function.
We may evaluate the Seebeck coefficients analytically when
$k_{B}T\gg\hbar\gamma B_{c}$. Since they are both linear in $B$, we compare
the field slopes which go as $\partial_{B}S_{\mathrm{I}}\propto
g^{\uparrow\downarrow}_{l}T$ and $\partial_{B}S_{\mathrm{II}}\propto
g^{\uparrow\downarrow}_{m}T^{3}$:
$v(T)\equiv-\frac{\partial_{B}S_{\mathrm{I}}}{\partial_{B}S_{\mathrm{II}}}\approx\frac{g^{\uparrow\downarrow}_{l}}{g^{\uparrow\downarrow}_{m}}\left(\frac{\hbar/\chi
s}{k_{B}T}\right)^{2}\sim\frac{g^{\uparrow\downarrow}_{l}}{g^{\uparrow\downarrow}_{m}}\left(\frac{T_{N}}{T}\right)^{2}.$
(7)
The ratio $v(T)$ thus contains the square of the ratio of exchange ($\propto T_{N}$) to
thermal energy (for the complete expressions, see s_I ). Note that for the
applicability of our long-wavelength description, we require that $T\ll
T_{N}$, throughout.
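The dimensionless integrals behind these slopes (given explicitly in note s_I ) are easy to check numerically; the sketch below (our own, with SciPy as an assumed dependency) confirms the closed forms $\Gamma(3)\zeta(2)=\pi^{2}/3$ and $\Gamma(5)\zeta(4)=4\pi^{4}/15$, and hence the scaling $S_{\mathrm{I}}\propto T$, $S_{\mathrm{II}}\propto T^{3}$ underlying Eq. (7):

```python
import numpy as np
from scipy.integrate import quad

def moment(s):
    """Integral of x^s e^x n_BE(x)^2 over (0, inf), written overflow-free."""
    # e^x / (e^x - 1)^2 = e^{-x} / (1 - e^{-x})^2, safe for large x
    f = lambda x: x**s * np.exp(-x) / (1.0 - np.exp(-x))**2
    val, _ = quad(f, 0.0, np.inf)
    return val

I2 = moment(2)  # enters S_I  ~ g_l * B * T
I4 = moment(4)  # enters S_II ~ g_m * B * T^3
print(I2, np.pi**2 / 3)       # 3.2899...  3.2899...
print(I4, 4 * np.pi**4 / 15)  # 25.9757... 25.9757...
```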
Comparison to experiment.—In a conventional measurement scheme, the
(longitudinal) SSE is revealed in a Nernst geometry as a lateral voltage
induced perpendicular to the magnetic field applied in the plane of the
magnetic interface Uchida _et al._ (2010a). This voltage is understood to
arise from the inverse spin Hall effect associated with the thermally injected
spin current. Normalizing the SSE voltage by the input thermal power
$P_{\mathrm{in}}$, this gives
$\frac{V_{\mathrm{SSE}}}{P_{\mathrm{in}}}=S(B,T)\frac{2e}{\hbar}\frac{\lambda^{*}}{wt}\frac{\rho(T)}{\kappa^{*}(T)},$
(8)
where the materials-dependent interfacial spin-to-charge conversion
lengthscale $\lambda^{*}$ can be loosely broken down into a product of an
effective spin-diffusion length (a.k.a. spin-memory loss)
$\lambda_{\mathrm{sd}}$ in the (heavy) normal metal and the effective spin
Hall angle $\theta_{\mathrm{sH}}$, which converts the spin-current density
$J_{\mathrm{s}}$ injected into the normal metal into the lateral charge-
current density $J_{c}=(2e/\hbar)\theta_{\mathrm{sH}}J_{s}$. The total charge
current is $I_{c}=w\lambda_{\mathrm{sd}}J_{c}$ when $\lambda_{\mathrm{sd}}\ll
t$, the thickness of the metal film, where $w$ is the heterostructure width
transverse to the injected charge current. In the open circuit, the underlying
spin Hall motive force Uchida _et al._ (2010b) is balanced by the detectable
voltage $V_{\mathrm{SSE}}=\rho lI_{c}/wt$, along the length $l$, where $\rho$
is the normal-metal resistivity. Putting everything together and expressing
the spin current in terms of the Seebeck coefficient (2), we get the SSE
voltage (8) normalized by the input power
$P_{\mathrm{in}}=\kappa(T_{p}-T_{\mathrm{e}})lw$.
$\kappa^{*}=\kappa(T_{p}-T_{e})/(T_{a}-T_{e})$ is an effective Kapitza
conductance, which can be reduced relative to $\kappa$, if the lengthscale for
the magnon-phonon equilibration that controls the temperature mismatch
$T_{a}-T_{p}$ in the AF is long compared to $\sigma/\kappa$.
Kapitza conductances for metal-insulator interfaces have been investigated in
Refs. Stoner _et al._ (1992); Stevens _et al._ (2005); Hohensee _et al._
(2015); Lu _et al._ (2016), yielding nontrivial temperature dependences. The
parameters for Cr2O3 are: $\sqrt{A}/a=(\chi\gamma s)^{-1}\approx 500$ T,
$B_{\mathrm{c}}\approx 6$ T, $\gamma\approx\gamma_{e}$ Li _et al._ (2020)
(where $\gamma_{e}$ is the free-electron value), $K_{2}\approx 0$ Foner
(1963). For the Cr2O3/Pt and Cr2O3/Ta devices: $w=0.2$ mm, $t=5$ nm, and the
resistivities of the strips are $\rho_{\mathrm{Pt}}\approx 7\times
10^{-6}~{}\Omega\cdot$m and $\rho_{\mathrm{Ta}}\approx 9\times
10^{-5}~{}\Omega\cdot$m Li _et al._ (2020) at $T=75$ K. We take $\lambda^{*}$
from spin-pumping experiments: $\lambda_{\mathrm{Pt}}^{*}\sim 0.1$ nm Sinova
_et al._ (2015) and $\lambda_{\mathrm{Ta}}^{*}\sim-0.04$ nm Hahn _et al._
(2013); Gómez _et al._ (2014); Yu _et al._ (2018). We approximate
$g^{\uparrow\downarrow}_{m}$ for Pt and Ta with YIG/Pt’s:
$g^{\uparrow\downarrow}_{m}\sim 10$ nm${}^{-2}$ Zhang _et al._ (2015).
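As a sanity check on these numbers (our own arithmetic sketch; CODATA values for the constants), the temperature scale of the zero-field magnon gap in state I, $\hbar\gamma B_{c}/k_{B}$, indeed comes out near the $8$ K quoted in note k2_ :

```python
hbar = 1.054571817e-34    # J s
k_B = 1.380649e-23        # J / K
gamma_e = 1.76085963e11   # rad / (s T), free-electron gyromagnetic ratio

B_c = 6.0                 # T, spin-flop field of Cr2O3 quoted above
print(hbar * gamma_e * B_c / k_B)  # ~8.1 K
```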
Figure 2: Theoretical spin Seebeck coefficients below, Eq. (5), and above, Eq.
(6), spin flop for Cr2O3 are compared to experimental data from Li et al. Li
_et al._ (2020). (a) and (b): The ratio
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$ is fit to the relative
slopes across SF. (c) $S(T)$ is plotted up to $T=80$ K; at higher temperatures,
the long-wavelength theory loses quantitative accuracy. (d) Dispersions below
SF are plotted. The majority spin carrier has magnetic moment along the field,
which determines the polarization of the spin current.
The comparison of the Seebeck coefficients (5), (6) (which may be evaluated
analytically s_I ) to the data Li _et al._ (2020) is shown in Figs. 2(a)-(b).
We use the slope of experimental $V_{\mathrm{SSE}}/P_{\mathrm{in}}$ in I to
determine $\kappa_{\mathrm{Pt}}^{*}\sim 10^{9}$ W/m${}^{2}\cdot$K and
$\kappa_{\mathrm{Ta}}^{*}\sim 10^{10}$ W/m${}^{2}\cdot$K at $T=75$ K, which
are within 1-2 orders of magnitude of the measurements of $\kappa$ by Stoner et
al. Stoner _et al._ (1992) in diamond$|$heavy-metal films. We also use an
independent measurement of crystalline Cr2O3’s bulk thermal conductivity
$\sigma$ Yuan _et al._ (2018), giving us an associated length scale
$\sigma/\kappa_{\mathrm{Pt}}^{*}\approx 400$ nm and
$\sigma/\kappa_{\mathrm{Ta}}^{*}\approx 60$ nm. Since the thin-film
resistivities in our samples are about ten times larger than those for Pt in
Refs. Vlaminck _et al._ (2013); Dutta _et al._ (2017), from which we take the
values of $\lambda_{\mathrm{Pt}}^{*}$ and $g^{\uparrow\downarrow}_{m}$ that
enter the determination of $\kappa_{\mathrm{Pt}}^{*}$, the latter can only be
taken as rough order-of-magnitude guidance.
It should be safe to suppose that $\rho$, $\kappa^{*}$, and
$g^{\uparrow\downarrow}$ are largely field independent, so that the field
dependence in $V_{\mathrm{SSE}}/P_{\mathrm{in}}$ comes from $S$. The relative
value of $S(B)$ across SF is determined theoretically up to the ratio
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$ gl_ , which is a
property of the interfaces. Several values are chosen in plotting Fig. 2. The
best fit is determined by comparing theoretical $v(T)$ s_I , defined in Eq. (7),
to the data at $T=75$ K. Note that $S|_{B=0}=0$, as expected on symmetry
grounds. However, it is nontrivial that the $S_{\rm II}(B)$ dependence
extrapolates to zero at zero field, both experimentally and in our theory.
The temperature dependence in the calculated spin Seebeck coefficient $S$
enters through the magnon occupation number in the fluctuation-dissipation
relation (9). The overall temperature dependence of the measured SSE is,
furthermore, convoluted with thermal and charge conductivities. There are also
slower temperature dependences in various parameters, such as $\chi(T)$ Foner
(1963), which can complicate a detailed analysis. By looking at the slope
ratio $v(T)$, however, we can eliminate the common prefactor associated with
the heat-to-spin-to-charge conversions [see Eq. (8)], if the signal is
dominated by the interfacial thermal bias. The experimental $v(T)$ for a bulk
Cr2O3/Pt sample is plotted in Fig. 3 along with theoretical curves. The
experimental data points for $v(T)$ are obtained by fitting a linear-in-field
line to $V_{\mathrm{SSE}}$ in states I and II and taking the ratio of the
slopes; for the theoretical curves see s_I . At low temperatures $T<7$ K, the
theoretical slopes start becoming nonlinear [so that
$S_{\mathrm{I}},S_{\mathrm{II}}$ must be evaluated numerically using Eqs. (5),
(6)], with $S_{\mathrm{II}}(B)$ at large fields being the first portion of
$S(B)$ to become nonlinear. Nonlinearities in $V_{\mathrm{SSE}}(B)$ are also
observed experimentally above SF at $T=5$ K Li _et al._ (2020).
While we see qualitative agreement, it appears there are additional spin
Seebeck contribution(s) not captured by our formalism. The latter can stem
from a bulk SSE in state I Lebrun _et al._ (2018), since thermal magnons
polarized along the Néel order can diffuse over long distances Prakash _et
al._ (2018). In particular, an additional linear in $T$ contribution to
$S_{\mathrm{I}}$ would affect the estimate of
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$ from the low-$T$ data,
while a cubic contribution would explain the constant offset in $v(T)$ at
larger temperatures. There may also be additional contributions in I and II
due to other types of dynamics associated with interfacial inhomogeneities and
locally uncompensated moments. In order to fit the totality of experimental
data with our interfacial SSE-based model, we would require different values
of $g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$ as a function of
temperature. In particular, the data shown in Fig. 2a for $T=75$ K
(corresponding to the largest temperature data point in Fig. 3) is well
reproduced by taking
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}\approx 15$, while the
low temperature dependence of the data follows $v(T)\approx 160/T^{2}$
corresponding to $g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}\approx
300$. Although the order-of-magnitude estimate for the mixing conductance
ratio and the trend in $v(T)$ as a function of temperature are reasonably
captured by our simple model, a more complete theory (accounting for the bulk
spin transport as well as for disorder-induced mesoscopic effects at the
interface) is needed for developing a detailed quantitative understanding.
Figure 3: The ratio of the spin Seebeck coefficient field slopes $v(T)$.
Experimental data is from the same device as in Fig. 2(a) and is obtained from
the slopes of linear-in-field fit lines, as discussed in the text. Theoretical
curves are based on Eq. (7), evaluated here s_I ; plotted for various
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$. The dashed line shows
an approximate fit to the data.
Theoretical formalism.—We calculate the spin currents in Eqs. (1) by averaging
over thermal fluctuations of the magnetic variables. The latter can be
obtained from the symmetrized fluctuation-dissipation theorem:
$\left\langle\delta\phi_{i}\delta\phi_{j}\right\rangle=\frac{i\hbar}{2}\int\frac{d^{3}k}{(2\pi)^{3}}\left[\chi_{ji}^{*}(\mathbf{k},\omega)-\chi_{ij}(\mathbf{k},\omega)\right]N(\omega),$
(9)
where $\delta\phi_{i}$ stands for a Cartesian component of $\boldsymbol{l}$ or
$\boldsymbol{m}$ and $\chi_{ij}$ is the corresponding linear-response
function. $N(\omega)\equiv n_{\rm BE}(\omega)+1/2$ accounts for thermal
fluctuations associated with occupied modes, according to the Bose-Einstein
distribution function $n_{\rm BE}$, with $1/2$ reflecting the zero-point
motion Landau and Lifshitz (1980). The dynamic susceptibility tensor is
defined by $\delta\phi_{i}=\chi_{ij}\xi_{j}$, for the field $\xi_{j}$
thermodynamically conjugate to $\phi_{j}$. Our system is driven according to
the energy density
$E(B,t)=E(B)-\boldsymbol{m}\cdot\boldsymbol{h}(t)-\boldsymbol{l}\cdot\boldsymbol{g}(t)$,
where $\boldsymbol{g}$ and $\boldsymbol{h}$ are conjugate to $\boldsymbol{l}$
and $\boldsymbol{m}$, respectively. The off-diagonal components of the Néel
response $\chi^{(l)}_{ij}$ thus determine the Néel pumping as
$\left\langle\boldsymbol{l}\times\partial\boldsymbol{l}/\partial
t\right\rangle_{k}\to i\omega\epsilon^{ijk}\left\langle
l_{i}l_{j}\right\rangle$ (in terms of the Levi-Civita tensor $\epsilon^{ijk}$,
and upon the Fourier transform), and similarly for the magnetic response,
$\chi^{(m)}_{ij}$.
The components contributing to spin currents in I are
$\displaystyle\chi_{xy}^{(l)}=-\frac{i}{2s^{2}\chi\omega_{0k}}\left(\frac{1}{\omega-\omega_{1k}+i\epsilon}-\frac{1}{\omega-\omega_{2k}+i\epsilon}\right),$
(10a) $\displaystyle\chi_{xy}^{(m)}=\chi^{2}K_{1}^{2}\chi_{xy}^{(l)},$ (10b)
where $\omega_{0k}=\sqrt{(\gamma B_{c})^{2}+(ck)^{2}}$ and the dispersions are
given in Eq. (4). According to Eq. (10a), the fluctuations perpendicular to
$\boldsymbol{l}_{0,\mathrm{I}}=\hat{\textbf{z}}$ at $\omega_{1k}$ and
$\omega_{2k}$ produce opposite contributions to the spin currents. The
magnetic fluctuations in I, e.g. in Cr2O3, are a factor $(\chi K_{1})^{2}\sim
10^{-7}$ smaller than the Néel fluctuations and will be neglected. In II,
$\delta\boldsymbol{l}$ is linearly polarized in the $\omega_{3k}$ and
$\omega_{4k}$ modes, so Néel fluctuations do not produce spin currents bey .
$\delta\boldsymbol{m}$ is elliptically polarized in the $\omega_{4k}$ mode,
with magnetic fluctuations producing a spin current according to
$\chi_{xy}^{(m)}=i\gamma\chi
B\left(\frac{1}{\omega-\omega_{4k}+i\epsilon}\right).$ (11)
Without dissipation, the poles $\chi_{ij}\propto
1/(\omega-\omega_{k}+i\epsilon)$ at the resonance frequencies are shifted by
positive infinitesimal $\epsilon$. With dissipation, we end up with
Lorentzians centered at these poles, whose widths are determined by bulk
Gilbert damping and the effective damping due to interfacial spin pumping
Tserkovnyak _et al._ (2002); Hoffman _et al._ (2013). When these resonance
modes’ quality factors are large, however, their spectral weight is sharp and
may be simply integrated over. We will assume this is the case, allowing us to
neglect dissipation and simply use the infinitesimal $\epsilon$.
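To sketch how these response functions produce Eqs. (5) and (6) (a condensed version of the computation, our own; overall prefactors are fixed by the symmetrization in (9)): in the sharp-mode limit, $1/(\omega-\omega_{4k}+i\epsilon)\rightarrow\mathcal{P}[1/(\omega-\omega_{4k})]-i\pi\delta(\omega-\omega_{4k})$, so the antisymmetrized combination entering Eq. (9) built from Eq. (11) reduces to an on-shell weight $\propto\gamma\chi B\,\delta(\omega-\omega_{4k})$, while $\partial_{t}\rightarrow i\omega$ in Eq. (1) supplies one power of $\omega_{4k}$. The frequency integral then collapses onto the magnon branch,
$J_{m}\propto g^{\uparrow\downarrow}_{m}\hbar\,\chi\gamma B\int\frac{d^{3}k}{(2\pi)^{3}}\,\omega_{4k}N(\omega_{4k}),$
and subtracting the electron-bath contribution $J_{e}(T_{e})=J_{a}(T_{e})$ and linearizing in $\delta T$ as in Eq. (2) drops the zero-point $1/2$ in $N$ and recovers Eq. (6); Eq. (5) follows analogously from the two opposite-sign poles in Eq. (10a).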
Conclusion and outlook.—Our theory specializes to SSE from spin currents
produced by an interfacial thermal bias. The formalism may be extended to
account for bulk thermal gradients, which produce nonequilibrium interfacial
spin accumulation $\boldsymbol{\mu}$. However, determining $\boldsymbol{\mu}$
requires complementing the interfacial transport with coupled spin and heat
transport in the bulk Prakash _et al._ (2018), which is beyond our present
scope. The purely local SSE studied here should quantitatively model SSE for
interfaces with large interfacial thermal resistances and weak interfacial
spin coupling. In this regime, SSE would provide a noninvasive probe of the
magnet’s transverse components of $\chi_{ij}$, much like scanning tunneling
microscopy is an interfacial probe of an electron density of states Tersoff
and Hamann (1983).
We have discussed two classes of systems which produce different signs for
SSE. The FM-like class involves spin excitations with magnetic moment opposite
the order parameter, such as in FMs, uniaxial AFs above SF, and DMI AFs.
Another class involves degenerate spin excitations, whose degeneracy is lifted
by magnetic field. The majority carrier, which has magnetic moment along the
magnetic field, can then dominate spin transport. In our low-temperature,
long-wavelength theory we have shown that uniaxial AFs below SF belong to this
class. However, when the bulk SSE contribution is significant, this reasoning
alone may not determine the sign. Since the majority band reaches the edge of
the BZ faster than the minority, it may suffer greater umklapp scattering at
elevated temperatures, which would lower its conductivity. A full transport
theory is then required to determine the SSE sign, as a function of
temperature.
By comparing
$v(T)\equiv-\partial_{B}S_{\mathrm{I}}/\partial_{B}S_{\mathrm{II}}$ in
experiment to our theory as a function of $T$, we see some discrepancy. Our
theory predicts $v\propto 1/T^{2}$, while the Cr2O3/Pt sample indicates
$v(T)\approx 0.7+160/T^{2}$. The constant offset could stem from a bulk
Seebeck contribution in I at higher $T$ whose coefficient goes as $T^{3}$.
Above SF, bulk contributions to SSE can be expected to be reduced, since spin
transport is then normal to the Néel order. $v(T)$ may also have contributions
from paramagnetic impurities or other extrinsic surface modes, or be
convoluted with temperature dependence in
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$. The magnitude of
$g^{\uparrow\downarrow}_{l}$ and $g^{\uparrow\downarrow}_{m}$ can,
furthermore, vary from one sample to another due to the amount of disorder in
the interfacial exchange coupling Takei _et al._ (2014); Troncoso _et al._
(2020). While our theory well reproduces the temperature dependence at low
$T$, a different value of
$g^{\uparrow\downarrow}_{m}/g^{\uparrow\downarrow}_{l}$ is needed to
consistently explain higher temperature data. Looking forward, a more complete
theory is called for which includes SSE contributions from both the interface
and the bulk, in addition to the dynamical effects of disorder at the
interface.
The sensitivity of the SSE to the preparation and quality of the interface may
complicate the analysis based on the measured $v(T)$ across the SF. We recall
that Seki et al. Seki _et al._ (2015) did not observe a significant SSE in I
at low temperatures in Cr2O3/Pt. Wu et al. Wu _et al._ (2016) observed SSE
with nonlinear field dependence and ferromagnetic sign signature on both sides
of SF in MnF2/Pt. Ferromagnetic sign in I was also observed in an etched-interface Cr2O3/Pt sample by Li et al. Li _et al._ (2020). Thus the origin of
the measured sign of the signal in I, and, therefore, the physical mechanism
of SSE are unclear for these cases. We also note that both Wu et al. Wu _et
al._ (2015) in paramagnetic SSE in GGG/Pt and Li et al. Li _et al._ (2020) in
Cr2O3/Pt at $T>T_{N}$ observed the ferromagnetic sign signature, suggesting
perhaps the importance of the magnon umklapp scattering in the bulk.
The work was supported by the U.S. Department of Energy, Office of Basic
Energy Sciences under Award No. DE-SC0012190.
## References
* Adachi _et al._ (2010) H. Adachi, K.-i. Uchida, E. Saitoh, J.-i. Ohe, S. Takahashi, and S. Maekawa, Applied Physics Letters 97, 252506 (2010).
* Rezende _et al._ (2014) S. M. Rezende, R. L. Rodríguez-Suárez, R. O. Cunha, A. R. Rodrigues, F. L. A. Machado, G. A. Fonseca Guerra, J. C. Lopez Ortiz, and A. Azevedo, Phys. Rev. B 89, 014416 (2014).
* Rezende _et al._ (2016a) S. Rezende, R. Rodríguez-Suárez, R. Cunha, J. L. Ortiz, and A. Azevedo, Journal of Magnetism and Magnetic Materials 400, 171 (2016a).
* Flebus _et al._ (2017) B. Flebus, K. Shen, T. Kikkawa, K.-i. Uchida, Z. Qiu, E. Saitoh, R. A. Duine, and G. E. W. Bauer, Phys. Rev. B 95, 144420 (2017).
* Prakash _et al._ (2018) A. Prakash, B. Flebus, J. Brangham, F. Yang, Y. Tserkovnyak, and J. P. Heremans, Phys. Rev. B 97, 020408 (2018).
* Luo _et al._ (2019) Y. Luo, C. Liu, H. Saglam, Y. Li, W. Zhang, S. S. L. Zhang, J. E. Pearson, B. Fisher, A. Bhattacharya, and A. Hoffmann, (2019), arXiv:1910.10340 [cond-mat.mtrl-sci] .
* Xiao _et al._ (2010) J. Xiao, G. E. W. Bauer, K.-c. Uchida, E. Saitoh, and S. Maekawa, Phys. Rev. B 81, 214418 (2010).
* Slachter _et al._ (2010) A. Slachter, F. L. Bakker, J.-P. Adam, and B. J. van Wees, Nature Physics 6, 879 (2010).
* Uchida _et al._ (2010a) K.-i. Uchida, T. Nonaka, T. Ota, and E. Saitoh, Applied Physics Letters 97, 262504 (2010a).
* Miao _et al._ (2016) B. F. Miao, S. Y. Huang, D. Qu, and C. L. Chien, AIP Advances 6, 015018 (2016).
* Geprägs _et al._ (2016) S. Geprägs, A. Kehlberger, F. Della Coletta, Z. Qiu, E.-J. Guo, T. Schulz, C. Mix, S. Meyer, A. Kamra, M. Althammer, _et al._ , Nature communications 7, 10452 (2016).
* Ohnuma _et al._ (2013) Y. Ohnuma, H. Adachi, E. Saitoh, and S. Maekawa, Phys. Rev. B 87, 014423 (2013).
* Wu _et al._ (2015) S. M. Wu, J. E. Pearson, and A. Bhattacharya, Phys. Rev. Lett. 114, 186602 (2015).
* Li _et al._ (2019) J. Li, Z. Shi, V. H. Ortiz, M. Aldosary, C. Chen, V. Aji, P. Wei, and J. Shi, Phys. Rev. Lett. 122, 217204 (2019).
* Yamamoto _et al._ (2019) Y. Yamamoto, M. Ichioka, and H. Adachi, Phys. Rev. B 100, 064419 (2019).
* Seki _et al._ (2015) S. Seki, T. Ideue, M. Kubota, Y. Kozuka, R. Takagi, M. Nakamura, Y. Kaneko, M. Kawasaki, and Y. Tokura, Phys. Rev. Lett. 115, 266601 (2015).
* Wu _et al._ (2016) S. M. Wu, W. Zhang, A. KC, P. Borisov, J. E. Pearson, J. S. Jiang, D. Lederman, A. Hoffmann, and A. Bhattacharya, Phys. Rev. Lett. 116, 097204 (2016).
* Li _et al._ (2020) J. Li, B. Wilson, R. Cheng, M. Lohmann, M. Kavand, W. Yuan, M. Aldosary, N. Agladze, P. Wei, M. Sherwin, and J. Shi, Nature 578, 70 (2020).
* Rezende _et al._ (2016b) S. M. Rezende, R. L. Rodríguez-Suárez, and A. Azevedo, Phys. Rev. B 93, 014425 (2016b).
* Troncoso _et al._ (2020) R. E. Troncoso, S. A. Bender, A. Brataas, and R. A. Duine, Phys. Rev. B 101, 054404 (2020).
* Flebus _et al._ (2019) B. Flebus, Y. Tserkovnyak, and G. A. Fiete, Phys. Rev. B 99, 224410 (2019).
* Ma _et al._ (2020) B. Ma, B. Flebus, and G. A. Fiete, Phys. Rev. B 101, 035104 (2020).
* Hirobe _et al._ (2017) D. Hirobe, M. Sato, T. Kawamata, Y. Shiomi, K.-i. Uchida, R. Iguchi, Y. Koike, S. Maekawa, and E. Saitoh, Nature Physics 13, 30 (2017).
* Nordblad _et al._ (1979) P. Nordblad, L. Lundgren, E. Figueroa, U. Gäfvert, and O. Beckman, Physica Scripta 20, 105 (1979).
* Foner (1963) S. Foner, Phys. Rev. 130, 183 (1963).
* Flebus (2019) B. Flebus, Phys. Rev. B 100, 064410 (2019).
* (27) If we relax the nonlinear constraint $\delta\boldsymbol{l}^{2}=1$, we can allow for an additional term $\boldsymbol{m}\times\delta E/\delta\boldsymbol{l}$ in the equation of motion for $\boldsymbol{l}$. When considering linear excitations about the same ground states, the only change then is that $\omega_{3k}$ develops small elliptical polarization in $\boldsymbol{\delta l}$. This produces a Néel spin current parallel to the field with similar magnitude to the $\omega_{4k}$ magnetic spin current. Since it pumps at $g^{\uparrow\downarrow}_{l}$ $\lesssim$ $g^{\uparrow\downarrow}_{m}$, we discard it here.
* Andreev and Marchenko (1980) A. F. Andreev and V. I. Marchenko, Soviet Physics Uspekhi 23, 21 (1980).
* (29) For example, in Cr2O3, the temperature associated with the zero-field magnon gap in I is $T=\hbar\gamma B_{c}/k_{B}\approx 8$ K, and the temperature associated with $K_{2}$ will be much less than this. So at all but low temperatures, the majority of magnons contributing to SSE will have frequencies which are unaffected by $K_{2}$.
* Adachi _et al._ (2011) H. Adachi, J.-i. Ohe, S. Takahashi, and S. Maekawa, Phys. Rev. B 83, 094410 (2011).
* Hoffman _et al._ (2013) S. Hoffman, K. Sato, and Y. Tserkovnyak, Phys. Rev. B 88, 064408 (2013).
* (32) For $k_{B}T\gg\hbar\gamma B_{c}$, we get $S_{\mathrm{I}}\approx\frac{g^{\uparrow\downarrow}_{l}\gamma Bk_{B}^{2}T}{2\pi^{3}c^{3}\chi s^{2}}\int_{0}^{\infty}dx~{}x^{2}e^{x}n_{\rm BE}^{2}(x)\propto g^{\uparrow\downarrow}_{l}BT,$ $S_{\mathrm{II}}\approx\frac{g^{\uparrow\downarrow}_{m}\gamma\chi Bk_{B}^{4}T^{3}}{4\pi^{3}c^{3}\hbar^{2}}\int_{0}^{\infty}dx~{}x^{4}e^{x}n_{\rm BE}^{2}(x)\propto g^{\uparrow\downarrow}_{m}BT^{3},$ where $x$ is dimensionless and the integrals are convergent.
* Uchida _et al._ (2010b) K. Uchida, T. Ota, K. Harii, S. Takahashi, S. Maekawa, Y. Fujikawa, and E. Saitoh, Solid State Communications 150, 524 (2010b).
* Stoner _et al._ (1992) R. J. Stoner, H. J. Maris, T. R. Anthony, and W. F. Banholzer, Phys. Rev. Lett. 68, 1563 (1992).
* Stevens _et al._ (2005) R. J. Stevens, A. N. Smith, and P. M. Norris, Journal of Heat Transfer 127, 315 (2005).
* Hohensee _et al._ (2015) G. T. Hohensee, R. Wilson, and D. G. Cahill, Nature communications 6, 6578 (2015).
* Lu _et al._ (2016) T. Lu, J. Zhou, T. Nakayama, R. Yang, and B. Li, Phys. Rev. B 93, 085433 (2016).
* Sinova _et al._ (2015) J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Rev. Mod. Phys. 87, 1213 (2015).
* Hahn _et al._ (2013) C. Hahn, G. de Loubens, O. Klein, M. Viret, V. V. Naletov, and J. Ben Youssef, Phys. Rev. B 87, 174417 (2013).
* Gómez _et al._ (2014) J. E. Gómez, B. Zerai Tedlla, N. R. Álvarez, G. Alejandro, E. Goovaerts, and A. Butera, Phys. Rev. B 90, 184401 (2014).
* Yu _et al._ (2018) R. Yu, B. F. Miao, L. Sun, Q. Liu, J. Du, P. Omelchenko, B. Heinrich, M. Wu, and H. F. Ding, Phys. Rev. Materials 2, 074406 (2018).
* Zhang _et al._ (2015) W. Zhang, W. Han, X. Jiang, S.-H. Yang, and S. S. Parkin, Nature Physics 11, 496 (2015).
* Yuan _et al._ (2018) W. Yuan, Q. Zhu, T. Su, Y. Yao, W. Xing, Y. Chen, Y. Ma, X. Lin, J. Shi, R. Shindou, _et al._ , Science advances 4, eaat1098 (2018).
* Vlaminck _et al._ (2013) V. Vlaminck, J. E. Pearson, S. D. Bader, and A. Hoffmann, Phys. Rev. B 88, 064414 (2013).
* Dutta _et al._ (2017) S. Dutta, K. Sankaran, K. Moors, G. Pourtois, S. Van Elshocht, J. Bömmels, W. Vandervorst, Z. Tőkei, and C. Adelmann, Journal of Applied Physics 122, 025107 (2017).
* (46) Takei et al. Takei _et al._ (2014) concluded within their model that the two spin-mixing conductances may be of similar order of magnitude, with $g^{\uparrow\downarrow}_{m}\gtrsim g^{\uparrow\downarrow}_{l}$, and $g^{\uparrow\downarrow}_{l}$ approaching $g^{\uparrow\downarrow}_{m}$ with increasing disorder of interfacial exchange coupling.
* Lebrun _et al._ (2018) R. Lebrun, A. Ross, S. Bender, A. Qaiumzadeh, L. Baldrati, J. Cramer, A. Brataas, R. Duine, and M. Kläui, Nature 561, 222 (2018).
* Landau and Lifshitz (1980) L. Landau and E. Lifshitz, Publisher: Butterworth-Heinemann 5 (1980).
* Tserkovnyak _et al._ (2002) Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. Lett. 88, 117601 (2002).
* Tersoff and Hamann (1983) J. Tersoff and D. R. Hamann, Phys. Rev. Lett. 50, 1998 (1983).
* Takei _et al._ (2014) S. Takei, B. I. Halperin, A. Yacoby, and Y. Tserkovnyak, Phys. Rev. B 90, 094408 (2014).
|
2024-09-04T02:54:59.151337 | 2020-03-11T11:49:26 | 2003.05234 | {
"authors": "Absos Ali Shaikh, Chandan Kumar Mondal, Prosenjit Mandal",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26162",
"submitter": "Chandan Kumar Mondal",
"url": "https://arxiv.org/abs/2003.05234"
} | arxiv-papers | # Compact gradient $\rho$-Einstein soliton is isometric to the Euclidean
sphere
Absos Ali Shaikh1, Chandan Kumar Mondal2, Prosenjit Mandal3
Department of Mathematics,
University of Burdwan, Golapbag,
Burdwan-713104,
West Bengal, India<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
In this paper we have investigated some aspects of gradient $\rho$-Einstein
Ricci soliton in a complete Riemannian manifold. First, we have proved that
the compact gradient $\rho$-Einstein soliton is isometric to the Euclidean
sphere by showing that the scalar curvature becomes constant. Second, we have
shown that in a non-compact gradient $\rho$-Einstein soliton satisfying some
integral condition, the scalar curvature vanishes.
††footnotetext: $\mathbf{2020}$ Mathematics Subject Classification: 53C20;
53C21; 53C44.
Key words and phrases: Gradient $\rho$-Einstein Ricci soliton; scalar
curvature; Riemannian manifold.
## 1\. Introduction and preliminaries
A $1$-parameter family of metrics $\{g(t)\}$ on a Riemannian manifold $M$,
defined on some time interval $I\subset\mathbb{R}$ is said to satisfy Ricci
flow if it satisfies
$\frac{\partial}{\partial t}g_{ij}=-2R_{ij},$
where $R_{ij}$ is the Ricci curvature with respect to the metric $g_{ij}$.
Hamilton [9] proved that for any smooth initial metric $g(0)=g_{0}$ on a
closed manifold, there exists a unique solution $g(t)$, $t\in[0,\epsilon)$, to
the Ricci flow equation for some $\epsilon>0$. A solution $g(t)$ of the Ricci
flow of the form
$g(t)=\sigma(t)\varphi(t)^{*}g(0),$
where $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ is a positive function and
$\varphi(t):M\rightarrow M$ is a 1-parameter family of diffeomorphisms, is
called a Ricci soliton. It is known that if the initial metric $g_{0}$
satisfies the equation
(1) $Ric(g_{0})+\frac{1}{2}\pounds_{X}g_{0}=\lambda g_{0},$
where $\lambda$ is a constant and $X$ is a smooth vector field on $M$, then
the manifold $M$ admits a Ricci soliton. Therefore, equation (1) is, in
general, known as the Ricci soliton equation. If $X$ is the gradient of some smooth
function, then it is called a gradient Ricci soliton. For more results on Ricci
solitons, see [2, 7, 8]. In 1979, Bourguignon [1] introduced the notion of
Ricci-Bourguignon flow, where the metric $g(t)$ evolves according to the
flow equation
$\frac{\partial}{\partial t}g_{ij}=-2R_{ij}+2\rho Rg_{ij},$
where $\rho$ is a non-zero scalar constant and $R$ is the scalar curvature of
the metric $g(t)$. Following the Ricci soliton, Catino and Mazzieri [4] gave
the definition of gradient $\rho$-Einstein soliton, which is the self-similar
solution of Ricci-Bourguignon flow. This soliton is also called gradient
Ricci-Bourguignon soliton by some authors.
###### Definition 1.1.
[4] Let $(M,g)$ be a Riemannian manifold of dimension $n$, $(n\geq 3)$, and
let $\rho\in\mathbb{R}$, $\rho\neq 0$. Then $M$ is called gradient
$\rho$-Einstein soliton, denoted by $(M,g,f,\rho)$, if there is a smooth
function $f:M\rightarrow\mathbb{R}$ such that
(2) $Ric+\nabla^{2}f=\lambda g+\rho Rg,$
for some constant $\lambda$.
The soliton is trivial if $\nabla f$ is a parallel vector field. The function
$f$ is known as $\rho$-Einstein potential function. If $\lambda>0$
$(\text{resp.}=0,<0)$, then the gradient $\rho$-Einstein soliton
$(M,g,f,\rho)$ is said to be shrinking (resp. steady or expanding). On the
other hand, the $\rho$-Einstein soliton is called gradient Einstein soliton,
gradient traceless Ricci soliton or gradient Schouten soliton if
$\rho=1/2,1/n$ or $1/2(n-1)$. Later, this notion has been generalized in
various directions such as $m$-quasi Einstein manifold [11], $(m,\rho)$-quasi
Einstein manifold [12], Ricci-Bourguignon almost soliton [13].
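A minimal explicit example may be helpful here (our own illustration, not taken from [4]): on flat Euclidean space $M=\mathbb{R}^{n}$ with $f(x)=\frac{\lambda}{2}|x|^{2}$ one has $\nabla^{2}f=\lambda g$, $Ric=0$ and $R=0$, so that
$Ric+\nabla^{2}f=\lambda g=\lambda g+\rho Rg$
holds for every $\rho\neq 0$; this is the analogue of the Gaussian soliton of Ricci flow, and it is non-trivial in the above sense (i.e., $\nabla f=\lambda x$ is not parallel) whenever $\lambda\neq 0$.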
Catino and Mazzieri [4] showed that compact gradient Einstein, Schouten, or
traceless Ricci solitons are trivial. They classified three-dimensional gradient
shrinking Schouten solitons and proved that they are isometric to a finite
quotient of either $\mathbb{S}^{3}$ or $\mathbb{R}^{3}$ or
$\mathbb{R}\times\mathbb{S}^{2}$. Huang [10] deduced a sufficient condition
for the compact gradient shrinking $\rho$-Einstein soliton to be isometric to
a quotient of the round sphere $\mathbb{S}^{n}$.
###### Theorem 1.1.
[10] Let $(M,g,f,\rho)$ be an $n$-dimensional $(4\leq n\leq 5)$ compact
gradient shrinking $\rho$-Einstein soliton with $\rho<0$. If the following
condition holds
$\displaystyle\Big{(}\int_{M}|W+\frac{\sqrt{2}}{\sqrt{n}(n-2)}Z\mathbin{\bigcirc\mspace{-15.0mu}\wedge\mspace{3.0mu}}g|^{2}\Big{)}^{\frac{2}{n}}$
$\displaystyle+\sqrt{\frac{(n-4)^{2}(n-1)}{8(n-2)}}\lambda
vol(M)^{\frac{2}{n}}$ $\displaystyle\leq\sqrt{\frac{n-2}{32(n-1)}}Y(M,[g]),$
where $Z=Ric-\frac{R}{n}g$ is the trace-less Ricci tensor, $W$ is the Weyl
tensor and $Y(M,[g])$ is the Yamabe invariant associated to $(M,g)$, then $M$
is isometric to a quotient of the round sphere $\mathbb{S}^{n}$.
In 2019, Mondal and Shaikh [14] proved the isometry theorem for gradient
$\rho$-Einstein soliton in the case of a conformal vector field. In particular, they
proved the following result:
###### Theorem 1.2.
[14] Let $(M,g,f,\rho)$ be a compact gradient $\rho$-Einstein soliton. If
$\nabla f$ is a non-trivial conformal vector field, then $M$ is isometric to
the Euclidean sphere $\mathbb{S}^{n}$.
Dwivedi [13] proved an isometry theorem for gradient Ricci-Bourguignon
soliton.
###### Theorem 1.3.
[13] A non-trivial compact gradient Ricci-Bourguignon soliton is isometric to
an Euclidean sphere if any one of the following holds
(1) $M$ has constant scalar curvature.
(2) $\int_{M}g(\nabla R,\nabla f)\leq 0$.
(3) $M$ is a homogeneous manifold.
We note that Catino et al. [5] proved many results for gradient
$\rho$-Einstein solitons on non-compact manifolds.
###### Theorem 1.4.
Let $(M,g,f,\rho)$ be a complete non-compact gradient shrinking
$\rho$-Einstein soliton with $0<\rho<1/2(n-1)$, bounded curvature, non-negative
radial sectional curvature, and non-negative Ricci curvature. Then the scalar
curvature is constant.
In this paper, we have shown that a non-trivial compact gradient
$\rho$-Einstein soliton is isometric to an Euclidean sphere. The main results
of this paper are as follows:
###### Theorem 1.5.
A nontrivial compact gradient $\rho$-Einstein soliton has constant scalar
curvature and therefore $M$ is isometric to an Euclidean sphere.
We have also shown that in a non-compact gradient $\rho$-Einstein soliton
satisfying some conditions the scalar curvature vanishes.
###### Theorem 1.6.
Suppose $(M,g,f,\rho)$ is a non-compact gradient non-expanding $\rho$-Einstein
soliton with non-negative scalar curvature. If $\rho>1/n$ and the
$\rho$-Einstein potential function satisfies
(3) $\int_{M-B(p,r)}d(x,p)^{-2}f<\infty,$
then the scalar curvature vanishes in $M$.
## 2\. Proof of the results
###### Proof of the Theorem 1.5.
Since the gradient $\rho$-Einstein soliton is non-trivial, it follows that
$\rho\neq 1/n$, see [4]. Taking the trace of (2) we get
(4) $R+\Delta f=\lambda n+\rho Rn.$
From the commutation formula, we obtain
(5) $\Delta\nabla_{i}f=\nabla_{i}\Delta f+R_{ij}\nabla_{j}f.$
By using the contracted second Bianchi identity, we have
$\Delta\nabla_{i}f=\nabla_{j}\nabla_{j}\nabla_{i}f=\nabla_{j}(\lambda g_{ij}+\rho Rg_{ij}-R_{ij})=\nabla_{i}(\rho R-\frac{1}{2}R).$
and
$\nabla_{i}\Delta f=\nabla_{i}(\lambda n+\rho Rn-R)=\nabla_{i}(\rho Rn-R).$
Therefore, (5) yields
(6) $(n-1)\rho\nabla_{i}R-\frac{1}{2}\nabla_{i}R+R_{ij}\nabla_{j}f=0.$
Taking the covariant derivative $\nabla_{l}$, we get
$(n-1)\rho\nabla_{l}\nabla_{i}R-\frac{1}{2}\nabla_{l}\nabla_{i}R+\nabla_{l}R_{ij}\nabla_{j}f+R_{ij}\nabla_{l}\nabla_{j}f=0.$
Taking the trace on both sides, we obtain
(7) $((n-1)\rho-\frac{1}{2})\Delta R+\frac{1}{2}g(\nabla R,\nabla f)+R(\lambda
n+\rho Rn-R)=0.$
Now, integrating using the divergence theorem, we get
$\int_{M}R(\lambda n+\rho Rn-R)=-\int_{M}((n-1)\rho-\frac{1}{2})\Delta R-\frac{1}{2}\int_{M}g(\nabla R,\nabla f)=\frac{1}{2}\int_{M}R\Delta f=\frac{1}{2}\int_{M}R(\lambda n+\rho Rn-R).$
The above equation is true only if
(8) $\int_{M}R(\lambda n+\rho Rn-R)=0,$
which implies
(9) $\int_{M}R\Big{(}R+\frac{\lambda n}{n\rho-1}\Big{)}=0.$
Integrating (4) as well, we obtain
(10) $\int_{M}\Big{(}R+\frac{\lambda n}{n\rho-1}\Big{)}=0.$
Therefore, (9) and (10) together imply that
$\int_{M}\Big{(}R+\frac{\lambda n}{n\rho-1}\Big{)}^{2}=0.$
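Indeed, writing $c=\lambda n/(n\rho-1)$ (legitimate since $\rho\neq 1/n$, as noted at the start of the proof), (9) and (10) read $\int_{M}R(R+c)=0$ and $\int_{M}(R+c)=0$, and expanding the square gives
$\int_{M}(R+c)^{2}=\int_{M}R(R+c)+c\int_{M}(R+c)=0.$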
Hence, $R=\lambda n/(1-\rho n)$. Then from Theorem 1.3 we can conclude our
result. ∎
###### Proof of the Theorem 1.6.
From (4) we get
$(n\rho-1)R=\Delta f-\lambda n.$
Since $\lambda\geq 0$, the above equation implies that
(11) $(n\rho-1)R\leq\Delta f.$
Now, we consider the cut-off function, introduced in [6], $\varphi_{r}\in
C^{2}_{0}(B(p,2r))$ for $r>0$ such that
$\begin{cases}0\leq\varphi_{r}\leq 1&\text{ in }B(p,2r)\\\
\varphi_{r}=1&\text{ in }B(p,r)\\\
|\nabla\varphi_{r}|^{2}\leq\frac{C}{r^{2}}&\text{ in }B(p,2r)\\\
\Delta\varphi_{r}\leq\frac{C}{r^{2}}&\text{ in }B(p,2r),\end{cases}$
where $C>0$ is a constant. Then for $r\rightarrow\infty$, we have
$\Delta\varphi^{2}_{r}\rightarrow 0$ as
$\Delta\varphi^{2}_{r}\leq\frac{C}{r^{2}}$. Then we calculate
(12)-(13) $(n\rho-1)\int_{M}R\varphi^{2}_{r}\leq\int_{M}\varphi^{2}_{r}\Delta f=\int_{B(p,2r)-B(p,r)}f\,\Delta\varphi_{r}^{2}\leq\frac{C}{r^{2}}\int_{B(p,2r)-B(p,r)}f\rightarrow 0,$
as $r\rightarrow\infty$. Hence, we obtain
(14) $(n\rho-1)\lim_{r\rightarrow\infty}\int_{B(p,r)}R\leq 0.$
Since $\rho>1/n$, it follows that
$\lim_{r\rightarrow\infty}\int_{B(p,r)}R\leq 0.$
But $R$ is non-negative everywhere in $M$. Therefore, $R\equiv 0$ in $M$. ∎
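We remark that cut-off functions with these properties can be produced by scaling (a sketch in the Euclidean model; on a general manifold this is where the comparison techniques of [6] enter): take $\varphi_{r}(x)=\psi(d(x,p)/r)$ with $\psi\in C^{2}[0,\infty)$, $0\leq\psi\leq 1$, $\psi\equiv 1$ on $[0,1]$ and $\psi\equiv 0$ on $[2,\infty)$. For such a radial function on $\mathbb{R}^{n}$,
$\Delta\varphi_{r}^{2}(x)=\frac{1}{r^{2}}\Big{[}(\psi^{2})''(t)+\frac{n-1}{t}(\psi^{2})'(t)\Big{]},\qquad t=\frac{d(x,p)}{r},$
and since $(\psi^{2})'$ and $(\psi^{2})''$ are supported in $t\in[1,2]$, both $|\nabla\varphi_{r}|^{2}$ and $\Delta\varphi_{r}^{2}$ are bounded by $C/r^{2}$ with $C$ depending only on $\psi$ and $n$.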
## 3\. Acknowledgment
The third author gratefully acknowledges the CSIR (File No.: 09/025(0282)/2019-EMR-I), Govt. of India, for financial assistance.
## References
* [1] Bourguignon, J. P., Ricci curvature and Einstein metrics. Global differential geometry and global analysis, Berlin, 1979, 42–63, Lecture Notes in Math. 838, Springer, Berlin, 1981.
* [2] Cao, H. D., Recent progress on Ricci solitons, in: Recent Advances in Geometric Analysis, Adv. Lectures Math., 11 (2010), 1–38.
* [3] Cao, H. D. and Zhou, D., On complete gradient shrinking Ricci solitons, J. Diff. Geom., 85 (2010), 175–183.
* [4] Catino, G. and Mazzieri, L., Gradient Einstein solitons, Nonlinear Anal., 132 (2016), 66–94.
* [5] Catino, G., Mazzieri, L. and Mongodi, S., Rigidity of gradient Einstein shrinkers, Commun. Contemp. Math., 17(6) (2015), 1–18.
* [6] Cheeger, J. and Colding, T. H., Lower bounds on Ricci curvature and the almost rigidity of warped products, Ann. Math., 144(1) (1996), 189–237.
* [7] Chow, B. and Knopf, D., The Ricci flow: An introduction, mathematical surveys and monographs. Amer. Math. Soc., 110, 2004.
* [8] Fang, F. Q., Man, J. W. and Zhang, Z. L., Complete gradient shrinking Ricci solitons have finite topological type, C. R. Acad. Sci. Paris, Ser. I 346 (2008), 653–656.
* [9] Hamilton, R. S., Three-manifolds with positive Ricci curvature, J. Differ. Geom., 17 (1982), 255–306.
* [10] Huang, G., Integral pinched gradient shrinking $\rho$-Einstein solitons, J. Math. Ann. Appl. 451(2) (2017), 1045–1055.
* [11] Hu, Z., Li, D. and Xu, J., On generalized $m$-quasi-Einstein manifolds with constant scalar curvature, J. Math. Ann. Appl. 432(2) (2015), 733–743.
* [12] Huang, G. and Wei, Y., The classification of $(m,\rho)$-quasi-Einstein manifolds, Ann. Glob. Anal. Geom., 44 (2013), 269–282.
* [13] Dwivedi, S., Some results on Ricci-Bourguignon and Ricci-Bourguignon almost solitons, arXiv:1809.11103.
* [14] Mondal, C. K. and Shaikh, A. A., Some results on $\eta$-Ricci soliton and gradient $\rho$-Einstein soliton in a complete Riemannian manifold, Comm. Korean Math. Soc. 34(4) (2019), 1279–1287.
|
2024-09-04T02:54:59.159884 | 2020-03-11T12:49:09 | 2003.05263 | {
"authors": "Florian-Horia Vasilescu",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:26163",
"submitter": "Florian-Horia Vasilescu",
"url": "https://arxiv.org/abs/2003.05263"
} | arxiv-papers | # Spectrum and Analytic Functional Calculus in Real and Quaternionic
Frameworks
Florian-Horia Vasilescu
Department of Mathematics, University of Lille,
59655 Villeneuve d’Ascq, France
<EMAIL_ADDRESS>
###### Abstract
We present an approach to the spectrum and analytic functional calculus for
quaternionic linear operators, following the corresponding results concerning
the real linear operators. In fact, the construction of the analytic
functional calculus for real linear operators can be refined to get a similar
construction for quaternionic linear ones, in a classical manner, using a
Riesz-Dunford-Gelfand type kernel, and considering spectra in the complex
plane. A quaternionic joint spectrum for pairs of operators is also discussed,
and an analytic functional calculus is constructed, via a Martinelli type
kernel in two variables.
Keywords: spectrum in real and quaternionic contexts; holomorphic stem
functions; analytic functional calculus for real and quaternionic operators
AMS Subject Classification: 47A10; 30G35; 47A60
## 1 Introduction
In this text we consider ${\mathbb{R}}$-, ${\mathbb{C}}$-, and
${\mathbb{H}}$-linear operators, that is, real, complex and quaternionic
linear operators, respectively.
While the spectrum of a linear operator is traditionally defined for complex
linear operators, it is sometimes useful to have it also for real linear
operators, as well as for quaternionic linear ones. The definition of the
spectrum for a real linear operator goes seemingly back to Kaplansky (see
[10]), and it can be stated as follows. If $T$ is a real linear operator on
the real vector space ${\mathcal{V}}$, a point $u+iv$ ($u,v\in{\mathbb{R}}$)
is in the spectrum of $T$ if the operator $(u-T)^{2}+v^{2}$ is not invertible
on ${\mathcal{V}}$, where the scalars are identified with multiples of the
identity on ${\mathcal{V}}$. Although this definition involves only operators
acting in ${\mathcal{V}}$, the spectrum is, nevertheless, a subset of the
complex plane. As a matter of fact, a motivation of this choice can be
illustrated via the complexification of the space ${\mathcal{V}}$ (see Section
2).
The spectral theory for quaternionic linear operators is largely discussed in
numerous works, in particular in the monographs [5] and [4], and in many of
their references. In these works, the construction of an analytic
functional calculus (called the $S$-analytic functional calculus) means
associating to each function in the class of the so-called slice
hyperholomorphic or slice regular functions a quaternionic linear operator,
using a specific noncommutative kernel.
The idea of the present work is to replace the class of slice regular
functions by a class of holomorphic functions, using a commutative kernel of
Riesz-Dunford-Gelfand type. These two classes are isomorphic via a Cauchy type
transform (see [21]), and the image of the analytic functional calculus is the
same, as one might expect (see Remark 8).
As in the case of real operators, the verbatim extension of the classical
definition of the spectrum for quaternionic operators is not appropriate, and
so a different definition using the squares of operators and real numbers was
given, which can be found in [5] (see also [4]). We discuss this definition in
our framework (see Definition 1), showing later that its “complex border”
contains the most significant information, leading to the construction of an
analytic functional calculus, equivalent to that obtained via the slice
hyperholomorphic functions.
In fact, we first consider the spectrum for real operators on real Banach
spaces, and sketch the construction of an analytic functional calculus for
them, using some classical ideas (see Theorem 2). Then we extend this
framework to a quaternionic one, showing that the approach from the real case
can be easily adapted to the new situation.
As already mentioned, and unlike in [5] or [4], our functional calculus is
obtained via a Riesz-Dunford-Gelfand formula, defined in a partially
commutative context, rather than the non-commutative Cauchy type formula
used by previous authors. Our analytic functional calculus holds for a class
of analytic operator valued functions, whose definition extends that of stem
functions, and it applies, in particular, to a large family of quaternionic
linear operators. Moreover, we can show that the analytic functional calculus
obtained in this way is equivalent to the analytic functional calculus
obtained in [5] or [4], in the sense that the images of these functional
calculi coincide (see Remark 8).
We finally discuss the case of pairs of commuting real operators, in the
spirit of [20], showing some connections with the quaternionic case.
Specifically, we define a quaternionic spectrum for them and construct an
analytic functional calculus using a Martinelli type formula, showing that for
such a construction only a sort of “complex border” of the quaternionic
spectrum should be used.
This work is just an introductory one. Hopefully, more contributions on this
line will be presented in the future.
## 2 Spectrum and Conjugation
Let ${\mathcal{A}}$ be a unital real Banach algebra, not necessarily
commutative. As mentioned in the Introduction, the (complex) spectrum of an
element $a\in{\mathcal{A}}$ may be defined by the equality
$\sigma_{\mathbb{C}}(a)=\{u+iv;(u-a)^{2}+v^{2}\,\,{\rm is\,\,not\,\,invertible},u,v\in{\mathbb{R}}\}.$ (1)
This set is conjugate symmetric, that is $u+iv\in\sigma_{\mathbb{C}}(a)$ if
and only if $u-iv\in\sigma_{\mathbb{C}}(a)$. A known motivation of this
definition comes from the following remark.
Fixing a unital real Banach algebra ${\mathcal{A}}$, we denote by
${\mathcal{A}}_{\mathbb{C}}$ the complexification of ${\mathcal{A}}$, which is
given by ${\mathcal{A}}_{\mathbb{C}}={\mathbb{C}}\otimes_{\mathbb{R}}{\mathcal{A}}$,
written simply as ${\mathcal{A}}+i{\mathcal{A}}$, where the sum is direct,
identifying the element $1\otimes a+i\otimes b$ with the element $a+ib$, for
all $a,b\in{\mathcal{A}}$.
Then ${\mathcal{A}}_{\mathbb{C}}$ is a unital complex algebra, which can be
organized as a Banach algebra, with a (not necessarily unique) convenient
norm. To fix the ideas, we recall that the product of two elements is given by
$(a+ib)(c+id)=ac-bd+i(ad+bc)$ for all $a,b,c,d\in{\mathcal{A}}$, and the norm
may be defined by $\|a+ib\|=\|a\|+\|b\|$, where $\|*\|$ is the norm of
${\mathcal{A}}$.
In the algebra ${\mathcal{A}}_{\mathbb{C}}$, the complex numbers commute with
all elements of ${\mathcal{A}}$. Moreover, we have a conjugation given by
${\mathcal{A}}_{\mathbb{C}}\ni a+ib\mapsto
a-ib\in{\mathcal{A}}_{\mathbb{C}},\,a,b\in{\mathcal{A}},$
which is a unital conjugate-linear automorphism, whose square is the identity.
In particular, an arbitrary element $a+ib$ is invertible if and only if $a-ib$
is invertible.
The usual spectrum, defined for each element $a\in{\mathcal{A}}_{\mathbb{C}}$,
will be denoted by $\sigma(a)$. Regarding the algebra ${\mathcal{A}}$ as a
real subalgebra of ${\mathcal{A}}_{\mathbb{C}}$, one has the following.
###### Lemma 1
For every $a\in{\mathcal{A}}$ we have the equality
$\sigma_{\mathbb{C}}(a)=\sigma(a)$.
Proof. The result is well known but we give a short proof, because a similar
idea will be used later.
Let $\lambda=u+iv$ with $u,v\in{\mathbb{R}}$ arbitrary. Assuming $\lambda-a$
invertible, we also have $\bar{\lambda}-a$ invertible. From the obvious
identity
$(u-a)^{2}+v^{2}=(u+iv-a)(u-iv-a),$
we deduce that the element $(u-a)^{2}+v^{2}$ is invertible, implying the
inclusion $\sigma_{\mathbb{C}}(a)\subset\sigma(a)$.
Conversely, if $(u-a)^{2}+v^{2}$ is invertible, then both $u+iv-a,u-iv-a$ are
invertible via the decomposition from above, showing that we also have
$\sigma_{\mathbb{C}}(a)\supset\sigma(a)$.
###### Remark 1
The spectrum $\sigma(a)$ with $a\in{\mathcal{A}}$ is always a conjugate
symmetric set.
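A concrete finite-dimensional instance may illustrate Lemma 1 (a numerical sketch with our own test element, in the real Banach algebra ${\mathcal{A}}=M_{2}({\mathbb{R}})$): for the rotation generator below one has $\sigma(a)=\{\pm i\}$, and $(u-a)^{2}+v^{2}$ is singular exactly at $u+iv=\pm i$.

```python
import numpy as np

a = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # a^2 = -1, so sigma(a) = {i, -i}
I = np.eye(2)

def pencil(u, v):
    """The real element (u - a)^2 + v^2 from definition (1)."""
    return (u * I - a) @ (u * I - a) + v**2 * I

print(np.linalg.det(pencil(0.0, 1.0)))  # 0.0:  u + iv = i lies in sigma_C(a)
print(np.linalg.det(pencil(0.5, 0.5)))  # 1.25: u + iv = 0.5 + 0.5i does not
print(np.linalg.eigvals(a))             # [0.+1.j, 0.-1.j]
```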
We are particularly interested in applying the discussion above to the
context of linear operators. The spectral theory for real linear operators is
well known, and it is developed actually in the framework of linear relations
(see [1]). Nevertheless, we present here a different approach, which can be
applied, with minor changes, to the case of some quaternionic operators.
For a real or complex Banach space $\mathcal{V}$, we denote by
$\mathcal{B(V)}$ the algebra of all bounded ${\mathbb{R}}$- (respectively
${\mathbb{C}}$-) linear operators on $\mathcal{V}$. As before, the multiples of
the identity will be identified with the corresponding scalars.
Let ${\mathcal{V}}$ be a real Banach space, and let
${\mathcal{V}}_{\mathbb{C}}$ be its complexification, which, as above, is
identified with the direct sum ${\mathcal{V}}+i{\mathcal{V}}$. Each operator
$T\in\mathcal{B(V)}$ has a natural extension to an operator
$T_{\mathbb{C}}\in\mathcal{B}({\mathcal{V}}_{\mathbb{C}})$, given by
$T_{\mathbb{C}}(x+iy)=Tx+iTy,\,x,y\in{\mathcal{V}}$. Moreover, the map
$\mathcal{B(V)}\ni T\mapsto
T_{\mathbb{C}}\in\mathcal{B}({\mathcal{V}}_{\mathbb{C}})$ is unital,
${\mathbb{R}}$-linear and multiplicative. In particular, $T\in\mathcal{B(V)}$
is invertible if and only if
$T_{\mathbb{C}}\in\mathcal{B}({\mathcal{V}}_{\mathbb{C}})$ is invertible.
Fixing an operator $S\in\mathcal{B}(\mathcal{V}_{\mathbb{C}})$, we define the
operator $S^{\flat}\in\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ to be equal to
$CSC$, where $C:{\mathcal{V}}_{\mathbb{C}}\mapsto{\mathcal{V}}_{\mathbb{C}}$
is the conjugation $x+iy\mapsto x-iy,\,x,y\in{\mathcal{V}}$. It is easily seen
that the map $\mathcal{B}(\mathcal{V}_{\mathbb{C}})\ni S\mapsto
S^{\flat}\in\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ is a unital conjugate-linear automorphism, whose square is the identity on
$\mathcal{B}(\mathcal{V}_{\mathbb{C}})$. Because
$\mathcal{V}=\{u\in\mathcal{V}_{\mathbb{C}};Cu=u\}$, we have $S^{\flat}=S$
if and only if $S(\mathcal{V})\subset\mathcal{V}$. In particular, we have
$T_{\mathbb{C}}^{\flat}=T_{\mathbb{C}}$. In fact, because of the
representation
$S=\frac{1}{2}(S+S^{\flat})+i\frac{1}{2i}(S-S^{\flat}),\,\,S\in\mathcal{B}(\mathcal{V}_{\mathbb{C}}),$
where
$(S+S^{\flat})({\mathcal{V}})\subset{\mathcal{V}},i(S-S^{\flat})({\mathcal{V}})\subset{\mathcal{V}}$,
the algebras $\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ and
$\mathcal{B(V)}_{\mathbb{C}}$ are isomorphic and they will be often
identified, and $\mathcal{B(V)}$ will be regarded as a (real) subalgebra of
$\mathcal{B}(\mathcal{V})_{\mathbb{C}}$. In particular, if $S=U+iV$, with
$U,V\in\mathcal{B(V)}$, we have $S^{\flat}=U-iV$, so the map $S\mapsto
S^{\flat}$ is the conjugation of the complex algebra
$\mathcal{B}(\mathcal{V})_{\mathbb{C}}$ induced by the conjugation $C$ of
${\mathcal{V}}_{\mathbb{C}}$.
For every operator $S\in\mathcal{B}(\mathcal{V}_{\mathbb{C}})$, we denote, as
before, by $\sigma(S)$ its usual spectrum. As $\mathcal{B(V)}$ is a real
algebra, the (complex) spectrum of an operator $T\in\mathcal{B(V)}$ is given
by the equality (1):
$\sigma_{\mathbb{C}}(T)=\{u+iv;(u-T)^{2}+v^{2}\,\,{\rm is\,\,not\,\,invertible},u,v\in{\mathbb{R}}\}.$
###### Corollary 1
For every $T\in\mathcal{B}({\mathcal{V}})$ we have the equality
$\sigma_{\mathbb{C}}(T)=\sigma(T_{\mathbb{C}})$.
## 3 Analytic Functional Calculus for Real Operators
Having a concept of spectrum for real operators, an important step for further
development is the construction of an analytic functional calculus. Such a
construction has been done actually in the context of real linear relations in
[1]. In what follows we shall present a similar construction for real linear
operators. Although the case of linear relations looks more general, unlike in
[1] we perform our construction using a class of operator-valued analytic
functions instead of scalar-valued ones. Moreover, our arguments are simpler,
and the construction serves as a model for a more general one, yielding an
analytic functional calculus for quaternionic linear operators.
If ${\mathcal{V}}$ is a real Banach space, each operator
$T\in\mathcal{B}({\mathcal{V}})$ has a complex spectrum
$\sigma_{\mathbb{C}}(T)$, which is compact and nonempty, so one can use the
classical Riesz-Dunford functional calculus in a slightly generalized form
(that is, replacing the scalar-valued analytic functions by operator-valued
analytic ones, which is a well-known idea).
The use of vector versions of the Cauchy formula is simplified by adopting the
following definition. Let $U\subset{\mathbb{C}}$ be open. An open subset
$\Delta\subset U$ will be called a Cauchy domain (in $U$) if
$\Delta\subset\bar{\Delta}\subset U$ and the boundary of $\Delta$ consists of
a finite family of closed curves, piecewise smooth, positively oriented. Note
that a Cauchy domain is bounded but not necessarily connected.
###### Remark 2
If $\mathcal{V}$ is a real Banach space, and $T\in\mathcal{B(V})$, we have the
usual analytic functional calculus for the operator
$T_{\mathbb{C}}\in\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ (see [6]). That is,
in a slightly generalized form, and for later use, if
$U\supset\sigma(T_{\mathbb{C}})$ is an open set in ${\mathbb{C}}$ and
$F:U\mapsto\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ is analytic, we put
$F(T_{\mathbb{C}})=\frac{1}{2\pi i}\int_{\Gamma}F(\zeta)(\zeta-
T_{\mathbb{C}})^{-1}d\zeta,$
where $\Gamma$ is the boundary of a Cauchy domain $\Delta$ containing
$\sigma(T_{\mathbb{C}})$ in $U$. In fact, because $\sigma(T_{\mathbb{C}})$ is
conjugate symmetric, we may and shall assume that both $U$ and $\Gamma$ are
conjugate symmetric. Because the function $\zeta\mapsto F(\zeta)(\zeta-
T_{\mathbb{C}})^{-1}$ is analytic in $U\setminus\sigma(T_{\mathbb{C}})$, the
integral does not depend on the particular choice of the Cauchy domain
$\Delta$ containing $\sigma(T_{\mathbb{C}})$.
A natural question is to find an appropriate condition ensuring that
$F(T_{\mathbb{C}})^{\flat}=F(T_{\mathbb{C}})$, which would imply the
invariance of $\mathcal{V}$ under $F(T_{\mathbb{C}})$.
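As a concrete illustration of Remark 2 (a sketch of ours, with illustrative parameters), the integral defining $F(T_{\mathbb{C}})$ can be approximated by the trapezoid rule on a circle $\Gamma$ enclosing $\sigma(T_{\mathbb{C}})$; for the stem function $F(\zeta)=e^{\zeta}$ the result should match the matrix exponential.

```python
# Sketch (ours): quadrature evaluation of the Riesz-Dunford integral on a
# conjugate symmetric contour, compared with scipy's matrix exponential.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))

R = 1.5 * max(abs(np.linalg.eigvals(T))) + 1.0   # circle enclosing sigma(T_C)
N = 4000                                          # quadrature nodes
acc = np.zeros((n, n), dtype=complex)
for t in 2 * np.pi * (np.arange(N) + 0.5) / N:
    z = R * np.exp(1j * t)
    acc += np.exp(z) * np.linalg.inv(z * np.eye(n) - T) * (1j * z)  # dz = iz dt
F_T = acc * (2 * np.pi / N) / (2j * np.pi)

assert np.allclose(F_T, expm(T), atol=1e-8)
assert np.allclose(F_T.imag, 0, atol=1e-8)   # F(T_C)^flat = F(T_C), as in Theorem 1 below
```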
With the notation of Remark 2, we have the following.
###### Theorem 1
Let $U\subset{\mathbb{C}}$ be open and conjugate symmetric. If
$F:U\mapsto\mathcal{B}(\mathcal{V}_{\mathbb{C}})$ is analytic and
$F(\zeta)^{\flat}=F(\bar{\zeta})$ for all $\zeta\in U$, then
$F(T_{\mathbb{C}})^{\flat}=F(T_{\mathbb{C}})$ for all
$T\in\mathcal{B}(\mathcal{V})$ with $\sigma_{\mathbb{C}}(T)\subset U$.
Proof. We use the notation from Remark 2, assuming, in addition, that $\Gamma$
is conjugate symmetric as well. We put
$\Gamma_{\pm}:=\Gamma\cap{\mathbb{C}}_{\pm}$, where ${\mathbb{C}}_{+}$ (resp.
${\mathbb{C}}_{-}$) equals to $\\{\lambda\in{\mathbb{C}};\Im\lambda\geq 0\\}$
(resp. $\\{\lambda\in{\mathbb{C}};\Im\lambda\leq 0\\}$). We write
$\Gamma_{+}=\cup_{j=1}^{m}\Gamma_{j+}$, where $\Gamma_{j+}$ are the connected
components of $\Gamma_{+}$. Similarly, we write
$\Gamma_{-}=\cup_{j=1}^{m}\Gamma_{j-}$, where $\Gamma_{j-}$ are the connected
components of $\Gamma_{-}$, and $\Gamma_{j-}$ is the reflection of
$\Gamma_{j+}$ with respect to the real axis.
As $\Gamma$ is a finite union of Jordan piecewise smooth closed curves, for
each index $j$ we have a parametrization $\phi_{j}:[0,1]\mapsto{\mathbb{C}}$,
positively oriented, such that $\phi_{j}([0,1])=\Gamma_{j+}$. Taking into
account that the function $t\mapsto\overline{\phi_{j}(t)}$ is a
parametrization of $\Gamma_{j-}$ negatively oriented, and setting
$\Gamma_{j}=\Gamma_{j+}\cup\Gamma_{j-}$, we can write
$F_{j}(T_{\mathbb{C}}):=\frac{1}{2\pi i}\int_{\Gamma_{j}}F(\zeta)(\zeta-
T_{\mathbb{C}})^{-1}d\zeta=$ $\frac{1}{2\pi
i}\int_{0}^{1}F(\phi_{j}(t))(\phi_{j}(t)-T_{\mathbb{C}})^{-1}\phi_{j}^{\prime}(t)dt$
$-\frac{1}{2\pi
i}\int_{0}^{1}F(\overline{\phi_{j}(t)})(\overline{\phi_{j}(t)}-T_{\mathbb{C}})^{-1}\overline{\phi_{j}^{\prime}(t)}dt.$
Therefore,
$F_{j}(T_{\mathbb{C}})^{\flat}=-\frac{1}{2\pi
i}\int_{0}^{1}F(\phi_{j}(t))^{\flat}(\overline{\phi_{j}(t)}-T_{\mathbb{C}})^{-1}\overline{\phi_{j}^{\prime}(t)}dt$
$+\frac{1}{2\pi
i}\int_{0}^{1}F(\overline{\phi_{j}(t)})^{\flat}(\phi_{j}(t)-T_{\mathbb{C}})^{-1}\phi_{j}^{\prime}(t)dt.$
According to our assumption on the function $F$, we obtain
$F_{j}(T_{\mathbb{C}})=F_{j}(T_{\mathbb{C}})^{\flat}$ for all $j$, and
therefore
$F(T_{\mathbb{C}})^{\flat}=\sum_{j=1}^{m}F_{j}(T_{\mathbb{C}})^{\flat}=\sum_{j=1}^{m}F_{j}(T_{\mathbb{C}})=F(T_{\mathbb{C}}),$
which concludes the proof.
###### Remark 3
If ${\mathcal{A}}$ is a unital real Banach algebra,
${\mathcal{A}}_{\mathbb{C}}$ its complexification, and $U\subset{\mathbb{C}}$
is open, we denote by $\mathcal{O}(U,{\mathcal{A}}_{\mathbb{C}})$ the algebra
of all analytic ${\mathcal{A}}_{\mathbb{C}}$-valued functions. If $U$ is
conjugate symmetric, and ${\mathcal{A}}_{\mathbb{C}}\ni
a\mapsto\bar{a}\in{\mathcal{A}}_{\mathbb{C}}$ is its natural conjugation, we
denote by $\mathcal{O}_{s}(U,{\mathcal{A}}_{\mathbb{C}})$ the real subalgebra
of $\mathcal{O}(U,{\mathcal{A}}_{\mathbb{C}})$ consisting of those functions
$F$ with the property $F(\bar{\zeta})=\overline{F(\zeta)}$ for all $\zeta\in
U$. Adapting a well known terminology, such functions will be called
(${\mathcal{A}}_{\mathbb{C}}$-valued $)$ stem functions.
When ${\mathcal{A}}={\mathbb{R}}$, so
${\mathcal{A}}_{\mathbb{C}}={\mathbb{C}}$, the space
$\mathcal{O}_{s}(U,{\mathbb{C}})$ will be denoted by $\mathcal{O}_{s}(U)$,
which is a real algebra. Note that
$\mathcal{O}_{s}(U,{\mathcal{A}}_{\mathbb{C}})$ is also a bilateral
$\mathcal{O}_{s}(U)$-module.
In the next result, we identify the algebra $\mathcal{B}(\mathcal{V})$ with a
subalgebra of $\mathcal{B}(\mathcal{V})_{\mathbb{C}}$. In this case, when
$F\in\mathcal{O}_{s}(U,\mathcal{B}(\mathcal{V})_{\mathbb{C}})$, we shall write
$F(T)=\frac{1}{2\pi i}\int_{\Gamma}F(\zeta)(\zeta-T)^{-1}d\zeta,$
noting that the right hand side of this formula belongs to
$\mathcal{B}(\mathcal{V})$, by Theorem 1.
The properties of the map $F\mapsto F(T)$, which can be called the (left)
analytic functional calculus of $T$, are summarized by the following.
###### Theorem 2
Let ${\mathcal{V}}$ be a real Banach space, let $U\subset{\mathbb{C}}$ be a
conjugate symmetric open set, and let $T\in\mathcal{B}(\mathcal{V})$, with
$\sigma_{\mathbb{C}}(T)\subset U$. Then the assignment
${\mathcal{O}}_{s}(U,\mathcal{B}(\mathcal{V})_{\mathbb{C}})\ni F\mapsto
F(T)\in\mathcal{B}(\mathcal{V})$
is an ${\mathbb{R}}$-linear map, and the map
${\mathcal{O}}_{s}(U)\ni f\mapsto f(T)\in\mathcal{B}(\mathcal{V})$
is a unital real algebra morphism.
Moreover, the following properties are true:
(1) For all
$F\in\mathcal{O}_{s}(U,\mathcal{B}(\mathcal{V})_{\mathbb{C}}),\,f\in{\mathcal{O}}_{s}(U)$,
we have $(Ff)(T)=F(T)f(T)$.
(2) For every polynomial
$P(\zeta)=\sum_{n=0}^{m}A_{n}\zeta^{n},\,\zeta\in{\mathbb{C}}$, with
$A_{n}\in\mathcal{B}(\mathcal{V})$ for all $n=0,1,\ldots,m$, we have
$P(T)=\sum_{n=0}^{m}A_{n}T^{n}\in\mathcal{B}(\mathcal{V})$.
Proof. The arguments are more or less standard (see [6]). The
${\mathbb{R}}$-linearity of the maps
${\mathcal{O}}_{s}(U,\mathcal{B}(\mathcal{V})_{\mathbb{C}})\ni F\mapsto
F(T)\in\mathcal{B}(\mathcal{V}),\,{\mathcal{O}}_{s}(U)\ni f\mapsto
f(T)\in\mathcal{B}(\mathcal{V}),$
is clear. The second one is actually multiplicative, which follows from the
multiplicativity of the usual analytic functional calculus of $T$.
In fact, we have a more general property, specifically $(Ff)(T)=F(T)f(T)$ for
all
$F\in\mathcal{O}_{s}(U,\mathcal{B}(\mathcal{V})_{\mathbb{C}}),\,f\in{\mathcal{O}}_{s}(U)$.
This follows from the equalities,
$(Ff)(T)=\frac{1}{2\pi
i}\int_{\Gamma_{0}}F(\zeta)f(\zeta)(\zeta-T)^{-1}d\zeta=$ $\left(\frac{1}{2\pi
i}\int_{\Gamma_{0}}F(\zeta)(\zeta-T)^{-1}d\zeta\right)\left(\frac{1}{2\pi
i}\int_{\Gamma}f(\eta)(\eta-T)^{-1}d\eta\right)=F(T)f(T),$
obtained as in the classical case (see [6], Section VII.3), which holds
because $f$ is ${\mathbb{C}}$-valued and commutes with the operators in
$\mathcal{B}(\mathcal{V})$. Here $\Gamma,\,\Gamma_{0}$ are the boundaries of
two Cauchy domains $\Delta,\,\Delta_{0}$ respectively, such that
$\Delta\supset\bar{\Delta}_{0}$, and $\Delta_{0}$ contains $\sigma_{\mathbb{C}}(T)$.
Note that, in particular, for every polynomial
$P(\zeta)=\sum_{n=0}^{m}A_{n}\zeta^{n}$ with
$A_{n}\in\mathcal{B}(\mathcal{V})$ for all $n=0,1,\ldots,m$, we have
$P(T)=\sum_{n=0}^{m}A_{n}T^{n}\in\mathcal{B}(\mathcal{V})$ for all
$T\in\mathcal{B}(\mathcal{V})$.
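The same quadrature idea also checks property (2) of Theorem 2 with genuinely operator-valued (noncommuting) coefficients; the following sketch of ours compares the Cauchy integral of an operator-valued polynomial against direct substitution.

```python
# Sketch (ours): for P(z) = sum_n A_n z^n with matrix coefficients A_n, the
# generalized Riesz-Dunford integral reproduces sum_n A_n T^n.
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 3
T = rng.standard_normal((n, n))
A = [rng.standard_normal((n, n)) for _ in range(m + 1)]   # A_0, ..., A_m

R = 1.5 * max(abs(np.linalg.eigvals(T))) + 1.0
N = 2000
acc = np.zeros((n, n), dtype=complex)
for t in 2 * np.pi * (np.arange(N) + 0.5) / N:
    z = R * np.exp(1j * t)
    Pz = sum(A[k] * z**k for k in range(m + 1))
    acc += Pz @ np.linalg.inv(z * np.eye(n) - T) * (1j * z)
P_int = acc * (2 * np.pi / N) / (2j * np.pi)

P_sub = sum(A[k] @ np.linalg.matrix_power(T, k) for k in range(m + 1))
assert np.allclose(P_int, P_sub, atol=1e-8)
```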
###### Example 1
Let $\mathcal{V}={\mathbb{R}}^{2}$, so
$\mathcal{V}_{\mathbb{C}}={\mathbb{C}}^{2}$, endowed with its natural Hilbert
space structure. Let us first observe that we have
$S=\left(\begin{array}[]{cc}a_{1}&a_{2}\\\
a_{3}&a_{4}\end{array}\right)\,\,\Longleftrightarrow
S^{\flat}=\left(\begin{array}[]{cc}\bar{a}_{1}&\bar{a}_{2}\\\
\bar{a}_{3}&\bar{a}_{4}\end{array}\right),$
for all $a_{1},a_{2},a_{3},a_{4}\in{\mathbb{C}}$.
Next we consider the operator $T\in\mathcal{B}({\mathbb{R}}^{2})$ given by the
matrix
$T=\left(\begin{array}[]{cc}u&v\\\ -v&u\end{array}\right),$
where $u,v\in{\mathbb{R}},v\neq 0$. The extension $T_{\mathbb{C}}$ of the
operator $T$ to ${\mathbb{C}}^{2}$, which is a normal operator, is given by
the same formula. Note that
$\sigma_{\mathbb{C}}(T)=\\{\lambda\in{\mathbb{C}};(\lambda-u)^{2}+v^{2}=0\\}=\\{u\pm
iv\\}=\sigma(T_{\mathbb{C}}).$
Note also that the vectors $\nu_{\pm}=(\sqrt{2})^{-1}(1,\pm i)$ are normalized
eigenvectors for $T_{\mathbb{C}}$ corresponding to the eigenvalues $u\pm iv$,
respectively. The spectral projections of $T_{\mathbb{C}}$ corresponding to
these eigenvalues are given by
$E_{\pm}(T_{\mathbb{C}}){\bf w}=\langle{\bf
w},\nu_{\pm}\rangle\nu_{\pm}=\frac{1}{2}\left(\begin{array}[]{cc}1&\mp i\\\
\pm i&1\end{array}\right)\left(\begin{array}[]{c}w_{1}\\\
w_{2}\end{array}\right),$
for all ${\bf w}=(w_{1},w_{2})\in{\mathbb{C}}^{2}$.
Let $U\subset{\mathbb{C}}$ be an open set with $U\supset\\{u\pm iv\\}$, and
let $F:U\mapsto\mathcal{B}({\mathbb{C}}^{2})$ be analytic. We shall compute
explicitly $F(T_{\mathbb{C}})$. Let $\Delta$ be a Cauchy domain contained in
$U$ with its boundary $\Gamma$, and containing the points $u\pm iv$. Assuming
$v>0$, we have
$F(T_{\mathbb{C}})=\frac{1}{2\pi i}\int_{\Gamma}F(\zeta)(\zeta-
T_{\mathbb{C}})^{-1}d\zeta=$
$F(u+iv)E_{+}(T_{\mathbb{C}})+F(u-iv)E_{-}(T_{\mathbb{C}})=$
$\frac{1}{2}F(u+iv)\left(\begin{array}[]{cc}1&-i\\\
i&1\end{array}\right)+\frac{1}{2}F(u-iv)\left(\begin{array}[]{cc}1&i\\\
-i&1\end{array}\right).$
Assume now that $F(T_{\mathbb{C}})^{\flat}=F(T_{\mathbb{C}})$. Then we must
have
$(F(u+iv)-F(u-iv)^{\flat})\left(\begin{array}[]{cc}1&-i\\\
i&1\end{array}\right)=(F(u+iv)^{\flat}-F(u-iv))\left(\begin{array}[]{cc}1&i\\\
-i&1\end{array}\right).$
We also have the equalities
$\left(\begin{array}[]{cc}1&-i\\\
i&1\end{array}\right)\left(\begin{array}[]{c}1\\\
i\end{array}\right)=2\left(\begin{array}[]{c}1\\\
i\end{array}\right),\,\,\left(\begin{array}[]{cc}1&-i\\\
i&1\end{array}\right)\left(\begin{array}[]{c}1\\\ -i\end{array}\right)=0,$
$\left(\begin{array}[]{cc}1&i\\\
-i&1\end{array}\right)\left(\begin{array}[]{c}1\\\
-i\end{array}\right)=2\left(\begin{array}[]{c}1\\\
-i\end{array}\right),\,\,\left(\begin{array}[]{cc}1&i\\\
-i&1\end{array}\right)\left(\begin{array}[]{c}1\\\ i\end{array}\right)=0.$
Using these equalities, we finally deduce that
$(F(u+iv)-F(u-iv)^{\flat})\left(\begin{array}[]{c}1\\\ i\end{array}\right)=0,$
and
$(F(u-iv)-F(u+iv)^{\flat})\left(\begin{array}[]{c}1\\\
-i\end{array}\right)=0,$
which are necessary conditions for the equality
$F(T_{\mathbb{C}})^{\flat}=F(T_{\mathbb{C}})$. As a matter of fact, this
example shows, in particular, that the condition
$F(\zeta)^{\flat}=F(\bar{\zeta})$ for all $\zeta\in U$, used in Theorem 1, is
sufficient but might not always be necessary.
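A short numerical companion to Example 1 (ours; the values of $u,v$ are illustrative): the spectral projections $E_{\pm}(T_{\mathbb{C}})$ reproduce $f(T_{\mathbb{C}})$ for the stem function $f=\exp$, and the result is a real matrix, so $f(T_{\mathbb{C}})^{\flat}=f(T_{\mathbb{C}})$.

```python
# Sketch (ours): f(T_C) = f(u+iv) E_+ + f(u-iv) E_- for T = [[u, v], [-v, u]].
import numpy as np
from scipy.linalg import expm

u, v = 0.7, 1.3
T = np.array([[u, v], [-v, u]], dtype=complex)
E_plus = 0.5 * np.array([[1, -1j], [1j, 1]])    # projection onto span{(1, i)}
E_minus = 0.5 * np.array([[1, 1j], [-1j, 1]])   # projection onto span{(1, -i)}

F_T = np.exp(u + 1j * v) * E_plus + np.exp(u - 1j * v) * E_minus
assert np.allclose(F_T, expm(T))                 # agrees with the exponential
assert np.allclose(F_T.imag, 0, atol=1e-12)      # real, so invariant under flat
```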
## 4 Analytic Functional Calculus for Quaternionic
Operators
### 4.1 Quaternionic Spectrum
We now recall some known definitions and elementary facts (see, for instance,
[5], Section 4.6, and/or [21]).
Let ${\mathbb{H}}$ be the abstract algebra of quaternions, which is the four-
dimensional ${\mathbb{R}}$-algebra with unit $1$, generated by the "imaginary
units" $\\{\bf{j,k,l}\\}$, which satisfy
${\bf jk=-kj=l,\,kl=-lk=j,\,lj=-jl=k,\,jj=kk=ll}=-1.$
We may assume that ${\mathbb{H}}\supset{\mathbb{R}}$ identifying every number
$x\in{\mathbb{R}}$ with the element $x1\in{\mathbb{H}}$.
The algebra ${\mathbb{H}}$ has a natural multiplicative norm given by
$\|{\bf x}\|=\sqrt{x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}},\,\,{\bf
x}=x_{0}+x_{1}{\bf j}+x_{2}{\bf k}+x_{3}{\bf
l},\,\,x_{0},x_{1},x_{2},x_{3}\in{\mathbb{R}},$
and a natural involution
${\mathbb{H}}\ni{\bf x}=x_{0}+x_{1}{\bf j}+x_{2}{\bf k}+x_{3}{\bf
l}\mapsto{\bf x}^{*}=x_{0}-x_{1}{\bf j}-x_{2}{\bf k}-x_{3}{\bf
l}\in{\mathbb{H}}.$
Note that ${\bf x}{\bf x}^{*}={\bf x}^{*}{\bf x}=\|{\bf x}\|^{2}$, implying,
in particular, that every element ${\bf x}\in{\mathbb{H}}\setminus\\{0\\}$ is
invertible, and ${\bf x}^{-1}=\|{\bf x}\|^{-2}{\bf x}^{*}$.
For an arbitrary quaternion ${\bf x}=x_{0}+x_{1}{\bf j}+x_{2}{\bf k}+x_{3}{\bf
l},\,\,x_{0},x_{1},x_{2},x_{3}\in{\mathbb{R}}$, we set $\Re{\bf x}=x_{0}=({\bf
x}+{\bf x}^{*})/2$, and $\Im{\bf x}=x_{1}{\bf j}+x_{2}{\bf k}+x_{3}{\bf
l}=({\bf x}-{\bf x}^{*})/2$, that is, the real and imaginary part of ${\bf
x}$, respectively.
We consider the complexification
${\mathbb{C}}\otimes_{\mathbb{R}}{\mathbb{H}}$ of the ${\mathbb{R}}$-algebra
${\mathbb{H}}$ (see also [8]), which will be identified with the direct sum
${\mathbb{M}}={\mathbb{H}}+i{\mathbb{H}}$. Of course, the algebra
${\mathbb{M}}$ contains the complex field ${\mathbb{C}}$. Moreover, in the
algebra ${\mathbb{M}}$, the elements of ${\mathbb{H}}$ commute with all
complex numbers. In particular, the "imaginary units" $\bf j,k,l$ of the
algebra ${\mathbb{H}}$ are independent of and commute with the imaginary unit
$i$ of the complex plane ${\mathbb{C}}$.
In the algebra ${\mathbb{M}}$, there also exists a natural conjugation given
by $\bar{\bf a}={\bf b}-i{\bf c}$, where ${\bf a}={\bf b}+i{\bf c}$ is
arbitrary in ${\mathbb{M}}$, with ${\bf b},{\bf c}\in{\mathbb{H}}$ (see also
[8]). Note that $\overline{\bf a+b}=\bar{\bf a}+\bar{\bf b}$, and
$\overline{\bf ab}=\bar{\bf a}\bar{\bf b}$, in particular $\overline{r\bf
a}=r\bar{\bf a}$ for all ${\bf a},{\bf b}\in{\mathbb{M}}$, and
$r\in{\mathbb{R}}$. Moreover, $\bar{{\bf a}}={\bf a}$ if and only if ${\bf
a}\in{\mathbb{H}}$, which is a useful characterization of the elements of
${\mathbb{H}}$ among those of ${\mathbb{M}}$.
###### Remark 4
In the algebra ${\mathbb{M}}$ we have the identities
$(\lambda-{\bf x}^{*})(\lambda-{\bf x})=(\lambda-{\bf x})(\lambda-{\bf
x}^{*})=\lambda^{2}-\lambda({\bf x}+{\bf x}^{*})+\|{\bf
x}\|^{2}\in{\mathbb{C}},$
for all $\lambda\in{\mathbb{C}}$ and ${\bf x}\in{\mathbb{H}}$. If the complex
number $\lambda^{2}-2\lambda\Re{\bf x}+\|{\bf x}\|^{2}$ is nonnull, then both
elements $\lambda-{\bf x}^{*},\,\lambda-{\bf x}$ are invertible. Conversely, if
$\lambda-{\bf x}$ is invertible, we must have $\lambda^{2}-2\lambda\Re{\bf
x}+\|{\bf x}\|^{2}$ nonnull; otherwise we would have $\lambda={\bf
x}^{*}\in{\mathbb{R}}$, so $\lambda={\bf x}\in{\mathbb{R}}$ and $\lambda-{\bf
x}=0$, which is not possible. Therefore, the element $\lambda-{\bf x}\in{\mathbb{M}}$ is
invertible if and only if the complex number $\lambda^{2}-2\lambda\Re{\bf
x}+\|{\bf x}\|^{2}$ is nonnull. Hence, the element $\lambda-{\bf
x}\in{\mathbb{M}}$ is not invertible if and only if $\lambda=\Re{\bf x}\pm
i\|\Im{\bf x}\|$. In this way, the spectrum of a quaternion ${\bf
x}\in{\mathbb{H}}$ is given by the equality $\sigma({\bf x})=\\{s_{\pm}(\bf
x)\\}$, where $s_{\pm}(\bf x)=\Re{\bf x}\pm i\|\Im{\bf x}\|$ are the
eigenvalues of $\bf x$ (see also [20, 21]).
The polynomial $P_{\bf x}(\lambda)=\lambda^{2}-2\lambda\Re{\bf x}+\|{\bf
x}\|^{2}$ is the minimal polynomial of $\bf x$. In fact, the equality
$\sigma({\bf y})=\sigma({\bf x})$ for some ${\bf x,y}\in{\mathbb{H}}$ is an
equivalence relation in the algebra ${\mathbb{H}}$, which holds if and only if
$P_{\bf x}=P_{\bf y}$. In fact, setting
$\mathbb{S}=\\{\mathfrak{\kappa}\in{\mathbb{H}};\Re\mathfrak{\kappa}=0,\|\mathfrak{\kappa}\|=1\\}$
(that is the unit sphere of purely imaginary quaternions), representig an
arbitrary quaternion $\bf x$ under the form
$x_{0}+y_{0}\mathfrak{\kappa}_{0}$, with $x_{0},y_{0}\in{\mathbb{R}}$ and
$\mathfrak{\kappa}_{0}\in\mathbb{S}$, a quaternion $\bf y$ is equivalent to
$\bf x$ if anf only if it is of the form $x_{0}+y_{0}\mathfrak{\kappa}$ for
some $\mathfrak{\kappa}\in\mathbb{S}$ (see [3] or [21] for some details).
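These facts are easy to verify in coordinates. The sketch below (ours) implements the Hamilton product for the basis ${\bf j,k,l}$, and checks that ${\bf x}{\bf x}^{*}=\|{\bf x}\|^{2}$ and that $s_{\pm}({\bf x})=\Re{\bf x}\pm i\|\Im{\bf x}\|$ are the roots of the minimal polynomial $P_{\bf x}$.

```python
# Sketch (ours): quaternions stored as coefficient tuples (x0, x1, x2, x3)
# for x = x0 + x1 j + x2 k + x3 l, with the product table of this section.
import numpy as np

def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

x = np.array([0.5, -1.0, 2.0, 0.3])
x_star = x * np.array([1, -1, -1, -1])           # the involution x -> x*
norm2 = float(np.dot(x, x))                      # ||x||^2
assert np.allclose(qmul(x, x_star), [norm2, 0, 0, 0])   # x x* = ||x||^2

s_plus = x[0] + 1j * np.sqrt(norm2 - x[0]**2)    # Re x + i ||Im x||
P_x = np.poly1d([1, -2 * x[0], norm2])           # minimal polynomial of x
assert abs(P_x(s_plus)) < 1e-12 and abs(P_x(np.conj(s_plus))) < 1e-12
```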
###### Remark 5
Following [5], a right ${\mathbb{H}}$-vector space $\mathcal{V}$ is a real
vector space having a right multiplication with the elements of
${\mathbb{H}}$, such that $(x+y){\bf q}=x{\bf q}+y{\bf q},\,x({\bf q}+{\bf
s})=x{\bf q}+x{\bf s},\,x({\bf q}{\bf s})=(x{\bf q}){\bf s}$ for all
$x,y\in\mathcal{V}$ and ${\bf q},{\bf s}\in{\mathbb{H}}$.
If $\mathcal{V}$ is also a Banach space, an operator $T\in\mathcal{B(V)}$ is
right ${\mathbb{H}}$-linear if $T(x{\bf q})=T(x){\bf q}$ for all
$x\in\mathcal{V}$ and ${\bf q}\in{\mathbb{H}}$. The set of right
${\mathbb{H}}$-linear operators will be denoted by $\mathcal{B^{\rm r}(V)}$,
which is, in particular, a unital real algebra.
In a similar way, one defines the concept of a left ${\mathbb{H}}$-vector
space. A real vector space $\mathcal{V}$ will be said to be an
${\mathbb{H}}$-vector space if it is simultaneously a right ${\mathbb{H}}$\-
and a left ${\mathbb{H}}$-vector space. As noticed in [5], the framework of
${\mathbb{H}}$-vector spaces is an appropriate one for the study of right
${\mathbb{H}}$-linear operators.
If ${\mathcal{V}}$ is an ${\mathbb{H}}$-vector space which is also a Banach
space, then ${\mathcal{V}}$ is said to be a Banach ${\mathbb{H}}$-space. In
this case, we also assume that $R_{\bf q}\in\mathcal{B}({\mathcal{V}})$, and
the map ${\mathbb{H}}\ni{\bf q}\mapsto R_{\bf q}\in\mathcal{B}({\mathcal{V}})$
is norm continuous, where $R_{\bf q}$ is the right multiplication of the
elements of $\mathcal{V}$ by a given quaternion ${\bf q}\in{\mathbb{H}}$.
Similarly, if $L_{\bf q}$ is the left multiplication of the elements of
$\mathcal{V}$ by the quaternion ${\bf q}\in{\mathbb{H}}$, we assume that
$L_{\bf q}\in\mathcal{B}({\mathcal{V}})$ for all ${\bf q}\in{\mathbb{H}}$, and
that the map ${\mathbb{H}}\ni{\bf q}\mapsto L_{\bf
q}\in\mathcal{B}({\mathcal{V}})$ is norm continuous. Note also that
$\mathcal{B^{\rm r}(V)}=\\{T\in\mathcal{B(V)};TR_{\bf q}=R_{\bf q}T,\,{\bf
q}\in{\mathbb{H}}\\}.$
To adapt the discussion regarding the real algebras to this case, we first
consider the complexification ${\mathcal{V}}_{\mathbb{C}}$ of ${\mathcal{V}}$.
Because ${\mathcal{V}}$ is an ${\mathbb{H}}$-bimodule, the space
${\mathcal{V}}_{\mathbb{C}}$ is actually an ${\mathbb{M}}$-bimodule, via the
multiplications
$({\bf q}+i{\bf s})(x+iy)={\bf q}x-{\bf s}y+i({\bf q}y+{\bf s}x),(x+iy)({\bf
q}+i{\bf s})=x{\bf q}-y{\bf s}+i(y{\bf q}+x{\bf s}),$
for all ${\bf q}+i{\bf s}\in{\mathbb{M}},\,{\bf q},{\bf
s}\in{\mathbb{H}},\,x+iy\in{\mathcal{V}}_{\mathbb{C}},\,x,y\in{\mathcal{V}}$.
Moreover, the operator $T_{\mathbb{C}}$ is right ${\mathbb{M}}$-linear, that
is $T_{\mathbb{C}}((x+iy)({\bf q}+i{\bf s}))=T_{\mathbb{C}}(x+iy)({\bf
q}+i{\bf s})$ for all ${\bf q}+i{\bf
s}\in{\mathbb{M}},\,x+iy\in{\mathcal{V}}_{\mathbb{C}}$, as a direct
computation shows.
Let $C$ be the conjugation of ${\mathcal{V}}_{\mathbb{C}}$. As in the real
case, for every $S\in\mathcal{B}({\mathcal{V}}_{\mathbb{C}})$, we put
$S^{\flat}=CSC$. The left and right multiplication with the quaternion ${\bf
q}$ on ${\mathcal{V}}_{\mathbb{C}}$ will be also denoted by $L_{\bf q},R_{\bf
q}$, respectively, as elements of $\mathcal{B}({\mathcal{V}}_{\mathbb{C}})$.
We set
$\mathcal{B}^{\rm
r}({\mathcal{V}}_{\mathbb{C}})=\\{S\in\mathcal{B}({\mathcal{V}}_{\mathbb{C}});SR_{\bf
q}=R_{\bf q}S,\,{\bf q}\in{\mathbb{H}}\\},$
which is a unital complex algebra containing all operators $L_{\bf q},{\bf
q}\in{\mathbb{H}}$. Note that if $S\in\mathcal{B}^{\rm
r}({\mathcal{V}}_{\mathbb{C}})$, then $S^{\flat}\in\mathcal{B}^{\rm
r}({\mathcal{V}}_{\mathbb{C}})$. Indeed, because $CR_{\bf q}=R_{\bf q}C$, we
also have $S^{\flat}R_{\bf q}=R_{\bf q}S^{\flat}$. In fact, as we have
$(S+S^{\flat})({\mathcal{V}})\subset{\mathcal{V}}$ and
$i(S-S^{\flat})({\mathcal{V}})\subset{\mathcal{V}}$, it follows that the
algebras $\mathcal{B}^{\rm r}({\mathcal{V}}_{\mathbb{C}}),\,\mathcal{B}^{\rm
r}({\mathcal{V}})_{\mathbb{C}}$ are isomorphic, and they will be often
identified, where $\mathcal{B^{\rm r}(V)}_{\mathbb{C}}=\mathcal{B^{\rm
r}(V)}+i\mathcal{B^{\rm r}(V)}$ is the complexification of $\mathcal{B^{\rm
r}(V)}$, which is also a unital complex Banach algebra.
Looking at Definition 4.8.1 from [5] (see also [4]), we give the following.
###### Definition 1
For a given operator $T\in\mathcal{B^{\rm r}(V)}$, the set
$\sigma_{\mathbb{H}}(T):=\\{{\bf q}\in{\mathbb{H}};T^{2}-2(\Re{\bf q})T+\|{\bf
q}\|^{2}\,\,{\rm not}\,\,{\rm invertible}\\}$
is called the quaternionic spectrum (or simply the $Q$-spectrum) of $T$.
The complement
$\rho_{\mathbb{H}}(T)={\mathbb{H}}\setminus\sigma_{\mathbb{H}}(T)$ is called
the quaternionic resolvent (or simply the $Q$-resolvent) of $T$.
Note that, if ${\bf q}\in\sigma_{\mathbb{H}}(T)$, then $\\{{\bf
s}\in{\mathbb{H}};\sigma({\bf s})=\sigma({\bf
q})\\}\subset\sigma_{\mathbb{H}}(T)$.
Assuming that ${\mathcal{V}}$ is a Banach ${\mathbb{H}}$-space, then
$\mathcal{B^{\rm r}(V)}$ is a unital real Banach ${\mathbb{H}}$-algebra (that
is, a Banach algebra which is also a Banach ${\mathbb{H}}$-space), via the
algebraic operations $({\bf q}T)(x)={\bf q}T(x)$, and $(T{\bf q})(x)=T({\bf
q}x)$ for all ${\bf q}\in{\mathbb{H}}$ and $x\in{\mathcal{V}}$. Hence the
complexification $\mathcal{B^{\rm r}(V)}_{\mathbb{C}}$ is, in particular, a
unital complex Banach algebra. Also note that the complex numbers, regarded as
elements of $\mathcal{B^{\rm r}(V)}_{\mathbb{C}}$, commute with the elements
of $\mathcal{B^{\rm r}(V)}$. For this reason, for each $T\in\mathcal{B^{\rm
r}(V)}$ we have the resolvent set
$\rho_{\mathbb{C}}(T)=\\{\lambda\in{\mathbb{C}};(T^{2}-2(\Re\lambda)T+|\lambda|^{2})^{-1}\in\mathcal{B^{\rm r}(V)}\\}=\\{\lambda\in{\mathbb{C}};(\lambda-T_{\mathbb{C}})^{-1}\in\mathcal{B}^{\rm r}({\mathcal{V}}_{\mathbb{C}})\\}=\rho(T_{\mathbb{C}}),$
and the associated spectrum $\sigma_{\mathbb{C}}(T)=\sigma(T_{\mathbb{C}})$.
Clearly, there exists a strong connection between $\sigma_{\mathbb{H}}(T)$ and
$\sigma_{\mathbb{C}}(T)$. In fact, the set $\sigma_{\mathbb{C}}(T)$ looks like
a "complex border" of the set $\sigma_{\mathbb{H}}(T)$. Specifically, we can
prove the following.
###### Lemma 2
For every $T\in\mathcal{B^{\rm r}(V)}$ we have the equalities
$\sigma_{\mathbb{H}}(T)=\\{{\bf
q}\in{\mathbb{H}};\sigma_{\mathbb{C}}(T)\cap\sigma({\bf q})\neq\emptyset\\}.$
(2)
and
$\sigma_{\mathbb{C}}(T)=\\{\lambda\in\sigma({\bf q});{\bf
q}\in\sigma_{\mathbb{H}}(T)\\}.$ (3)
Proof. Let us prove (2). If ${\bf q}\in\sigma_{\mathbb{H}}(T)$, so that
$T^{2}-2(\Re{\bf q})T+\|{\bf q}\|^{2}$ is not invertible, then choosing
$\lambda\in\\{\Re{\bf q}\pm i\|\Im{\bf q}\|\\}=\sigma({\bf q})$ we clearly
have $T^{2}-2(\Re\lambda)T+|\lambda|^{2}$ not invertible, implying
$\lambda\in\sigma_{\mathbb{C}}(T)\cap\sigma({\bf q})\neq\emptyset$.
Conversely, if for some ${\bf q}\in{\mathbb{H}}$ there exists
$\lambda\in\sigma_{\mathbb{C}}(T)\cap\sigma({\bf q})$, then
$T^{2}-2(\Re\lambda)T+|\lambda|^{2}=T^{2}-2(\Re{\bf q})T+\|{\bf q}\|^{2}$ is
not invertible, implying ${\bf q}\in\sigma_{\mathbb{H}}(T)$.
We now prove (3). Let $\lambda\in\sigma_{\mathbb{C}}(T)$, so the operator
$T^{2}-2(\Re\lambda)T+|\lambda|^{2}$ is not invertible. Setting ${\bf
q}=\Re(\lambda)+\|\Im\lambda\|\kappa$, with $\kappa\in\mathbb{S}$, we have
$\lambda\in\sigma({\bf q})$. Moreover, $T^{2}-2(\Re{\bf q})T+\|{\bf q}\|^{2}$
is not invertible, and so ${\bf q}\in\sigma_{\mathbb{H}}(T)$.
Conversely, if $\lambda\in\sigma({\bf q})$ for some ${\bf
q}\in\sigma_{\mathbb{H}}(T)$, then $\lambda\in\\{\Re{\bf q}\pm i\|\Im({\bf
q})\|\\}$, showing that $T^{2}-2\Re(\lambda)T+|\lambda|^{2}=T^{2}-2(\Re{\bf
q})T+\|{\bf q}\|^{2}$ is not invertible.
Remark. As expected, the set $\sigma_{\mathbb{H}}(T)$ is nonempty and bounded,
which follows easily from Lemma 2. It is also compact, as a consequence of
Definition 1, because the set of invertible elements in $\mathcal{B^{\rm
r}(V)}$ is open.
We recall that a subset $\Omega\subset{\mathbb{H}}$ is said to be spectrally
saturated (see [20],[21]) if whenever $\sigma({\bf h})=\sigma({\bf q})$ for
some ${\bf h}\in{\mathbb{H}}$ and ${\bf q}\in\Omega$, we also have ${\bf
h}\in\Omega$. As noticed in [20] and [21], this concept coincides with that of
axially symmetric set, introduced in [5].
Note that the subset $\sigma_{\mathbb{H}}(T)$ is spectrally saturated.
### 4.2 Analytic Functional Calculus
If ${\mathcal{V}}$ is a Banach ${\mathbb{H}}$-space, because $\mathcal{B^{\rm
r}({\mathcal{V}})}$ is a real Banach space, each operator $T\in\mathcal{B^{\rm
r}({\mathcal{V}})}$ has a complex spectrum $\sigma_{\mathbb{C}}(T)$.
Therefore, applying the corresponding result for real operators, we may
construct an analytic functional calculus using the classical Riesz-Dunford
functional calculus, in a slightly generalized form. In this case, our basic
complex algebra is $\mathcal{B}^{\rm r}(\mathcal{V})_{\mathbb{C}}$, endowed
with the conjugation $\mathcal{B}^{\rm r}(\mathcal{V})_{\mathbb{C}}\ni
S\mapsto S^{\flat}\in\mathcal{B}^{\rm r}(\mathcal{V})_{\mathbb{C}}$.
###### Theorem 3
Let $U\subset{\mathbb{C}}$ be open and conjugate symmetric. If
$F:U\mapsto\mathcal{B}^{\rm r}(\mathcal{V}_{\mathbb{C}})$ is analytic and
$F(\zeta)^{\flat}=F(\bar{\zeta})$ for all $\zeta\in U$, then
$F(T_{\mathbb{C}})^{\flat}=F(T_{\mathbb{C}})$ for all $T\in\mathcal{B}^{\rm
r}(\mathcal{V})$ with $\sigma_{\mathbb{C}}(T)\subset U$.
Both the statement and the proof of Theorem 3 are similar to those of Theorem
1, and will be omitted.
As in the real case, we may identify the algebra $\mathcal{B}^{\rm
r}(\mathcal{V})$ with a subalgebra of $\mathcal{B}^{\rm
r}(\mathcal{V})_{\mathbb{C}}$. In this case, when
$F\in\mathcal{O}_{s}(U,\mathcal{B}^{\rm
r}(\mathcal{V})_{\mathbb{C}})=\\{F\in{\mathcal{O}}(U,\mathcal{B}^{\rm
r}({\mathcal{V}})_{\mathbb{C}});F(\bar{\zeta})=F(\zeta)^{\flat}\,\,\forall\zeta\in
U\\}$ (see also Remark 3), we can write, via the previous Theorem,
$F(T)=\frac{1}{2\pi
i}\int_{\Gamma}F(\zeta)(\zeta-T)^{-1}d\zeta\in\mathcal{B}^{\rm
r}(\mathcal{V}),$
for a suitable choice of $\Gamma$.
The next result provides an analytic functional calculus for operators from
the real algebra $\mathcal{B}^{\rm r}(\mathcal{V})$.
###### Theorem 4
Let ${\mathcal{V}}$ be a Banach ${\mathbb{H}}$-space, let
$U\subset{\mathbb{C}}$ be a conjugate symmetric open set, and let
$T\in\mathcal{B}^{\rm r}(\mathcal{V})$, with $\sigma_{\mathbb{C}}(T)\subset
U$. Then the map
${\mathcal{O}}_{s}(U,\mathcal{B}^{\rm r}(\mathcal{V})_{\mathbb{C}})\ni
F\mapsto F(T)\in\mathcal{B}^{\rm r}(\mathcal{V})$
is ${\mathbb{R}}$-linear, and the map
${\mathcal{O}}_{s}(U)\ni f\mapsto f(T)\in\mathcal{B}^{\rm r}(\mathcal{V})$
is a unital real algebra morphism.
Moreover, the following properties are true:
$(1)$ For all $F\in\mathcal{O}_{s}(U,\mathcal{B}^{\rm
r}(\mathcal{V})_{\mathbb{C}}),\,f\in{\mathcal{O}}_{s}(U)$, we have
$(Ff)(T)=F(T)f(T)$.
$(2)$ For every polynomial
$P(\zeta)=\sum_{n=0}^{m}A_{n}\zeta^{n},\,\zeta\in{\mathbb{C}}$, with
$A_{n}\in\mathcal{B}^{\rm r}(\mathcal{V})$ for all $n=0,1,\ldots,m$, we have
$P(T)=\sum_{n=0}^{m}A_{n}T^{n}\in\mathcal{B}^{\rm r}(\mathcal{V})$.
The proof of this result is similar to that of Theorem 2 and will be omitted.
###### Remark 6
The algebra ${\mathbb{H}}$ is, in particular, a Banach ${\mathbb{H}}$-space.
As already noticed, the left multiplications $L_{\bf q},\,{\bf
q}\in{\mathbb{H}},$ are elements of $\mathcal{B}^{\rm r}({\mathbb{H}})$. In
fact, the map ${\mathbb{H}}\ni{\bf q}\mapsto L_{\bf q}\in\mathcal{B}^{\rm
r}({\mathbb{H}})$ is an injective morphism of real algebras allowing the
identification of ${\mathbb{H}}$ with a subalgebra of $\mathcal{B}^{\rm
r}({\mathbb{H}})$.
Let $\Omega\subset{\mathbb{H}}$ be a spectrally saturated open set, and let
$U=\mathfrak{S}(\Omega):=\\{\lambda\in{\mathbb{C}},\exists{\bf
q}\in\Omega,\lambda\in\sigma({\bf q})\\}$, which is open and conjugate
symmetric (see [21]). Denoting by $f_{\mathbb{H}}$ the function $\Omega\ni{\bf
q}\mapsto f({\bf q}),{\bf q}\in\Omega$, for every $f\in\mathcal{O}_{s}(U)$, we
set
$\mathcal{R}(\Omega):=\\{f_{\mathbb{H}};f\in\mathcal{O}_{s}(U)\\},$
which is a commutative real algebra. Defining the function $F_{\mathbb{H}}$ in
a similar way for each $F\in\mathcal{O}_{s}(U,{\mathbb{M}})$, we set
$\mathcal{R}(\Omega,{\mathbb{H}}):=\\{F_{\mathbb{H}};F\in\mathcal{O}_{s}(U,{\mathbb{M}})\\},$
which, according to the next theorem, is a right $\mathcal{R}(\Omega)$-module.
The next result is an analytic functional calculus for quaternions (see [21],
Theorem 5), obtained as a particular case of Theorem 4 (see also its
predecessor in [5]).
###### Theorem 5
Let $\Omega\subset{\mathbb{H}}$ be a spectrally saturated open set, and let
$U=\mathfrak{S}(\Omega)$. The space $\mathcal{R}(\Omega)$ is a unital
commutative ${\mathbb{R}}$-algebra, the space
$\mathcal{R}(\Omega,{\mathbb{H}})$ is a right $\mathcal{R}(\Omega)$-module,
the map
${\mathcal{O}}_{s}(U,{\mathbb{M}})\ni F\mapsto
F_{\mathbb{H}}\in\mathcal{R}(\Omega,{\mathbb{H}})$
is a right module isomorphism, and its restriction
${\mathcal{O}}_{s}(U)\ni f\mapsto f_{\mathbb{H}}\in\mathcal{R}(\Omega)$
is an ${\mathbb{R}}$-algebra isomorphism.
Moreover, for every polynomial
$P(\zeta)=\sum_{n=0}^{m}a_{n}\zeta^{n},\,\zeta\in{\mathbb{C}}$, with
$a_{n}\in{\mathbb{H}}$ for all $n=0,1,\ldots,m$, we have
$P_{\mathbb{H}}(q)=\sum_{n=0}^{m}a_{n}q^{n}\in{\mathbb{H}}$ for all
$q\in{\mathbb{H}}$.
Most of the assertions of Theorem 5 can be obtained directly from Theorem 4.
The injectivity of the map ${\mathcal{O}}_{s}(U)\ni f\mapsto
f_{\mathbb{H}}\in\mathcal{R}(\Omega)$, as well as an alternative complete
proof, can be obtained as in the proof of Theorem 5 from [21].
###### Remark 7
That Theorems 3 and 4 have practically the same proof as Theorems 1 and 2
(respectively) is due to the fact that all of them can be obtained as
particular cases of more general results. Indeed, considering a unital real
Banach algebra ${\mathcal{A}}$, and its complexification
${\mathcal{A}}_{\mathbb{C}}$, identifying ${\mathcal{A}}$ with a real
subalgebra of ${\mathcal{A}}_{\mathbb{C}}$, for a function
$F\in\mathcal{O}_{s}(U,{\mathcal{A}}_{\mathbb{C}})$, where $U\subset{\mathbb{C}}$ is open
and conjugate symmetric, we have $F(b)\in{\mathcal{A}}$ for each
$b\in{\mathcal{A}}$ with $\sigma_{\mathbb{C}}(b)\subset U$. The assertion
follows as in the proof of Theorem 1. The other results also have their
counterparts. We omit the details.
###### Remark 8
The space $\mathcal{R}(\Omega,{\mathbb{H}})$ can be independently defined, and
it consists of the set of all ${\mathbb{H}}$-valued functions, which are slice
regular in the sense of [5], Definition 4.1.1. They are used in [5] to define
a quaternionic functional calculus for quaternionic linear operators (see also
[4]). Roughly speaking, given a quaternionic linear operator, each regular
quaternionic-valued function defined in a neighborhood $\Omega$ of its
quaternionic spectrum is associated with another quaternionic linear operator,
replacing formally the quaternionic variable with that operator. This
construction is explained at length in the fourth chapter of [5].
Our Theorem 4 constructs an analytic functional calculus with functions from
${\mathcal{O}}_{s}(U,\mathcal{B}^{\rm r}(\mathcal{V})_{\mathbb{C}})$, where
$U$ is a neighborhood of the complex spectrum of a given quaternionic linear
operator, leading to another quaternionic linear operator, replacing formally
the complex variable with that operator. We can show that those functional
calculi are equivalent. This is a consequence of the fact that the class of
regular quaternionic-valued function used by the construction in [5] is
isomorphic to the class of analytic functions used in our Theorem 5. The
advantage of our approach is its simplicity and a stronger connection with the
classical approach, using spectra defined in the complex plane and partially
commutative Cauchy-type kernels.
Let us give a direct argument concerning the equivalence of those analytic
functional calculi. For an operator $T\in\mathcal{B}^{\rm r}(\mathcal{V})$,
the so-called right $S$-resolvent is defined via the formula
$S_{R}^{-1}({\bf s},T)=-(T-{\bf s}^{*})(T^{2}-2\Re({\bf s})T+\|{\bf
s}\|^{2})^{-1},\,\,{\bf s}\in\rho_{\mathbb{H}}(T)$ (4)
(see [5], formula (4.27)). Fixing an element $\kappa\in\mathbb{S}$, and a
spectrally saturated open set $\Omega\subset{\mathbb{H}}$, for
$\Phi\in\mathcal{R}(\Omega,{\mathbb{H}})$ one sets
$\Phi(T)=\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}\Phi({\bf s})d{\bf
s}_{\kappa}S_{R}^{-1}({\bf s},T),$ (5)
where $\Sigma\subset\Omega$ is a spectrally saturated open set containing
$\sigma_{\mathbb{H}}(T)$, such that
$\Sigma_{\kappa}=\\{u+v\kappa\in\Sigma;u,v\in{\mathbb{R}}\\}$ is a subset
whose boundary $\partial(\Sigma_{\kappa})$ consists of a finite family of
closed curves, piecewise smooth, positively oriented, and $d{\bf
s}_{\kappa}=-\kappa du\wedge dv$. Formula (5) is a (right) quaternionic
functional calculus, as defined in [5], Section 4.10.
Because the space $\mathcal{V}_{\mathbb{C}}$ is also an ${\mathbb{H}}$-space,
we may extend these formulas to the operator
$T_{\mathbb{C}}\in\mathcal{B}^{\rm r}(\mathcal{V}_{\mathbb{C}})$, extending
the operator $T$ to $T_{\mathbb{C}}$, and replacing $T$ by $T_{\mathbb{C}}$ in
formulas (4) and (5). For the function
$\Phi\in\mathcal{R}(\Omega,{\mathbb{H}})$ there exists a function
$F\in{\mathcal{O}}_{s}(U,\mathcal{B}^{\rm r}(\mathcal{V}_{\mathbb{C}}))$ such
that $F_{\mathbb{H}}=\Phi$. Denoting by $\Gamma_{\kappa}$ the boundary of a
Cauchy domain in ${\mathbb{C}}$ containing the compact set $\cup\\{\sigma({\bf
s});{\bf s}\in\overline{\Sigma_{\kappa}}\\}$, we can write
$\Phi(T_{\mathbb{C}})=\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}\left(\frac{1}{2\pi
i}\int_{\Gamma_{\kappa}}F(\zeta)(\zeta-{\bf s})^{-1}d\zeta\right)d{\bf
s}_{\kappa}S_{R}^{-1}({\bf s},T_{\mathbb{C}})=$ $\frac{1}{2\pi
i}\int_{\Gamma_{\kappa}}F(\zeta)\left(\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}(\zeta-{\bf
s})^{-1}d{\bf s}_{\kappa}S_{R}^{-1}({\bf s},T_{\mathbb{C}})\right)d\zeta.$
It follows from the complex linearity of $S_{R}^{-1}({\bf s},T_{\mathbb{C}})$,
and from formula (4.49) in [5], that
$(\zeta-{\bf s})S_{R}^{-1}({\bf s},T_{\mathbb{C}})=S_{R}^{-1}({\bf
s},T_{\mathbb{C}})(\zeta-T_{\mathbb{C}})-1,$
whence
$(\zeta-{\bf s})^{-1}S_{R}^{-1}({\bf s},T_{\mathbb{C}})=S_{R}^{-1}({\bf
s},T_{\mathbb{C}})(\zeta-T_{\mathbb{C}})^{-1}+(\zeta-{\bf s})^{-1}(\zeta-
T_{\mathbb{C}})^{-1},$
and therefore,
$\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}(\zeta-{\bf s})^{-1}d{\bf
s}_{\kappa}S_{R}^{-1}({\bf
s},T_{\mathbb{C}})=\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}d{\bf
s}_{\kappa}S_{R}^{-1}({\bf s},T_{\mathbb{C}})(\zeta-T_{\mathbb{C}})^{-1}+$
$\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}(\zeta-{\bf s})^{-1}d{\bf
s}_{\kappa}(\zeta-T_{\mathbb{C}})^{-1}=(\zeta-T_{\mathbb{C}})^{-1},$
because
$\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}d{\bf
s}_{\kappa}S_{R}^{-1}({\bf s},T_{\mathbb{C}})=1\,\,\,{\rm
and}\,\,\,\frac{1}{2\pi}\int_{\partial(\Sigma_{\kappa})}(\zeta-{\bf
s})^{-1}d{\bf s}_{\kappa}=0,$
as in Theorem 4.8.11 from [5], since the ${\mathbb{M}}$-valued function ${\bf
s}\mapsto(\zeta-{\bf s})^{-1}$ is analytic in a neighborhood of the set
$\overline{\Sigma_{\kappa}}\subset{\mathbb{C}}_{\kappa}$ for each
$\zeta\in\Gamma_{\kappa}$, respectively. Therefore
$\Phi(T_{\mathbb{C}})=\Phi(T)_{\mathbb{C}}=F(T_{\mathbb{C}})=F(T)_{\mathbb{C}}$,
implying $\Phi(T)=F(T)$.
## 5 Some Examples
###### Example 2
One of the simplest Banach ${\mathbb{H}}$-space is the space ${\mathbb{H}}$
itself. As already noticed (see Remark 6), taking
${\mathcal{V}}={\mathbb{H}}$, so ${\mathcal{V}}_{\mathbb{C}}={\mathbb{M}}$,
and fixing an element ${\bf q}\in{\mathbb{H}}$, we may consider the operator
$L_{\bf q}\in\mathcal{B}^{\rm r}({\mathbb{H}})$, whose complex spectrum is
given by $\sigma_{\mathbb{C}}(L_{\bf q})=\sigma({\bf q})=\\{\Re{\bf q}\pm
i\|\Im{\bf q}\|\\}$. If $U\subset{\mathbb{C}}$ is a conjugate symmetric open set
containing $\sigma_{\mathbb{C}}(L_{\bf q})$, and
$F\in\mathcal{O}_{s}(U,{\mathbb{M}})$, then we have
$F({L_{\bf q}})=F(s_{+}({\bf q}))\iota_{+}(\mathfrak{s}_{\tilde{\bf
q}})+F(s_{-}({\bf q}))\iota_{-}(\mathfrak{s}_{\tilde{\bf q}})\in{\mathbb{M}},$
(6)
where $s_{\pm}({\bf q})=\Re{\bf q}\pm i\|\Im{\bf q}\|$, $\tilde{\bf q}=\Im\bf
q,\,\mathfrak{s}_{\tilde{\bf q}}=\tilde{\bf q}\|\tilde{\bf q}\|^{-1}$, and
$\iota_{\pm}(\mathfrak{s}_{\tilde{\bf q}})=2^{-1}(1\mp
i\mathfrak{s}_{\tilde{\bf q}})$ (see [21], Remark 3).
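Formula (6) can be tested numerically by identifying ${\mathbb{H}}$ with the matrix quaternions ${\mathbb{H}}_{2}$ introduced in Section 6 below, so that $L_{\bf q}$ becomes an ordinary $2\times 2$ complex matrix. The following sketch of ours does this for $F=\exp$, with an arbitrarily chosen ${\bf q}$.

```python
# Sketch (ours): checking formula (6) against scipy's matrix exponential.
import numpy as np
from scipy.linalg import expm

def Q(x0, x1, x2, x3):
    # matrix of q = x0 + x1 j + x2 k + x3 l in H_2 (see Section 6)
    return np.array([[x0 + 1j*x1, x2 + 1j*x3],
                     [-x2 + 1j*x3, x0 - 1j*x1]])

x0, x1, x2, x3 = 0.4, 1.0, -0.5, 2.0
q = Q(x0, x1, x2, x3)
im_norm = np.sqrt(x1**2 + x2**2 + x3**2)        # ||Im q||
s = Q(0.0, x1, x2, x3) / im_norm                # the unit quaternion of Im q
iota_plus = 0.5 * (np.eye(2) - 1j * s)          # iota_+
iota_minus = 0.5 * (np.eye(2) + 1j * s)         # iota_-

rhs = (np.exp(x0 + 1j * im_norm) * iota_plus
       + np.exp(x0 - 1j * im_norm) * iota_minus)
assert np.allclose(expm(q), rhs)                # formula (6) for F = exp
```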
###### Example 3
Let ${\mathfrak{X}}$ be a topological compact space, and let
$C({\mathfrak{X}},{\mathbb{M}})$ be the space of ${\mathbb{M}}$-valued
continuous functions on ${\mathfrak{X}}$. Then
$C({\mathfrak{X}},{\mathbb{H}})$ is the real subspace of
$C({\mathfrak{X}},{\mathbb{M}})$ consisting of ${\mathbb{H}}$-valued
functions, which is also a Banach ${\mathbb{H}}$-space with respect to the
operations $({\bf q}F)(x)={\bf q}F(x)$ and $(F{\bf q})(x)=F(x){\bf q}$ for all
$F\in C({\mathfrak{X}},{\mathbb{H}})$ and $x\in{\mathfrak{X}}$. Moreover,
$C({\mathfrak{X}},{\mathbb{H}})_{\mathbb{C}}=C({\mathfrak{X}},{\mathbb{H}}_{\mathbb{C}})=C({\mathfrak{X}},{\mathbb{M}})$.
We fix a function $\Theta\in C({\mathfrak{X}},{\mathbb{H}})$ and define the
operator $T\in\mathcal{B}(C({\mathfrak{X}},{\mathbb{H}}))$ by the relation
$(TF)(x)=\Theta(x)F(x)$ for all $F\in C({\mathfrak{X}},{\mathbb{H}})$ and
$x\in{\mathfrak{X}}$. Note that $(T(F{\bf q}))(x)=\Theta(x)F(x){\bf
q}=((TF){\bf q})(x)$ for all $F\in C({\mathfrak{X}},{\mathbb{H}}),{\bf
q}\in{\mathbb{H}}$, and $x\in{\mathfrak{X}}$. In other words,
$T\in\mathcal{B}^{\rm r}(C({\mathfrak{X}},{\mathbb{H}}))$. Note also that the
operator $T$ is invertible if and only if the function $\Theta$ has no zero in
${\mathfrak{X}}$.
Let us compute the $Q$-spectrum of $T$. According to Definition 1, we have
$\rho_{\mathbb{H}}(T)=\\{{\bf q}\in{\mathbb{H}};(T^{2}-2\Re{\bf q}\,T+\|{\bf
q}\|^{2})^{-1}\in\mathcal{B}^{\rm r}(C({\mathfrak{X}},{\mathbb{H}}))\\}.$
Consequently, ${\bf q}\in\sigma_{\mathbb{H}}(T)$ if and only if zero is in the
range of the function
$\tau({\bf q},x):=\Theta(x)^{2}-2\Re{\bf q}\,\Theta(x)+\|{\bf
q}\|^{2},\,x\in\mathfrak{X}.$
Similarly,
$\rho_{\mathbb{C}}(T)=\\{\lambda\in{\mathbb{C}};(T^{2}-2\Re\lambda\,T+|\lambda|^{2})^{-1}\in\mathcal{B^{\rm
r}(C({\mathfrak{X}},{\mathbb{H}}))\\},$
and so $\lambda\in\sigma_{\mathbb{C}}(T)$ if and only if zero is in the range
of the function
$\tau(\lambda,x):=\Theta(x)^{2}-2\Re\lambda\,\Theta(x)+|\lambda|^{2},\,x\in\mathfrak{X}.$
Looking for solutions $u+iv,u,v\in{\mathbb{R}}$, of the equation
$(u-\Theta(x))^{2}+v^{2}=0$, a direct calculation shows that $u=\Re\Theta(x)$
and $v=\pm\|\Im\Theta(x)\|$. Hence
$\sigma_{\mathbb{C}}(T)=\\{\Re\Theta(x)\pm
i\|\Im\Theta(x)\|;x\in\mathfrak{X}\\}=\cup_{x\in\mathfrak{X}}\sigma(\Theta(x)).$
Of course, for every open conjugate symmetric subset $U\subset{\mathbb{C}}$
containing $\sigma_{\mathbb{C}}(T)$, and for every function
$\Phi\in\mathcal{O}_{s}(U,\mathcal{B}(C({\mathfrak{X}},{\mathbb{M}})))$, we
may construct the operator $\Phi(T)\in\mathcal{B}^{\rm
r}(C({\mathfrak{X}},{\mathbb{H}}))$, using Theorem 4.
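On a finite sample of ${\mathfrak{X}}$ the description of $\sigma_{\mathbb{C}}(T)$ above becomes a simple point cloud; the sketch below (ours, with an arbitrary continuous $\Theta$) just tabulates $\Re\Theta(x)\pm i\|\Im\Theta(x)\|$.

```python
# Sketch (ours): sampling sigma_C(T) = union of sigma(Theta(x)) for the
# multiplication operator of Example 3 on X = [0, 1].
import numpy as np

xs = np.linspace(0.0, 1.0, 200)                   # a sampled compact space X
# components of Theta(x) = (real, j, k, l), continuous in x
theta = np.stack([np.cos(xs), np.sin(xs), xs, 1 - xs], axis=1)

re = theta[:, 0]
im = np.linalg.norm(theta[:, 1:], axis=1)         # ||Im Theta(x)||
sigma_C = np.concatenate([re + 1j * im, re - 1j * im])
print(sigma_C[:4])                                # a conjugate symmetric cloud
```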
## 6 Quaternionic Joint Spectrum of Pairs
In many applications, it is more convenient to work with matrix quaternions
rather than with abstract quaternions. Specifically, one considers the
injective unital algebra morphism
${\mathbb{H}}\ni x_{1}+y_{1}{\bf j}+x_{2}{\bf k}+y_{2}{\bf
l}\mapsto\left(\begin{array}[]{cc}x_{1}+iy_{1}&x_{2}+iy_{2}\\\
-x_{2}+iy_{2}&x_{1}-iy_{1}\end{array}\right)\in{\mathbb{M}}_{2},$
with $x_{1},y_{1},x_{2},y_{2}\in{\mathbb{R}},$ where ${\mathbb{M}}_{2}$ is the
complex algebra of $2\times 2$ matrices, whose image, denoted by
${\mathbb{H}}_{2}$, is the real algebra of matrix quaternions. The elements of
${\mathbb{H}}_{2}$ can be also written as matrices of the form
$Q({\bf z})=\left(\begin{array}[]{cc}z_{1}&z_{2}\\\
-\bar{z}_{2}&\bar{z_{1}}\end{array}\right),\,\,{\bf
z}=(z_{1},z_{2})\in{\mathbb{C}}^{2}.$
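One can verify directly that ${\mathbb{H}}_{2}$ realizes the quaternion relations, and that $\det Q({\bf x})=\|{\bf x}\|^{2}$ (a standard fact, stated here as part of our sketch, not claimed from the original text).

```python
# Sketch (ours): the images of j, k, l in H_2 satisfy the defining relations.
import numpy as np

J = np.array([[1j, 0], [0, -1j]])
K = np.array([[0, 1], [-1, 0]], dtype=complex)
L = np.array([[0, 1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

assert np.allclose(J @ K, L) and np.allclose(K @ L, J) and np.allclose(L @ J, K)
for M in (J, K, L):
    assert np.allclose(M @ M, -I2)               # jj = kk = ll = -1

x = 0.5 * I2 - 1.0 * J + 2.0 * K + 0.3 * L       # Q(x) for a sample quaternion
assert np.allclose(np.linalg.det(x), 0.5**2 + 1.0**2 + 2.0**2 + 0.3**2)
```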
A strong connection between the spectral theory of pairs of commuting
operators in a complex Hilbert space and the algebra of quaternions has been
first noticed in [17]. Another connection will be presented in this section.
If ${\mathcal{V}}$ is an arbitrary vector space, we denote by
${\mathcal{V}}^{2}$ the Cartesian product ${\mathcal{V}}\times{\mathcal{V}}$.
Let $\mathcal{V}$ be a real Banach space, and let ${\bf
T}=(T_{1},T_{2})\in\mathcal{B(V)}^{2}$ be a pair of commuting operators. The
extended pair ${\bf
T}_{\mathbb{C}}=(T_{1{\mathbb{C}}},T_{2{\mathbb{C}}})\in\mathcal{B(V_{\mathbb{C}})}^{2}$
also consists of commuting operators. For simplicity, we set
$Q({\bf
T}_{\mathbb{C}}):=\left(\begin{array}[]{cc}T_{1{\mathbb{C}}}&T_{2{\mathbb{C}}}\\\
-T_{2{\mathbb{C}}}&T_{1{\mathbb{C}}}\end{array}\right)$
which acts on the complex Banach space $\mathcal{V}_{\mathbb{C}}^{2}$.
We now define the quaternionic resolvent set and spectrum for the case of a
pair of operators, inspired by the previous discussion concerning a single
operator.
###### Definition 2
Let $\mathcal{V}$ be a real Banach space. For a given pair ${\bf
T}=(T_{1},T_{2})\in\mathcal{B(V)}^{2}$ of commuting operators, the set of
those $Q({\bf z})\in{\mathbb{H}}_{2},\,{\bf
z}=(z_{1},z_{2})\in{\mathbb{C}}^{2}$, such that the operator
$T_{1}^{2}+T_{2}^{2}-2\Re{z_{1}}T_{1}-2\Re{z_{2}}T_{2}+|z_{1}|^{2}+|z_{2}|^{2}$
is invertible in $\mathcal{B(V)}$ is said to be the quaternionic joint
resolvent (or simply the $Q$-joint resolvent) of ${\bf T}$, and is denoted by
$\rho_{\mathbb{H}}({\bf T})$.
The complement $\sigma_{\mathbb{H}}({\bf
T})={\mathbb{H}}_{2}\setminus\rho_{\mathbb{H}}({\bf T})$ is called the
quaternionic joint spectrum (or simply the $Q$-joint spectrum) of ${\bf T}$.
For every pair ${\bf
T}_{\mathbb{C}}=(T_{1{\mathbb{C}}},T_{2{\mathbb{C}}})\in\mathcal{B(V_{\mathbb{C}})}^{2}$
we put ${\bf
T}_{\mathbb{C}}^{c}=(T_{1{\mathbb{C}}},-T_{2{\mathbb{C}}})\in\mathcal{B(V_{\mathbb{C}})}^{2}$,
and for every pair ${\bf z}=(z_{1},z_{2})\in{\mathbb{C}}^{2}$ we put ${\bf
z}^{c}=(\bar{z}_{1},-z_{2})\in{\mathbb{C}}^{2}$.
###### Lemma 3
A matrix quaternion $Q({\bf z})$ $({\bf z}\in{\mathbb{C}}^{2})$ is in the set
$\rho_{\mathbb{H}}({\bf T})$ if and only if the operators $Q({\bf
T}_{\mathbb{C}})-Q({\bf z}),\,Q({\bf T}_{\mathbb{C}}^{c})-Q({\bf z}^{c})$ are
invertible in $\mathcal{B}(\mathcal{V}_{\mathbb{C}}^{2})$.
Proof. The assertion follows from the equalities
$\left(\begin{array}[]{cc}T_{1{\mathbb{C}}}-z_{1}&T_{2{\mathbb{C}}}-z_{2}\\\
-T_{2{\mathbb{C}}}+\bar{z}_{2}&T_{1{\mathbb{C}}}-\bar{z}_{1}\end{array}\right)\left(\begin{array}[]{cc}T_{1{\mathbb{C}}}-\bar{z}_{1}&-T_{2{\mathbb{C}}}+z_{2}\\\
T_{2{\mathbb{C}}}-\bar{z}_{2}&T_{1{\mathbb{C}}}-z_{1}\end{array}\right)=$
$\left(\begin{array}[]{cc}T_{1{\mathbb{C}}}-\bar{z}_{1}&-T_{2{\mathbb{C}}}+z_{2}\\\
T_{2{\mathbb{C}}}-\bar{z}_{2}&T_{1{\mathbb{C}}}-z_{1}\end{array}\right)\left(\begin{array}[]{cc}T_{1{\mathbb{C}}}-z_{1}&T_{2{\mathbb{C}}}-z_{2}\\\
-T_{2{\mathbb{C}}}+\bar{z}_{2}&T_{1{\mathbb{C}}}-\bar{z}_{1}\end{array}\right)=$
$[(T_{1{\mathbb{C}}}-z_{1})(T_{1{\mathbb{C}}}-\bar{z}_{1})+(T_{2{\mathbb{C}}}-z_{2})(T_{2{\mathbb{C}}}-\bar{z}_{2})]{\bf
I}.$
for all ${\bf z}=(z_{1},z_{2})\in{\mathbb{C}}^{2}$, where $\bf I$ is the
identity. Consequently, the operators $Q({\bf T}_{\mathbb{C}})-Q({\bf
z}),\,Q({\bf T}_{\mathbb{C}}^{c})-Q({\bf z}^{c})$ are invertible in
$\mathcal{B}({\mathcal{V}}_{\mathbb{C}}^{2})$ if and only if the operator
$(T_{1{\mathbb{C}}}-z_{1})(T_{1{\mathbb{C}}}-\bar{z}_{1})+(T_{2{\mathbb{C}}}-z_{2})(T_{2{\mathbb{C}}}-\bar{z}_{2})$
is invertible in $\mathcal{B}(\mathcal{V}_{\mathbb{C}})$. Because we have
$T_{1{\mathbb{C}}}^{2}+T_{2{\mathbb{C}}}^{2}-2\Re{z_{1}}T_{1{\mathbb{C}}}-2\Re{z_{2}}T_{2{\mathbb{C}}}+|z_{1}|^{2}+|z_{2}|^{2}=$
$[T_{1}^{2}+T_{2}^{2}-2\Re{z_{1}}T_{1}-2\Re{z_{2}}T_{2}+|z_{1}|^{2}+|z_{2}|^{2}]_{\mathbb{C}},$
the operators $Q({\bf T}_{\mathbb{C}})-Q({\bf z}),\,Q({\bf
T}_{\mathbb{C}}^{c})-Q({\bf z}^{c})$ are invertible in
$\mathcal{B}({\mathcal{V}}_{\mathbb{C}}^{2})$ if and only if the operator
$T_{1}^{2}+T_{2}^{2}-2\Re{z_{1}}T_{1}-2\Re{z_{2}}T_{2}+|z_{1}|^{2}+|z_{2}|^{2}$
is invertible in $\mathcal{B(V)}$.
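The block-matrix identity in the proof is easily checked numerically; in the following sketch of ours, the commuting pair is produced by taking $T_{2}$ to be a polynomial in $T_{1}$.

```python
# Sketch (ours): verifying that (Q(T_C) - Q(z)) and (Q(T_C^c) - Q(z^c))
# multiply, in either order, to the block-diagonal operator L I.
import numpy as np

rng = np.random.default_rng(3)
n = 4
T1 = rng.standard_normal((n, n))
T2 = 0.3 * T1 @ T1 - 0.7 * T1 + 0.2 * np.eye(n)   # commutes with T1
z1, z2 = 0.5 + 1.0j, -0.3 + 0.8j
I = np.eye(n)

A = np.block([[T1 - z1 * I, T2 - z2 * I],
              [-T2 + np.conj(z2) * I, T1 - np.conj(z1) * I]])
B = np.block([[T1 - np.conj(z1) * I, -T2 + z2 * I],
              [T2 - np.conj(z2) * I, T1 - z1 * I]])
L = ((T1 - z1 * I) @ (T1 - np.conj(z1) * I)
     + (T2 - z2 * I) @ (T2 - np.conj(z2) * I))
LI = np.block([[L, 0 * I], [0 * I, L]])

assert np.allclose(A @ B, LI) and np.allclose(B @ A, LI)
```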
Lemma 3 shows that we have the property $Q({\bf z})\in\sigma_{\mathbb{H}}({\bf
T})$ if and only if $Q(z^{c})\in\sigma_{\mathbb{H}}({\bf T}^{c})$. Putting
$\sigma_{{\mathbb{C}}^{2}}({\bf T}):=\\{{\bf z}\in{\mathbb{C}}^{2};Q({\bf
z})\in\sigma_{\mathbb{H}}({\bf T})\\},$
the set $\sigma_{{\mathbb{C}}^{2}}({\bf T})$ has a similar property,
specifically $\bf z\in\sigma_{{\mathbb{C}}^{2}}({\bf T})$ if and only if $\bf
z^{c}\in\sigma_{{\mathbb{C}}^{2}}({\bf T}^{c})$. As in the quaternionic case,
the set $\sigma_{{\mathbb{C}}^{2}}({\bf T})$ looks like a "complex border" of
the set $\sigma_{\mathbb{H}}({\bf T})$.
###### Remark 9
For the extended pair ${\bf
T}_{\mathbb{C}}=(T_{1{\mathbb{C}}},T_{2{\mathbb{C}}})\in{B(V_{\mathbb{C}})}^{2}$
of the commuting pair ${\bf T}=(T_{1},T_{2})\in\mathcal{B(V)}$ there is an
interesting connection with the joint spectral theory of J. L. Taylor (see [15,
16]; see also [19]). Namely, if the operator
$T_{1{\mathbb{C}}}^{2}+T_{2{\mathbb{C}}}^{2}-2\Re{z_{1}}T_{1{\mathbb{C}}}-2\Re{z_{2}}T_{2{\mathbb{C}}}+|z_{1}|^{2}+|z_{2}|^{2}$
is invertible, then the point ${\bf z}=(z_{1},z_{2})$ belongs to the joint
resolvent of ${\bf T}_{\mathbb{C}}$. Indeed, setting
$R_{j}({\bf T}_{\mathbb{C}},{\bf
z})=(T_{j{\mathbb{C}}}-\bar{z}_{j})(T_{1{\mathbb{C}}}^{2}+T_{2{\mathbb{C}}}^{2}-2\Re{z_{1}}T_{1{\mathbb{C}}}-2\Re{z_{2}}T_{2{\mathbb{C}}}+|z_{1}|^{2}+|z_{2}|^{2})^{-1},$
for $j=1,2$, we clearly have
$(T_{1{\mathbb{C}}}-z_{1})R_{1}({\bf T}_{\mathbb{C}},{\bf
z})+(T_{2{\mathbb{C}}}-z_{2})R_{2}({\bf T}_{\mathbb{C}},{\bf z})={\bf I},$
which, according to [15], implies that ${\bf z}$ is in the joint resolvent of
${\bf T}_{\mathbb{C}}$. A similar argument shows that, in this case the point
${\bf z}^{c}$ belongs to the joint resolvent of ${\bf T}_{\mathbb{C}}^{c}$. In
addition, if $\sigma({\bf T}_{\mathbb{C}})$ designates the Taylor spectrum of
${\bf T}_{\mathbb{C}}$, we have the inclusion
$\sigma({\bf T}_{\mathbb{C}})\subset\sigma_{{\mathbb{C}}^{2}}({\bf T})$. In
particular, for every complex-valued function $f$ analytic in a neighborhood
of $\sigma_{{\mathbb{C}}^{2}}({\bf T})$, the operator $f(\bf T_{\mathbb{C}})$
can be computed via Taylor’s analytic functional calculus. In fact, we have a
Martinelli type formula for the analytic functional calculus:
###### Theorem 6
Let $\mathcal{V}$ be a real Banach space, let ${\bf
T}=(T_{1},T_{2})\in\mathcal{B(V)}^{2}$ be a pair of commuting operators, let
$U\subset{\mathbb{C}}^{2}$ be an open set, let $D\subset U$ be a bounded
domain containing $\sigma_{{\mathbb{C}}^{2}}({\bf T})$, with piecewise-smooth
boundary $\Sigma$, and let $f\in\mathcal{O}(U)$. Then we have
$f({\bf T}_{\mathbb{C}})=\frac{1}{(2\pi i)^{2}}\int_{\Sigma}f({\bf z})L({\bf
z,T_{\mathbb{C}}})^{-2}[(\bar{z}_{1}-T_{1{\mathbb{C}}})d\bar{z}_{2}-(\bar{z}_{2}-T_{2{\mathbb{C}}})d\bar{z}_{1}]dz_{1}dz_{2},$
where
$L({\bf
z,T_{\mathbb{C}}})=T_{1{\mathbb{C}}}^{2}+T_{2{\mathbb{C}}}^{2}-2\Re{z_{1}}T_{1{\mathbb{C}}}-2\Re{z_{2}}T_{2{\mathbb{C}}}+|z_{1}|^{2}+|z_{2}|^{2}.$
Proof. Theorem III.9.9 from [19] implies that the map $\mathcal{O}(U)\ni
f\mapsto f({\bf T}_{\mathbb{C}})\in\mathcal{B(V_{\mathbb{C}})}$, defined in
terms of Taylor’s analytic functional calculus, is unital, linear,
multiplicative, and ordinary complex polynomials in ${\bf z}$ are transformed
into polynomials in ${\bf T}_{\mathbb{C}}$ by simple substitution, where
$\mathcal{O}(U)$ is the algebra of all analytic functions in the open set
$U\subset{\mathbb{C}}^{2}$, provided $U\supset\sigma({\bf T}_{\mathbb{C}})$.
The only thing to prove is that, when $U\supset\sigma_{{\mathbb{C}}^{2}}({\bf
T})$, Taylor’s functional calculus is given by the stated (canonical) formula.
In order to do that, we use an argument from the proof of Theorem III.8.1 in
[19], to make explicit the integral III(9.2) from [19] (see also [12]).
We consider the exterior algebra
$\Lambda[e_{1},e_{2},\bar{\xi_{1}},\bar{\xi_{2}},\mathcal{O}(U)\otimes\mathcal{V}_{\mathbb{C}}]=\Lambda[e_{1},e_{2},\bar{\xi_{1}},\bar{\xi_{2}}]\otimes\mathcal{O}(U)\otimes\mathcal{V}_{\mathbb{C}},$
where the indeterminates $e_{1},e_{2}$ are to be associated with the pair
${\bf T}_{\mathbb{C}}$, we put $\bar{\xi_{j}}=d\bar{z}_{j},\,j=1,2$, and
consider the operators $\delta=(z_{1}-T_{1{\mathbb{C}}})\otimes
e_{1}+(z_{2}-T_{2{\mathbb{C}}})\otimes
e_{2},\,\bar{\partial}=(\partial/\partial\bar{z_{1}})\otimes\bar{\xi_{1}}+(\partial/\partial\bar{z_{2}})\otimes\bar{\xi_{2}}$,
acting naturally on this exterior algebra, via the calculus with exterior
forms.
To simplify the computation, we omit the symbol $\otimes$, and the exterior
product will be denoted simply by juxtaposition.
We fix the exterior form $\eta=\eta_{2}=fye_{1}e_{2}$ for some
$f\in\mathcal{O}(U)$ and $y\in\mathcal{V}_{\mathbb{C}}$, which clearly satisfies
the equation $(\delta+\bar{\partial})\eta=0$, and look for a solution $\theta$
of the equation $(\delta+\bar{\partial})\theta=\eta$. We write
$\theta=\theta_{0}+\theta_{1}$, where $\theta_{0},\theta_{1}$ are of degree
$0$ and $1$ in $e_{1},e_{2}$, respectively. Then the equation
$(\delta+\bar{\partial})\theta=\eta$ can be written under the form
$\delta\theta_{1}=\eta,\,\delta\theta_{0}=-\bar{\partial}\theta_{1}$, and
$\bar{\partial}\theta_{0}=0$. Note that
$\theta_{1}=fL({\bf
z,T_{\mathbb{C}}})^{-1}[(\bar{z}_{1}-T_{1{\mathbb{C}}})ye_{2}-(\bar{z}_{2}-T_{2{\mathbb{C}}})ye_{1}]$
is clearly a solution of the equation $\delta\theta_{1}=\eta$. Further, we
have
$\bar{\partial}\theta_{1}=fL({\bf
z,T_{\mathbb{C}}})^{-2}[(z_{1}-T_{1{\mathbb{C}}})(\bar{z}_{2}-T_{2{\mathbb{C}}})y\bar{\xi}_{1}e_{1}-(z_{1}-T_{1{\mathbb{C}}})(\bar{z}_{1}-T_{1{\mathbb{C}}})y\bar{\xi}_{2}e_{1}+$
$(z_{2}-T_{2{\mathbb{C}}})(\bar{z}_{2}-T_{2{\mathbb{C}}})y\bar{\xi}_{1}e_{2}-(z_{2}-T_{2{\mathbb{C}}})(\bar{z}_{1}-T_{1{\mathbb{C}}})y\bar{\xi}_{2}e_{2}]=$
$\delta[fL({\bf
z,T_{\mathbb{C}}})^{-2}(\bar{z}_{1}-T_{1{\mathbb{C}}})y\bar{\xi}_{2}-fL({\bf
z,T_{\mathbb{C}}})^{-2}(\bar{z}_{2}-T_{2{\mathbb{C}}})y\bar{\xi}_{1}],$
so we may define
$\theta_{0}=-fL({\bf
z,T_{\mathbb{C}}})^{-2}(\bar{z}_{1}-T_{1{\mathbb{C}}})y\bar{\xi}_{2}+fL({\bf
z,T_{\mathbb{C}}})^{-2}(\bar{z}_{2}-T_{2{\mathbb{C}}})y\bar{\xi}_{1}.$
Formula III(8.5) from [19] shows that
$f({\bf T}_{\mathbb{C}})y=-\frac{1}{(2\pi
i)^{2}}\int_{U}\bar{\partial}(\phi\theta_{0})dz_{1}dz_{2}=$ $\frac{1}{(2\pi
i)^{2}}\int_{\Sigma}f({\bf z}))L({\bf
z,T_{\mathbb{C}}})^{-2}[(\bar{z}_{1}-T_{1{\mathbb{C}}})yd\bar{z}_{2}-(\bar{z}_{2}-T_{2{\mathbb{C}}})yd\bar{z}_{1}]dz_{1}dz_{2},$
for all $y\in\mathcal{V}_{\mathbb{C}}$, via Stokes’s formula, where $\phi$ is
a smooth function such that $\phi=0$ in a neighborhood of
$\sigma_{{\mathbb{C}}^{2}}({\bf T})$, $\phi=1$ on $\Sigma$ and the support of
$1-\phi$ is compact.
###### Remark 10
(1) We may extend the previous functional calculus to
$\mathcal{B(V}_{\mathbb{C}})$-valued analytic functions, setting, for such a
function $F$ and with the notation from above,
$F({\bf T}_{\mathbb{C}})=\frac{1}{(2\pi i)^{2}}\int_{\Sigma}F({\bf z})L({\bf
z,T_{\mathbb{C}}})^{-2}[(\bar{z}_{1}-T_{1{\mathbb{C}}})d\bar{z}_{2}-(\bar{z}_{2}-T_{2{\mathbb{C}}})d\bar{z}_{1}]dz_{1}dz_{2}.$
In particular, if $F({\bf z})=\sum_{j,k\geq
0}A_{jk{\mathbb{C}}}z_{1}^{j}z_{2}^{k}$, with $A_{jk}\in\mathcal{B(V)}$,
where the series is convergent in neighborhood of
$\sigma_{{\mathbb{C}}^{2}}({\bf T})$, we obtain
$F({\bf T}):=F({\bf T}_{\mathbb{C}})|\mathcal{V}=\sum_{j,k\geq
0}A_{jk}T_{1}^{j}T_{2}^{k}\in\mathcal{B(V)}.$
(2) The connection of the spectral theory of pairs with the algebra of
quaternions is even stronger in the case of complex Hilbert spaces.
Specifically, if $\mathcal{H}$ is a complex Hilbert space and ${\bf
V}=(V_{1},V_{2})$ is a commuting pair of bounded linear operators on
$\mathcal{H}$, a point ${\bf z}=(z_{1},z_{2})\in{\mathbb{C}}^{2}$ is in the
joint resolvent of ${\bf V}$ if and only if the operator $Q({\bf V})-Q({\bf
z})$ is invertible in $\mathcal{H}^{2}$, where
$Q({\bf V})=\left(\begin{array}[]{cc}V_{1}&V_{2}\\\
-V_{2}^{*}&V_{1}^{*}\end{array}\right).$
(see [17] for details). In this case, there is also a Martinelli type formula
which can be used to construct the associated analytic functional calculus
(see [18],[19]). An approach to such a construction in Banach spaces, by using
a so-called splitting joint spectrum, can be found in [14].
## References
* [1] A. G. Baskakov and A. S. Zagorskii: Spectral Theory of Linear Relations on Real Banach Spaces, Mathematical Notes (Russian: Matematicheskie Zametki), 2007, Vol. 81, No. 1, pp. 15-27.
* [2] S. Bochner: Analytic and meromorphic continuation by means of Green’s formula, Ann. of Math. (2) 44 (1943), no. 4, 652-673.
* [3] J. L. Brenner: Matrices of quaternions, Pacific J. Math. 1 (1951), 329-335.
* [4] F. Colombo, J. Gantner, D. P. Kimsey: Spectral Theory on the S-Spectrum for Quaternionic Operators, Birkhäuser, 2018.
* [5] F. Colombo, I. Sabadini and D. C. Struppa: Noncommutative Functional Calculus, Theory and Applications of Slice Hyperholomorphic Functions: Progress in Mathematics, Vol. 28 Birkhäuser/Springer Basel AG, Basel, 2011.
* [6] N. Dunford and J. T. Schwartz: Linear Operators, Part I: General Theory, Interscience Publishers, New York, London, 1958.
* [7] G. Gentili and D. C. Struppa: A new theory of regular functions of a quaternionic variable, Advances in Mathematics 216 (2007) 279-301.
* [8] R. Ghiloni , V. Moretti and A. Perotti: Continuous slice functional calculus in quaternionic Hilbert spaces, Rev. Math. Phys. 25 (2013), no. 4, 1350006, 83 p.
* [9] L. Ingelstam: Real Banach algebras, Ark. Mat. 5 (1964), 239-270.
* [10] I. Kaplansky: Normed algebras, Duke Math. J. 16 (1949), 399-418.
* [11] S. H. Kulkarni: Representations of a Class of Real $B^{*}$-Algebras as Algebras of Quaternion-Valued Functions, Proceedings of the American Mathematical Society, Vol. 116, No. 1 (1992), 61-66.
* [12] R. Levi: Notes on the Taylor joint spectrum of commuting operators. Spectral theory (Warsaw, 1977), 321–332, Banach Center Publ., 8, PWN, Warsaw, 1982.
* [13] E. Martinelli: Alcuni teoremi integrali per le funzioni analitiche di più variabili complesse, Accad. Ital. Mem. Cl. Sci. fis. mat. nat. 9 (1938), 269-283.
* [14] V. Müller and V. Kordula: Vasilescu-Martinelli formula for operators in Banach spaces, Studia Math. 113 (1995), no. 2, 127-139.
* [15] J. L. Taylor: A joint spectrum for several commuting operators. J. Functional Anal. 6 1970 172-191.
* [16] J. L. Taylor: The analytic functional calculus for several commuting operators, Acta Math. 125 (1970), 1-38.
* [17] F.-H. Vasilescu: On pairs of commuting operators, Studia Math. 62 (1978), 203-207.
* [18] F.-H. Vasilescu: A Martinelli type formula for the analytic functional calculus, Rev. Roumaine Math. Pures Appl. 23 (1978), no. 10, 1587-1605.
* [19] F.-H. Vasilescu: Analytic functional calculus and spectral decompositions, D. Reidel Publishing Co., Dordrecht and Editura Academiei R. S. R., Bucharest, 1982.
* [20] F.-H. Vasilescu: Analytic Functional Calculus in Quaternionic Framework, http://arxiv.org/abs/1902.03850
* [21] F.-H. Vasilescu: Quaternionic Regularity via Analytic Functional Calculus, Integral Equations and Operator Theory, DOI: 10.1007/s00020-020-2574-7
# Transient chaos under coordinate transformations in relativistic systems
D. S. Fernández Á. G. López J. M. Seoane M. A. F. Sanjuán Nonlinear
Dynamics, Chaos and Complex Systems Group, Departamento de Física, Universidad
Rey Juan Carlos, Tulipán s/n, 28933 Móstoles, Madrid, Spain
###### Abstract
We use the Hénon-Heiles system as a paradigmatic model for chaotic scattering
to study the Lorentz factor effects on its transient chaotic dynamics. In
particular, we focus on how time dilation occurs within the scattering region
by measuring the time in a clock attached to the particle. We observe that the
several events of time dilation that the particle undergoes exhibit
sensitivity to initial conditions. However, the structure of the singularities
appearing in the escape time function remains invariant under coordinate
transformations. This occurs because the singularities are closely related to
the chaotic saddle. We then demonstrate using a Cantor-like set approach that
the fractal dimension of the escape time function is relativistic invariant.
In order to verify this result, we compute by means of the uncertainty
dimension algorithm the fractal dimensions of the escape time functions as
measured with inertial and comoving with the particle frames. We conclude
that, from a mathematical point of view, chaotic transient phenomena are
equally predictable in any reference frame and that transient chaos is
coordinate invariant.
###### pacs:
05.45.Ac,05.45.Df,05.45.Pq
## I Introduction
Chaotic scattering in open Hamiltonian systems is a fundamental part of the
theoretical study of dynamical systems. There are many applications such as
the interaction between the solar wind and the magnetosphere tail seoane2013 ,
the simulation in several dimensions of the molecular dynamics lin2013 , the
modeling of chaotic advection of particles in fluid mechanics daitche2014 , or
the analysis of the escaping mechanism from a star cluster or a galaxy
zotos2017 ; navarro2019 , to name a few. A scattering phenomenon is a process
in which a particle travels freely from a remote region and encounters an
obstacle, often described in terms of a potential, which affects its
evolution. Finally, the particle leaves the interaction region and continues
its journey freely. This interaction is typically nonlinear, possibly leading
the particle to perform transient chaotic dynamics, i.e., chaotic dynamics
with a finite lifetime lai2010 ; grebogi1983 . Scattering processes are
commonly studied by means of the scattering functions, which relate the
particle states at the beginning of the evolution to those after the interaction with the potential has taken place. Thus, nonlinear interactions can make
these functions exhibit self-similar arrangements of singularities, which
hinder the system predictability aguirre2009 . Transient chaos is a
manifestation of the presence in phase space of a chaotic set called non-
attracting chaotic set, also called chaotic saddle ott1993 . This phenomenon
can be found in a wide variety of situations tel2015 , as for example the
dynamics of decision making, the doubly transient chaos of undriven autonomous
mechanical systems or even in the sedimentation of volcanic ash.
There have been numerous efforts to characterize chaos in relativistic systems
in an observer-independent manner hobill1994 . It has been rigorously
demonstrated that the sign of the Lyapunov exponents is invariant under
coordinate transformations that satisfy four minimal conditions motter2003 .
More specifically, such conditions consider that a valid coordinate
transformation has to leave the system autonomous, its phase space bounded,
the invariant measure normalizable and the domain of the new time parameter
infinite motter2003 . As a consequence, chaos is a property of relativistic
systems independent of the choice of the coordinate system in which they are
described. In other words, homoclinic and heteroclinic tangles cannot be
untangled by means of coordinate transformations. We shall utilize the Lorentz
transformations along this paper, which satisfy this set of conditions
motter2009 . Although we utilize a Hamiltonian system in its open regime, from
the point of view of Lyapunov exponents the phase space can be considered
bounded because of the presence of the chaotic saddle. This set is located in
a finite region of the system’s phase space and contains all the non-escaping
orbits in the hyperbolic regime. Hence, the Lyapunov exponents are well-
defined because these trajectories stay in the saddle forever. On the other
hand, concerning the computation of the escape time function, we shall only
consider along this work the finite part of the phase space where the escaping
orbits remain bounded, and similarly from the point of view of the finite-time
Lyapunov exponents the phase space can be considered bounded as well
vallejo2003 .
Despite the fact that the sign of the Lyapunov exponents is invariant, the
precise values of these exponents, which indicate “how chaotic” a dynamical
system is, are noninvariant. Therefore, this lack of invariance leaves some
room to explore how coordinate transformations affect the unpredictability in
dynamical systems with transient chaos. In the present work we analyze the
structure of singularities of the scattering functions under a valid
coordinate transformation. In particular, we compute the fractal dimension of
the escape time function as measured in an inertial reference frame and
another non-inertial reference frame comoving with the particle, respectively.
We then characterize the system unpredictability by calculating this fractal
dimension, since it enables to infer the dimension of the chaotic saddle
aguirre2001 . Indeed, this purely geometrical method has been proposed as an
independent-observer procedure to determine whether the system behaves
chaotically motter2001 .
Relevant works have been devoted to analyze the relationship between
relativity and chaos in recent decades barrow1982 ; chernikov1989 ; ni2012 .
More recently, the Lorentz factor effects on the dynamical properties of the
system have also been studied in relativistic chaotic scattering bernal2017 ;
bernal2018 . In this paper, we focus on how changes of the reference frame
affect typical phenomena of chaotic scattering. We describe the model in Sec.
II, which consists of a relativistic version of the Hénon-Heiles system. Two
well-known scattering functions are explored in Sec. III, such as the exit
through which the particle escapes and its escape time. In Sec. IV, we
demonstrate the fractal dimension invariance under a coordinate transformation
by using a Cantor-like set approach. Subsequently, we quantify the
unpredictability of the escape times and analyze the effect of such a
reference frame modification. We conclude with a discussion of the main
results and findings of the present work in Sec. V.
## II Model description
Figure 1: (a) The three-dimensional representation of the Hénon-Heiles
potential $V(x,y)=\frac{1}{2}(x^{2}+y^{2})+x^{2}y-\frac{1}{3}y^{3}$. (b) The
isopotential curves in the physical space show that the Hénon-Heiles system is
open and has triangular symmetry. If the energy of the particle is higher than
a threshold value, related to the potential saddle points, there exist
unbounded orbits. Following these trajectories the particle leaves the
scattering region through any of the three exits.
The Hénon-Heiles system was proposed in 1964 to study the existence of a third
integral of motion in galactic models with axial symmetry henon1964 . We
consider a single particle whose total mechanical energy can be denoted as
$E_{N}$ in the Newtonian approximation. This energy is conserved along the
trajectory described by the particle, which is launched from the interior of
the potential well, within a finite region of the phase space called the
scattering region. We have utilized a dimensionless form of the Hénon-Heiles
system, so that the potential is written as
$V(x,y)=\frac{1}{2}(x^{2}+y^{2})+x^{2}y-\frac{1}{3}y^{3},$ (1)
where $x$ and $y$ are the spatial coordinates. When the energy is above a
threshold value, the potential well exhibits three exits due to its triangular
symmetry in the physical space, i.e., the plane $(x,y)$, as visualized in Fig.
1. We call Exit 1 the exit located at the top $(y\to+\infty)$, Exit 2 the one
located downwards to the left $(x\to-\infty,y\to-\infty)$ and Exit 3 the one
at the right $(x\to+\infty,y\to-\infty)$. One of the characteristics of open
Hamiltonian systems with escapes is the existence of highly unstable periodic
orbits known as Lyapunov orbits contopoulos1990 , which are placed near the
saddle points. In fact, when a trajectory crosses through a Lyapunov orbit, it
escapes to infinity and never returns back to the scattering region.
Furthermore, we recall that the energy of the particle determines also the
dynamical regime. We can distinguish two open regimes in which escapes are
allowed. On the one hand, in the nonhyperbolic regime the KAM tori coexist
with the chaotic saddle and the phase space exhibits regions where dynamics is
regular and also chaotic sideris2006 , whereas the chaotic saddle rules the
dynamics in the hyperbolic regime, making it completely chaotic.
When the speed of the particle is comparable to the speed of light, the
relativistic effects have to be taken into account ohanian2001 . In the
present work we consider a particle which interacts in the limit of weak
external fields, and therefore we deal with a special relativistic version of
the Hénon-Heiles system, whose dynamics is governed by the conservative
Hamiltonian lan2011 ; chanda2018 ; kovacs2011 ; calura1997
$H=c\sqrt{c^{2}+p^{2}+q^{2}}+V(x,y),$ (2)
where $c$ is the value of the speed of light, and $p$ and $q$ are the momentum
coordinates. On the other hand, the Lorentz factor is defined as
$\gamma=\frac{1}{\sqrt{1-\frac{\textbf{v}^{2}}{c^{2}}}}=\frac{1}{\sqrt{1-\beta^{2}}},$
(3)
where v is the velocity vector of the particle and $\beta=|\textbf{v}|/c$ the
ratio between the speed of the particle and the speed of light. The Lorentz
factor $\gamma$ and $\beta$ are two equivalent ways to express how large is
the speed of the particle compared to the speed of light. These two factors
vary in the ranges $\gamma\in[1,+\infty)$ and $\beta\in[0,1)$, respectively.
For convenience, we shall use $\beta$ as a parameter along this work.
Hamilton’s canonical equations can be derived from Eq. (2), yielding the
equations of motion
$\dot{x}=\frac{\partial H}{\partial p}=\frac{p}{\gamma},\qquad\dot{p}=-\frac{\partial H}{\partial x}=-x-2xy,$
$\dot{y}=\frac{\partial H}{\partial q}=\frac{q}{\gamma},\qquad\dot{q}=-\frac{\partial H}{\partial y}=y^{2}-x^{2}-y,$ (4)
where the Lorentz factor can be alternatively written in the momentum-
dependent form as $\gamma=\frac{1}{c}\sqrt{c^{2}+p^{2}+q^{2}}$. Although the
complete phase space is four-dimensional, the conservative Hamiltonian
constrains the dynamics to a three-dimensional manifold of the phase space,
known as the energy shell.
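As an illustration, a minimal Python sketch of the right-hand side of Eqs. (4) could look as follows; the helper names are ours, not the authors' code, and the variables are dimensionless as in the text.

```python
import numpy as np

def lorentz_factor(p, q, c):
    # Momentum-dependent form of the Lorentz factor, gamma = sqrt(c^2 + p^2 + q^2) / c.
    return np.sqrt(c**2 + p**2 + q**2) / c

def henon_heiles_relativistic(state, c):
    # Right-hand side of Hamilton's equations (4) for the state vector (x, y, p, q).
    x, y, p, q = state
    gamma = lorentz_factor(p, q, c)
    return np.array([p / gamma,
                     q / gamma,
                     -x - 2.0 * x * y,
                     y**2 - x**2 - y])
```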
Some recent works aim at isolating the effects of the variation of the Lorentz
factor $\gamma$ (or $\beta$ equivalently) from the remaining variables of the
system bernal2017 ; bernal2018 . In order to accomplish this, they modify the
initial value of $\beta$ and use it as the only parameter of the dynamical
system. Since $\beta$ is a quantity that depends on $|\textbf{v}|$ and $c$,
they choose to vary the numerical value of $c$. Needless to say, the value of
the speed of light $c$ remains constant during the particle trajectory. The
fundamental reason for deciding to increase the kinetic energy of the system
by reducing the numerical value of the speed of light is simply as follows. If
we keep the Hénon-Heiles potential constant and increase the speed of the
particle to values close to the speed of light, the potential will be in a
much lower energy regime compared to the kinetic energy of the particle.
Therefore, the potential becomes negligible and the interaction between them
becomes irrelevant. Consequently, each time we select a value of the speed of
light we are scaling the system, and hence the ratio of the kinetic energy and
the potential as well. The sequence of potential wells with different values
of $\beta$ represents potential wells with the Hénon-Heiles morphology, but at
different scales in which the interaction of a relativistic particle is not
trivial. In this way, the effects of the Lorentz factor on the dynamics are
isolated from the other system variables, because the Lorentz factor is the
only parameter that differentiates all these scaled systems.
We then consider the same initial value of the particle speed
$|\textbf{v}_{0}|$ in every simulation with a different value of $\beta$,
launching the particle from the potential minimum, which is located at
$(x_{0},y_{0})=(0,0)$ and where the potential energy is null. We have
arbitrarily chosen $|\textbf{v}_{0}|\approx 0.5831$ (as in bernal2017 ;
bernal2018 ), which corresponds to the open nonhyperbolic regime with energy
$E_{N}=0.17$, close to the escape energy in the Newtonian approximation. Thus,
we analyze how the relativistic parameter $\beta$, as its value increases,
affects the dynamical properties starting from the nonhyperbolic regime. The
numerical value of $c$ varies, as shown in Fig. 2, and for instance if the
simulation is carried out for a small $\beta$, where $|\textbf{v}_{0}|\ll c$,
the initial speed of the particle only represents a very low percentage of the
speed of light. In this case, we recover the Newtonian approximation and the
classical version of the Hénon-Heiles system. On the contrary, if the
simulation takes place with a value of $\beta$ near one, the speed of the
particle represents a high percentage of the speed of light and the
relativistic effects on the dynamics become more intense.
Numerical computations reveal that the KAM tori are mostly destroyed at
$\beta\approx 0.4$, and hence the dynamics is hyperbolic for higher values of
$\beta$ bernal2018 . If some small tori survive, they certainly do not rule
the system's overall dynamics. As we focus on the hyperbolic regime, the
simulations are run for values of $\beta\in[0.5,0.99]$ and by means of a fixed
step fourth-order Runge-Kutta method press1992 . We recall that the initial
values of the momentum $(p_{0},q_{0})$ depend on the chosen initial value of
$\beta$, and therefore this computational technique (to vary the value of
$\beta$ fixing $|\textbf{v}_{0}|$) is an ideal method to increase the particle
kinetic energy to the relativistic regime. For example, a particle trapped in
the KAM tori can escape if the initial value of $\beta$ is high enough, as
shown in Fig. 2.
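A hedged sketch of this shooting setup (again our own helper, with assumed names): since $\beta=|\textbf{v}_{0}|/c$, fixing $|\textbf{v}_{0}|\approx 0.5831$ and choosing $\beta$ sets the numerical value of $c$, and $\dot{x}=p/\gamma$ in Eqs. (4) implies dimensionless momenta $p=\gamma v$ for unit mass.

```python
import numpy as np

V0 = 0.5831  # fixed initial speed, corresponding to E_N = 0.17 in the Newtonian limit

def initial_conditions(beta, theta0, v0=V0):
    # Choosing beta with |v0| fixed sets the numerical value of the speed of light.
    c = v0 / beta
    gamma0 = 1.0 / np.sqrt(1.0 - beta**2)
    # From xdot = p/gamma in Eqs. (4), the momenta are p = gamma * v (unit mass).
    p0 = gamma0 * v0 * np.cos(theta0)
    q0 = gamma0 * v0 * np.sin(theta0)
    # Trajectories are launched from the potential minimum (x0, y0) = (0, 0).
    return np.array([0.0, 0.0, p0, q0]), c
```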
Figure 2: The evolution of a particle launched within the scattering region
from the same initial condition for different values of $\beta$. (a) For a
very low $\beta$ (Newtonian approximation), the particle is trapped in the KAM
tori and describes a bounded trajectory. (b) The value of $\beta$ is large
enough to destroy the KAM tori and the particle leaves the scattering region
following a trajectory typical of transient chaos. (c) Finally, a larger value
of $\beta$ than in (b) makes the particle escape faster.
## III Escape times in inertial and non-inertial frames
The scattering functions enable us to represent the relation between input and
output dynamical states of the particle, i.e., how the interaction of the
particle with the potential takes place. The potential of Hénon-Heiles leads
the particle to describe chaotic trajectories before converging to a specific
exit, which makes the scattering functions exhibit a fractal structure. In
order to verify the sensitivity of the system to exits and escape times, we
launch particles from the potential minimum slightly varying the shooting
angle $\theta$ that is formed by the initial velocity vector and the positive
$x$-axis, as shown in Fig. 3(a).
The maximum value of the kinetic energy is reached at the potential minimum,
as the system is conservative. We define the value of the Lorentz factor
associated with this maximum kinetic energy as the critical Lorentz factor
$\gamma_{c}(\beta)=\frac{1}{\sqrt{1-\beta^{2}}}.$ (5)
We emphasize that the initial Lorentz factor of every particle is the critical
Lorentz factor, since every trajectory is initialized from the potential
minimum in this work. We shall monitor the Lorentz factor of the particle
along its trajectory and use the critical Lorentz factor as the criterion of
whether the particle has escaped or not. This escape criterion is based on the
fact that the value of the kinetic energy remains bounded while the particle
evolves chaotically within the potential well, bouncing back and forth against
the potential barriers before escaping. The Lorentz factor value then varies
between the unity and the critical value inside the scattering region, i.e.,
$\gamma(t)\in[1,\gamma_{c}]$. Eventually, the particle leaves the scattering
region and the value of its Lorentz factor breaks out towards infinity,
because its kinetic energy does not remain bounded anymore. In order to
prevent this asymptotic behavior of the Lorentz factor, it is convenient to
set that the escape happens at the time $t_{e}$ when
$\gamma(t_{e})>\gamma_{c}$. In this manner, we define the scattering region as
the part of the physical space where the dynamics is bounded. This escape
criterion is computationally affordable and useful to implement in any
Hamiltonian system without knowing specific information about the exits. In
addition, it includes all the escapes that take place when the Lyapunov orbit
criterion is considered.
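This escape criterion translates directly into a stopping condition for the integrator. A minimal sketch follows (fixed-step fourth-order Runge-Kutta, as in press1992 ; it reuses the hypothetical helpers from the previous snippets, and the step $h=0.005$ matches the value used later in Fig. 5):

```python
import numpy as np

def rk4_step(state, h, c):
    # One fixed-step fourth-order Runge-Kutta step for Eqs. (4).
    k1 = henon_heiles_relativistic(state, c)
    k2 = henon_heiles_relativistic(state + 0.5 * h * k1, c)
    k3 = henon_heiles_relativistic(state + 0.5 * h * k2, c)
    k4 = henon_heiles_relativistic(state + h * k3, c)
    return state + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def escape_time(beta, theta0, h=0.005, t_max=1.0e4):
    # Integrate until gamma(t_e) > gamma_c, the escape criterion of Eq. (5).
    state, c = initial_conditions(beta, theta0)
    gamma_c = 1.0 / np.sqrt(1.0 - beta**2)
    times, gammas = [0.0], [lorentz_factor(state[2], state[3], c)]
    t = 0.0
    while t < t_max:
        state = rk4_step(state, h, c)
        t += h
        g = lorentz_factor(state[2], state[3], c)
        times.append(t)
        gammas.append(g)
        if g > gamma_c:
            return t, np.array(times), np.array(gammas)
    return np.inf, np.array(times), np.array(gammas)
```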
Figure 3: (a) Each of the exits is identified with a different color, such
that Exit 1 (red), Exit 2 (green) and finally Exit 3 (blue). In order to avoid
redundant results due to the triangular symmetry of the well, we only let the
particle evolve from the angular region
$\theta_{0}\in\left[\pi/2,5\pi/6\right]$ (black dashed lines). (b) The
scattering function of the exits $(2000\times 2000)$ given the parameter map
$(\beta\in[0.5,0.99],\theta_{0}\in[\pi/2,5\pi/6])$ in the hyperbolic regime.
A particle launched with $\theta=\pi/2$ escapes directly towards Exit 1 for every value of $\beta$ as shown in Fig. 3(b), whereas if it is launched with $\theta=5\pi/6$ the particle bounces against the potential barrier placed between Exit 1 and Exit 2 and escapes through Exit 3. The whole structure
of exits in between is apparently fractal. Nonetheless, the exit function
becomes smoother when the value of $\beta$ increases, but it is never
completely smooth. On the other hand, we recall that the chaotic saddle is an
observer-independent set of points formed by the intersection of the stable
and unstable manifolds. Concretely, the stable manifold of an open Hamiltonian
system is defined as the boundary between the exit basins ott1993 . If a
particle starts from a point arbitrarily close to the stable manifold it will
spend an infinite time in converging to an exit, i.e., it never escapes. The
unstable manifold is the set along which particles lying infinitesimally close
to the chaotic saddle will eventually leave the scattering region in the
course of time tel2015 .
The escape time can be easily defined as the time the particle spends evolving
inside the scattering region before escaping to infinity. In nonrelativistic
systems, the particular clock in which the time is measured is irrelevant
since time is absolute. However, here we consider two time quantities: the
time $t$ that is measured by an inertial reference frame at rest and the
proper time $\tau$ as measured by a non-inertial reference frame comoving with
the particle. This proper time is simply the time measured by a clock attached
to the particle.
As is well known, a uniformly moving clock runs slower by a factor
$\sqrt{1-\beta^{2}}$ in comparison to another identically constructed and
synchronized clock at rest in an inertial frame. Therefore, we assume that at
any instant of time the clock of the accelerating particle advances at the
same rate as an inertial clock that momentarily had the same velocity
barton1999 . In this manner, given an infinitesimal time interval $dt$, the
particle clock will measure a time interval
$d\tau=\frac{dt}{\gamma(t)},$ (6)
where $\gamma(t)$ is the particle Lorentz factor at the instant of time $t$.
Since the Lorentz factor is greater than the unity, the proper time interval
always obeys that $d\tau\leq dt$, which is just the mathematical statement of
the twin paradox. When the particle velocities are very close to the speed of
light, the time dilation phenomenon takes place so that the time of the
particle clock runs more slowly in comparison to clocks at rest in the
potential. In the context of special relativity, it is important to bear in
mind that it is assumed that the potential does not affect the clocks rate. In
other words, all the clocks placed at rest in any point of the potential are
ticking at the same rate along this work.
Without loss of generality, Eq. (6) can be expressed as an integral in the form
$\tau_{e}=\int_{0}^{t_{e}}\frac{dt}{\gamma(t)},$ (7)
where the final time of the integration interval is the escape time in the
inertial frame. We shall solve this integral using the Simpson’s rule
jeffreys1988 . Since each evolution of the Lorentz factor is unique because
each particle describes a distinct chaotic trajectory, every particle clock
measures a different proper time at any instant of time $t$. Nonetheless, as
the dynamics is bounded in the same energetic conditions given a value of
$\beta$, the Lorentz factor of all trajectories is similar on average at any
instant of time $t$. For this reason, we assume that there exists an average
value of the Lorentz factor along the particle trajectory, and estimate it as
the arithmetic mean between the maximum and minimum values of the bounded
Lorentz factor inside the scattering region, i.e.,
$\bar{\gamma}(\beta)=\frac{1+\gamma_{c}}{2}=\frac{1+\sqrt{1-\beta^{2}}}{2\sqrt{1-\beta^{2}}}.$
(8)
Applying this definition to Eq. (7), we can define an average proper escape time $\bar{\tau}_{e}\equiv t_{e}/\bar{\gamma}$. This value should only be regarded as an approximation, which shall prove very useful to interpret the numerical results obtained ahead. Accordingly, the difference between this average proper escape time and the time $t_{e}$ is also approximately linear on average. In this manner, we can also define the
magnitude
$\delta\bar{t}_{e}\equiv
t_{e}-\bar{\tau}_{e}=\frac{1-\sqrt{1-\beta^{2}}}{1+\sqrt{1-\beta^{2}}}t_{e}.$
(9)
We emphasize that this value is again just an approximation representing the
average behavior of the system, which disregards the fluctuations of the
Lorentz factor. It reproduces qualitatively the behavior when the dynamics is
bounded in the well, as shown in Fig. 4(a).
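As a sketch of how Eq. (7) could be evaluated numerically from a sampled trajectory (assuming SciPy is available; `times` and `gammas` are the arrays returned by the integration sketch above):

```python
import numpy as np
from scipy.integrate import simpson

def proper_escape_time(times, gammas):
    # tau_e = int_0^{t_e} dt / gamma(t), Eq. (7), evaluated with Simpson's rule.
    return simpson(1.0 / np.asarray(gammas), x=np.asarray(times))

def average_gamma(beta):
    # Arithmetic mean between 1 and gamma_c, Eq. (8).
    return 0.5 * (1.0 + 1.0 / np.sqrt(1.0 - beta**2))
```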
Figure 4: (a) The Lorentz factor evolution $\gamma(t)$ of three different
trajectories: a fast escape (yellow) and two typical transient chaotic
trajectories (red and blue). The dashed guideline represents the Lorentz
factor value of $\bar{\gamma}$ (black), corresponding to $\beta=0.75$. The
time differences $\delta t(t)$ along these trajectories are also shown. (b) The scattering function of escape times $t_{e}$ in logarithmic scale given the parameter map $(\beta\in[0.5,0.99],\theta_{0}\in[\pi/2,5\pi/6])$. The two black dashed lines correspond to the subfigures (c) and (d), which show the
scattering function of escape time $t_{e}(\theta_{0})$ (blue) and
$\tau_{e}(\theta_{0})$ (red) for $\beta=0.5$ and $\beta=0.8$, respectively.
(e, f) The time difference function $\delta t_{e}(\theta_{0})$ (black) for the
same values of $\beta$.
The escape time function is similar to the exit function, as shown in Fig.
4(b); the longest escape times are located close to the boundary of the exit regions, i.e., the aforementioned stable manifold, because these trajectories spend long transient times before escaping. In this manner, the structure of singularities is again associated with the stable manifold, just as for the exit function. This is evidence that the fractality of the escape time
function must be an observer-independent feature, since the exit through which
the particle escapes does not depend on the considered clock. Indeed, we
observe that the escape proper time function exhibits a similar structure of
singularities because of the approximated linear relation described by
$\bar{\tau}_{e}$ (see Figs. 4(c) and 4(d)). Despite being almost identical
structures, the time dilation phenomenon always makes
$\tau_{e}(\theta_{0})<t_{e}(\theta_{0})$.
Importantly, the time difference function $\delta t_{e}(\theta_{0})$ also
preserves the fractal structure as shown in Figs. 4(e) and 4(f). This occurs
because sensitivity to initial conditions is translated into sensitivity to
time dilation phenomena. The longer the time the particle spends in the well,
the more trips it makes from the center to the potential barriers and back. If we
think of each of these travels as an example of a twin paradox journey, we get
an increasing time dilation for particles that spend more time in the well.
Since these times are sensitive to modifications in the initial conditions, so
are time dilation effects. We could then introduce what might be called the
triplet paradox. In this case an additional third sibling leaves the planet
and comes back to the starting point having a different age than their two
other siblings, because of the sensitivity to initial conditions. This
phenomenon in particular illustrates how chaotic dynamics affects typical
relativistic phenomena.
## IV Invariant fractal dimension and persistence of transient chaos
The chaotic saddle and the stable manifold are self-similar fractal sets when
the underlying dynamics is hyperbolic ott1993 . This fact is reflected in the
peaks structure of the escape time functions, which is present at any scale of
initial conditions. In this sense, the escape time functions share with the
Cantor set some properties with regard to their singularities, and therefore
to their fractal dimensions. It is possible to study the fractal dimensions of
the escape time functions in terms of a Cantor-like set lau1991 ; seoane2007 .
In this manner, we can build a Cantor-like set to schematically represent the
escape of particles launched from different initial conditions $\theta_{0}$.
We consider that a certain fraction $\eta_{t}$ of particles escapes from the
scattering region when a minimal characteristic time $t_{0}$ has elapsed. If
these particles were launched from initial conditions centered in the original
interval, two identical segments are created; the trajectories that began in
those segments do not escape before a time $t_{0}$. Similarly, the same fraction of particles $\eta_{t}$ from the two surviving segments escapes by a
time $2t_{0}$. If we continue this iterative procedure for $3t_{0}$, $4t_{0}$
and so on, we obtain a Cantor-like set of Lebesgue measure zero with
associated fractal dimension $d_{t}$ that can be computed as
$d_{t}=\frac{\ln 2}{\ln 2-\ln\left(1-\eta_{t}\right)}.$ (10)
Similarly, if the escape times are measured by a non-inertial reference frame
comoving with a particle, a fraction of particles $\eta_{\tau}$ escapes every
time $\tau_{0}$, and therefore the associated fractal dimension can be defined
as $d_{\tau}$.
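For concreteness, Eq. (10) is straightforward to evaluate: if half of the surviving particles escape in each characteristic time, $\eta_{t}=1/2$ and $d_{t}=\ln 2/\ln 4=1/2$. A one-line sketch (our illustrative value of $\eta$):

```python
import numpy as np

def cantor_dimension(eta):
    # Fractal dimension of the Cantor-like set, Eq. (10); by Eq. (13) the same
    # value is obtained in the comoving frame, since eta_tau = eta_t.
    return np.log(2.0) / (np.log(2.0) - np.log(1.0 - eta))

print(cantor_dimension(0.5))  # 0.5
```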
The behavior is governed by Poisson statistics in the hyperbolic regime.
Therefore, the average number of particles that escape follows an exponential
decay law. More specifically, the number of particles remaining in the
scattering region according to an inertial reference frame at rest in the
potential is given by
$N(t)=N_{0}e^{-\sigma t}.$ (11)
We note that this decay is homogeneous in an inertial reference frame, whereas
according to an observer describing the decay in a non-inertial reference
frame comoving with a particle, we get the decay law
$\tilde{N}(\tau)\equiv
N(t(\tau))=N_{0}e^{-\sigma\int_{0}^{\tau}\gamma(t(\tau^{\prime}))d\tau^{\prime}},$
(12)
where we have substituted the equality
$t=\int_{0}^{\tau}\gamma(t(\tau^{\prime}))d\tau^{\prime}$ from solving the Eq.
(6). In other words, for an accelerated observer the decay is still
Poissonian, but inhomogeneous. Nevertheless, if we disregard the fluctuations
in the Lorentz factor, an homogeneous statistics can be nicely approximated
once again, by defining the average constant rate
$\bar{\sigma}_{\tau}\equiv\sigma\bar{\gamma}$. We recall that $\gamma(t)$ is
the Lorentz factor along the trajectory of a certain particle, and therefore
$\tilde{N}(\tau)$ here describes the number of particles remaining in the
scattering region according to the accelerated frame co-moving with such a
particle. This particle must be sufficiently close to the chaotic saddle in
order to remain trapped in the well a sufficiently long time so as to render
useful statistics, by counting a high number of escaping test bodies.
Now we calculate, without loss of generality, the fraction of particles that
escape during an iteration according to this reference frame as
$\eta_{\tau}=\frac{\tilde{N}(\tau_{0})-\tilde{N}(\tau_{0}^{\prime})}{\tilde{N}(\tau_{0})}=\frac{N(t_{0})-N(2t_{0})}{N(t_{0})}=\eta_{t},$
(13)
where $\tau_{0}^{\prime}$ is the proper time observed by the accelerated body
when the clocks at rest in the potential mark $2t_{0}$. In this manner, we
obtain that the fraction of escaping particles is invariant under reference
frame transformations, because there exists an unequivocal relation between
the times $t$ and $\tau$ given by $\gamma(t)$. From this result we derive that
the fractal dimension of the Cantor-like set associated with the escape times
function is invariant under coordinate transformations, $d_{t}=d_{\tau}$. This
equality holds for every particle clock evolving in the well, as long as it
stays long enough. On the other hand, this result is in consonance with the
Cantor-like set nature, because its fractal dimension does not depend on how
much time an iteration lasts.
In order to compute the fractal dimensions associated with these scattering
functions, we make use of the uncertainty dimension algorithm lau1991 ;
grebogi1983_2 and the shooting method previously described. We launch a
particle from the potential minimum with a random shooting angle $\theta_{0}$
in the interval $[\pi/2,5\pi/6]$ and measure the escape times
$t_{e}(\theta_{0})$ and $\tau_{e}(\theta_{0})$, and the exit $e(\theta_{0})$
through which it escapes. Then, we repeat the same procedure from a
slightly different shooting angle $\theta_{0}+\epsilon$, where $\epsilon$ can
be considered a small perturbation, and calculate the quantities
$t_{e}(\theta_{0}+\epsilon)$, $\tau_{e}(\theta_{0}+\epsilon)$ and
$e(\theta_{0}+\epsilon)$. We then say that an initial condition $\theta_{0}$
is uncertain in measuring, e.g., the escape time $t_{e}$, if the difference
between the escape times, $|t_{e}(\theta_{0})-t_{e}(\theta_{0}+\epsilon)|$, is
higher than a given time. This time is usually associated with the integration
step $h$ of the numerical method, which is the resolution of an inertial
clock. Conveniently, we set this criterion of uncertain initial conditions as
$3h/2$, i.e., the half between the step and two times the step of the
integrator, for any clock. The reason for it is that the time differences
according to a particle clock are the result of a computation by means of Eq.
(7). Therefore, an initial condition $\theta_{0}$ is uncertain in measuring
the escape time $t_{e}$ if
$\Delta
t_{e}(\theta_{0})=|t_{e}(\theta_{0})-t_{e}(\theta_{0}+\epsilon)|>3h/2.$ (14)
Similarly, an initial condition $\theta_{0}$ is uncertain in measuring the
escape time $\tau_{e}$ if
$\Delta\tau_{e}(\theta_{0})=|\tau_{e}(\theta_{0})-\tau_{e}(\theta_{0}+\epsilon)|>3h/2.$
(15)
Finally, an initial condition is uncertain with respect to the exit through
which the particle escapes if $e(\theta_{0})\neq e(\theta_{0}+\epsilon)$.
We generally expect that the time differences hold
$\Delta\tau_{e}(\theta_{0})<\Delta t_{e}(\theta_{0})$, since we have defined
previously that $\bar{\tau}_{e}\equiv t_{e}/\bar{\gamma}$. Thus, given the
same criterion $3h/2$ in both clocks, there will be some uncertain initial
conditions $\theta_{0}$ in the inertial clock $\left(\Delta
t_{e}(\theta_{0})>3h/2\right)$ that become certain in the particle clock
$\left(\Delta\tau_{e}(\theta_{0})<3h/2\right)$. We show a scheme in Fig. 5(a)
to clarify this physical effect on the escape-time unpredictability. It is
easy to see that this effect is caused by the limited resolution of the
hypothetical clocks, and becomes more intense for high values of $\beta$
because it is proportional to the Lorentz factor.
Figure 5: (a) A scheme to visualize the physical effect of a reference frame
modification on the unpredictability of the escape times, where $h=0.005$. (b)
Fractal dimensions according to exits $d_{e}$ (green), escape time $d_{t}$
(blue) and escape proper time $d_{\tau}$ (red) with standard deviations
computed by the uncertainty dimension algorithm versus twenty five equally
spaced values of $\beta\in[0.5,0.98]$.
The fraction of uncertain initial conditions behaves as
$f(\epsilon)\sim\epsilon^{1-d},$ (16)
where $d$ is the value of the fractal dimension, which enables us to quantify
the unpredictability in foreseeing the particle's final dynamical state. In
particular, $d=0$ ($d=1$) implies minimum (maximum) unpredictability of the
system lau1991 . All the cases in between, $0<d<1$, imply also
unpredictability, and the closer to the unity the value of the fractal
dimension is, the more unpredictable the system is. According to our
scattering functions, it is expected that the values of their fractal
dimensions decrease as the value of $\beta$ increases, since these functions
become smoother. Taking decimal logarithms in Eq. (16), we obtain
$\log_{10}\frac{f(\epsilon)}{\epsilon}\sim-d\log_{10}\epsilon.$ (17)
This formula allows us to compute the fractal dimension of the scattering
functions from the slope of the linear relation, which obeys a representation
$\log_{10}f(\epsilon)/\epsilon$ versus $\log_{10}\epsilon$. We use an adequate
range of angular perturbations according to our shooting method and the
established criterion of uncertain initial conditions, concretely,
$\log_{10}\epsilon\in[-6,-1]$.
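A hedged sketch of this uncertainty-dimension computation (reusing the hypothetical `escape_time` helper from the sketch in Sec. II; the sample sizes are illustrative, the real computation is expensive, and the snippet assumes every $\epsilon$ yields at least one uncertain pair):

```python
import numpy as np

def uncertainty_dimension(beta, h=0.005, n_pairs=1000, seed=0):
    # Estimate d from the slope of log10(f(eps)/eps) vs log10(eps), Eq. (17).
    rng = np.random.default_rng(seed)
    eps_values = np.logspace(-6, -1, 6)
    fractions = []
    for eps in eps_values:
        uncertain = 0
        for _ in range(n_pairs):
            theta = rng.uniform(np.pi / 2.0, 5.0 * np.pi / 6.0)
            t1, _, _ = escape_time(beta, theta, h)
            t2, _, _ = escape_time(beta, theta + eps, h)
            if abs(t1 - t2) > 1.5 * h:   # uncertainty criterion of Eq. (14)
                uncertain += 1
        fractions.append(uncertain / n_pairs)
    slope = np.polyfit(np.log10(eps_values),
                       np.log10(np.array(fractions) / eps_values), 1)[0]
    return -slope   # d in Eq. (17)
```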
The computed fractal dimensions always hold $d_{e}<d_{t},d_{\tau}$ as shown in
Fig. 5(b). This occurs because it is generally more predictable to determine
the exit through which the particle escapes than its exact escape time when the clock resolution is finite. Therefore, there is a greater number of
uncertain conditions concerning escape times than in relation to exits. The
former ones are located outside and over the stable manifold, whereas the
uncertain conditions regarding exits can only be located on the stable
manifold by definition. We obtain computationally $d_{t}\approx d_{\tau}$ for
almost every value of $\beta$. Nonetheless, the physical effect explained
above causes a small difference between the computed fractal dimensions
regarding escape times, implying $d_{\tau}<d_{t}$ in a very energetic regime.
From a mathematical point of view, if we consider an infinitely small clock
resolution, i.e., $h\to 0$, uncertain initial conditions in any clock will be
only the ones whose associated escape time differences are also infinitely
small. Such uncertain conditions will be located on the stable manifold. In
that case, the geometric and observer-independent nature of the fractality
caused by the chaotic saddle is reflected into the values of the fractal
dimensions. It is expected that in this limit the equality
$d_{e}=d_{t}=d_{\tau}$ holds.
This equality extends the very important statement that relativistic _chaos_
is coordinate invariant to _transient chaos_ as well. The result provided in
motter2003 showing that the signs of the Lyapunov exponents of a chaotic
dynamical system are invariant under coordinate transformations can be
perfectly extended to transient chaotic dynamics. For this purpose, it is only
required to consider a chaotic trajectory on the chaotic saddle, which meets
the necessary four conditions described in motter2003 . Since the sign of the
Lyapunov exponents of a trajectory on the chaotic saddle are also invariant,
it is therefore evident that the existence of transient chaotic dynamics can
not be avoided by considering suitable changes of the reference frame. We
believe that this analytical result is at the basis of the results arising
from all the numerical explorations performed in the previous sections.
## V Conclusions
Despite the fact that the Hénon-Heiles system has been widely studied as a
paradigmatic open Hamiltonian system, we have added a convenient definition of
its scattering region. In this manner, the scattering region can be defined as
the part of the physical space where the particle dynamics is bounded, and
therefore a particle escapes when its kinetic energy is greater than the
kinetic energy value at the potential minimum.
Since relativistic chaos has been demonstrated to be coordinate invariant, we have focused on the special relativistic version of the Hénon-Heiles system to extend this result to transient chaos. We have then analyzed the
Lorentz factor effects on the system dynamics, concretely, how the time
dilation phenomenon affects the scattering function structure. The exit and
the escape time functions exhibit a fractal structure of singularities as a
consequence of the presence of the chaotic saddle. Since the origin of the
escape time singularities is geometric, the fractality of the escape time
function must be independent of the observer. We conclude that the time
dilation phenomenon does not affect the typical structure of the singularities
of the escape times, and interestingly this phenomenon occurs chaotically.
The escape time function as measured in any clock is closely related to a
Cantor-like set of Lebesgue measure zero, since it is a self-similar set in
the hyperbolic regime. This feature allows us to demonstrate that the fractal
dimension of the escape time function is relativistic invariant. The key point
of the demonstration is that, knowing the evolution of the Lorentz factor,
there exists an unequivocal relation between the transformed times. In order
to verify this result computationally, we have used the uncertainty dimension
algorithm. Furthermore, we have pointed out that the system is more likely to
be predictable in a reference frame comoving with the particle if a limited
clock resolution is considered, even though from a mathematical point of view
the predictability of the system is independent of the reference frame.
The main conclusion of the present work is that transient chaos is coordinate
invariant from a theoretical point of view. This statement extends the
universality of occurrence of chaos and fractals under coordinate
transformations to the realm of transient chaotic phenomena as well.
## ACKNOWLEDGMENTS
We acknowledge interesting discussions with Prof. Hans C. Ohanian. This work
was supported by the Spanish State Research Agency (AEI) and the European
Regional Development Fund (ERDF) under Project No. FIS2016-76883-P.
## References
* (1) J. M. Seoane and M. A. F. Sanjuán, Rep. Prog. Phys. 76, 016001 (2013).
* (2) Y.-D. Lin, A. M. Barr, L. E. Reichl, and C. Jung, Phys. Rev. E 87, 012917 (2013).
* (3) A. Daitche and T. Tél, New J. Phys. 16, 073008 (2014).
* (4) E. E. Zotos and C. Jung, Mon. Notices Royal Astron. Soc. 465, 525–546 (2017)
* (5) J. F. Navarro, Sci. Rep. 9, 13174 (2019).
* (6) Y.-C. Lai and T. Tél, Transient Chaos: Complex Dynamics on Finite-Time Scales, Springer, New York (2010).
* (7) C. Grebogi, E. Ott, and J. A. Yorke, Physica D 7, 181 (1983).
* (8) J. Aguirre, R. L. Viana, and M. A. F. Sanjuán, Rev. Mod. Phys. 81, 333 (2009).
* (9) E. Ott, Chaos in Dynamical Systems (Cambridge University Press, New York, NY, 1993).
* (10) T. Tél, Chaos 25, 097619 (2015).
* (11) D. Hobill, A. Burd, and A. Coley, Deterministic Chaos in General Relativity (Plenum, New York, 1994).
* (12) A. E. Motter, Phys. Rev. Lett. 91, 231101 (2003).
* (13) A. E. Motter and A. Saa, Phys. Rev. Lett. 102, 184101 (2009).
* (14) J. C. Vallejo, J. Aguirre, and M. A. F. Sanjuán, Phys. Lett. A 311, 26–38 (2003).
* (15) J. Aguirre, J. C. Vallejo, and M. A. F. Sanjuán, Phys. Rev. E 64, 066208 (2001).
* (16) A. E. Motter and P. S. Letelier, Phys. Lett. A 285, 127–131 (2001).
* (17) J. D. Barrow, Gen. Relat. Gravit. 14, 523-530 (1982).
* (18) A. A. Chernikov, T. Tél, G. Vattay, and G. M. Zaslavsky, Phys. Rev. A 40, 4072 (1989).
* (19) X. Ni, L. Huang, Y.-C Lai, and L. M. Pecora, EPL 98, 50007 (2012).
* (20) J. D. Bernal, J. M. Seoane, and M. A. F. Sanjuán, Phys. Rev. E 95, 032205 (2017).
* (21) J. D. Bernal, J. M. Seoane, and M. A. F. Sanjuán, Phys. Rev. E 97, 042214 (2018).
* (22) M. Hénon and C. Heiles, Astron. J. 69, 73 (1964).
* (23) G. Contopoulos, Astron. Astrophys. 231, 41 (1990).
* (24) I. V. Sideris, Phys. Rev. E 73, 066217 (2006).
* (25) H. O. Ohanian, Special Relativity: A Modern Introduction, Physics Curriculum $\&$ Instruction, Inc, First Edition (2001).
* (26) B. L. Lan and F. Borondo, Phys. Rev. E 83, 036201 (2011).
* (27) S. Chanda and P. Guha, Int. J. Geom. Methods Mod. Phys. 15, 1850062 (2018).
* (28) T. Kovács, Gy. Bene, and T. Tél, Mon. Not. R. Astron. Soc. 414, 2275–2281 (2011).
* (29) M. Calura, P. Fortini, and E. Montanari, Phys. Rev. D 56, 4782 (1997).
* (30) W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, Numerical Recipes in C: The Art of Scientific Computing, Cambridge Univ. Press (1992).
* (31) G. Barton, Introduction to the Relativity Principle: Particles and Plane Waves, John Wiley & Sons (1999).
* (32) H. Jeffreys and B. S. Jeffreys, Methods of Mathematical Physics, 3rd ed., Cambridge University Press (1988).
* (33) J. M. Seoane, M. A. F. Sanjuán, and Y.-C. Lai, Phys. Rev. E 76, 016208 (2007).
* (34) Y.-T. Lau, J. M. Finn, and E. Ott, Phys. Rev. Lett. 66, 978 (1991).
* (35) C. Grebogi, S. W. McDonald, E. Ott, and J. A. Yorke, Phys. Lett. A 99, 415 (1983).
|
2024-09-04T02:54:59.183904 | 2020-03-09T13:49:48 | 2003.05288 | {
"authors": "Chi-Chun Zhou, Ping Zhang, and Wu-Sheng Dai",
"full_text_license": null,
"license": "Creative Commons Zero - Public Domain - https://creativecommons.org/publicdomain/zero/1.0/",
"provenance": "arxiv-papers-0000.json.gz:26165",
"submitter": "Chichun Zhou",
"url": "https://arxiv.org/abs/2003.05288"
} | arxiv-papers |
# The Brownian Motion in an Ideal Quantum Gas
Chi-Chun Zhou, Ping Zhang, and Wu-Sheng Dai
###### Abstract
A Brownian particle in an ideal quantum gas is considered. The mean square
displacement (MSD) is derived. The Bose-Einstein or Fermi-Dirac distribution, rather than the Maxwell-Boltzmann distribution, provides a stochastic force different from that of the classical Brownian motion. The MSD, which depends on the thermal wavelength and the density of medium particles, reflects the quantum effect on the Brownian particle explicitly. The result shows that the MSD in an ideal Bose gas is shorter than that in a Fermi gas. The behavior of the quantum Brownian particle recovers that of the classical Brownian particle as the temperature rises. At low temperatures, the quantum effect becomes obvious. For example, there is a random motion of the Brownian particle due to the fermionic exchange interaction even when the temperature is near absolute zero.
## 1 Introduction
The Brownian motion is first observed by Robert Brown in 1827 and then
explained by Einstein (1905), Smoluchowski (1905), and Langevin (1908) in the
early 20th century [1]. The early theory of the Brownian motion not only
provides an evidence for the atomistic hypothesis of matter [2], but also
builds a bridge between the microscopic dynamics and the macroscopic
observable phenomena [2].
The classical understanding of the Brownian motion is quite well established.
However, the early theory of the Brownian motion assumes that the medium particles obey the Maxwell-Boltzmann distribution.
The behavior of a Brownian particle in an ideal quantum gas draws attention because the motion of a Brownian particle in a quantum system is now within reach of experimental tests, for example, an electron in black-body radiation [3]. In such systems, the quantum exchange interaction, which always leads to real difficulty in mechanics and statistical mechanics [4, 5, 6, 7], causes the medium particles to obey the Bose-Einstein or Fermi-Dirac distribution. It is difficult to make exact or even detailed dynamical calculations [8, 1], since a different stochastic force is provided by the Bose-Einstein or Fermi-Dirac distribution. At high temperatures and low densities, the classical theory serves as a good approximation.
In this paper, we give an explicit expression for the mean square displacement (MSD) of a Brownian particle in an ideal quantum gas using, e.g., the virial expansion. In comparison with the classical Brownian motion, a correction for the MSD, which depends on the thermal wavelength and the density of medium particles, is deduced. The result shows that the MSD in an ideal Bose gas is shorter than that in a Fermi gas. The behavior of the quantum Brownian particle recovers that of the classical Brownian particle as the temperature rises. At low temperatures, the quantum effect becomes obvious. For example, there is a random motion of the Brownian particle due to the fermionic exchange interaction even when the temperature is near absolute zero.
The early studies of the Brownian motion inspired many prominent developments
in various areas such as physics, mathematics, financial markets, and biology.
In physics, exact solutions of Brownian particles in different cases, such as
in a constant field of force [1] and in a harmonically potential field [1],
are given. The Brownian motion with a time dependent diffusion coefficient is
studied in Ref. [9]. The boundary problem of Brownian motions is studied in
Refs. [10, 11]. The anomalous diffusion process, frequently described by the
scaled Brownian motion, is studied in Refs. [12, 13]. The Kramers-Klein
equation considers the Brownian particle that is in an general field of force
[1]. The generalized Langevin equations and the master equation for the
quantum Brownian motion are studied in Refs. [14, 15]. In mathematics, the
rigorous interpretation of Brownian motions based on concepts of random walks,
martingales, and stochastic processes is given [16, 8, 1]. In financial
markets, the theory of the Brownian motion is used to describe the movement of
the price of stocks and options [8, 1, 17, 18, 19, 20]. The application of the
fractional Brownian motion, which is a generalized Brownian motion, in
financial markets is studied in Refs. [21, 22, 23]. Moreover, the Brownian
motion plays a central and fundamental role in studies of soft matter and
biophysics [8], e.g., active Brownian motions, which can be used to describe
the motion of swarms of animals in fluid, are studied in Refs. [24, 25, 26,
27, 28, 8].
Among many quantities, the MSD, which is measurable, describes the Brownian
motion intuitively. There are studies focusing on MSD-related problems. For example, the relation between the MSD and the time interval can be generally
written as $\left\langle x_{t}^{2}\right\rangle\sim t^{\alpha}$ [9]. One
distinguishes the subdiffusion with $0<\alpha<1$ and the superdiffusion with
$\alpha>1$ [29, 30, 31]. The relation between the MSD and the time interval of
the so called ultraslow Brownian motions is $\left\langle
x_{t}^{2}\right\rangle\sim\left(\ln t\right)^{\gamma}$ [32].
There are different approaches to build a quantum analog of the Brownian
motion [33, 34, 35, 36]. For example, the method of the path integral is used to study the quantum Brownian motion [35]. The approach of a quantum analog or quantum generalization of the Langevin equation and the master equation, e.g., the quantum master equation [3] and the quantum Langevin equation [37], is used to build a quantum Brownian motion. Among them, the method of quantum dynamical semigroups [38] is prominent. It is pointed out there that the quantum equation should be cast into the Lindblad form [38, 39]. A completely
positive master equation describing quantum dissipation for a Brownian
particle is derived in Ref. [39].
This paper is organized as follows. In Sec. 2, for the sake of completeness,
we derive the Brownian motion from the perspective of the particle
distribution in an ideal Boltzmann gas. In Sec. 3, we derive the MSD of a
Brownian particle in an ideal quantum gas. High-temperature and low-
temperature expansions are given. The $d$-dimensional case is considered. The
conclusion and outlook are given in Sec. 4. Some details of the calculation are given in the Appendix.
## 2 A Brownian particle in an ideal classical gas: the Brownian motion
In this section, we consider a Brownian particle in an ideal classical gas.
For the sake of completeness, we derive, in detail, the Brownian motion from
the perspective of the particle distribution.
A brief review of the Langevin equation. For a Brownian particle with mass
$M$, the dynamic equation is given by Paul Langevin [40]
$dv=-\frac{\gamma}{M}vdt+\frac{1}{M}F_{t}dt,$ (2.1)
$dx=vdt,$ (2.2)
where $\gamma=6\pi\eta r$ with $\eta$ the viscous coefficient and $r$ the
radius of the medium particles. $F_{t}$ is the stochastic force generated by numerous collisions with the medium particles. It is reasonable to make the
assumption that $F_{t}$ is isotropic, i.e.,
$\left\langle F_{t}\right\rangle=0.$ (2.3)
If the collisions of the medium particles are uncorrelated, that is, for $t\neq
s$, $F_{t}$ is independent of $F_{s}$:
$\left\langle F_{s}F_{t}\right\rangle\propto\delta\left(s-t\right),$ (2.4)
then, the solution of Eqs. (2.1) and (2.2) is [40]
$v_{t}=v_{0}\exp\left(-\frac{\gamma}{M}t\right)+\frac{1}{M}\int_{0}^{t}\exp\left[-\frac{\gamma}{M}\left(t-s\right)\right]F_{s}ds,$ (2.5)
$x_{t}=x_{0}+\frac{M}{\gamma}v_{0}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]+\frac{1}{\gamma}\int_{0}^{t}\left\\{1-\exp\left[-\frac{\gamma}{M}\left(t-s\right)\right]\right\\}F_{s}ds,$ (2.6)
where $x_{0}$ and $v_{0}$ are the initial position and velocity.
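Before turning to the distribution of the stochastic force, it may help to see Eqs. (2.1)-(2.2) simulated directly. A minimal Euler-Maruyama sketch, with our own illustrative parameters; the white-noise strength $D$ anticipates the correlator derived below in Eq. (2.10), and the final check uses the long-time diffusion of Eqs. (2.11)-(2.12):

```python
import numpy as np

def simulate_langevin(M=1.0, gamma=1.0, D=0.1, dt=1.0e-3,
                      n_steps=100_000, n_paths=500, seed=0):
    # Euler-Maruyama discretization of Eqs. (2.1)-(2.2) with
    # <F_s F_t> = D * delta(s - t), so F_t dt -> sqrt(D * dt) * N(0, 1).
    rng = np.random.default_rng(seed)
    v = np.zeros(n_paths)
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        noise = rng.standard_normal(n_paths) * np.sqrt(D * dt)
        v += -(gamma / M) * v * dt + noise / M
        x += v * dt
    return x

# Long-time check: <x_t^2> ~ (D / gamma^2) * t with t = dt * n_steps = 100.
x = simulate_langevin()
print(x.var(), 0.1 * 100.0)   # both approximately 10
```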
The stochastic force determined by the Maxwell-Boltzmann distribution and the
MSD. In an ideal classical gas, the gas particle obeys the Maxwell-Boltzmann
distribution [41]. The number of particles possessing energy within
$\varepsilon$ to $\varepsilon+d\varepsilon$, denoted by
$\tilde{a}_{\varepsilon}$, is proportional to $e^{-\beta\varepsilon}$ [41],
i.e.,
$\tilde{a}_{\varepsilon}=\omega_{\varepsilon}e^{-\beta\varepsilon}d\varepsilon,$
(2.7)
where $\omega_{\varepsilon}$ is the degeneracy of the energy $\varepsilon$ and
$\beta=\left(kT\right)^{-1}$ with $k$ the Boltzmann constant and $T$ the
temperature [41]. A collision of the medium particle with energy $\varepsilon$
gives a force of magnitude proportional to $\sqrt{2m\varepsilon}$, which is
the momentum of the particle. We have
$F=\rho\sqrt{2m\varepsilon},$ (2.8)
where $\rho$ is a coefficient and $m$ is the mass of the medium particle.
Thus, the probability of the Brownian particle subjected to a stochastic force
with magnitude within $F$ to $F+dF$ is
$P\left(F\right)dF=\sqrt{\frac{\beta}{2\pi
m\rho^{2}}}\exp\left(-\frac{F^{2}\beta}{2m\rho^{2}}\right)dF.$ (2.9)
In an ideal classical gas, there are no inter-particle interactions among medium particles; thus, for $t\neq s$, the forces $F_{s}$ and $F_{t}$ are
independent. Substituting Eq. (2.9) into Eq. (2.4) gives
$\left\langle
F_{s}F_{t}\right\rangle=\frac{m\rho^{2}}{\beta}\delta\left(s-t\right).$ (2.10)
By using Eqs (2.6), (2.9), and (2.10), a direct calculation of the MSD gives
$\left\langle\left(x_{t}-x_{0}\right)^{2}\right\rangle=\frac{M^{2}}{\gamma^{2}}\left(v_{0}^{2}-\frac{1}{2m\gamma}\frac{m\rho^{2}}{\beta}\right)\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]^{2}+\frac{1}{\gamma^{2}}\frac{m\rho^{2}}{\beta}\left\\{t-\frac{M}{\gamma}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]\right\\}.$ (2.11)
For a large-scale time, $t\gg 1$, Eq. (2.11) recovers
$\left\langle x_{t}^{2}\right\rangle=\frac{kT}{3\pi\eta r}t,$ (2.12)
where $\rho=\sqrt{12\pi\eta r/m}$ and $x_{0}$ is chosen to be $0$ without loss of generality. Eq. (2.12) is Einstein's famous long-time result for the MSD. The motion of a Brownian particle in an ideal classical gas is thus the Brownian motion.
## 3 The MSD of a Brownian particle in an ideal quantum gas
In this section, we give the MSD of a Brownian particle in an ideal quantum
gas. High-temperature and low-temperature expansions explain the quantum
effect intuitively.
### 3.1 The stochastic force determined by the Bose-Einstein or Fermi-Dirac
distribution
In an ideal quantum gas, the gas particle obeys the Bose-Einstein or Fermi-Dirac distribution rather than the Maxwell-Boltzmann distribution. The stochastic
force is different from that in an ideal classical gas. In this section, we
discuss the properties of the stochastic force in an ideal quantum gas.
In an ideal quantum gas, the number of particles possessing energy within
$\varepsilon$ to $\varepsilon+d\varepsilon$, denoted by $a_{\varepsilon}$, is
$a_{\varepsilon}=\frac{\omega_{\varepsilon}}{\exp\left(\beta\varepsilon+\alpha\right)+g}d\varepsilon,$
(3.1)
where $\alpha$ is defined by $z=e^{-\alpha}$ with $z$ the fugacity [41]. For
Bose cases, $g=-1$, and for Fermi cases, $g=1$. Thus, the probability of the
Brownian particle subjected to a stochastic force with magnitude within $F$ to
$F+dF$ is
$p\left(F\right)dF=\sqrt{\frac{\beta}{2\rho^{2}m\pi}}\frac{1}{h_{1/2}\left(z\right)}\frac{1}{\exp\left[\beta
F^{2}/\left(2\rho^{2}m\right)+\alpha\right]+g}dF,$ (3.2)
where $h_{\nu}\left(x\right)$ equals the Bose-Einstein integral $g_{\nu}\left(x\right)$ in Bose cases [41] and the Fermi-Dirac integral $f_{\nu}\left(x\right)$ in Fermi cases [41].
In an ideal quantum gas, the stochastic force is also isotropic, that is, Eq.
(2.3) holds. However, the collisions, due to the overlapping of the wave packets, can be correlated; that is, $\left\langle F_{s}F_{t}\right\rangle$ is
no longer a delta function but a function of $s-t$ with a peak at $s=t$.
However, as the ratio of the thermal wavelength and the average distance
between the medium particles decreases, $\left\langle
F_{s}F_{t}\right\rangle$, can be well approximated by a delta function:
$\left\langle
F_{s}F_{t}\right\rangle\sim\frac{m\rho^{2}}{\beta}\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}\delta\left(s-t\right),$
(3.3)
for $n\lambda\ll 1$, where $\lambda=h/\sqrt{2\pi mkT}$ is the thermal
wavelength and $n$ is the density of the medium particle.
In this paper, we consider the case that the ratio of the thermal wavelength
and the average distance between the medium particles is small.
### 3.2 The MSD
For $n\lambda\ll 1$, by using Eqs. (3.2), (2.3), and (2.6), a direct
calculation of the MSD gives
$\left\langle x_{t}^{2}\right\rangle=\frac{M^{2}}{\gamma^{2}}\left\\{v_{0}^{2}-\frac{1}{2m\gamma}\frac{m\rho^{2}}{\beta}\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}\right\\}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]^{2}+\frac{1}{\gamma^{2}}\frac{m\rho^{2}}{\beta}\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}\left\\{t-\frac{M}{\gamma}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]\right\\}.$ (3.4)
For a large-scale time, $t\gg 1$, Eq. (3.4) recovers
$\left\langle x_{t}^{2}\right\rangle=\frac{kT}{3\pi\eta
r}t\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)},$ (3.5)
where $h_{3/2}\left(z\right)/h_{1/2}\left(z\right)$ is a correction for the
MSD due to the Bose-Einstein or Fermi-Dirac distribution, a result of the
quantum exchange interaction among gas particles.
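Numerically, this correction factor can be evaluated through polylogarithms, since $g_{\nu}(z)=\mathrm{Li}_{\nu}(z)$ and $f_{\nu}(z)=-\mathrm{Li}_{\nu}(-z)$, and the overall minus signs cancel in the ratio. A sketch assuming mpmath's `polylog` (which accepts non-integer order):

```python
import mpmath as mp

def msd_correction(z, fermi=True):
    # h_{3/2}(z) / h_{1/2}(z) in Eq. (3.5).
    # Fermi-Dirac: f_nu(z) = -Li_nu(-z); Bose-Einstein: g_nu(z) = Li_nu(z);
    # the minus signs cancel in the ratio, so only the argument flips sign.
    arg = -z if fermi else z
    return mp.polylog(mp.mpf(3) / 2, arg) / mp.polylog(mp.mpf(1) / 2, arg)

for z in (0.01, 0.1, 0.5):
    # Fermi ratio > 1 (longer MSD), Bose ratio < 1 (shorter MSD).
    print(z, msd_correction(z, fermi=True), msd_correction(z, fermi=False))
```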
### 3.3 High-temperature and low-temperature expansions
In order to compare with Eq. (2.12) intuitively, we give high-temperature and
low-temperature expansions of Eq. (3.5) by using the state equation of ideal
Bose or Fermi gases [40, 41]
$p=\frac{kT}{\lambda}h_{3/2}\left(z\right),$ (3.6)
$\frac{N}{V}=\frac{1}{\lambda}h_{1/2}\left(z\right).$ (3.7)
The high-temperature expansion. At high temperatures, the virial expansion of
Eqs. (3.6) and (3.7) directly gives [40, 41]
$\frac{pV}{N}\sim kT\left[1+ga_{1}\left(T\right)n\lambda+...\right],$ (3.8)
where $a_{1}\left(T\right)$ is the first virial coefficient [40]. For a
$1$-dimensional ideal Bose or Fermi gas, $a_{1}\left(T\right)=0.353553$ [41].
Substituting Eqs. (3.6) and (3.7) into Eq. (3.8) gives
$\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}\sim\left[1+ga_{1}\left(T\right)n\lambda+...\right].$
(3.9)
Substituting Eq. (3.9) into Eq. (3.5) gives the MSD at high temperatures:
$\left\langle x_{t}^{2}\right\rangle=\frac{kT}{3\pi\eta
r}t\left[1+ga_{1}\left(T\right)n\lambda+\ldots\right].$ (3.10)
The result, Eq. (3.10), shows that the MSD in an ideal Bose gas is shorter
than that in an ideal Fermi gas. Since $\lambda$ decreases as $T$ rises, the
behavior of the quantum Brownian particle returns to that of the classical
Brownian particle as the temperature rises.
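As a numerical illustration (with an assumed dilute-gas value $n\lambda=0.1$),
the leading correction in Eq. (3.10) evaluates to
$1+ga_{1}\left(T\right)n\lambda\approx 1.035$ for Fermi gases ($g=1$) and
$\approx 0.965$ for Bose gases ($g=-1$); that is, the quantum exchange
interaction shifts the MSD by about $3.5\%$ at this density.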
The low-temperature expansion for Fermi cases. At low temperatures, for Fermi
cases, $g=1$,
$\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}=\frac{f_{3/2}\left(z\right)}{f_{1/2}\left(z\right)}.$
(3.11)
The expansion of the Fermi-Dirac integral at large $z$ gives [41]
$f_{\nu}\left(e^{\xi}\right)=\frac{\xi^{\nu}}{\Gamma\left(1+\nu\right)}\left\\{1+2\nu\sum_{j=1,3,5,...}\left[\left(\nu-1\right)\cdots\left(\nu-j\right)\left(1-2^{-j}\right)\frac{\zeta\left(j+1\right)}{\xi^{j+1}}\right]\right\\}.$
(3.12)
Keeping only the first two terms in Eq. (3.12) gives
$f_{\nu}\left(z\right)=\frac{\left(\ln
z\right)^{\nu}}{\Gamma\left(1+\nu\right)}\left[1+\nu\left(\nu-1\right)\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right].$ (3.13)
Substituting Eq. (3.13) into Eqs. (3.6) and (3.7) gives
$\displaystyle p$ $\displaystyle=\frac{kT}{\lambda}\frac{\left(\ln
z\right)^{3/2}}{\Gamma\left(5/2\right)}\left[1+\frac{3}{4}\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right],$ (3.14) $\displaystyle\frac{N}{V}$
$\displaystyle=\frac{1}{\lambda}\frac{\left(\ln
z\right)^{1/2}}{\Gamma\left(3/2\right)}\left[1-\frac{1}{4}\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right].$ (3.15)
The fugacity $z$ can be solved from Eq. (3.15):
$\ln
z\sim\frac{\epsilon_{f}}{kT}\left[1+\frac{1}{2}\zeta\left(2\right)\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right],$
(3.16)
where
$\epsilon_{f}=\lambda^{2}kT\left[\frac{1}{2}\Gamma\left(\frac{3}{2}\right)n\right]^{2}$
is the Fermi energy [41]. By substituting Eq. (3.13) into Eq. (3.11) with
fugacity $z$ given by Eq. (3.16), we have
$\frac{f_{3/2}\left(z\right)}{f_{1/2}\left(z\right)}=\frac{\Gamma\left(3/2\right)}{\Gamma\left(5/2\right)}\frac{\epsilon_{f}}{kT}\left\\{1+\left[\frac{\zeta\left(2\right)}{2}+\zeta\left(2\right)\right]\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right\\}.$
(3.17)
Substituting Eq. (3.17) into Eq. (3.5) gives the MSD of Fermi cases at low
temperatures:
$\left\langle x_{t}^{2}\right\rangle\sim\frac{1}{3\pi\eta
r}t\frac{\Gamma\left(3/2\right)}{\Gamma\left(5/2\right)}\epsilon_{f}\left[1+\frac{3}{2}\zeta\left(2\right)\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right].$
(3.18)
The first term of Eq. (3.18) is independent of the temperature $T$, which
means that there is a random motion of the Brownian particle due to the
fermionic exchange interaction even when the temperature is near absolute
zero. This is a result of the Pauli exclusion principle [41].
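Indeed, keeping only the first term of Eq. (3.18) and using
$\Gamma\left(5/2\right)=\frac{3}{2}\Gamma\left(3/2\right)$ makes this residual
motion explicit:
$\left\langle x_{t}^{2}\right\rangle\rightarrow\frac{2\epsilon_{f}}{9\pi\eta r}t\quad\left(T\rightarrow 0\right),$
which is finite whenever $\epsilon_{f}>0$.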
The low-temperature expansion for Bose cases. At low temperatures, for Bose
cases, $g=-1$,
$\frac{h_{3/2}\left(z\right)}{h_{1/2}\left(z\right)}=\frac{g_{3/2}\left(z\right)}{g_{1/2}\left(z\right)}.$
(3.19)
Expanding $g_{\nu}\left(z\right)$ around $z=1$ gives [41]
$g_{\nu}\left(z\right)=\frac{\Gamma\left(1-\nu\right)}{\left(-\ln
z\right)^{1-\nu}}+\sum_{j=0}^{\infty}\frac{\left(-1\right)^{j}}{j!}\zeta\left(\nu-j\right)\left(-\ln
z\right)^{j}.$ (3.20)
Substituting Eq. (3.20) into Eqs. (3.6) and (3.7) gives
$\displaystyle p$
$\displaystyle=\frac{kT}{\lambda}\left[\Gamma\left(-\frac{1}{2}\right)\left(-\ln
z\right)^{1/2}+\zeta\left(\frac{3}{2}\right)-\zeta\left(\frac{1}{2}\right)\left(-\ln
z\right)\right],$ (3.21) $\displaystyle\frac{N}{V}$
$\displaystyle=\frac{1}{\lambda}\left[\frac{\Gamma\left(1/2\right)}{\left(-\ln
z\right)^{1/2}}+\zeta\left(\frac{1}{2}\right)-\zeta\left(-\frac{1}{2}\right)\left(-\ln
z\right)\right].$ (3.22)
The fugacity can be solved from Eq. (3.22):
$\ln z=-\frac{\pi}{n^{2}\lambda^{2}}.$ (3.23)
By substituting Eq. (3.20) into Eq. (3.19) with fugacity $z$ given by Eq.
(3.23), we have
$\frac{g_{3/2}\left(z\right)}{g_{1/2}\left(z\right)}=\frac{\zeta\left(3/2\right)}{n\lambda}-\left(2+\frac{\zeta\left(3/2\right)\zeta\left(1/2\right)}{\pi}\right)\frac{\pi}{n^{2}\lambda^{2}}.$
(3.24)
Substituting Eq. (3.24) into Eq. (3.5) gives the MSD of Bose cases at low
temperatures:
$\displaystyle\left\langle x_{t}^{2}\right\rangle$
$\displaystyle\sim\frac{kT}{3\pi\eta
r}t\left[\zeta\left(\frac{3}{2}\right)\frac{1}{n\lambda}-\left(2\pi+\zeta\left(\frac{3}{2}\right)\zeta\left(\frac{1}{2}\right)\right)\frac{1}{n^{2}\lambda^{2}}\right]$
(3.25) $\displaystyle\sim\frac{1}{3\pi\eta
r}\zeta\left(\frac{3}{2}\right)\frac{\sqrt{2\pi
m}}{h}\frac{1}{n}\left(kT\right)^{3/2}t.$ (3.26)
The MSD is proportional to $T^{3/2}$ and inversely proportional to the
density of the medium particles, which also differs from classical Brownian
motion.
### 3.4 The $d$-dimensional case
In this section, a similar procedure gives the MSD of a Brownian particle in a
$d$-dimensional space. For the sake of clarity, we only list the results here.
The details of the calculations can be found in the Appendix.
The MSD. The MSD for a Brownian particle in an ideal quantum gas in a
$d$-dimensional space is
$\left\langle\mathbf{x}_{t}^{2}\right\rangle=\frac{kTd}{3\pi\eta
r}t\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}.$ (3.27)
The high-temperature expansion. At high temperatures, the MSD, Eq. (3.27),
becomes
$\left\langle\mathbf{x}_{t}^{2}\right\rangle=\frac{kTd}{3\pi\eta
r}t\left[1+ga_{1}\left(T\right)n\lambda^{d}+\ldots\right],$ (3.28)
where $a_{1}\left(T\right)=\frac{1}{2^{1+d/2}}$ is the first virial
coefficient of ideal Bose or Fermi gases in a $d$-dimensional space [40, 41].
The low-temperature expansion for Fermi cases. At low temperatures, for Fermi
cases, the MSD, Eq. (3.27), becomes
$\left\langle\mathbf{x}_{t}^{2}\right\rangle\sim\frac{d}{3\pi\eta
r}\frac{\Gamma\left(1+d/2\right)}{\Gamma\left(2+d/2\right)}t\epsilon_{f}\left[1+\left(\frac{d}{2}+1\right)\zeta\left(2\right)\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right],$
(3.29)
where $\epsilon_{f}$ is the Fermi energy in a $d$-dimensional space and
$\epsilon_{f}=\lambda^{2}kT\left[\frac{1}{2}\Gamma\left(1+\frac{d}{2}\right)n\right]^{2/d}$
[40, 41].
The low-temperature expansion for Bose cases with $d=2$. The low-temperature
expansion for a Bose gas in any dimension higher than $2$ is not given in this
section because Bose-Einstein condensation (BEC) occurs there. Here, we only
consider the $2$-dimensional case. At low temperatures, for Bose cases, the
MSD, Eq. (3.27), becomes
$\left\langle\mathbf{x}_{t}^{2}\right\rangle=\frac{2kT}{3\pi\eta
r}t\left[-\exp\left(-n\lambda^{2}\right)-\frac{\exp\left(-n\lambda^{2}\right)}{n\lambda^{2}}+\frac{\pi^{2}}{6n\lambda^{2}}\right].$
(3.30)
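Since $\lambda^{2}=h^{2}/\left(2\pi mkT\right)$, the exponential terms in Eq.
(3.30) vanish at low temperatures and the last term dominates, giving
$\left\langle\mathbf{x}_{t}^{2}\right\rangle\sim\frac{2\pi^{2}m}{9\eta rnh^{2}}\left(kT\right)^{2}t;$
thus, in two dimensions, the MSD of the Bose case is proportional to $T^{2}$
and inversely proportional to the density, in analogy with the one-dimensional
result, Eq. (3.26).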
## 4 Conclusions and outlooks
The difficulty in calculating the behavior of a Brownian particle in an ideal
quantum gas comes directly from the stochastic force caused by the Bose-
Einstein or Fermi-Dirac distribution rather than the Maxwell-Boltzmann
distribution. Compared with classical Brownian motion, on the one hand, the
distribution of the stochastic force is different; on the other hand, the
collisions, due to the overlapping of the wave packets, can be correlated;
that is, $\left\langle F_{s}F_{t}\right\rangle$ is no longer a delta function
but a function of $s-t$ with a peak at $s=t$. Thus, it is difficult to make
exact or even detailed dynamical calculations [8, 1].
In this paper, we consider the motion of a Brownian particle in an ideal
quantum gas. We give an explicit expression for the MSD, which depends on the
thermal wavelength and the density of the medium particles. High-temperature
and low-temperature expansions explain the quantum effects intuitively. For
example, the MSD in an ideal Bose gas is shorter than that in a Fermi gas, and
there is a random motion of the Brownian particle due to the fermionic
exchange interaction even when the temperature is near absolute zero.
The results in this work can be verified by experimental tests.
## 5 Acknowledgments
We are very indebted to Dr. G. Zeitrauman for his encouragement. This work is
supported in part by NSF of China under Grant No. 11575125 and No. 11675119.
## 6 Appendix
In this appendix, we give the details of the calculations of Eqs. (3.27),
(3.28), (3.29), and (3.30).
The details of the calculation of the MSD, Eq. (3.27), of a Brownian particle
in a $d$-dimensional space. The Langevin equation in $d$ dimensions is
$\displaystyle M\frac{d\mathbf{v}}{dt}$
$\displaystyle=-\gamma\mathbf{v}+\mathbf{F}_{t},$ (6.1)
$\displaystyle\frac{d\mathbf{x}}{dt}$ $\displaystyle=\mathbf{v}.$ (6.2)
In a $d$-dimensional space, the stochastic force $\mathbf{F}_{t}$ is
isotropic:
$\left\langle\mathbf{F}\right\rangle=0.$ (6.3)
For different times $s$ and $t$, $\mathbf{F}_{s}$ and $\mathbf{F}_{t}$ are
almost independent when the ratio of the thermal wavelength to the average
distance between the medium particles is small; that is,
$\left\langle\mathbf{F}_{s}\cdot\mathbf{F}_{t}\right\rangle\sim\delta\left(s-t\right)$
(6.4)
holds for $n\lambda^{d}\ll 1$. The solution of Eqs. (6.1) and (6.2) is
$\displaystyle\mathbf{v}_{t}$
$\displaystyle=\mathbf{v}_{0}e^{-\frac{\gamma}{M}t}+\frac{1}{M}\int_{0}^{t}\exp\left[-\frac{\gamma}{M}\left(t-s\right)\right]\mathbf{F}_{s}ds,$
(6.5) $\displaystyle\mathbf{x}_{t}$
$\displaystyle=\mathbf{x}_{0}+\frac{M}{\gamma}\mathbf{v}_{0}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]+\frac{1}{\gamma}\int_{0}^{t}\left\\{1-\exp\left[-\frac{\gamma}{M}\left(t-s\right)\right]\right\\}\mathbf{F}_{s}ds.$
(6.6)
In the $d$-dimensional case, the number of particles possessing momentum
within $\mathbf{P}$ to $\mathbf{P}+d\mathbf{P}$, denoted by
$a\left(\mathbf{P}\right)$, is [40, 41]
$a\left(\mathbf{P}\right)=\frac{1}{\exp\left[\beta\left(p_{x^{1}}^{2}+p_{x^{2}}^{2}+...p_{x^{d}}^{2}\right)/\left(2m\right)+\alpha\right]+g}.$
(6.7)
The force given by a collision of a particle with momentum $\mathbf{P}$ is
proportional to $\mathbf{P}$: $\mathbf{F}=\rho\mathbf{P}$. Thus the
probability of the stochastic force with magnitude within
$\left|\mathbf{F}\right|$ to $\left|\mathbf{F}+d\mathbf{F}\right|$ is
$P\left(\mathbf{F}\right)d\mathbf{F}=\left(\sqrt{\frac{\beta}{2\pi
m\rho^{2}}}\right)^{d}\frac{1}{h_{d/2}\left(z\right)}\frac{1}{\exp\left[\beta\left|\mathbf{F}\right|^{2}/\left(2m\rho^{2}\right)+\alpha\right]+g}d\mathbf{F}.$
(6.8)
Substituting Eq. (6.8) into Eq. (6.4) gives
$\left\langle\mathbf{F}_{s}\cdot\mathbf{F}_{t}\right\rangle\sim\frac{dm\rho^{2}}{\beta}\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}\delta\left(s-t\right).$
(6.9)
By using Eqs. (6.6), (6.8), and (6.9), a direct calculation of the MSD gives
$\displaystyle\left\langle\mathbf{x}_{t}^{2}\right\rangle$
$\displaystyle=\frac{M^{2}}{\gamma^{2}}\left[\mathbf{v}_{0}^{2}-\frac{d}{2m\gamma}\frac{m\rho^{2}}{\beta}\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}\right]\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]^{2}$
$\displaystyle+\frac{1}{\gamma^{2}}\frac{dm\rho^{2}}{\beta}\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}\left[t-\frac{M}{\gamma}\left[1-\exp\left(-\frac{\gamma}{M}t\right)\right]\right].$
(6.10)
where $\mathbf{x}_{0}$ is chosen to be the origin.
For large times, $t\gg 1$, Eq. (6.10) recovers Eq. (3.27): the term linear in
$t$ dominates, and consistency with Eq. (3.27) requires the Stokes friction
$\gamma=6\pi\eta r$ together with the normalization $m\rho^{2}=2\gamma$ of the
noise strength, under which Eq. (6.9) reduces to the classical fluctuation-
dissipation relation as $h_{1+d/2}\left(z\right)/h_{d/2}\left(z\right)\rightarrow 1$.
The high-temperature expansion. For the $d$-dimensional case, the state
equation of an ideal quantum gas is [40, 41]
$\displaystyle p$
$\displaystyle=\frac{kT}{\lambda^{d}}h_{1+d/2}\left(z\right),$ (6.11)
$\displaystyle\frac{N}{V}$
$\displaystyle=\frac{1}{\lambda^{d}}h_{d/2}\left(z\right).$ (6.12)
The virial expansion [40, 41] directly gives
$\frac{pV}{N}\sim kT\left[1+ga_{1}\left(T\right)n\lambda^{d}+...\right]$
(6.13)
Substituting Eqs. (6.11) and (6.12) into Eq. (6.13) gives
$\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}\sim\left[1+ga_{1}\left(T\right)n\lambda^{d}+...\right].$
(6.14)
Substituting Eq. (6.14) into Eq. (3.27) gives Eq. (3.28).
The low-temperature expansion for Fermi cases. For Fermi cases, $g=1$,
$\frac{h_{1+d/2}\left(z\right)}{h_{d/2}\left(z\right)}=\frac{f_{1+d/2}\left(z\right)}{f_{d/2}\left(z\right)}.$
(6.15)
By the expansion of the Fermi-Dirac integral at large $z$, we have [41]
$f_{\nu}\left(e^{\xi}\right)=\frac{\xi^{\nu}}{\Gamma\left(1+\nu\right)}\left\\{1+2\nu\sum_{j=1,3,5,...}\left[\left(\nu-1\right)\cdots\left(\nu-j\right)\left(1-2^{-j}\right)\frac{\zeta\left(j+1\right)}{\xi^{j+1}}\right]\right\\}.$
(6.16)
Keeping only the first two terms gives
$f_{\nu}\left(z\right)=\frac{\left(\ln
z\right)^{\nu}}{\Gamma\left(1+\nu\right)}\left[1+\nu\left(\nu-1\right)\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right].$ (6.17)
Substituting Eq. (6.17) into Eqs. (6.11) and (6.12) gives
$\displaystyle\frac{p}{kT}$
$\displaystyle=\frac{1}{\lambda^{d}}\frac{\left(\ln
z\right)^{1+d/2}}{\Gamma\left(2+d/2\right)}\left[1+\frac{d}{2}\left(1+\frac{d}{2}\right)\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right],$ (6.18) $\displaystyle N$
$\displaystyle=\frac{\Omega}{\lambda^{d}}\frac{\left(\ln
z\right)^{d/2}}{\Gamma\left(1+d/2\right)}\left[1+\frac{d}{2}\left(\frac{d}{2}-1\right)\frac{\zeta\left(2\right)}{\left(\ln
z\right)^{2}}\right],$ (6.19)
where $\Omega$ is the volume. The fugacity can be solved from Eq. (6.19):
$\ln
z\sim\frac{\epsilon_{f}}{kT}\left[1-\zeta\left(2\right)\left(\frac{d}{2}-1\right)\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right],$
(6.20)
where
$\epsilon_{f}=\lambda^{2}kT\left[\frac{1}{2}\Gamma\left(1+\frac{d}{2}\right)n\right]^{2/d}$
is the Fermi energy. By substituting Eq. (6.17) into Eq. (6.15) with fugacity
$z$ given by Eq. (6.20), we have
$\frac{f_{1+d/2}\left(z\right)}{f_{d/2}\left(z\right)}=\frac{\Gamma\left(1+\frac{d}{2}\right)}{\Gamma\left(2+\frac{d}{2}\right)}\frac{\epsilon_{f}}{kT}\left\\{1+\left[\frac{d\zeta\left(2\right)}{2}+\zeta\left(2\right)\right]\left(\frac{kT}{\epsilon_{f}}\right)^{2}\right\\}.$
(6.21)
Substituting Eq. (6.21) into Eq. (3.27) gives Eq. (3.29).
The low-temperature expansion for Bose cases in the $2$-dimensional space. For
Bose cases, $g=-1$,
$\frac{h_{2}\left(z\right)}{h_{1}\left(z\right)}=\frac{g_{2}\left(z\right)}{g_{1}\left(z\right)},$
(6.22)
where $d=2$. For $d=2$,
$g_{1}\left(z\right)=-\ln\left(1-z\right).$ (6.23)
Substituting Eq. (6.23) into Eq. (6.12) gives
$\frac{N}{V}=-\frac{1}{\lambda^{2}}\ln\left(1-z\right).$ (6.24)
Then, the fugacity can be solved from Eq. (6.24):
$z=1-\exp\left(-n\lambda^{2}\right).$ (6.25)
Expanding $g_{2}\left(z\right)$ around $z=1$ gives
$\displaystyle g_{2}\left(z\right)$
$\displaystyle=\frac{\pi^{2}}{6}-\left(1-z\right)-\frac{\left(1-z\right)^{2}}{2^{2}}-\frac{\left(1-z\right)^{3}}{3^{2}}-\ldots$
$\displaystyle+\left(1-z\right)\ln\left(1-z\right)+\frac{\left(1-z\right)^{2}}{2}\ln\left(1-z\right)+\frac{\left(1-z\right)^{3}}{3}\ln\left(1-z\right)+\ldots$
(6.26)
Substituting Eqs. (6.26) and (6.23) with fugacity given in Eq. (6.25) into Eq.
(6.22) gives
$\frac{g_{2}\left(z\right)}{g_{1}\left(z\right)}=-\exp\left(-n\lambda^{2}\right)-\frac{\exp\left(-n\lambda^{2}\right)}{n\lambda^{2}}+\frac{\pi^{2}}{6n\lambda^{2}}.$
(6.27)
Substituting Eq. (6.27) into Eq. (3.27) gives Eq. (3.30).
## References
* [1] R. M. Mazo, Brownian motion: fluctuations, dynamics, and applications, vol. 112. Oxford University Press on Demand, 2002.
* [2] P. Hänggi and F. Marchesoni, Introduction: 100 years of brownian motion, 2005.
* [3] L. Diosi, Quantum master equation of a particle in a gas environment, EPL (Europhysics Letters) 30 (1995), no. 2 63.
* [4] W.-S. Dai and M. Xie, The explicit expression of the fugacity for weakly interacting bose and fermi gases, Journal of Mathematical Physics 58 (2017), no. 11 113502.
* [5] W.-S. Dai and M. Xie, Interacting quantum gases in confined space: Two-and three-dimensional equations of state, Journal of Mathematical Physics 48 (2007), no. 12 123302.
* [6] W. Dai and M. Xie, Hard-sphere gases as ideal gases with multi-core boundaries: An approach to two- and three-dimensional interacting gases, EPL (Europhysics Letters) 72 (2005), no. 6 887.
* [7] C.-C. Zhou and W.-S. Dai, Canonical partition functions: ideal quantum gases, interacting classical gases, and interacting quantum gases, Journal of Statistical Mechanics: Theory and Experiment 2018 (2018), no. 2 023105.
* [8] X. Bian, C. Kim, and G. E. Karniadakis, 111 years of brownian motion, Soft Matter 12 (2016), no. 30 6331–6346.
* [9] A. S. Bodrova, A. V. Chechkin, A. G. Cherstvy, and R. Metzler, Ultraslow scaled brownian motion, New Journal of Physics 17 (2015), no. 6 063038.
* [10] K. Huang and I. Szlufarska, Effect of interfaces on the nearby brownian motion, Nature communications 6 (2015) 8558.
* [11] S. Gür and K. Pötzelberger, Sensitivity of boundary crossing probabilities of the brownian motion, Monte Carlo Methods and Applications 25 (2019), no. 1 75–83.
* [12] H. Safdari, A. G. Cherstvy, A. V. Chechkin, F. Thiel, I. M. Sokolov, and R. Metzler, Quantifying the non-ergodicity of scaled brownian motion, Journal of Physics A: Mathematical and Theoretical 48 (2015), no. 37 375002.
* [13] J.-H. Jeon, A. V. Chechkin, and R. Metzler, Scaled brownian motion: a paradoxical process with a time dependent diffusivity for the description of anomalous diffusion, Physical Chemistry Chemical Physics 16 (2014), no. 30 15811–15817.
* [14] M. Carlesso and A. Bassi, Adjoint master equation for quantum brownian motion, Physical Review A 95 (2017), no. 5 052119.
* [15] U. Weiss, Quantum dissipative systems, vol. 13. World scientific, 2012.
* [16] J.-F. Le Gall, Brownian motion, martingales, and stochastic calculus, vol. 274. Springer, 2016.
* [17] K. Kanazawa, T. Sueshige, H. Takayasu, and M. Takayasu, Derivation of the boltzmann equation for financial brownian motion: Direct observation of the collective motion of high-frequency traders, Physical review letters 120 (2018), no. 13 138301.
* [18] A. Gairat and V. Shcherbakov, Density of skew brownian motion and its functionals with application in finance, Mathematical Finance 27 (2017), no. 4 1069–1088.
* [19] M. Kijima, Stochastic processes with applications to finance. Chapman and Hall/CRC, 2016.
* [20] Z. Yang and D. Aldous, Geometric brownian motion model in financial market, University of California, Berkeley (2015).
* [21] C. Czichowsky, R. Peyre, W. Schachermayer, and J. Yang, Shadow prices, fractional brownian motion, and portfolio optimisation under transaction costs, Finance and Stochastics 22 (2018), no. 1 161–180.
* [22] B. P. Rao, Pricing geometric asian power options under mixed fractional brownian motion environment, Physica A: Statistical Mechanics and its Applications 446 (2016) 92–99.
* [23] E. Neuman, M. Rosenbaum, et al., Fractional brownian motion with zero hurst parameter: a rough volatility viewpoint, Electronic Communications in Probability 23 (2018).
* [24] P. Romanczuk, M. Bär, W. Ebeling, B. Lindner, and L. Schimansky-Geier, Active brownian particles, The European Physical Journal Special Topics 202 (2012), no. 1 1–162.
* [25] X. Ao, P. K. Ghosh, Y. Li, G. Schmid, P. Hänggi, and F. Marchesoni, Active brownian motion in a narrow channel, The European Physical Journal Special Topics 223 (2014), no. 14 3227–3242.
* [26] B. Lindner and E. Nicola, Diffusion in different models of active brownian motion, The European Physical Journal Special Topics 157 (2008), no. 1 43–52.
* [27] W. Ebeling and L. Schimansky-Geier, Swarm dynamics, attractors and bifurcations of active brownian motion, The European Physical Journal Special Topics 157 (2008), no. 1 17–31.
* [28] P. Pietzonka, K. Kleinbeck, and U. Seifert, Extreme fluctuations of active brownian motion, New Journal of Physics 18 (2016), no. 5 052001.
* [29] J.-P. Bouchaud and A. Georges, Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications, Physics reports 195 (1990), no. 4-5 127–293.
* [30] R. Metzler and J. Klafter, The random walk’s guide to anomalous diffusion: a fractional dynamics approach, Physics reports 339 (2000), no. 1 1–77.
* [31] G. Sikora, K. Burnecki, and A. Wyłomańska, Mean-squared-displacement statistical test for fractional brownian motion, Physical Review E 95 (2017), no. 3 032110.
* [32] R. Metzler, J.-H. Jeon, A. G. Cherstvy, and E. Barkai, Anomalous diffusion models and their properties: non-stationarity, non-ergodicity, and ageing at the centenary of single particle tracking, Physical Chemistry Chemical Physics 16 (2014), no. 44 24128–24164.
* [33] A. Lampo, M. Á. G. March, and M. Lewenstein, Quantum brownian motion, in Quantum Brownian Motion Revisited, pp. 19–39. Springer, 2019.
* [34] H. Grabert, P. Schramm, and G.-L. Ingold, Quantum brownian motion: The functional integral approach, Physics reports 168 (1988), no. 3 115–207.
* [35] A. O. Caldeira and A. J. Leggett, Path integral approach to quantum brownian motion, Physica A: Statistical mechanics and its Applications 121 (1983), no. 3 587–616.
* [36] V. Ambegaokar, Quantum brownian motion and its classical limit, Berichte der Bunsengesellschaft für physikalische Chemie 95 (1991), no. 3 400–404.
* [37] G. Ford and M. Kac, On the quantum langevin equation, Journal of statistical physics 46 (1987), no. 5-6 803–810.
* [38] G. Lindblad, On the generators of quantum dynamical semigroups, Communications in Mathematical Physics 48 (1976), no. 2 119–130.
* [39] B. Vacchini, Completely positive quantum dissipation, Physical review letters 84 (2000), no. 7 1374.
* [40] L. Reichl, A Modern Course in Statistical Physics. Physics textbook. Wiley, 2009.
* [41] R. Pathria, Statistical Mechanics. Elsevier Science, 2011.
|
2024-09-04T02:54:59.199255 | 2020-03-11T14:42:58 | 2003.05336 | {
"authors": "Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26166",
"submitter": "Yoshiki Higo",
"url": "https://arxiv.org/abs/2003.05336"
} | arxiv-papers | # On Tracking Java Methods with Git Mechanisms
Yoshiki Higo <EMAIL_ADDRESS>, Shinpei Hayashi <EMAIL_ADDRESS>, Shinji Kusumoto <EMAIL_ADDRESS>
Graduate School of Information Science and Technology, Osaka University,
Yamadaoka 1–5, Suita, Osaka 565–0871, Japan
School of Computing, Tokyo Institute of Technology,
Ookayama 2–12–1–W8–71, Meguro-ku, Tokyo 152–8550, Japan
###### Abstract
Method-level historical information is useful in various research on mining
software repositories such as fault-prone module detection or evolutionary
coupling identification. An existing technique named Historage converts a Git
repository of a Java project to a finer-grained one. In a finer-grained
repository, each Java method exists as a single file. Treating Java methods as
files has an advantage, which is that Java methods can be tracked with Git
mechanisms. The biggest benefit of tracking methods with Git mechanisms is
that it can easily connect with any other tools and techniques built on Git
infrastructure. However, Historage’s tracking has an issue of accuracy,
especially on small methods. More concretely, in the case that a small method
is renamed or moved to another class, Historage has a limited capability to
track the method. In this paper, we propose a new technique, FinerGit, to
improve the trackability of Java methods with Git mechanisms. We implement
FinerGit as a system and apply it to 182 open source software projects, which
include 1,768K methods in total. The experimental results show that our tool
has a higher capability of tracking methods in the case that methods are
renamed or moved to other classes.
###### keywords:
Mining software repositories, Source code analysis, Tracking Java methods
Journal: Journal of Systems and Software
## 1 Introduction
One feature of version control systems is the ability to know file-level
change information. Thus, it is easy to identify which files were changed in
given commits or to count changes for files in a given repository. However,
many approaches in mining software repositories (in short, MSR) require
information on finer-grained units such as Java methods or C functions. If we
want to count changes for Java methods, we need to parse source files to
identify method positions and then we need to match method positions with
changed code positions to identify which methods were changed. To conduct
finer-grained analyses, developers have to implement code/scripts. Besides,
incorrect analysis results will be obtained if the implemented code/scripts
include bugs.
Hata et al. proposed a technique, Historage, which enables Java methods to be
tracked with Git mechanisms [1]. Historage takes a Git repository of a Java
project as its input, and it outputs another Git repository in which each
method gets extracted as a file. Treating Java methods as files makes it
possible for developers/practitioners to obtain method-level historical
information simply by executing Git commands such as git-log.
(a) Git repository.
(b) Historage repository.
Figure 1: Differences between Git and Historage repositories.
Figure 1 shows a simple model of Git and Historage repositories. In the Git
repository, file Person.java is managed. We can see that Person.java was
changed in two commits c100 and c101. Information for the changes on
Person.java can be retrieved by executing git-log. However, if we want to know
which methods were changed in the two commits, we have to parse Person.java to
obtain the positions of the methods and then we have to match method positions
with the positions of the changed code in the two commits. On the other hand,
in the Historage repository, each method exists as a file. Thus, just
executing git-log is sufficient to know in which commits the two methods were
changed. The command identifies that getLength() in Person.java was changed in
commit c100 and setLength(int) was changed in c101.
However, Historage has a limited capability of tracking methods in the case
that methods are renamed or moved to other classes. We explain the issue with
Figure 2, which shows refactorings on file Person.java in Figure 1. The
refactorings include the following four changes.
Rename Class: Person $\rightarrow$ Engineer
Rename Field: length $\rightarrow$ height
Rename Method (Getter): getLength $\rightarrow$ getHeight
Rename Method (Setter): setLength $\rightarrow$ setHeight
(a) Git.
(b) Historage.
Figure 2: Trackability differences between Git and Historage repositories.
In the case of the changes in Figure 2(a), the Git rename detection function
can identify that file Person.java was renamed to Engineer.java because the
two files sufficiently share identical lines. On the other hand, in the
Historage repository, files of Java methods get much smaller than their
original file, as shown in Figure 2(b). Thus, the ratio of the changed lines
against all the lines gets higher, which makes the Git function not work well.
Hata et al. noted that changing the threshold for the Git rename function is
a way to realize better method tracking [1]. They recommend using 30% instead
of 60%, which is the default value of Git. However, we consider that only
using a lower threshold may produce incorrect tracking results. For example,
if we use 30% instead of 60%, the Git rename function can identify that
Engineer/getHeight() is a renamed file of Person/getLength(). However, at the
same time, Person/getLength() can be wrongly tracked from
Engineer/setHeight(int) because their similarity is 1/3, which is higher than
30%.
Tracking methods accurately is essential; otherwise, MSR approaches using
historical data are adversely affected. Hora et al. reported that between 10 and 21% of
changes at the method level in 15 large Java systems were untracked in the
context of refactoring detection [2]. They also found that 37% of the top-25%
most changed entities (classes and methods) have at least one untracked change
in their histories. By assessing two MSR approaches, they detected that their
results could be improved when untracked changes were resolved.
In this paper, we propose a new technique named FinerGit to improve the
trackability of Java methods. Several research areas benefit from FinerGit.
FinerGit is useful for studies in the context of assessing bug-introducing
changes [3, 4, 5] or detecting code authorship [6, 7]. More broadly, any study
that compares two versions of methods can benefit: for example, API evolution
detection [8, 9], code warning prioritization [10, 11], and many others.
Figure 3: Tracking files with our technique.
The main contributions of this paper are as follows.
* 1.
We raise an issue on method trackability in Historage.
* 2.
We propose a new technique, FinerGit, to increase method trackability with Git
mechanisms.
* 3.
We provide a software tool based on FinerGit. The tool is open to the public
on GitHub (https://github.com/kusumotolab/FinerGit). The tool is sufficiently
fast even for huge repositories, as shown in the evaluation.
* 4.
We show the experimental results on the tracking results of 182 open source
software (OSS) projects. These experiments have two aspects. First, they
clarify the advantage of FinerGit over an existing technique, Historage.
Second, they are the first attempt at large-scale empirical studies on the
tracking results of method-level repositories.
The remainder of this paper is organized as follows: in Section 2, we explain
our research goal and our key idea to achieve the goal; in Section 3, we
propose our new technique named FinerGit on top of the key idea; Section 4
describes an implementation of FinerGit; then, we report the evaluation
results with the implementation in Section 5; in Section 6, we compare
FinerGit with other techniques; we describe threats to validity on the
experiments in Section 7; related work is introduced in Section 8; lastly, we
conclude this paper in Section 9.
## 2 Basic Approach
At present, there are various techniques of tracking source code entities [12,
13, 14, 15]. Those techniques utilize many types of information such as text
similarities, data dependencies, and call dependencies. On the other hand, in
this research, we utilize only line-based text similarity to track Java
methods. The reason is that our research goal is realizing accurate method
tracking with Git mechanisms.
The biggest benefit of tracking methods with Git mechanisms is that it can
easily connect with any other tools and techniques built on Git
infrastructure. For example, the following analyses can be easily performed by
using the basic commands provided by Git.
* 1.
We can know how many times each method was changed in the past by git-log.
* 2.
We can know how many developers changed a specified method in the past by
collecting author names of the commits in which the method was changed.
Git performs file comparisons by using hash values. If the size of a line is
equal to or shorter than 64 bytes, a hash value is calculated from the entire
line. If the size of a line is longer than 64 bytes, the line is chunked by 64
bytes, and a hash value is calculated from each chunk. Thus, even if just a
single token in a given line (which is shorter than 64 bytes) has been
changed, Git regards the entire line as changed.
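As an illustration of this line-based comparison, the following Java sketch (a
simplification for exposition, not Git's actual hash-chunking implementation)
computes the ratio of unchanged lines between two versions of a file; the
token lists in main are illustrative:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class LineSimilaritySketch {
  // Ratio of lines that survive unchanged between the old and new version,
  // relative to the larger of the two files.
  static double similarity(List<String> oldLines, List<String> newLines) {
    Map<String, Integer> remaining = new HashMap<>();
    for (String line : oldLines) {
      remaining.merge(line, 1, Integer::sum); // count identical old lines
    }
    int shared = 0;
    for (String line : newLines) {
      Integer c = remaining.get(line);
      if (c != null && c > 0) {
        shared++;                              // matched an unchanged line
        remaining.put(line, c - 1);
      }
    }
    return (double) shared / Math.max(oldLines.size(), newLines.size());
  }

  public static void main(String[] args) {
    // One token per line, as in the file format proposed below.
    List<String> before = Arrays.asList("public", "int", "getLength", "return", "length");
    List<String> after = Arrays.asList("public", "int", "getHeight", "return", "height");
    System.out.println(similarity(before, after)); // prints 0.6
  }
}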
Method-level tracking with Git mechanisms can be realized by treating each
method as a single file (a _method file_ hereafter). Based on this idea, Hata
et al. developed a technique named Historage [16]. However, as explained with
Figure 2, simple extraction as files is inadequate for small methods. In this
research, we propose a file format in which each line includes only a single
token. By using this format, each hash is calculated from a single token. In
Figure 2(b), Git regards that the two red lines of methods getLength and
setLength were changed, though only the method name and the field name were
changed in the methods. As a result, the ratio of unchanged lines becomes 1/3,
which is less than 60%, Git's default value, so that the method is not tracked
with Git mechanisms.
We state two restrictions for the techniques to improve method tracking with
Git mechanisms as follows.
* 1.
Since the file tracking mechanism in Git is based on line-based text
similarity, the characteristics of methods to be used in comparison must be
represented as a sequence of text lines. Based on this restriction, complex
comparison techniques of file contents such as tf/idf are not applicable.
* 2.
Since the contents of method files are visible and are utilized by developers,
they should follow a representation of source code in a way understandable by
users. Users may apply the git-diff command to a method file to see how a method
was modified, and the obtained difference should represent the difference of
method contents in this case. Based on this restriction, converting method
contents to a sequence of computed numeric values used only for a comparison
purpose is not suitable.
Figure 3 shows how the changes in Figure 2(b) are treated in FinerGit. The
file changing mechanism in this technique
satisfies the above restrictions. The ratio of unchanged lines becomes 8/10
for getLength and 11/15 for setLength. Both values are higher than 60%, so
that both methods are tracked with Git mechanisms.
## 3 Proposed Technique
Herein, we explain our proposed technique named FinerGit to realize a better
method tracking with Git mechanisms. FinerGit is designed on the top of the
basic approach explained in Section 2. FinerGit consists of (1) naming
convention and (2) two heuristics.
### 3.1 Naming Convention
In FinerGit, a file name for a Java method includes the following information:
* 1.
a class name including the method,
* 2.
access modifiers of the method,
* 3.
a return type of the method,
* 4.
a name of the method, and
* 5.
a list of parameter types of the method.
For example, the file name for method setLength in Figure 2 becomes as
follows.
`Person#public_void_setLength(int).mjava`
Extension .mjava means that this is a method file and the file includes source
code of a Java method. Including the above information in the file name
reflects code changes around a given method as follows.
* 1.
If the name of the class including the given method is changed, the file name
of the given method gets changed, but its contents are not changed.
* 2.
If another method in the class including the given method is changed, neither
file name nor contents of the given method are changed.
* 3.
If the signature of the given method is changed, the file name of the given
method gets changed and its contents are also slightly changed since the
contents include the tokens of the method signature.
* 4.
If the contents of the given method are changed, the file name of the given
method does not get changed while its contents get changed.
We can track methods with Git mechanisms in each of the above cases if it
occurs alone. However, if a signature of a method is changed and its
contents are also changed broadly, it is difficult to track the method.
(a) Historage.
(b) w/o Heuristic-1.
(c) w/ Heuristic-1.
Figure 4: Tracking files w/o and w/ Heuristic-1.
### 3.2 Introducing Heuristics
It is not difficult to imagine that Git tracks wrong methods with FinerGit
because each line has only a single token and such lines will coincidentally
match with many other lines. Thus, we introduce two heuristics to reduce such
coincidental matches of unrelated lines.
Heuristic-1:
Classifying brackets, parentheses, and semicolons of termination characters in
detail.
Heuristic-2:
Removing tokens existing in all methods from the targets of similarity
calculation.
#### 3.2.1 Heuristic-1
Some termination characters such as brackets, parentheses, and semicolons are
omnipresent in Java source code. Such termination characters are used as a
part of various program elements. For example, brackets (“{” and “}”) are used
to initialize arrays in addition to code blocks such as if-statements and for-
statements. Thus, if just a bracket is placed on a line, brackets of different
roles are coincidentally matched with each other. Such accidental matchings
make the similarity between deleted and added methods inappropriately higher.
To prevent such accidental matchings, we classify termination characters in
detail. More concretely, we add a token explanation to each line. Token
explanations prevent characters with different roles from being accidentally
matched. In this heuristic, semicolons, brackets, and parentheses are
classified into 18, 21, and 20 categories, respectively.
Figure 4 shows how Heuristic-1 affects method tracking. Figure 4(a) is a
method file that Historage outputs.
The deleted method includes an if-statement for checking whether variable a is
null or not. The added method includes a while-statement for adding variable b
to variable total repeatedly. Those are different methods, which means a lower
similarity between them is better. In the case of Historage, the last line of
the if-statement coincidentally matches with the last line of the while-
statement so that the similarity between them becomes 1/3 (=33%). In the case
of FinerGit without Heuristic-1, the parentheses and the brackets of the if-
statement coincidentally match with those of the while-statement. Moreover,
the semicolon of the return-statement coincidentally matches with that of
the expression-statement. As a result, the similarity between them becomes
5/12 (=42%). If we introduce Heuristic-1 to this example, the parentheses, the
brackets, and the semicolons get unmatched. Thus, the similarity between them
becomes 0/12 (=0%).
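The following Java sketch illustrates the idea of Heuristic-1 (a minimal
sketch, not FinerGit's actual implementation); the construct labels such as
IFSTATEMENT are hypothetical names, since FinerGit defines its own
fine-grained categories:

class Heuristic1Sketch {
  // A termination character is emitted together with a label encoding the
  // syntactic construct it belongs to, so that, e.g., the closing bracket
  // of an if-statement no longer matches the closing bracket of a
  // while-statement. The label names are hypothetical.
  static String labeledToken(String token, String enclosingConstruct) {
    switch (token) {
      case "{": case "}": case "(": case ")": case ";":
        return token + " " + enclosingConstruct; // e.g. "} IFSTATEMENT"
      default:
        return token;                            // other tokens unchanged
    }
  }
}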
(a) w/o Heuristic-2.
(b) w/ Heuristic-2.
Figure 5: Tracking files w/ and w/o Heuristic-2.
#### 3.2.2 Heuristic-2
The parentheses for parameters and the brackets for method bodies are
omnipresent in compilable Java methods. This fact means that at least these
four tokens always match between any two Java methods. Thus, the similarity between
non-related methods gets inappropriately higher. If methods include many
tokens, the impact of the four tokens is negligible. However, if methods are
small, such as getters and setters, the impact of the four tokens becomes
serious. Consequently, we decided not to put the four tokens into files for
methods. By removing the four tokens, we prevent the similarity of two non-
related methods from getting higher inappropriately.
Figure 5 shows how Heuristic-2 affects tracking. This example shows a
similarity calculation between getLength (before refactoring) and setHeight
(after refactoring) in Figure 2. A lower similarity between the two methods is
better because they are different methods. In the case that we calculate a
similarity without Heuristic-2, the similarity becomes 5/10 (=50%). However,
in the case that we adopt Heuristic-2, the similarity becomes 1/6 (=17%)
because the four tokens are ignored.
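In terms of the sketch above, Heuristic-2 amounts to a filter applied before
lines are written to a method file; the labeled-token names below are again
hypothetical:

class Heuristic2Sketch {
  // The parameter-list parentheses and the method-body brackets appear in
  // every compilable method, so their labeled tokens are not written to
  // the .mjava file at all.
  static boolean isOmnipresent(String labeledToken) {
    return labeledToken.equals("( METHODPARAMS")
        || labeledToken.equals(") METHODPARAMS")
        || labeledToken.equals("{ METHODBODY")
        || labeledToken.equals("} METHODBODY");
  }
}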
## 4 Implementation
We have implemented a tool based on FinerGit. Our tool is open to the public
on GitHub, and anyone can use it freely. Our tool takes a Git repository of a
Java project, and it outputs another Git repository where each Java method
gets extracted as a file. In FinerGit repositories, method files have
extension .mjava. By executing the git-log command with option --follow for
.mjava files, we can get their histories.
The name of a method file includes the information of the signature of the
method and the class name including the method so that the file name
occasionally becomes very long. Very long file names are not compatible with
widely-used operating systems. For example, in the case of Windows 10, the
absolute path of a file must not exceed 260 characters. If a file name
violates the restriction, its file cannot be accessed with Windows’ file
manager and some other problems occur. In the case of Linux and MacOS, a file
name (not a file path) must not exceed 255 characters. For practical use in
such widely-used operating systems, if a file name becomes longer than the
restriction of operating systems, our tool cuts the file name in the middle
and then it appends a hash value that is calculated from the entire file name.
This manipulation can shorten the file name while keeping its identity.
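The following Java sketch illustrates this shortening step; the cut length
(here 200 characters) and the hash algorithm (here SHA-1) are assumptions for
illustration, not necessarily the values used by the actual tool:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class FileNameShortenerSketch {
  static final int LIMIT = 200; // assumed maximum length for this sketch

  // Cut an over-long method file name and append a hash of the entire
  // original name so that distinct long names remain distinct.
  static String shorten(String fileName) throws NoSuchAlgorithmException {
    if (fileName.length() <= LIMIT) {
      return fileName;
    }
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    byte[] digest = md.digest(fileName.getBytes(StandardCharsets.UTF_8));
    StringBuilder hash = new StringBuilder();
    for (byte b : digest) {
      hash.append(String.format("%02x", b)); // hex-encode the digest
    }
    // Keep the head of the name and restore the extension after the hash.
    return fileName.substring(0, LIMIT) + "_" + hash + ".mjava";
  }
}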
There are three types of comments in Java source code: line comments, block
comments, and Javadoc comments. Line and block comments are removed from
.mjava files while Javadoc comments are included in .mjava files as they are
in .java files. This means that a Javadoc comment exists in the header part of
.mjava file if its original method has it.
Our tool also has a function to extract each field in Java source code as a
single file. Files for fields have extension .fjava. A field declaration
includes multiple tokens such as field name, field type, modifiers,
initializations, and annotations. Thus, fields can be tracked as well as
methods by placing a single token on a line. A file name for a Java field
includes the following information:
* 1.
a class including the field,
* 2.
access modifiers of the field,
* 3.
a type of the field, and
* 4.
a name of the field.
For example, the file name for field length in Figure 2 becomes as follows.
`Person#private_int_length.fjava`
Including the above information in the file name reflects code changes around
a given field as follows.
* 1.
If the name of the class including the given field is changed, the file name
of the given field gets changed, but its contents are not changed.
* 2.
If another method or field in the class including the given field is changed,
neither the file name nor the contents of the given field are changed.
* 3.
If the access modifiers, type, or name of the field is changed, the file name
of the given field gets changed and its contents are also changed.
* 4.
If the annotations and/or initializations of the field are changed, the file
name of the given field does not get changed while its contents get changed.
In a Historage repository, a file path of a method includes its signature
information. Historage makes a directory for each Java class. Methods included
in a class are placed in its corresponding directory. On the other hand, our
technique places files of Java methods in the same directory as their original
Java files. A reason why FinerGit does not make new directories for Java
classes is that the conversion time of Historage is long, and making a large
number of directories in the conversion process is a factor in that long
conversion time. Both FinerGit and
Java method is extracted as a single file, but our technique does not make new
directories for Java classes. In both FinerGit and Historage, file name
collisions for extracted files do not occur as long as their source code is
compilable.
## 5 Evaluation
We evaluated FinerGit by comparing it with Historage [1]. We did not use the
published version of the Historage implementation
(https://github.com/niyaton/kenja); instead, we added Historage's
functionality to our tool. By using the same implementation for FinerGit and
Historage, we can avoid different tracking results due to differences in
implementation details. For example, the original Historage makes directories
for each Java class while our Historage implementation outputs files of Java
methods in the same directory as their original files. The file name
convention of our Historage implementation is the same as FinerGit's. In this
way, we can evaluate how much method trackability with Git mechanisms gets
improved by FinerGit.
We selected 182 Java projects on GitHub as our evaluation targets. In the
process of our target selection, we used the Borges dataset [17]. This dataset
includes 2,279 popular projects in GitHub. Firstly, we extracted 202 projects
that are labeled as “Java projects”. Borges et al. classified the projects in
the dataset into six categories: Application software, System software, Web
libraries and frameworks, Non-web libraries and frameworks, Software tools,
and Documentation. Secondly, we extracted 185 projects that are other than
Documentation projects because Documentation projects are repositories with
documentation, tutorials, source code examples, etc. (e.g.,
java-design-patterns, https://github.com/iluwatar/java-design-patterns) and
are outside the scope of this evaluation. Then, we cloned the 185 repositories
to our local storage on March 4th, 2019. Unfortunately, we found that three of
the 185 projects did not include any .java files. The three projects
(google/iosched, afollestad/material-dialogs, and googlesamples/android-
topeka) are Kotlin projects. Finally, we removed the three projects from the
185 projects.
Figure 6: Project size.
Figure 6 shows the distributions of the number of commits and LOC of the
target projects. The two largest repositories in the targets are
platform_frameworks_base
(https://github.com/aosp-mirror/platform_frameworks_base) and
intellij-community (https://github.com/JetBrains/intellij-community). The two
repositories include approximately 380K and 240K commits, and their latest
revisions consist of about 3.7M and 5.0M LOC, respectively.
We generated FinerGit repositories and Historage ones from the 182 target
projects. Herein, FinerGit repositories have the file format of including a
single token per line with the two heuristics while Historage repositories
have the same line format as the original repositories.
We have evaluated FinerGit from the following five viewpoints:
* 1.
tracking accuracy,
* 2.
heuristics impacts,
* 3.
project-level tracking results,
* 4.
method-size-level tracking results, and
* 5.
execution time.
Hereafter in this section, we report the results in detail.
### 5.1 Tracking Accuracy
It is not realistic to manually check whether FinerGit generates correct
tracking results for each method in the target projects. Thus, we made an
oracle for a method for each target project with the following procedure.
1. 1.
A method was randomly selected from each target project. In total, 182 methods
were selected.
2. 2.
Each of the methods in FinerGit repositories was tracked with the following
command.
`> git log --follow -U15 -M20% -C20% -p`
` -- `path/to/method`.mjava`
With the above command, a specified file is tracked even if the file was
renamed. If there is a file that has a 20% or more similarity, Git regards
that file renaming or copying occurred.
3. 3.
The tracking results were examined, and oracles of renaming and copying
history were made by two of the authors independently. Each author spent
several hours on this task. The two authors made different oracles for 34 out
of the 182 methods.
4. 4.
The two authors discussed the 34 methods so that they could obtain a consensus
for them. After a two-hour discussion, they got consensus oracles for the 34
methods.
With the above procedure, we obtained consensus oracles of tracking results
for the 182 methods. Finally, we obtained the resulting oracle set consisting
of 426 renaming/copying changes for the 182 methods in total.
Next, we tracked the methods in FinerGit’s repositories and Historage’s ones
with different thresholds. We used the following command to count how many
times Git found renaming and copying with a specified threshold.
`> git log --follow --oneline -M`t` -C`t` -p`
` -- `path/to/method`.mjava`
` | grep -e "^rename from\|^copy from"`
` | wc -l`
In the above command, t is the threshold above which Git regards two given
files as having a renaming or copying relationship. We tracked the target methods with 13
different thresholds (i.e., 20%, 25%, 30%, $\ldots$, 80%). If tracking results
for a method include a higher number of renaming/copying than its oracle, we
regard renaming/copying in the over-tracking part as false positives. If
tracking results for a method include a lower number of renaming/copying than
its oracle, we regard renaming/copying that are not detected as false
negatives. We calculated precision, recall, and F-measure for each threshold
by summing up the number of false positives and false negatives of all the
methods.
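For reference, the metrics follow the standard definitions over the summed
counts; a minimal Java sketch:

class TrackingMetricsSketch {
  // tp: found renames/copies that are in the oracle,
  // fp: found renames/copies that are not in the oracle,
  // fn: oracle renames/copies that were not found.
  static double precision(int tp, int fp) { return (double) tp / (tp + fp); }
  static double recall(int tp, int fn) { return (double) tp / (tp + fn); }
  static double fMeasure(int tp, int fp, int fn) {
    double p = precision(tp, fp), r = recall(tp, fn);
    return 2 * p * r / (p + r); // harmonic mean of precision and recall
  }
}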
Figure 7: Precision, recall, and F-measure values.
Figure 7 shows how precision, recall, and F-measure change according to given
thresholds. The graphs of Historage and FinerGit have the following features.
* 1.
Precision of Historage is very high. Historage has 93.01% of precision even in
the case of threshold 20%.
* 2.
Recall of Historage is low. Historage has only 57.04% of recall in the case of
threshold 20%.
* 3.
FinerGit has high precision in the case of high thresholds, but precision gets
rapidly decreased for lower thresholds.
* 4.
FinerGit has higher recall than Historage for all the thresholds. The recall
differences between FinerGit and Historage get bigger for lower thresholds.
Historage has a low possibility of tracking wrong methods while it often misses
renaming and copying. On the other hand, in FinerGit repositories, precision
gets decreased for lower thresholds while recall improves much. The highest
F-measure on FinerGit is 84.52% on threshold 50% while the highest F-measure
on Historage is 70.72% and 70.23% on thresholds 20% and 25%, respectively.
### 5.2 Heuristics Impacts
(a) Precision.
(b) Recall.
(c) F-measure.
(d) Rename count.
Figure 8: Precision, recall, F-measure, and rename count when heuristics 1 and
2 are on and/or off.
To reveal how each heuristic impacts method tracking, we measured precision,
recall, and F-measure, and we also counted found renames for the following
four types of fine-grained repositories. The target methods are the same as in
Subsection 5.1. Herein, rename count means the sum of found renames
for all the target methods in a type of repositories.
H1 OFF, H2 OFF:
neither heuristic is applied.
H1 ON, H2 OFF:
only Heuristic-1 is applied.
H1 OFF, H2 ON:
only Heuristic-2 is applied.
H1 ON, H2 ON:
both heuristics are applied. This is the same repository as what we used in
Subsection 5.1.
Figure 8 shows the results. Applying only Heuristic-1 makes it possible to
find more renaming so that precision gets decreased while recall gets
increased. On the other hand, applying only Heuristic-2 slightly shortens
method tracking. As a result, precision gets increased while recall gets
decreased. The reasons why applying Heuristic-1 and Heuristic-2 have opposite
impacts on method tracking are as follows.
* 1.
Applying Heuristic-1 reduces similarities between methods. How much the
similarities are decreased depends on the contents of methods. Thus, a
different method can be tracked at a commit compared to the case that
Heuristic-1 is not applied.
* 2.
Applying Heuristic-2 reduces similarities between all methods. Unlike
Heuristic-1, Heuristic-2 does not make a different method tracked. Thus,
Heuristic-2 just shortens method tracking.
Table 1 shows the maximum F-measure for each type of finer-grained
repositories. In this table, the maximum F-measure is the greatest F-measure
in all data. All types have almost the same maximum values. This table also
shows the maximum recall when we track methods with over 95% precision. These
results show that more method renames are found while keeping 95% precision by
applying both heuristics.
Table 1: Maximum F-measure and Maximum Recall
Repository type | Max F-measure (thr.) | Max Recall (thr.)
---|---|---
H1 OFF, H2 OFF | 82.63% (40%) | 58.45% (55%)
H1 ON, H2 OFF | 83.77% (55%) | 56.81% (65%)
H1 OFF, H2 ON | 83.26% (35%) | 60.09% (50%)
H1 ON, H2 ON | 84.52% (50%) | 68.78% (55%)
### 5.3 Project-Level Tracking Results
(a) Ratio of different tracking results.
(b) Average change counts.
Figure 9: Project-level comparisons. (a) shows the ratio of methods whose
tracking results are different between FinerGit and Historage for each
project. (b) shows the average of change counts for all the methods for each
project.
In this evaluation, we measured the ratio of methods whose tracking results
are different between the two tools for each project. We compare how much the
number of detected renames differs between FinerGit and Historage under the
same precision. As shown in the previous subsection, the two tools have
different precision values for different thresholds. To realize a fair
comparison, we decided to select different thresholds for FinerGit and
Historage that satisfy the following condition: method tracking results with
the thresholds have the same precision values and the precision values are as
high as possible. Thus, we used threshold 55% for FinerGit and 25% for
Historage. The precision of FinerGit on threshold 55% is 95.73%, and that of
Historage on threshold 25% is 96.60%. Those precision values are almost the same and
high enough.
Figure 9 shows the comparison results. In Figure 9(a), the blue boxplot shows
the ratio of methods for which FinerGit found more renames than Historage per
project and the red boxplot shows the opposite one. FinerGit found more
renames for 22.71% of methods on average while the ratio of methods for which
Historage found more renames than FinerGit is only 5.26%. In Figure 9(b), the
blue boxplot shows the
average number of changes identified by FinerGit for all methods of each
project. The red one shows the average number of changes identified by
Historage. The median values of those boxplots are 3.67 and 2.86,
respectively. These results mean that FinerGit can find more renames for all
the methods on average.
Next, we show that the tracking improvement by FinerGit is effective in the
following two ways:
* 1.
considering the fact that some methods were never changed after their initial
creation, and
* 2.
conducting statistical testing for the tracking results.
#### 5.3.1 Considering Never-Changed Methods
In software development, some methods are never changed after their initial
creation. If the 182 target projects include many never-changed methods, it is
quite natural that the comparison results between FinerGit and Historage are
not so different from each other. Thus, we investigate how many never-changed
methods are included in the projects. It is not realistic to manually collect
real never-changed methods. In this experiment, we decided to regard methods
for which neither FinerGit nor Historage was able to detect any changes as
never-changed methods.
Figure 10 shows the relationship between the ratio of never-changed methods
and the ratio of methods for which FinerGit found more renames than Historage.
The 25 percentile, the median, and the 75 percentile of never-changed methods
are 6.88%, 15.27%, and 26.50%, respectively. The figure indicates that the
more never-changed methods there are, the fewer methods FinerGit found more
renames for. Figure 11 shows the same figures as Figure 9(a) only for the
projects that include 50% or more never-changed methods. As shown in Figure
11(a), the differences between FinerGit and Historage are small because the
majority of their methods are never-changed. Figure 11(b) shows the
differences after we removed never-changed methods from the projects. We can
see that the differences between the two tools get much larger. MSR approaches
are naturally applied to methods that have change histories; never-changed
methods are outside their scope.
We also investigated how many methods only FinerGit or only Historage found at
least one change for. The former number is 97,629 and the latter one is 35,553.
They are 5.52% and 2.01% of all methods, respectively. Finding changes for
more methods means that various MSR approaches requiring past changes can be
applied more broadly.
Figure 10: Relationships between the ratio of methods for which FinerGit found
more renames than Historage and the ratio of never-changed methods.
(a) w/ never-changed methods.
(b) w/o never-changed methods.
Figure 11: The ratio of methods whose tracking results are different between
FinerGit and Historage for projects where 50% or more methods are never-
changed ones.
#### 5.3.2 Conducting Statistical Testing
We applied the paired Wilcoxon signed-rank test to the comparison results
between FinerGit and Historage shown in Figure 9. The test showed that the
comparison results include significant differences regarding both aspects of
the ratio ($p$-value $<$ 0.001) and average change counts ($p$-value $<$
0.001). We also applied Cliff’s Delta to the comparison results to see the
effect size. The resulting values were computed as 0.712 for the ratio and
0.221 for the average change counts, which revealed a _large_ and a _small_
effect size of the improvement achieved by using FinerGit, respectively.
Consequently, we can say that FinerGit significantly improves tracking Java
methods compared to Historage.
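As a reference for replication, the following Python sketch shows how such a
comparison could be run with SciPy; the per-project values below are invented
placeholders rather than our measurements, and cliffs_delta is a small helper
defined here, not a library function:

```python
from scipy.stats import wilcoxon

# Average change counts per method for each project (placeholder values).
finergit  = [3.8, 3.1, 4.2, 2.9, 3.6, 4.0, 3.3]
historage = [2.9, 2.5, 3.3, 2.4, 2.8, 3.1, 2.7]

stat, p = wilcoxon(finergit, historage)      # paired signed-rank test
print(f"Wilcoxon: statistic={stat}, p={p:.4f}")

def cliffs_delta(xs, ys):
    """Cliff's delta effect size: P(x > y) - P(x < y) over all pairs."""
    gt = sum(x > y for x in xs for y in ys)
    lt = sum(x < y for x in xs for y in ys)
    return (gt - lt) / (len(xs) * len(ys))

print(f"Cliff's delta = {cliffs_delta(finergit, historage):.3f}")
```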
### 5.4 Method-Size-Level Tracking Results
We also conducted comparisons based on method size. In this comparison, we
grouped methods by their size and compared the tracking results for each
group. Figure 12 shows the comparison results. We can see that there are
1,036K methods whose LOC is in the range between 1 and 5. Herein, the LOC was
computed using the original format, not the single-token-per-line one.
FinerGit generated longer tracking results for 26.21% of the 1,036K methods.
Our research motivation was to improve the trackability of small methods, but
surprisingly FinerGit improved the trackability of methods of any size.
This figure also shows the average rename counts found by FinerGit and
Historage. We can see that FinerGit found more renames than Historage for
methods of any size. Interestingly, both tools tend to find more renames for
larger methods.
Consequently, we conclude that the method tracking capability of FinerGit is
higher than that of Historage.
(a) Ratio of methods for which FinerGit or Historage found more renames than
the other tool.
(b) Average renames that were found by FinerGit or Historage.
Figure 12: Comparison based on method size.
### 5.5 Execution Time
Figure 13: Execution time of FinerGit.
We measured the time FinerGit took to reconstruct the repositories of the
target projects on a MacBook Pro (CPU: 2.7GHz quad-core Intel Core i7, memory:
16 GBytes). Figure 13 shows the measurement results. This figure shows that
FinerGit is scalable enough for large repositories. In the longest case,
FinerGit took 4,209 seconds to reconstruct the repository of intellij-
community, which includes more than 240K commits. Of course, this execution
time can be shorter if a higher-specification computer is used (we also
measured the execution time on a workstation whose CPU is a 3.6GHz eight-core
Intel Core i9 with 32 GBytes of memory; the execution time was approximately
22% of the MacBook Pro's).
Figure 13 includes the regression line for all the data, which shows that
FinerGit takes around 100 seconds to process every 10K commits in large
repositories.
## 6 Comparisons with Other Techniques
We also compared FinerGit with two other techniques, AURA and RefactoringMiner
(RMiner). The first comparison target is AURA, a technique that takes two
versions of Java source code and generates mappings of methods between them
[15]. AURA performs call-dependency and text-similarity analyses to generate
mappings. The second comparison target is RMiner, a technique that detects
refactorings in a commit history [18]. RMiner's refactoring detection is based
on an AST-based statement-matching algorithm. RMiner defines different rules
for different refactoring patterns and checks whether the matching results of
the two ASTs before and after the changes in a given commit follow any of the
rules.
We conducted this comparison on the development history of JHotDraw between
releases 5.2 and 5.3. This development history is one of the evaluation
targets in AURA’s literature [15]. Releases 5.2 and 5.3 include 1,519 and
1,981 methods, respectively. There are 19 commits between releases 5.2 and
5.3.
### 6.1 AURA
Table 2: Refactorings detected by RMiner

Refactoring pattern | # of detected instances
---|---
Change Parameter Type | 56
Change Return Type | 10
Move Method | 3
Rename Method | 44
Rename Parameter | 45
Total | 158
We generated FinerGit's repository and tracked the 1,981 methods with a 20%
threshold using the command shown in Subsection 5.1. The tracking results of
185 methods included renamings, and the total number of renamings was 241. Two
of the authors independently examined the tracking results to make oracles,
each spending several hours on this task. The two authors made different
oracles for 18 of the 185 methods; after a one-hour discussion, they reached
consensus oracles
for those 18 methods. Our consensus oracle includes 176 renamings on 128
methods.
Next, we tracked the 1,981 methods with a 50% threshold, which is the best
F-measure threshold in the evaluation in Subsection 5.1. As a result, we
obtained 161 renamings on 124 methods. By comparing the tracking results of
the 50% threshold with the consensus oracle, we calculated two kinds of
precision and recall: one based on renaming instances, and the other based on
methods whose tracking results included at least one renaming in the consensus
oracle.
* 1.
From the viewpoint of renaming instances, precision and recall were 91.30% and
83.52%, respectively.
* 2.
From the viewpoint of methods including renames, precision and recall were
86.29% and 83.59%, respectively.
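The two kinds of metrics can be computed mechanically once renames are
represented as comparable records. The following is a minimal Python sketch,
not our evaluation script; the sample records are invented for illustration
(real keys would be method identifiers and commit IDs):

```python
# Instance-level and method-level precision/recall against a manual oracle.
# Each record is (method_id, rename_location); all values are placeholders.
detected = {("m1", "c03"), ("m2", "c17"), ("m3", "c19")}   # tool output
oracle   = {("m1", "c03"), ("m2", "c17"), ("m4", "c21")}   # consensus oracle

tp = detected & oracle                       # true-positive rename instances
print(f"instance-level: precision={len(tp)/len(detected):.2%}, "
      f"recall={len(tp)/len(oracle):.2%}")

# Method-level: a method is a hit if at least one of its renames matched.
hit_methods = {m for m, _ in tp}
det_methods = {m for m, _ in detected}
ora_methods = {m for m, _ in oracle}
print(f"method-level: precision={len(hit_methods)/len(det_methods):.2%}, "
      f"recall={len(hit_methods)/len(ora_methods):.2%}")
```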
According to AURA's literature [15], AURA generated mappings for 97 rules (a
rule is a mapping group of multiple methods), and its precision was 92.38%.
Comparing those results, we conclude that FinerGit generated mappings for more
methods with slightly lower precision.
AURA utilizes text similarity and call dependency to generate mappings, while
FinerGit utilizes only text similarity. On the other hand, AURA takes only two
versions of source code to generate mappings, while FinerGit utilizes all
commits to track methods. These differences explain why the precision values
of the two tools are not so different.
### 6.2 RefactoringMiner
We ran RMiner (available at https://github.com/tsantalis/RefactoringMiner; we
used the latest version of the tool as of 17 November 2019, commit ID
4bb0e11550b781b61ce1c382a58ea182a2f46944) on the commit history of JHotDraw
between releases 5.2 and 5.3. RMiner can detect 38 types of refactoring
patterns, and the following five patterns correspond to renamings that
FinerGit detects: Change Parameter Type, Change Return Type, Move Method,
Rename Method, and Rename Parameter. RMiner detected 158 refactoring instances
of the five patterns. The detailed numbers of refactorings detected by RMiner
are shown in Table 2. We compared the 158 refactorings with the 161 renamings
detected by FinerGit with the 50% threshold. The number of common instances
was 65, which is 41.14% of RMiner's refactorings and 40.37% of FinerGit's
renamings.
Table 3: Precision and recall of RMiner reported in literature [18]

Refactoring pattern | Precision | Recall
---|---|---
Move Method | 95.17% | 76.36%
Rename Method | 97.78% | 83.28%
The FinerGit evaluation in Subsection 6.1 shows that FinerGit's tracking
accuracy on JHotDraw is high (precision and recall are 91.30% and 83.52%,
respectively, at the 50% threshold). Table 3 shows the precision and recall of
RMiner for each refactoring pattern reported in literature [18] (Change
Parameter Type, Change Return Type, and Rename Parameter were not investigated
there because those refactoring patterns have only recently been supported by
RMiner). According to this table, the precision and recall of RMiner are also
high. However, the common instances between FinerGit and RMiner do not occupy
a large portion of all instances detected by either of the techniques. We
manually investigated renames and refactorings that had been detected by only
one of the techniques and found that the results faithfully reflect the
different natures of the two tools.
There were two major cases of renames that were detected only by FinerGit.
* 1.
New parameters were added to methods, or return types of methods were changed,
according to changes in the methods' bodies. Those changes were not
refactorings but functional enhancements.
* 2.
Access modifiers (public, protected, and private) were added/removed/changed.
Such changes were refactorings; however, they are not supported by RMiner.
On the other hand, refactorings that were detected only by RMiner had changed
a large part of the methods' bodies. Thus, the line similarities of the method
bodies across such refactorings become low, which led FinerGit to fail to
detect them as renamings.
Herein, we compared FinerGit with RMiner; however, their purposes differ.
FinerGit's purpose is tracking Java methods with high accuracy: no matter what
kinds of changes are made, FinerGit is able to track a method if the line
similarity of the method's body before and after a change is higher than a
given threshold. The purpose of RMiner, on the other hand, is detecting
refactorings in a commit history: no matter how dissimilar the method bodies
before and after a refactoring are, RMiner is able to detect the refactoring
as long as the refactoring pattern is supported by RMiner.
## 7 Threats to Validity
In the experiment, we used 182 Java projects and investigated the tracking
results of 1,768K methods in total. These numbers of projects and methods are
large enough that we expect the same results would be obtained in another
experiment on different Java projects.
To measure the precision, recall, and F-measure of method tracking by FinerGit
and Historage, we manually constructed an oracle for 182 methods. First, two
of the authors made oracles for all 182 methods independently, and then they
discussed the methods for which they had made different oracles. This process
is designed to avoid mistakes and to reduce subjectivity in constructing the
oracle as much as possible.
One more point about the oracle is that, ideally, it should be constructed
independently of the tracking results of FinerGit and Historage. However,
constructing an oracle fully manually is extraordinarily difficult even for a
small number of methods. Consequently, in the experiment, we first obtained
high-recall tracking results with a sufficiently low threshold and then
checked how many false positives were included in those results. We consider
that this construction process does not ensure a 100%-correct oracle, but it
yields one of high enough quality for comparing different techniques. In other
words, we made an oracle of reasonable quality at a realistic time cost.
In the manual investigation, we checked the surrounding 15 lines (as stated in
Subsection 5.1) of changes in commits to judge whether method tracking by
FinerGit was correct or not. The number 15 came from our experience with
FinerGit, as we had examined its tracking results before conducting the
experiment in this paper.
In the experiment, we discussed the comparison results by focusing on whether
FinerGit found more renamings and copyings of Java methods than Historage.
However, we also need to consider that there were some cases in which a short
tracking result by FinerGit was better than a long tracking result by
Historage. Such cases mean that FinerGit was able to avoid tracking methods
incorrectly. We investigated some of these cases and found that the reason
Historage found a higher number of renames is the existence of coincidentally
matched lines, as shown in Figure 4.
## 8 Related Work
The research that is most related to this paper is of course Historage [1].
Historage is useful in research on mining software repositories because
researchers can obtain Java method histories without implementing code or
scripts by themselves. Historage has been used in much research to date.
* 1.
Hata et al. researched predicting fault-prone Java methods by using method
histories obtained with Historage [19]. Their experimental results showed that
the method-level prediction outperformed package-level and file-level
predictions from the viewpoint of efforts for finding bugs.
* 2.
Hata et al. also used Historage to infer restructuring operations on the
logical structure of Java source code [16].
* 3.
Fujiwara et al. developed a hosting service of Historage repositories,
Kataribe (http://sdlab.naist.jp/kataribe/) [20]. Kataribe enables
researchers/practitioners to browse method histories on the web, and they can
clone Historage repositories from Kataribe into their local storage if they
want to conduct further analyses.
* 4.
Tantithamthavorn et al. investigated the impact of granularity levels (class-
level and function-level) on a feature location technique [21]. The results
indicated that function-level feature location outperforms class-level feature
location. Moreover, function-level feature location also required seven times
less effort than class-level feature location to localize the first relevant
source code entity.
* 5.
Kashiwabara et al. proposed a technique to recommend appropriate verbs for a
method name of a given method so that developers can use various verbs
consistently [22]. Their technique recommends candidate verbs by using
association rules extracted from existing methods. They extracted renamed
methods from repositories of target projects using Historage.
* 6.
Oliveira et al. presented an approach to analyze the conceptual cohesion of
the source code associated with co-changed clusters of fine-grained entities
[23]. They obtained change histories of Java methods with Historage. By using
the change histories, they identified a set of methods that were frequently
changed together.
* 7.
Yamamori et al. proposed to use two types of logical couplings of Java methods
for recommending code changes [24]. The first type is logical couplings that
are extracted from code repositories. They used Historage and Kataribe to
obtain logical couplings of Java methods. The second type is logical couplings
that are extracted from interaction data. They used a dataset that had been
collected by Mylyn [25]. Their experimental results showed that there was a
significant improvement in the efficiency of the change recommendation
process.
* 8.
Yuzuki et al. conducted an empirical study to investigate how often change
conflicts happen in large projects and how they are resolved [26]. In their
empirical study, they used Historage to conduct method-level analysis. As a
result, they found that 44% of conflicts were caused by concurrent changes to
the same positions of methods, 48% by deleting methods, and 8% by renaming
methods. They also found that 99% of the conflicts were resolved by adopting
one method directly.
* 9.
Suzuki et al. investigated relationships between method names and their
implementation features [27]. They showed that focusing on the gap between
method names and their implementation features is useful to predict fault-
prone methods. They used Historage to collect change histories of Java methods
in the investigation.
All the above research can be conducted with FinerGit instead of Historage.
Moreover, the experimental results may change if FinerGit is used because
there is a significant difference in the tracking results between FinerGit and
Historage.
We are not the first research group to use a single-token-per-line format for
Git repositories. To the best of our knowledge, the study by German et al. was
the first attempt to follow this approach [28]. They proposed rearranging
source files with a single token per line to enable fine-grained git-blame:
with their technique, we can see who last changed each token of the source
code. They showed that blame-by-token reports the correct commit that added a
given source code token between 94.5% and 99.2% of the time, while the
traditional approach of blame-by-line reports the correct commit between 74.8%
and 90.9% of the time. German developed a system, cregit
(https://github.com/cregit/), based on the proposed technique; cregit has been
used in the Linux development community (https://cregit.linuxsources.org/).
cregit does not extract Java methods as files, which is a difference between
cregit and FinerGit.
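To make the single-token-per-line idea concrete, here is a toy Python sketch.
It is not the implementation of FinerGit or cregit, and the regex tokenizer is
a deliberate simplification that ignores comments and some Java lexical
details:

```python
import re

# Toy Java tokenizer: string literals, words (identifiers/keywords/numbers),
# and single punctuation characters. Real tools use a proper Java lexer.
JAVA_TOKEN = re.compile(r'"(?:\\.|[^"\\])*"|\w+|[^\s\w]')

def one_token_per_line(java_source: str) -> str:
    """Rearrange source so that each line holds exactly one token."""
    return "\n".join(JAVA_TOKEN.findall(java_source))

print(one_token_per_line("public int add(int a, int b) { return a + b; }"))
# -> public / int / add / ( / int / a / ... each token on its own line
```

With this representation, line-oriented Git machinery such as git-blame and
git-log effectively operates at token granularity.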
Heuristic-1, which is described in Subsection 3.2, refines symbols in source
code. On the one hand, symbol refinements are often performed in code clone
detection techniques: in the context of clone detection, some symbols are
replaced with special ones prior to the matching process. For example, in
CCFinder [29] and NICAD [30], which are representative code clone detection
techniques, all variables and literals are replaced with a specific wildcard
symbol. The purpose of the replacements is to detect as much syntactically-
similar code as possible as code clones, because the matching process then
ignores differences in variables or literals. On the other hand, in the
context of FinerGit, we do not want to ignore differences in variables or
literals: if we ignored such differences, the similarity between unrelated
methods could rise accidentally, leading FinerGit to track methods
incorrectly. The purpose of our Heuristic-1 is to yield lower similarity
values between unrelated methods.
There are many research studies on program element matching other than
Historage [13]. Lozano et al. and Saha et al. implemented method tracking
techniques because they needed to track method-level clones in their
experiments [31, 32]. Their method-level tracking techniques compare lines as
text and compute numerical similarity values; thus, when only a small part of
a line is changed, the computed similarity between the before-change line and
the after-change line remains high, whereas a simple line-based comparison
such as diff regards the before-change line as completely different from the
after-change line. However, their comparisons are still line-based, which
entails some flaws compared to token-based ones.
* 1.
In cases where the first token of a line is moved to the previous line or the
last token of a line is moved to the next line (e.g., a left bracket (“{”) is
moved to the next line due to a format change), line-based techniques regard
multiple lines as changed, while our technique regards no lines as changed.
* 2.
The same change has a different impact on lines of different length. For
example, if variable abc is changed to def in a 10-character line, the
similarity becomes 7/10, while if the same change occurs in a 40-character
line, the similarity becomes 37/40 (see the sketch below).
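The second flaw is easy to check numerically. A small Python sketch follows;
the two example lines are invented, but the arithmetic reproduces the 7/10 and
37/40 figures above:

```python
def unchanged_ratio(before: str, after: str) -> float:
    """Fraction of character positions left unchanged by an edit."""
    same = sum(1 for a, b in zip(before, after) if a == b)
    return same / max(len(before), len(after))

# The same 3-character edit (abc -> def) on lines of different length:
print(unchanged_ratio("int abc=1;", "int def=1;"))                 # 0.7   (7/10)
print(unchanged_ratio("someObject.method(abc, anotherArgument);",
                      "someObject.method(def, anotherArgument);")) # 0.925 (37/40)
```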
Godfrey and Zou detected the merging and splitting of source code entities
such as files and functions. They extended origin analysis [33] to track
source code entities, utilizing various information about entities such as
entity names, caller/callee relationships, and code metric values. Wu et al.
proposed a technique to identify change rules for one-replaced-by-many and
many-replaced-by-one methods [15]. Their approach is a hybrid one, using two
kinds of data: caller/callee relationships and text similarity. Kim et al.
proposed a technique to track functions even if their names change [14]. Their
technique computes similarities between two given functions; they introduced
eight similarity factors, such as complexity metrics and clone existence, to
determine whether a function is renamed from another function. Dig et al.
proposed a technique to detect refactorings performed during component
evolution [12]. Their technique can track methods even if refactorings change
their names; the detection algorithm uses a combination of a fast syntactic
analysis to detect refactoring candidates and a more expensive semantic
analysis to refine the results. There are many other approaches for
identifying refactorings, and many of them support refactorings that change
method names/signatures, such as the Rename Method and Parameterize Method
patterns [34, 35, 36, 37, 18, 38, 39]. The advantage of the proposed technique
over the above approaches is its ease of use, because it utilizes Git
mechanisms to track methods: a researcher/practitioner who wants method
evolution data does not have to learn how to use new tools.
## 9 Conclusion
In this paper, we first discussed Historage, which was proposed in literature
[1]. Historage is a tool that converts a Git repository into a finer-grained
one in which each Java method exists as a single file. Thus, we can track Java
methods with Git commands such as git-log. However, tracking small methods
with Git mechanisms does not work well, because small files interact poorly
with the Git rename detection function. Thus, we proposed a new technique that
puts only a single token of a Java method per line. We also derived two
heuristics to reduce incorrect tracking.
We implemented a software tool based on the proposed technique. We applied our
tool and Historage to 182 repositories of Java OSS projects to compare the two
tools. The 182 repositories include 1,768K methods in total, which are the
targets of our comparisons. We found that FinerGit scored 84.52% as maximum
F-measure while Historage scored 70.23%. We also confirmed that the proposed
technique works well for methods of any size, even though our research
motivation was to realize better tracking for small methods. Furthermore, we
showed that our tool takes only a short time to construct finer-grained
repositories, even for large ones.
In the future, we are going to replicate some experiments of existing research
with FinerGit to check whether the better tracking of our tool changes the
experimental results.
## Acknowledgements
This work was supported by JSPS KAKENHI Grant Numbers JP17H01725 and
JP18K11238.
## References
* [1] H. Hata, O. Mizuno, T. Kikuno, Historage: Fine-grained version control system for Java, in: Proceedings of the 12th International Workshop on Principles of Software Evolution and the 7th Annual ERCIM Workshop on Software Evolution, 2011, pp. 96–100.
* [2] A. Hora, D. Silva, M. Tulio, R. Robbes, Assessing the threat of untracked changes in software evolution, in: Proceedings of the 40th International Conference on Software Engineering, 2018, pp. 1102–1113.
* [3] S. Kim, T. Zimmermann, K. Pan, E. J. J. Whitehead, Automatic identification of bug-introducing changes, in: Proceedings of the 21st IEEE/ACM International Conference on Automated Software Engineering, 2006, pp. 81–90.
* [4] S. Kim, E. J. Whitehead, Jr., Y. Zhang, Classifying software changes: Clean or buggy?, IEEE Transactions on Software Engineering 34 (2) (2008) 181–196.
* [5] J. Śliwerski, T. Zimmermann, A. Zeller, When do changes induce fixes?, in: Proceedings of the 2nd International Workshop on Mining Software Repositories, 2005, pp. 1–5.
* [6] X. Meng, B. P. Miller, W. R. Williams, A. R. Bernat, Mining software repositories for accurate authorship, in: Proceedings of the 29th IEEE International Conference on Software Maintenance, 2013, pp. 250–259.
* [7] F. Rahman, P. Devanbu, Ownership, experience and defects: A fine-grained study of authorship, in: Proceedings of the 33rd International Conference on Software Engineering, 2011, pp. 491–500.
* [8] M. Kim, D. Cai, S. Kim, An empirical investigation into the role of API-level refactorings during software evolution, in: Proceedings of the 33rd International Conference on Software Engineering, 2011, pp. 151–160.
* [9] G. Soares, R. Gheyi, D. Serey, T. Massoni, Making program refactoring safer, IEEE Software 27 (4) (2010) 52–57.
* [10] V. Balachandran, Reducing human effort and improving quality in peer code reviews using automatic static analysis and reviewer recommendation, in: Proceedings of the 35th International Conference on Software Engineering, 2013, pp. 931–940.
* [11] S. Kim, M. D. Ernst, Which warnings should I fix first?, in: Proceedings of the the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, 2007, pp. 45–54.
* [12] D. Dig, C. Comertoglu, D. Marinov, R. Johnson, Automated detection of refactorings in evolving components, in: Proceedings of the 20th European Conference on Object-Oriented Programming, 2006, pp. 404–428.
* [13] M. W. Godfrey, L. Zou, Using origin analysis to detect merging and splitting of source code entities, IEEE Transctions on Software Engineering 31 (2) (2005) 166–181.
* [14] S. Kim, K. Pan, E. J. Whitehead, Jr., When functions change their names: Automatic detection of origin relationships, in: Proceedings of the 12th Working Conference on Reverse Engineering, 2005, pp. 143–152.
* [15] W. Wu, Y.-G. Guéhéneuc, G. Antoniol, M. Kim, AURA: A hybrid approach to identify framework evolution, in: Proceedings of the 32nd International Conference on Software Engineering, 2010, pp. 325–334.
* [16] H. Hata, O. Mizuno, T. Kikuno, Inferring restructuring operations on logical structure of java source code, in: Proceedings of 3rd International Workshop on Empirical Software Engineering in Practice, 2011, pp. 17–22.
* [17] H. Borges, A. Hora, M. T. Valente, Understanding the factors that impact the popularity of GitHub repositories, in: Proceedings of the 32nd International Conference on Software Maintenance and Evolution, 2016, pp. 1–11.
* [18] N. Tsantalis, M. Mansouri, L. M. Eshkevari, D. Mazinanian, D. Dig, Accurate and efficient refactoring detection in commit history, in: Proceedings of the 40th International Conference on Software Engineering, 2018, pp. 483–494.
* [19] H. Hata, O. Mizuno, T. Kikuno, Bug prediction based on fine-grained module histories, in: Proceedings of the 34th International Conference on Software Engineering, 2012, pp. 200–210.
* [20] K. Fujiwara, H. Hata, E. Makihara, Y. Fujihara, N. Nakayama, H. Iida, K. Matsumoto, Kataribe: A hosting service of Historage repositories, in: Proceedings of the 11th Working Conference on Mining Software Repositories, 2014, pp. 380–383.
* [21] C. Tantithamthavorn, A. Ihara, H. Hata, K. Matsumoto, Impact analysis of granularity levels on feature location technique, in: Proceedings of 1st Asia Pacific Requirements Engineering Symposium, 2014, pp. 135–149.
* [22] Y. Kashiwabara, T. Ishio, H. Hata, K. Inoue, Method verb recommendation using association rule mining in a set of existing projects, IEICE Transactions on Information and Systems E98-D (3) (2015) 627–636.
* [23] M. C. D. Oliveira, R. B. D. Almeida, G. N. Ramos, M. Ribeiro, On the conceptual cohesion of co-change clusters, in: Proceedings of the 29th Brazilian Symposium on Software Engineering, 2015, pp. 120–129.
* [24] A. Yamamori, A. M. Hagward, T. Kobayashi, Can developers’ interaction data improve change recommendation?, in: Proceedings of 41st Annual Computer Software and Applications Conference, 2017, pp. 128–137.
* [25] M. Kersten, G. C. Murphy, Mylar: A degree-of-interest model for IDEs, in: Proceedings of the 4th International Conference on Aspect-oriented Software Development, 2005, pp. 159–168.
* [26] R. Yuzuki, H. Hata, K. Matsumoto, How we resolve conflict: An empirical study of method-level conflict resolution, in: Proceedings of 1st International Workshop on Software Analytics, 2015, pp. 21–24.
* [27] S. Suzuki, H. Aman, M. Kawahara, Empirical study of fault-prone method’s name and implementation: Analysis on three prefixes—get, set and be, in: Proceedings of 2nd International Conference on Big Data, Cloud Computing, Data Science & Engineering, 2017, pp. 266–271.
* [28] D. M. German, B. Adams, K. Stewart, cregit: Token-level blame information in git version control repositories, Empirical Software Engineering 24 (4) (2019) 2725–2763.
* [29] T. Kamiya, S. Kusumoto, K. Inoue, CCFinder: A multilinguistic token-based code clone detection system for large scale source code, IEEE Transactions on Software Engineering 28 (7) (2002) 654–670.
* [30] C. K. Roy, J. R. Cordy, NICAD: Accurate detection of near-miss intentional clones using flexible pretty-printing and code normalization, in: Proceedings of the 16th IEEE International Conference on Program Comprehension, 2008, pp. 172–181.
* [31] A. Lozano, M. Wermelinger, Assessing the effect of clones on changeability, in: Proceedings of the 24th IEEE International Conference on Software Maintenance, 2008, pp. 227–236.
* [32] R. K. Saha, C. K. Roy, K. A. Schneider, An automatic framework for extracting and classifying near-miss clone genealogies, in: Proceedings of the 27th IEEE International Conference on Software Maintenance, 2011, pp. 293–302.
* [33] Q. Tu, M. Godfrey, An integrated approach for studying software architectural evolution, in: Proceedings of 10th International Workshop on Program Comprehension, 2002, pp. 127–136.
* [34] M. Kim, M. Gee, A. Loh, N. Rachatasumrit, Ref-Finder: A refactoring reconstruction tool based on logic query templates, in: Proceedings of the 18th International Symposium on Foundations of Software Engineering, 2010, pp. 371–372.
* [35] N. A. Milea, L. Jiang, S.-C. Khoo, Vector abstraction and concretization for scalable detection of refactorings, in: Proceedings of the 22nd International Symposium on Foundations of Software Engineering, 2014, pp. 86–97.
* [36] K. Prete, N. Rachatasumrit, N. Sudan, M. Kim, Template-based reconstruction of complex refactorings, in: Proceedings of the 26th International Conference on Software Maintenance, 2010, pp. 1–10.
* [37] D. Silva, M. T. Valente, RefDiff: Detecting refactorings in version histories, in: Proceedings of the 14th International Conference on Mining Software Repositories, 2017, pp. 269–279.
* [38] P. Weissgerber, S. Diehl, Identifying refactorings from source-code changes, in: Proceedings of the 21st International Conference on Automated Software Engineering, 2006, pp. 231–240.
* [39] Z. Xing, E. Stroulia, UMLDiff: An algorithm for object-oriented design differencing, in: Proceedings of the 20th International Conference on Automated Software Engineering, 2005, pp. 54–65.
|
2024-09-04T02:54:59.212489 | 2020-03-11T14:58:05 | 2003.05342 | {
"authors": "Andronikos Paliathanasis",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26167",
"submitter": "Andronikos Paliathanasis",
"url": "https://arxiv.org/abs/2003.05342"
} | arxiv-papers | # Dynamics of Chiral Cosmology
Andronikos Paliathanasis<EMAIL_ADDRESS>Institute of Systems Science,
Durban University of Technology, Durban 4000, Republic of South Africa
Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile,
Valdivia 5090000, Chile
###### Abstract
We perform a detailed analysis of the dynamics of Chiral cosmology in a
spatially flat Friedmann-Lemaître-Robertson-Walker universe with a mixed
potential term. The stationary points are categorized into four families.
Previous results in the literature are recovered, while new phases in the
cosmological evolution are found. From our analysis we find nine different
cosmological solutions; eight describe scaling solutions, one of which is that
of a pressureless fluid, while only one de Sitter solution is recovered.
Cosmology; Scalar field; Chiral Cosmology; Stability; Dark energy; Dynamics
###### pacs:
98.80.-k, 95.35.+d, 95.36.+x
## I Introduction
A detailed analysis of the recent cosmological observations dataacc1 ;
dataacc2 ; data1 ; data2 ; Hinshaw:2012aka ; Ade:2015xua indicates that the
universe has gone through two acceleration phases during its evolution: a
late-time acceleration phase, attributed to dark energy, and an early
acceleration phase known as inflation. Inflation was proposed four decades ago
guth in order to explain why the universe appears isotropic and homogeneous on
large scales. The inflationary era is described by a scalar field, known as
the inflaton, which, when it dominates, drives the dynamics of the universe
such that the observations are explained.
In addition, scalar fields have been used to describe the recent acceleration
epoch of the universe, that is, they have been applied as a source of dark
energy Ratra . In scalar field theory the gravitational field equations remain
of second order, with as many extra degrees of freedom as there are scalar
fields, together with the corresponding conservation equations hor1 ; hor2 ;
hor3 . These extra degrees of freedom can account for the geometrodynamical
degrees of freedom provided by the invariants which modify the Einstein-
Hilbert action in the context of modified/alternative theories of gravity
mod1 ; mod2 ; mod3 .
The simplest scalar field theory proposed in the literature is the
quintessence model Ratra . Quintessence is described by a minimally coupled
scalar field $\phi\left(x^{\kappa}\right)$ with a potential function
$V\left(\phi\right)$. The scalar field satisfies the weak energy condition,
i.e. $\rho\geq 0,~{}\rho+p\geq 0,$ while the equation of state parameter
$w_{Q}=\frac{p}{\rho}$ is bounded as $\left|w_{Q}\right|\leq 1$.
For some power-law quintessence models, the gravitational field equations
admit finite-time singularities during inflation, leading to chaotic dynamics
sing ; page . On the other hand, for some kinds of potentials quintessence can
describe the late-time acceleration udm .
In the cosmological scenario of a Friedmann-Lemaître-Robertson-Walker (FLRW)
universe, exact and analytic solutions of the field equations for different
potentials are presented in jdbnew ; muslinov ; ellis ; barrow1 ; newref2 ;
ref001 ; ref002 and references therein. Results of similar analyses of the
dynamics of quintessence models are summarized in the recent review gen01 .
Other scalar field models which have been proposed in the literature are:
phantom fields, Galileon, scalar tensor, multi-scalar field models and others
ph1 ; ph2 ; ph3 ; ph4 ; ph5 ; ph7 ; ph8 ; ph9 ; ph10 ; ph11 . Multi-scalar
field models have been used to provide alternative models for the description
of inflation hy1 ; hy2 ; hy3 , such as hybrid inflation, double inflation,
$\alpha$-attractors hy4 ; atr1 ; atr3 and as alternative dark energy models.
Two multi-scalar field models which have drawn the attention of cosmologists
are the quintom model and the Chiral model. A common feature of these two theories
is that they are described by two-scalar fields, namely
$\phi\left(x^{\kappa}\right)$ and $\psi\left(x^{\kappa}\right)$. For the
quintom model, one of the two fields is quintessence while the second scalar
field is phantom which means that the energy density of the field can be
negative. One of the main characteristics of quintom cosmology is that the
parameter for the equation of state for the effective cosmological fluid can
cross the phantom divide line more than once qq1 ; qq2 . The general dynamics
of quintom cosmology is presented in qq3 .
In Chiral theory, the two scalar fields have a mixed kinetic term; the two
fields are defined on a two-dimensional space of constant nonvanishing
curvature atr6 ; atr7 . The model is inspired by the non-linear sigma
cosmological model sigm0 . Chiral cosmology is linked with the
$\alpha-$attractor models atr3 . Exact solutions, and the dynamics for
specific cases, of Chiral cosmology were studied before in andimakis , while
analytic solutions in Chiral cosmology are presented in 2sfand . In the latter
reference, it was found that the model provides a pressureless fluid;
consequently, the model can also be seen as an alternative for the description
of the dark sector of the universe. Last but not least, scaling attractors in
Chiral theory were studied in andimakis ; per1 .
In this work we are interested in the evolution of the dynamics of the
gravitational field equations of Chiral cosmology in a spatially flat FLRW
background space. We consider a general scenario where an interaction term for
the two scalar fields exists in the potential term $V\left(\phi,\psi\right)$
of the two fields, that is, $V_{,\phi\psi}\neq 0$. Specifically, we determine
the stationary points of the cosmological equations and study their stability.
Each stationary point describes a solution in the cosmological evolution. Such
an analysis is important in order to understand the general behaviour of the
model and to infer its viability. This approach has been applied in various
gravitational theories with important results for the viability of specific
theories of gravity; see for instance dyn1 ; dyn2 ; dyn3 ; dyn4 ; dyn5 ; dyn6
; dyn7 ; dyn8 ; dyn9 and references therein. From such an analysis we can
determine which eras of cosmological history are provided by the specific
theory; we refer the reader to the discussion in dyn1 . The plan of the paper
is as follows.
In Section II we present the model of our consideration, that of Chiral
cosmology in a spatially flat FLRW spacetime with a mixed potential term. We
write the field equations, which are of second order. By using energy density
and pressure variables we observe that the interaction of the two fields
enters through the pressure term. In Section III, we rewrite the field
equations using dimensionless variables in the $H-$normalization. We find an
algebraic-differential dynamical system consisting of one algebraic constraint
and six first-order ordinary differential equations. We consider a specific
form for the potential in order to reduce the dynamical system by one
dimension; with the use of the constraint equation we end up with a
four-dimensional system.
The main results of this work are presented in Section IV. We find the
stationary points of the field equations, which form four different families.
The stationary points of family A are those of quintessence; in family B only
the kinetic part of the second scalar field contributes to the cosmological
solutions. On the other hand, the points of family C are those where only the
potential part of the second field contributes. Furthermore, for the
cosmological solutions at the points of family D all the components of the
second field contribute to the cosmological fluid. For all the stationary
points we determine the physical properties which describe the corresponding
exact solutions, as well as the stability conditions. An application of this
analysis is presented in Section V with some numerical results. Moreover, for
completeness of our study we present an analytic solution of the field
equations by using previous results of the literature, from which we can
verify the main results of this work. In Section VII we discuss the additional
stationary points when a matter source is included in the cosmological model.
Finally, in Section VIII we draw our conclusions.
## II Chiral cosmology
We consider the gravitational Action Integral to be 2sfand
$S=\int\sqrt{-g}dx^{4}R-\int\sqrt{-g}dx^{4}\left(\frac{1}{2}g^{\mu\nu}H_{AB}\left(\Phi^{C}\right)\nabla_{\mu}\Phi^{A}\nabla_{\nu}\Phi^{B}+V\left(\Phi^{C}\right)\right)$
(1)
where
$\Phi^{A}=\left(\phi\left(x^{\mu}\right),\psi\left(x^{\mu}\right)\right)$,
$H_{AB}\left(\Phi^{C}\right)$ is a second rank tensor which defines the
kinetic energy of the scalar fields, while $V\left(\Phi^{C}\right)$ is the
potential.
The Action Integral (1) describes an interacting two-scalar field cosmological
model where the interaction follows from the potential
$V\left(\Phi^{C}\right)=V\left(\phi,\psi\right)$ and the kinetic part.
In this work we assume that $H_{AB}\left(\Phi^{C}\right)$ is diagonal and
admits at least one isometry, such that (1) becomes
$S=\int\sqrt{-g}dx^{4}R-\int\sqrt{-g}dx^{4}\left(\frac{1}{2}g^{\mu\nu}\left(\phi_{;\mu}\phi_{;\nu}+M\left(\phi\right)\psi_{;\mu}\psi_{;\nu}\right)+V\left(\Phi^{C}\right)\right)$
(2)
where $M\left(\phi\right)_{,\phi}\neq 0$ and $M\left(\phi\right)\neq
M_{0}\phi^{2}$; in these two excluded cases, $H_{AB}\left(\Phi^{C}\right)$
describes a two-dimensional flat space, and if it is of Lorentzian signature
then it describes the quintom model. Functional forms of
$M\left(\phi\right)$ for which $H_{AB}\left(\Phi^{C}\right)$ is a maximally
symmetric space of constant curvature $R_{0}$ are given by the second-order
differential equation
$2M_{,\phi\phi}M-\left(M_{,\phi}\right)^{2}+2M^{2}R_{0}=0.$ (3)
A solution of the latter equation is $M\left(\phi\right)=M_{0}e^{\kappa\phi}$;
indeed, direct substitution reduces (3) to $\kappa^{2}+2R_{0}=0$, that is,
$R_{0}=-\frac{\kappa^{2}}{2}$. This solution can be seen as the general case,
since new fields can be defined under coordinate transformations to rewrite
the form of $H_{AB}\left(\Phi^{C}\right)$. This is the Chiral model that we
study in this work.
Variation with respect to the metric tensor of (1) provides the gravitational
field equations
$G_{\mu\nu}=H_{AB}\left(\Phi^{C}\right)\nabla_{\mu}\Phi^{A}\nabla_{\nu}\Phi^{B}-g_{\mu\nu}\left(\frac{1}{2}g^{\mu\nu}H_{AB}\left(\Phi^{C}\right)\nabla_{\mu}\Phi^{A}\nabla_{\nu}\Phi^{B}+V\left(\Phi^{C}\right)\right),$
(4)
while variation with respect to the fields $\Phi^{A}$ gives the Klein-Gordon
vector equation
$g^{\mu\nu}\left(\nabla_{\mu}\left(H_{~{}B}^{A}\left(\Phi^{C}\right)\nabla_{\nu}\Phi^{B}\right)\right)+H_{~{}B}^{A}\left(\Phi^{C}\right)\frac{\partial
V\left(\Phi^{C}\right)}{\partial\Phi^{B}}=0.$ (5)
According to the cosmological principle, the universe on large scales is
isotropic and homogeneous, described by the spatially flat FLRW spacetime with
line element
$ds^{2}=-dt^{2}+a^{2}\left(t\right)\left(dx^{2}+dy^{2}+dz^{2}\right).$ (6)
where $a\left(t\right)$ denotes the scale factor and the Hubble function is
defined as $H\left(t\right)=\frac{\dot{a}}{a}$.
For the line element (6) and the second-rank tensor
$H_{AB}\left(\Phi^{C}\right)$ of our consideration the field equations are
written as follows
$3H^{2}=\frac{1}{2}\left(\dot{\phi}^{2}+M\left(\phi\right)\dot{\psi}^{2}\right)+V\left(\phi\right)+M\left(\phi\right)U\left(\psi\right),$
(7)
$2\dot{H}+3H^{2}=-\left(\frac{1}{2}\left(\dot{\phi}^{2}+M\left(\phi\right)\dot{\psi}^{2}\right)-V\left(\phi\right)-M\left(\phi\right)U\left(\psi\right)\right),$
(8)
$\ddot{\phi}+3H\dot{\phi}-\frac{1}{2}M_{,\phi}\dot{\psi}^{2}+V_{,\phi}\left(\phi\right)+M_{,\phi}U\left(\psi\right)=0,$
(9)
$\ddot{\psi}+3H\dot{\psi}+\frac{M_{,\phi}}{M}\dot{\phi}\dot{\psi}+U_{,\psi}=0.$
(10)
where we replaced
$V\left(\phi,\psi\right)=V\left(\phi\right)+M\left(\phi\right)U\left(\psi\right)$
and we have assumed that the fields $\phi,\psi$ inherit the symmetries of the
FLRW space such that $\phi\left(x^{\mu}\right)=\phi\left(t\right)$ and
$\psi\left(x^{\mu}\right)=\psi\left(t\right)$. At this point we remark that
the field equations (8)-(10) can be produced by the variational principle of the
point-like Lagrangian
$\mathcal{L}\left(a,\dot{a},\phi,\dot{\phi},\psi,\dot{\psi}\right)=-3a\dot{a}^{2}+\frac{1}{2}a^{3}\left(\dot{\phi}^{2}+M\left(\phi\right)\dot{\psi}^{2}\right)-a^{3}\left(V\left(\phi\right)+M\left(\phi\right)U\left(\psi\right)\right),$
(11)
while equation (7) can be seen as the Hamiltonian constraint of the time-
independent Lagrangian (11).
An equivalent way to write the field equations (7), (8) is by defining the
quantities
$\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}+V\left(\phi\right)~{},~{}p_{\phi}=\frac{1}{2}\dot{\phi}^{2}-V\left(\phi\right),$
(12)
$\rho_{\psi}=\left(\frac{1}{2}\dot{\psi}^{2}+U\left(\psi\right)\right)M\left(\phi\right)~{},~{}p_{\psi}=\left(\frac{1}{2}\dot{\psi}^{2}-U\left(\psi\right)\right)M\left(\phi\right),$
(13)
that is,
$3H^{2}=\rho_{\phi}+\rho_{\psi},$ (14)
$2\dot{H}+3H^{2}=-\left(p_{\phi}+p_{\psi}\right),$ (15)
$\dot{\rho}_{\phi}+3H\left(\rho_{\phi}+p_{\phi}\right)=\dot{\phi}\frac{\partial}{\partial\phi}p_{\psi},$
(16)
$\dot{\rho}_{\psi}+3H\left(\rho_{\psi}+p_{\psi}\right)=-\dot{\phi}\frac{\partial}{\partial\phi}p_{\psi}.$
(17)
The latter equations provide an interesting observation, since from them we
can read off the interaction functions of the two fields. Interaction models,
with an interaction between dark matter and dark energy, have been proposed as
a potential mechanism to explain the cosmic coincidence problem and to provide
a varying cosmological constant. Some interaction models which have been
studied before in the literature are presented in Amendola:2006dg ;
Pavon:2007gt ; Chimento:2009hj ; Arevalo:2011hh ; an001 ; an002 , while some
cosmological constraints on interacting models can be found in in1 ; in2 ;
in3 ; in4 .
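Indeed, equation (16) can be verified directly from (9): differentiating
$\rho_{\phi}$ with respect to time gives
$\dot{\rho}_{\phi}=\dot{\phi}\left(\ddot{\phi}+V_{,\phi}\right)=\dot{\phi}\left(-3H\dot{\phi}+\frac{1}{2}M_{,\phi}\dot{\psi}^{2}-M_{,\phi}U\right)=-3H\left(\rho_{\phi}+p_{\phi}\right)+\dot{\phi}\frac{\partial p_{\psi}}{\partial\phi},$
where we used $\rho_{\phi}+p_{\phi}=\dot{\phi}^{2}$ and
$\frac{\partial p_{\psi}}{\partial\phi}=M_{,\phi}\left(\frac{1}{2}\dot{\psi}^{2}-U\left(\psi\right)\right)$;
equation (17) then follows from the total conservation law implied by the
Friedmann equations (14) and (15).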
## III Dimensionless variables
We consider the dimensionless variables in the $H$-normalization cop
$\dot{\phi}=\sqrt{6}xH~{},~{}V\left(\phi\right)=3y^{2}H^{2}~{},~{}\dot{\psi}=\frac{\sqrt{6}}{\sqrt{M\left(\phi\right)}}zH~{},~{}U\left(\psi\right)=\frac{3}{M\left(\phi\right)}u^{2}H^{2}$
(18)
or
$x=\frac{\dot{\phi}}{\sqrt{6}H}~{},~{}y^{2}=\frac{V\left(\phi\right)}{3H^{2}}~{},~{}z=\frac{\sqrt{M\left(\phi\right)}\dot{\psi}}{\sqrt{6}H}~{},~{}u^{2}=\frac{M\left(\phi\right)U\left(\psi\right)}{3H^{2}},$
(19)
where the field equations become
$\displaystyle\frac{dx}{d\tau}$
$\displaystyle=\frac{3}{2}x\left(x^{2}-\left(1+u^{2}+y^{2}-z^{2}\right)\right)-\frac{\sqrt{6}}{2}\left(\lambda
y^{2}+\kappa\left(u^{2}-z^{2}\right)\right),$ (20)
$\displaystyle\frac{dy}{d\tau}$
$\displaystyle=\frac{3}{2}y\left(1+x^{2}+z^{2}-y^{2}-u^{2}\right)+\frac{\sqrt{6}}{2}\lambda
xy,$ (21) $\displaystyle\frac{dz}{d\tau}$
$\displaystyle=\frac{3}{2}z\left(z^{2}-\left(1+u^{2}+y^{2}-x^{2}\right)\right)-\frac{\sqrt{6}}{2}\left(\kappa
xz+\mu u^{2}\right),$ (22) $\displaystyle\frac{du}{d\tau}$
$\displaystyle=\frac{3}{2}u\left(1+x^{2}+z^{2}-y^{2}-u^{2}\right)+\frac{\sqrt{6}}{2}u\left(\kappa
x+\mu z\right),$ (23) $\displaystyle\frac{d\mu}{d\tau}$
$\displaystyle=\sqrt{\frac{3}{2}}\mu\left(2\mu
z\bar{\Gamma}\left(\mu,\lambda\right)-\kappa x-2\mu z\right),$ (24)
$\displaystyle\frac{d\lambda}{d\tau}$
$\displaystyle=\sqrt{6}\lambda^{2}x\left(\Gamma\left(\lambda\right)-1\right),$
(25)
in which
$\tau=\ln
a,~{}\lambda\left(\phi\right)=\frac{V_{,\phi}}{V}~{},~{}\kappa\left(\lambda\right)=\frac{M_{,\phi}}{M}~{},~{}\mu\left(\phi,\psi\right)=\frac{1}{\sqrt{M\left(\phi\right)}}\frac{U_{,\psi}}{U},~{}$
(26)
and functions
$\Gamma\left(\lambda\right),~{}\bar{\Gamma}\left(\mu,\lambda\right)$ are
defined as
$\Gamma\left(\lambda\right)=\frac{V_{,\phi\phi}V}{\left(V_{,\phi}\right)^{2}}~{},~{}\bar{\Gamma}\left(\mu,\lambda\right)=\frac{U_{,\psi\psi}U}{\left(U_{,\psi}\right)^{2}},$
(27)
while the constraint equation is
$1-x^{2}-y^{2}-z^{2}-u^{2}=0.$ (28)
The equation of state parameter for the effective cosmological fluid
$w_{tot}$, is given in terms of the dimensionless parameters as follows
$w_{tot}=-1-\frac{2}{3}\frac{\dot{H}}{H^{2}}=x^{2}+z^{2}-y^{2}-u^{2}$ (29)
while we define the variables
$\Omega_{\phi}=x^{2}+y^{2}~{},~{}\Omega_{\psi}=z^{2}+u^{2},$ (30)
with equation of state parameters
$w_{\phi}=-1+\frac{2x^{2}}{x^{2}+y^{2}}~{},~{}w_{\psi}=-1+\frac{2z^{2}}{z^{2}+u^{2}}.$
(31)
At this point it is important to mention that, since the two fields interact,
this is not the unique definition of the physical variables $\Omega_{\phi}$
and $\Omega_{\psi}$, $w_{\phi}$ and $w_{\psi}$. Moreover, from the constraint
equation (28) it follows that the stationary points lie on the surface of a
four-dimensional unit sphere, while the field equations remain invariant under
the transformation $\left\\{y,u\right\\}\rightarrow\left(-y,-u\right)$;
consequently, the variables $\left\\{x,y,z,u\right\\}$ take values in the
regions $\left|x\right|\leq 1~{},~{}\left|z\right|\leq 1~{},~{}0\leq y\leq
1~{}$ and $0\leq u\leq 1$.
For arbitrary functions $V\left(\phi\right),$ $U\left(\psi\right)$ and
$M\left(\phi\right)$, there are six dependent variables, namely
$\left\\{x,y,z,u,\lambda,\mu\right\\}$, where in general
$\kappa=\kappa\left(\lambda\right)$; however, the dimension of the system can
be reduced by one if we apply the constraint condition (28).
In the following Section, we determine the stationary points for the case
where $M\left(\phi\right)=M_{0}e^{\kappa\phi}$,
$V\left(\phi\right)=V_{0}e^{\lambda\phi}$ and
$U\left(\psi\right)=U_{0}\psi^{\frac{1}{\sigma}}$. Consequently, we calculate
$\Gamma\left(\lambda\right)=1$,
$\bar{\Gamma}\left(\mu,\lambda\right)=1-\sigma$ and $\kappa=const$. Hence,
$\frac{d\lambda}{d\tau}=0$ is satisfied identically and the dimension of the
dynamical system is reduced by one: we end up with the dynamical system
(20)-(24) subject to the constraint (28). We remark that in the Chiral model
the kinetic parts of the two fields are defined on a two-dimensional space of
constant curvature.
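As an illustration, the reduced system is straightforward to integrate
numerically. The following Python sketch is a minimal example, not the code
used for the numerical results of Section V; the parameter values and initial
data are sample choices on the constraint surface. It integrates (20)-(24)
with $\Gamma=1$, $\bar{\Gamma}=1-\sigma$ and constant $\kappa$, and evaluates
$w_{tot}$ from (29):

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, kappa, sigma = 1.0, 2.0, 0.5        # sample parameters, not fitted values
r6, r32 = np.sqrt(6.0), np.sqrt(1.5)

def rhs(tau, s):
    x, y, z, u, mu = s
    dx = 1.5*x*(x**2 - (1 + u**2 + y**2 - z**2)) - 0.5*r6*(lam*y**2 + kappa*(u**2 - z**2))
    dy = 1.5*y*(1 + x**2 + z**2 - y**2 - u**2) + 0.5*r6*lam*x*y
    dz = 1.5*z*(z**2 - (1 + u**2 + y**2 - x**2)) - 0.5*r6*(kappa*x*z + mu*u**2)
    du = 1.5*u*(1 + x**2 + z**2 - y**2 - u**2) + 0.5*r6*u*(kappa*x + mu*z)
    dmu = r32*mu*(-2.0*sigma*mu*z - kappa*x)   # eq. (24) with bar-Gamma = 1 - sigma
    return [dx, dy, dz, du, dmu]

# initial data chosen on the constraint surface x^2 + y^2 + z^2 + u^2 = 1
x0, y0, z0 = 0.1, 0.3, 0.2
u0 = np.sqrt(1.0 - x0**2 - y0**2 - z0**2)
sol = solve_ivp(rhs, (0.0, 40.0), [x0, y0, z0, u0, 1.0], rtol=1e-10, atol=1e-12)

x, y, z, u, mu = sol.y[:, -1]
print("late-time w_tot =", x**2 + z**2 - y**2 - u**2)   # eq. (29)
```

Since the constraint surface (28) is invariant under the flow, monitoring
$x^{2}+y^{2}+z^{2}+u^{2}-1$ along the trajectory provides a check on the
numerical accuracy.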
## IV Dynamical behaviour
The stationary points of the dynamical system are the points at which the rhs
of equations (20)-(24) vanishes. We categorize the stationary points into four
families. Family A consists of the points with coordinates
$\left(x_{A},y_{A},z_{A},u_{A},\mu_{A}\right)=\left(x_{A},y_{A},0,0,0\right)$,
which correspond to the points of minimally coupled scalar field cosmology
cop .
The points with coordinates
$\left(x_{B},y_{B},z_{B},u_{B},\mu_{B}\right)=\left(x_{B},y_{B},z_{B},0,\mu_{B}\right)$
and $z_{B}\neq 0$ define the points of Family B. These points describe
physical solutions without any contribution of the potential
$U\left(\psi\right)$ to the energy density of the total fluid; however, only
when $\mu_{B}=0$ is there no contribution of the potential
$U\left(\psi\right)$ to the dynamics at all. When $\mu_{B}=0$, the stationary
points are those found before in andimakis .
Points of Family C have coordinates
$\left(x_{C},y_{C},z_{C},u_{C},\mu_{C}\right)=\left(x_{C},y_{C},0,u_{C},\mu_{C}\right),~{}u_{C}\neq
0$, which describe exact solutions with no contribution of the kinetic part of
the scalar field $\psi$. Finally, the points of Family D have coordinates of
the form $\left(x_{D},y_{D},z_{D},u_{D},\mu_{D}\right)$ with $z_{D}u_{D}\neq
0$.
Let $P$ be a stationary point of the dynamical system (20)-(24), written as
$\dot{q}^{A}=f^{A}\left(q^{B}\right)$, where $f^{A}\left(P\right)=0$. In order
to study the stability properties of the critical point $P$, we write the
linearized system $\delta\dot{x}^{A}=J_{B}^{A}\delta x^{B}$, where
$J_{B}^{A}$ is the Jacobian matrix at the point $P$, i.e.
$J_{B}^{A}=\frac{\partial f^{A}\left(P\right)}{\partial x^{B}}$. The
eigenvalues $\mathbf{e}\left(P\right)$ of the Jacobian matrix determine the
stability of the stationary point. When all the eigenvalues have negative real
part, point $P$ is an attractor and the exact solution at the point is stable;
otherwise, the exact solution at the critical point is unstable, and $P$ is a
source when all the eigenvalues have positive real part, or a saddle point
otherwise.
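This linearization can be automated with a computer algebra system. A minimal
sympy sketch, under the same assumptions as above ($\bar{\Gamma}=1-\sigma$,
constant $\lambda$ and $\kappa$), which should reproduce the eigenvalues
listed in the following subsection for point $A_{2}$, is:

```python
import sympy as sp

x, y, z, u, mu, lam, kap, sig = sp.symbols('x y z u mu lambda kappa sigma')
r6 = sp.sqrt(6)
# right-hand sides of equations (20)-(24)
f = sp.Matrix([
    sp.Rational(3, 2)*x*(x**2 - (1 + u**2 + y**2 - z**2)) - r6/2*(lam*y**2 + kap*(u**2 - z**2)),
    sp.Rational(3, 2)*y*(1 + x**2 + z**2 - y**2 - u**2) + r6/2*lam*x*y,
    sp.Rational(3, 2)*z*(z**2 - (1 + u**2 + y**2 - x**2)) - r6/2*(kap*x*z + mu*u**2),
    sp.Rational(3, 2)*u*(1 + x**2 + z**2 - y**2 - u**2) + r6/2*u*(kap*x + mu*z),
    sp.sqrt(sp.Rational(3, 2))*mu*(2*mu*z*(1 - sig) - kap*x - 2*mu*z),
])
J = f.jacobian([x, y, z, u, mu])
# evaluate at A2 = (-lambda/sqrt(6), sqrt(1 - lambda^2/6), 0, 0, 0)
A2 = {x: -lam/r6, y: sp.sqrt(1 - lam**2/6), z: 0, u: 0, mu: 0}
print(sp.simplify(J.subs(A2)).eigenvals())
```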
### IV.1 Family A
There are three stationary points which describe cosmological solutions
without any contribution of the second field $\psi$. The points have
coordinates cop
$A_{1}^{\pm}=\left(\pm
1,0,0,0,0\right)~{},~{}A_{2}=\left(-\frac{\lambda}{\sqrt{6}},\sqrt{1-\frac{\lambda^{2}}{6}},0,0,0\right).$
(32)
Points $A_{1}^{\pm}$ describe universes dominated by the kinetic part of the
scalar field $\phi,~{}$that is by the term $\frac{1}{2}\dot{\phi}^{2}$. The
physical quantities are derived
$\left(w_{tot}\left(A_{1}^{\pm}\right),w_{\phi}\left(A_{1}^{\pm}\right),w_{\psi}\left(A_{1}^{\pm}\right),\Omega_{\phi}\left(A_{1}^{\pm}\right),\Omega_{\psi}\left(A_{1}^{\pm}\right)\right)=\left(1,1,\nexists,1,0\right).$
Point $A_{2}$ is physically accepted when $\left|\lambda\right|<\sqrt{6}$; the
physical quantities are calculated as
$\left(w_{tot}\left(A_{2}\right),w_{\phi}\left(A_{2}\right),w_{\psi}\left(A_{2}\right),\Omega_{\phi}\left(A_{2}\right),\Omega_{\psi}\left(A_{2}\right)\right)=\left(-1+\frac{\lambda^{2}}{3},-1+\frac{\lambda^{2}}{3},\nexists,1,0\right).$
Therefore, point $A_{2}$ describes a scaling solution. The latter solution is
that of an accelerated universe when $\left|\lambda\right|<\sqrt{2}$.
In the case of quintessence scalar field cosmology, points $A_{1}^{\pm}$ are
always unstable, while $A_{2}$ is the unique attractor of the dynamical system
when $\left|\lambda\right|<\sqrt{3}$. However, for the model of our analysis
the stability conditions are different.
In order to determine the stability of the stationary points we calculate the
eigenvalues of the linearized dynamical system (20)-(24) around the stationary
points. For the points $A_{1}^{\pm}$ we find
$\displaystyle e_{1}\left(A_{1}^{\pm}\right)$ $\displaystyle=3,$
$\displaystyle e_{2}\left(A_{1}^{\pm}\right)$
$\displaystyle=\frac{1}{2}\left(6\pm\sqrt{6}\lambda\right),$
$\displaystyle~{}e_{3}\left(A_{1}^{\pm}\right)$
$\displaystyle=\frac{1}{2}\left(6\pm\sqrt{6}\kappa\right),$
$\displaystyle~{}e_{4}\left(A_{1}^{\pm}\right)$
$\displaystyle=\mp\sqrt{\frac{3}{2}}\kappa,~{}$ $\displaystyle
e_{5}\left(A_{1}^{\pm}\right)$ $\displaystyle=\mp\sqrt{\frac{3}{2}}\kappa,$
from which we conclude that points $A_{1}^{\pm}$ are saddle points and the
solutions at these points are always unstable, because at least one of the
eigenvalues is always positive, i.e.
$e_{1}\left(A_{1}^{\pm}\right)>0$.
For the stationary point $A_{2}$ the eigenvalues are derived
$\displaystyle e_{1}\left(A_{2}\right)$
$\displaystyle=\frac{1}{2}\left(\lambda^{2}-6\right),$ $\displaystyle
e_{2}\left(A_{2}\right)$ $\displaystyle=\lambda^{2}-3,$
$\displaystyle~{}e_{3}\left(A_{2}\right)$
$\displaystyle=\frac{1}{2}\kappa\lambda,$
$\displaystyle~{}e_{4}\left(A_{2}\right)$
$\displaystyle=\frac{1}{2}\left(\lambda^{2}-\kappa\lambda\right),~{}$
$\displaystyle e_{5}\left(A_{2}\right)$
$\displaystyle=\frac{1}{2}\left(\lambda^{2}-6+\kappa\lambda\right),$
that is, the exact solution at point $A_{2}$ is always unstable. However, from
the two eigenvalues $e_{1}\left(A_{2}\right),~{}e_{2}\left(A_{2}\right)$ we
can infer that on the surface $\left\\{x,y\right\\}$ of the phase space the
stationary point $A_{2}$ acts like an attractor for
$\left|\lambda\right|<\sqrt{3}$, although it becomes a saddle point in the
higher-dimensional phase space.
We remark that we determined the stability of the stationary points without
using the constraint equation to reduce the dynamical system by one dimension.
However, by replacing $z^{2}=1-x^{2}-y^{2}-u^{2}$ in (20)-(24) we end up with
a four-dimensional system, from which we find the same results, that is, the
exact solutions at the points $A_{1}^{\pm}$ and $A_{2}$ are always unstable.
### IV.2 Family B
For $z_{B}\neq 0$ and $u_{B}=0,$ we found four stationary points which are
$\displaystyle B_{1}^{\pm}$
$\displaystyle=\left(-\frac{\sqrt{6}}{\kappa+\lambda},\sqrt{\frac{\kappa}{\kappa+\lambda}},\pm\sqrt{\frac{\lambda^{2}+\kappa\lambda-6}{\left(\kappa+\lambda\right)^{2}}},0,0\right),$
(33) $\displaystyle B_{2}^{\pm}$
$\displaystyle=\left(-\frac{\sqrt{6}}{\kappa+\lambda},\sqrt{\frac{\kappa}{\kappa+\lambda}},\pm\sqrt{\frac{\lambda^{2}+\kappa\lambda-6}{\left(\kappa+\lambda\right)^{2}}},0,\sqrt{\frac{3}{2}}\frac{\kappa}{\sqrt{\left(\lambda^{2}+\kappa\lambda-6\right)}}\right),$
(34)
which are real and are physically accepted when
$\left\\{\kappa>0,\lambda>\sqrt{6}\right\\}$ or
$\left\\{0<\lambda\leq\sqrt{6},~{}\kappa>\frac{6-\lambda^{2}}{\lambda}\right\\}$
or $\left\\{\lambda<-\sqrt{6},\kappa<0\right\\}$ or
$\left\\{-\sqrt{6}<\lambda<0,\kappa<\frac{6-\lambda^{2}}{\lambda}\right\\}$.
These regions are plotted in Fig. 1.
Figure 1: Region plot in the space $\left\\{\lambda,\kappa\right\\}$ where
points $\mathbf{B=}\left(B_{1}^{\pm},B_{2}^{\pm}\right)$ are real.
The stationary points share the same physical properties, that is, they
describe universes for which the physical quantities have the following values
$w_{tot}\left(\mathbf{B}\right)=1-\frac{2\kappa}{\kappa+\lambda}~{},~{}w_{\phi}\left(\mathbf{B}\right)=-1+\frac{12}{6+\kappa\left(\kappa+\lambda\right)}~{},~{}w_{\psi}\left(\mathbf{B}\right)=1~{},$
(35)
$\Omega_{\phi}\left(\mathbf{B}\right)=1-\Omega_{\psi}\left(\mathbf{B}\right)~{},~{}\Omega_{\psi}\left(\mathbf{B}\right)=\left|\frac{\lambda\left(\kappa+\lambda\right)-6}{\left(\kappa+\lambda\right)^{2}}\right|.$
(36)
From $w_{tot}\left(\mathbf{B}\right)$ it follows that the points describe
scaling solutions and the de Sitter universe is recovered only when
$\lambda=0$, which is excluded because for $\lambda=0$, the stationary points
are not real. We continue by studying the stability of the stationary points.
In Fig. 2, we present contour plots of the physical parameters
$w_{tot}\left(\mathbf{B}\right),~{}w_{\phi}\left(\mathbf{B}\right)\,$ and
$\Omega_{\psi}\left(\mathbf{B}\right)$ in the space of the variables
$\left\\{\lambda,\kappa\right\\}$.
Figure 2: Qualitative evolution of the physical variables
$w_{tot}\left(\mathbf{B}\right),~{}w_{\phi}\left(\mathbf{B}\right)\,$ and
$\Omega_{\psi}\left(\mathbf{B}\right)$ of the exact solutions at the critical
points $\mathbf{B=}\left(B_{1}^{\pm},B_{2}^{\pm}\right)$ for various values of
the free variables $\left\\{\lambda,\kappa\right\\}$.
For the stationary points $B_{1}^{\pm}$ two of the five eigenvalues are
expressed as
$e_{1}\left(B_{1}^{\pm}\right)=3\frac{\kappa}{\kappa+\lambda},~{}e_{2}\left(B_{1}^{\pm}\right)=-3\frac{\kappa-\lambda}{\kappa+\lambda},$
from which we observe that $e_{1}\left(B_{1}^{\pm}\right)>0$ whenever the
points are real; consequently, the exact solutions at the stationary points
$B_{1}^{\pm}$ are unstable.
We use the constraint $z^{2}=1-x^{2}-y^{2}-u^{2}$ so that the dynamical
system is reduced by one dimension. Thus, for the new four-dimensional system
the eigenvalues of the linearized system around the points $B_{1}^{\pm}$ are
found to be
$\displaystyle e_{1}\left(B_{1}^{\pm}\right)$
$\displaystyle=3\frac{\kappa}{\kappa+\lambda},~{}e_{2}\left(B_{1}^{\pm}\right)=-3\frac{\kappa-\lambda}{\kappa+\lambda},$
$\displaystyle e_{3}\left(B_{1}^{\pm}\right)$
$\displaystyle=-\frac{3\kappa+i\sqrt{3\kappa\left(4\lambda^{3}+8\kappa\lambda^{2}+4\left(\kappa^{2}-6\right)\lambda-27\kappa\right)}}{2\left(\kappa+\lambda\right)},$
$\displaystyle e_{4}\left(B_{1}^{\pm}\right)$
$\displaystyle=-\frac{3\kappa-i\sqrt{3\kappa\left(4\lambda^{3}+8\kappa\lambda^{2}+4\left(\kappa^{2}-6\right)\lambda-27\kappa\right)}}{2\left(\kappa+\lambda\right)},$
from which we conclude again that the exact scaling solutions at the points
$B_{1}^{\pm}$ are unstable.
Similarly, the eigenvalues of the linearized system around the points
$B_{2}^{\pm}$ are calculated
$\displaystyle e_{1}\left(B_{2}^{\pm}\right)$
$\displaystyle=-3\frac{\kappa}{\kappa+\lambda},~{}e_{2}\left(B_{2}^{\pm}\right)=-3\frac{2\sigma\left(\kappa-\lambda\right)-\kappa}{2\sigma\left(\kappa+\lambda\right)},$
$\displaystyle e_{3}\left(B_{2}^{\pm}\right)$
$\displaystyle=e_{3}\left(B_{1}^{\pm}\right),~{}e_{4}\left(B_{2}^{\pm}\right)=e_{4}\left(B_{1}^{\pm}\right).$
Hence, we infer that the stationary points $B_{2}^{\pm}$ are attractors, and
the exact solutions at the points are stable when the free parameters
$\left\\{\lambda,\kappa,\sigma\right\\}$ are constrained as follows (a
numerical spot-check of these conditions is sketched after the list below):
$\lambda\leq-\sqrt{6}:\left\\{\kappa<\lambda,\sigma<0,\sigma>\frac{\kappa}{2\left(\kappa-\lambda\right)}\right\\}\cup\left\\{\kappa=\lambda,\sigma<0\right\\}\cup\left\\{\lambda<\kappa<0,\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma<0\right\\},$
$-\sqrt{6}<\lambda<-\sqrt{3}:\left\\{\kappa<\lambda,\sigma<0,\sigma>\frac{\kappa}{2\left(\kappa-\lambda\right)}\right\\}\cup\left\\{\kappa=\lambda,\sigma<0\right\\}\cup\left\\{\lambda<\kappa<\frac{6-\lambda^{2}}{\lambda},\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma<0\right\\},$
$\lambda=-\sqrt{3}:\left\\{\kappa<-\sqrt{3},~{}\sigma<0\right\\}\cup\left\\{\kappa<-\sqrt{3},~{}\frac{\kappa}{2\left(\sqrt{3}+\kappa\right)}<\sigma\right\\},$
$-\sqrt{3}<\lambda<0:\left\\{\kappa<\frac{6-\lambda^{2}}{\lambda},\sigma<0\right\\}\cup\left\\{\kappa<\frac{6-\lambda^{2}}{\lambda},\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma\right\\},$
$0<\lambda<\sqrt{3}:\left\\{\kappa>\frac{6-\lambda^{2}}{\lambda},\sigma<0\right\\}\cup\left\\{\kappa>\frac{6-\lambda^{2}}{\lambda},\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma\right\\},$
$\lambda=\sqrt{3}:\left\\{\kappa<\sqrt{3},~{}\sigma<0\right\\}\cup\left\\{\kappa<-\sqrt{3},~{}-\frac{\kappa}{2\left(\sqrt{3}-\kappa\right)}<\sigma\right\\},$
$\sqrt{3}<\lambda<\sqrt{6}:\left\\{\frac{6-\lambda^{2}}{\lambda}<\kappa<\lambda,\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma<0\right\\}\cup\left\\{\kappa\geq\lambda,\sigma<0\right\\}\cup\left\\{\kappa>\lambda,\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma\right\\},$
$\lambda\geq\sqrt{6}:\left\\{0<\kappa<\lambda,\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma<0\right\\}\cup\left\\{\kappa\geq\lambda,\sigma<0\right\\}\cup\left\\{\kappa>\lambda,\frac{\kappa}{2\left(\kappa-\lambda\right)}<\sigma\right\\}.$
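As anticipated above, these stability conditions can be spot-checked numerically using only the quoted eigenvalue expressions. The following minimal Python sketch is our own illustration; the sample parameter values are arbitrary and any choice of interest can be substituted.

```python
import numpy as np

def b2_eigenvalues(lam, kap, sig):
    """The four eigenvalues quoted above for the points B2+-."""
    e1 = -3 * kap / (kap + lam)
    e2 = -3 * (2 * sig * (kap - lam) - kap) / (2 * sig * (kap + lam))
    disc = 3 * kap * (4 * lam**3 + 8 * kap * lam**2
                      + 4 * (kap**2 - 6) * lam - 27 * kap)
    root = np.sqrt(complex(disc))       # complex sqrt covers either sign
    e3 = -(3 * kap + 1j * root) / (2 * (kap + lam))
    e4 = -(3 * kap - 1j * root) / (2 * (kap + lam))
    return e1, e2, e3, e4

# Illustrative sample: lambda = -3, kappa = -4, sigma = -1.
for e in b2_eigenvalues(-3.0, -4.0, -1.0):
    print(e, "stable" if np.real(e) < 0 else "not stable")
```

For the sample $(\lambda,\kappa,\sigma)=(-3,-4,-1)$ all four real parts come out negative, consistent with $B_{2}^{\pm}$ acting as an attractor there.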
In Figs. 3 and 4 we plot the regions where the stationary points $B_{2}^{\pm}$
are attractors and the exact solutions at the stationary points are stable.
Figure 3: Region plot in the space of variables
$\left\\{\kappa,\lambda,\sigma\right\\}$ where the points $B_{2}^{\pm}$ are
attractors. Figure 4: Region plots in the planes
$\kappa-\sigma,~{}\lambda-\sigma$ and $\lambda-\kappa$ where points
$B_{2}^{\pm}$ are attractors. Left figures present the region in the plane
$\kappa-\sigma$ for $\lambda=-2$ and $\lambda=2$; middle figures present the
region in the plane $\lambda-\sigma$ for $\kappa=-2$ and $\kappa=2$, while
right figures are in the plane $\lambda-\kappa$ for $\sigma=-1$ and
$\sigma=1$.
### IV.3 Family C
Family C contains two stationary points, with coordinates
$\displaystyle C_{1}$
$\displaystyle=\left(-\frac{\kappa}{\sqrt{6}},0,0,\sqrt{1-\frac{\kappa^{2}}{6}},0\right),$
(37) $\displaystyle C_{2}$
$\displaystyle=\left(0,\sqrt{\frac{\kappa}{\kappa-\lambda}},0,\sqrt{\frac{\lambda}{\lambda-\kappa}},0\right).$
(38)
Point $C_{1}$ is real when $\left|\kappa\right|\leq\sqrt{6}$ and the physical
quantities of the exact solution at the point are
$\left(w_{tot}\left(C_{1}\right),w_{\phi}\left(C_{1}\right),w_{\psi}\left(C_{1}\right),\Omega_{\phi}\left(C_{1}\right),\Omega_{\psi}\left(C_{1}\right)\right)=\left(-1+\frac{\kappa^{2}}{3},1,-1,\frac{\kappa^{2}}{6},1-\frac{\kappa^{2}}{6}\right).$
(39)
Thus, stationary point $C_{1}$ describes a scaling solution. The scaling
solution describes an accelerated universe when
$\left|\kappa\right|<\sqrt{2}$.
Furthermore, the exact solution at the stationary point $C_{2}$ describes a de
Sitter universe, where the two scalar fields mimic the cosmological constant;
the physical quantities are
$\left(w_{tot}\left(C_{2}\right),w_{\phi}\left(C_{2}\right),w_{\psi}\left(C_{2}\right),\Omega_{\phi}\left(C_{2}\right),\Omega_{\psi}\left(C_{2}\right)\right)=\left(-1,-1,-1,\frac{\kappa}{\kappa-\lambda},\frac{\lambda}{\lambda-\kappa}\right).$
(40)
Point $C_{2}$ is real and physically accepted when $\lambda\kappa<0$, i.e.
$\left\\{\lambda<0,\kappa>0\right\\}$ or
$\left\\{\lambda>0,\kappa<0\right\\}$.
The linearized four-dimensional system around the stationary point $C_{1}$
admits the eigenvalues
$\displaystyle e_{1}\left(C_{1}\right)$ $\displaystyle=\frac{\kappa^{2}}{2},$
$\displaystyle e_{2}\left(C_{1}\right)$
$\displaystyle=-\frac{1}{2}\left(6-\kappa^{2}\right)$ $\displaystyle
e_{3}\left(C_{1}\right)$ $\displaystyle=2\left(\kappa^{2}-3\right)$
$\displaystyle e_{4}\left(C_{1}\right)$
$\displaystyle=\frac{1}{2}\kappa\left(\kappa-\lambda\right)$
from which we infer that the exact solution at the stationary point is always
unstable; specifically, point $C_{1}$ is a saddle point.
For the stationary point $C_{2}$, we find that one of the eigenvalues of the
linearized system around $C_{2}$ is zero. That eigenvalue corresponds to the
linearized equation (24). As for the other three eigenvalues, we plot their
values numerically and find that they have negative real parts for the whole
range of parameters $\left\\{\lambda,\kappa\right\\}$ where the point exists.
In Fig. 5 we plot the real parts of the three nonzero eigenvalues of the
linearized system. Therefore, we infer that there exists a four-dimensional
stable submanifold around the stationary point. However, because one of the
eigenvalues has zero real part, the center manifold theorem (CMT) must be
applied.
For simplicity in our calculations we apply the CMT to the five-dimensional
system. We find that the variables whose eigenvalues have nonzero real part,
that is, the variables $\left\\{x,y,z,u\right\\}$, are approximated according
to the CMT as functions of the variable $\mu$ as follows
$\displaystyle x$
$\displaystyle=x_{00}\mu^{2}+x_{10}\mu^{3}+x_{20}\mu^{4}+O\left(\mu^{5}\right)~{},~{}y=y_{00}\mu^{2}+y_{10}\mu^{3}+y_{20}\mu^{4}+O\left(\mu^{5}\right),~{}$
$\displaystyle z$
$\displaystyle=z_{00}\mu^{2}+z_{10}\mu^{3}+z_{20}\mu^{4}+O\left(\mu^{5}\right)~{},~{}u=u_{00}\mu^{2}+u_{10}\mu^{3}+u_{20}\mu^{4}+O\left(\mu^{5}\right)$
where
$\left\\{x_{00},y_{00},z_{00},u_{00}\right\\}=\left(0,0,z_{00},0\right)$;
$x_{10}=-\frac{z_{00}}{\kappa},~{}y_{10}=\sqrt{\frac{3}{2}}\frac{z_{00}}{\sqrt{\kappa^{3}\left(\kappa-\lambda\right)}},~{}$etc.
Hence, the fifth equation, i.e. equation (20), is written as
$\frac{d\mu}{d\tau}=\alpha\mu^{4}+a_{1}\mu^{5}+O\left(\mu^{6}\right)$, where
$\alpha=\frac{\sqrt{6}\left(\kappa\lambda-2\left(\kappa\lambda+3\right)\sigma\right)}{2\kappa\lambda+6}z_{00}-\frac{6\kappa\left(\sqrt{\lambda\left(\lambda-\kappa\right)}\right)}{2\kappa\lambda+6}u_{10}$.
Therefore, the point is always unstable for $\alpha\neq 0$; however, from the
coefficient term $a_{1}\mu^{5}$ we find that the point can be stable.
Figure 5: Qualitative evolution for the real parts of the nonzero eigenvalues
of the linearized system around the stationary point $C_{2}$.
### IV.4 Family D
The fourth family of stationary points consists of the following six
stationary points
$D_{1}^{\pm}=\left(-\sqrt{\frac{3}{2}}\frac{1}{\kappa},0,\pm\frac{\sqrt{\kappa^{2}-3}}{\sqrt{2}\kappa},\frac{1}{\sqrt{2}},0\right),$
(41) $D_{2}^{\pm}=\left(x_{D_{2}},0,\pm
z_{D_{2}},\sqrt{1-\left(x_{D_{2}}\right)^{2}-\left(z_{D_{2}}\right)^{2}},\mu_{D_{2}}\right),$
(42) $D_{3}^{\pm}=\left(x_{D_{3}},0,\pm
z_{D3},\sqrt{1-\left(x_{D_{3}}\right)^{2}-\left(z_{D_{3}}\right)^{2}},\mu_{D_{3}}\right),$
(43)
with
$\displaystyle x_{D_{2}}$
$\displaystyle=-\frac{\kappa^{2}(2\sigma-1)+\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}+6\sigma}{\sqrt{6}\kappa(4\sigma-1)},$
$\displaystyle z_{D_{2}}$
$\displaystyle=\frac{\sqrt{-\kappa^{4}(1-2\sigma)^{2}+6\kappa^{2}\sigma\left(8\sigma^{2}-2\sigma+1\right)-\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}\left(\kappa^{2}(2\sigma-1)+24\sigma^{2}\right)-144\sigma^{3}}}{2\sqrt{3}\kappa\sqrt{\sigma}(4\sigma-1)},$
$\displaystyle\mu_{D_{2}}$
$\displaystyle=z_{D_{2}}\frac{\sqrt{6}\left(\kappa^{2}(1-2\sigma)^{2}+2\sigma\left(\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}-6\sigma\right)\right)}{\kappa^{2}(1-2\sigma)^{2}-24\sigma^{2}},$
$\displaystyle x_{D_{3}}$
$\displaystyle=\frac{\kappa^{2}(1-2\sigma)+\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}-6\sigma}{\sqrt{6}\kappa(4\sigma-1)},$
$\displaystyle z_{D_{3}}$
$\displaystyle=\frac{\sqrt{-\kappa^{4}(1-2\sigma)^{2}+6\kappa^{2}\sigma\left(8\sigma^{2}-2\sigma+1\right)+\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}\left(\kappa^{2}(2\sigma-1)+24\sigma^{2}\right)-144\sigma^{3}}}{2\sqrt{3}\kappa\sqrt{\sigma}(4\sigma-1)},$
$\displaystyle\mu_{D_{3}}$
$\displaystyle=z_{D_{3}}\frac{\sqrt{6}\left(2\sigma\left(\sqrt{-4\kappa^{4}\sigma+\kappa^{4}+4\left(\kappa^{2}-3\right)^{2}\sigma^{2}}+6\sigma\right)-\kappa^{2}(1-2\sigma)^{2}\right)}{\kappa^{2}(1-2\sigma)^{2}-24\sigma^{2}}\text{.
}$
Points $D_{1}^{\pm}$ describe a scaling solution where the effective fluid is
pressureless, that is, it describes a dust fluid source and the scale factor
is $a\left(t\right)=a_{0}t^{\frac{2}{3}}$. The physical parameters of the
exact solution at points $D_{1}^{\pm}$ are
$w_{tot}\left(D_{1}^{\pm}\right)=0~{},~{}w_{\phi}\left(D_{1}^{\pm}\right)=1~{},~{}w_{\psi}\left(D_{1}^{\pm}\right)=\frac{3}{3-2\kappa^{2}}~{},$
(44)
$\Omega_{\phi}\left(D_{1}^{\pm}\right)=\frac{3}{2\kappa^{2}}~{},~{}\Omega_{\psi}\left(D_{1}^{\pm}\right)=1-\frac{3}{2\kappa^{2}}.$
(45)
We remark that points $D_{1}^{\pm}$ are real when $\left|\kappa\right|>\sqrt{3}$.
The eigenvalues of the four-dimensional linearized system around the
stationary points $D_{1}^{\pm}$ are found to be
$\displaystyle e_{1}\left(D_{1}^{\pm}\right)$ $\displaystyle=\frac{3}{2}$
$\displaystyle e_{2}\left(D_{1}^{\pm}\right)$
$\displaystyle=\frac{3}{2}\left(\kappa-\lambda\right)$ $\displaystyle
e_{3}\left(D_{1}^{\pm}\right)$
$\displaystyle=-\frac{3+\sqrt{3\left(51-16\kappa^{2}\right)}}{4}$
$\displaystyle e_{4}\left(D_{1}^{\pm}\right)$
$\displaystyle=-\frac{3-\sqrt{3\left(51-16\kappa^{2}\right)}}{4}$
from which we infer that the stationary points $D_{1}^{\pm}$ are always
unstable; points $D_{1}^{\pm}$ are saddle points.
Points $D_{2}^{\pm}$ are real and physically accepted when
$\left\\{\sigma\in\left(0,\frac{1}{4}\right)\cup\left(\frac{1}{4},\frac{1}{2}\right),\kappa>\frac{2\sqrt{6}\sigma}{2\sigma-1}\right\\}\cup\left\\{\frac{2\sqrt{6}\sigma}{1-2\sigma}<\kappa<-\sqrt{6}\sqrt{\frac{2\sigma^{2}+\sigma\sqrt{4\sigma-1}}{\left(1-2\sigma\right)^{2}}},\sigma>\frac{1}{2}\right\\}$
and $\left\\{\kappa<0,\sigma<0\right\\}$, as presented in Fig. 6.
The exact solutions at the stationary points describe scaling solutions with
values of the equation of state parameter $w_{tot}\left(\kappa,\sigma\right)$
as presented in Fig. 6. For the linearized four-dimensional system one of
the eigenvalues is
the eigenvalues is
$e_{1}\left(D_{2}^{\pm}\right)=\frac{A\left(\kappa,\sigma\right)(2\kappa\sigma-\kappa-2\lambda\sigma)}{4\kappa\sigma(4\sigma-1)\left(2\kappa^{2}\sigma-\kappa^{2}+24\sigma^{2}\right)},$
where
$\displaystyle A\left(\kappa,\sigma\right)$
$\displaystyle=4\kappa^{4}\sigma^{2}-4\kappa^{4}\sigma+\kappa^{4}+48\kappa^{2}\sigma^{3}-12\kappa^{2}\sigma^{2}-6\kappa^{2}\sigma$
$\displaystyle+\sqrt{\left(2\kappa^{2}\sigma-\kappa^{2}+24\sigma^{2}\right)^{2}\left(4\kappa^{4}\sigma^{2}-4\kappa^{4}\sigma+\kappa^{4}-24\kappa^{2}\sigma^{2}+36\sigma^{2}\right)}+144\sigma^{3}.$
The other three eigenvalues are only functions of $\kappa,\sigma$, that is
$e_{2,3,4}\left(D_{2}^{\pm}\right)=e_{2,3,4}\left(\kappa,\sigma\right)$.
Numerically, we find that there are no values of
$\left\\{\kappa,\sigma\right\\}$ for which the points $D_{2}^{\pm}$ are
defined and all the eigenvalues have negative real parts; consequently, the
stationary points are always sources and the exact solutions at the stationary
points $D_{2}^{\pm}$ are always unstable.
Figure 6: Left figure: Region plot in the space
$\left\\{\kappa,\sigma\right\\}$ where points $D_{2}^{\pm}$ are real and
physically accepted. Right figure: Contour plot of the equation of state
parameter for the effective fluid $w_{tot}\left(\kappa,\sigma\right)$ at the
critical points $D_{2}^{\pm}$. Figure 7: Left figure: Region plot in the
space $\left\\{\kappa,\sigma\right\\}$ where points $D_{3}^{\pm}$ are real and
physically accepted. Right figure: Contour plot of the equation of state
parameter for the effective fluid $w_{tot}\left(\kappa,\sigma\right)$ at the
critical points $D_{3}^{\pm}$.
Stationary points $D_{3}^{\pm}$ have similar physical properties to points
$D_{2}^{\pm}$; indeed, they describe scaling solutions only. The points are
real and physically accepted in the region
$\left\\{\sigma>\frac{1}{2},\kappa<-\sqrt{\frac{6\sigma}{\sqrt{4\sigma-1}-2\sigma}}\right\\}$.
In Fig. 7 we present the region in the space $\left\\{\sigma,\kappa\right\\}$
where the points are defined, as well as the contour plot of the equation of
state parameter for the effective fluid source which describes the exact
solution at the points $D_{3}^{\pm}$. As with points $D_{2}^{\pm}$, we find
that there is no range in the space $\left\\{\kappa,\sigma\right\\}$ where the
points are attractors. Consequently, the stationary points $D_{3}^{\pm}$ are
sources. The main physical results of the stationary points are summarized in
Table 1.
Table 1: The physical properties of the stationary models in Chiral cosmology.
Point | Contribution of $\phi$ | Contribution of $\psi$ | Scaling/de Sitter | Possible $w_{tot}<-\frac{1}{3}$ | Stability
---|---|---|---|---|---
$A_{1}$ | Yes only kinetic part | No | Scaling | No | Unstable
$A_{2}$ | Yes | No | Scaling | Yes | Unstable
$B_{1}^{\pm}$ | Yes | Yes only kinetic part | Scaling | Yes | Unstable
$B_{2}^{\pm}$ | Yes | Yes only kinetic part | Scaling | Yes | Can be Stable
$C_{1}$ | Yes only kinetic | Yes only potential | Scaling | Yes | Unstable
$C_{2}$ | Yes only potential | Yes only potential | de Sitter $\left(w_{tot}=-1\right)$ | Always | CMT
$D_{1}^{\pm}$ | Yes | Yes | Scaling $\left(w_{tot}=0\right)$ | No | Unstable
$D_{2}^{\pm}$ | Yes | Yes | Scaling | Yes | Unstable
$D_{3}^{\pm}$ | Yes | Yes | Scaling | Yes | Unstable
## V Application $\left(\kappa,\sigma\right)=\left(2,\frac{1}{2}\right)$
Consider now the case where $\kappa=2$ and $\sigma=\frac{1}{2}$, while
$\lambda$ is an arbitrary constant. For that consideration, the stationary
points of the dynamical system (20)-(24) have the following coordinates
$\displaystyle\bar{A}_{1}^{\pm}$ $\displaystyle=\left(\pm
1,0,0,0,0\right),~{}$ $\displaystyle\bar{A}_{2}$
$\displaystyle=\left(-\frac{\lambda}{\sqrt{6}},\sqrt{1-\frac{\lambda^{2}}{6}},0,0,0\right),$
$\displaystyle\bar{B}_{1}^{\pm}$
$\displaystyle=\left(-\frac{\sqrt{6}}{\lambda+2},\sqrt{\frac{2}{\lambda+2}},\pm\sqrt{\frac{\left(\lambda+1\right)^{2}-7}{\left(\lambda+2\right)^{2}}},0,0\right),~{}$
$\displaystyle\bar{B}_{2}^{\pm}$
$\displaystyle=\left(-\frac{\sqrt{6}}{\lambda+2},\sqrt{\frac{2}{\lambda+2}},\sqrt{\frac{\left(\lambda+1\right)^{2}-7}{\left(\lambda+2\right)^{2}}},0,2\sqrt{\frac{6}{\left(\lambda+1\right)^{2}-7}}\right),$
$\displaystyle\bar{C}_{1}$
$\displaystyle=\left(-\sqrt{\frac{2}{3}},0,0,\frac{1}{\sqrt{3}},0\right),~{}$
$\displaystyle\bar{C}_{2}$
$\displaystyle=\left(0,\sqrt{\frac{2}{2-\lambda}},0,\sqrt{\frac{\lambda}{\lambda-2}},0\right),$
$\displaystyle\bar{D}_{1}^{\pm}$
$\displaystyle=\left(-\frac{1}{2}\sqrt{\frac{3}{2}},0,\frac{1}{2\sqrt{2}},\frac{1}{\sqrt{2}},0\right).$
Points $\bar{A}_{1}^{\pm},~{}\bar{A}_{2}$ are sources and since they do not
depend on the parameters $\kappa,\sigma$ their physical properties are the
same as before. Recall that point $\bar{A}_{2}$ is real for
$\left|\lambda\right|<\sqrt{6}$. Stationary points
$\mathbf{B}=\left(\bar{B}_{1}^{\pm},\bar{B}_{2}^{\pm}\right)$ exist when
$\lambda>\sqrt{7}-1$. The physical parameters at the points are simplified as
follows
$w_{tot}\left(\mathbf{B}\right)=\frac{\lambda-2}{\lambda+2}~{},~{}w_{\phi}\left(\mathbf{B}\right)=-1+\frac{6}{5+\lambda}~{},~{}w_{\psi}\left(\mathbf{B}\right)=1~{},$
(46)
$\Omega_{\phi}\left(\mathbf{B}\right)=\frac{2\left(\lambda+5\right)}{\left(\lambda+2\right)^{2}}~{},~{}\Omega_{\psi}\left(\mathbf{B}\right)=1-\frac{2\left(\lambda+5\right)}{\left(\lambda+2\right)^{2}}.$
(47)
The exact solutions at points $\bar{B}_{1}^{\pm}$ are always unstable.
However, for points $\bar{B}_{2}^{\pm}$ we find that
$e_{2}\left(\bar{B}_{2}^{\pm}\right)>0$ for $\lambda>\sqrt{7}-1$, which means
that points $\bar{B}_{2}^{\pm}$ are sources. The equation of state parameter
$w_{tot}\left(\mathbf{B}\right)$ is constrained as
$\frac{\sqrt{7}-3}{\sqrt{7}+1}<w_{tot}\left(\mathbf{B}\right)<1$. For
$\lambda=2$, $w_{tot}\left(\mathbf{B}\right)=0$ and the exact solutions have
the scale factor $a\left(t\right)=a_{0}t^{\frac{2}{3}}$, while for $\lambda=4$,
$w_{tot}\left(\mathbf{B}\right)=\frac{1}{3}$, that is,
$a\left(t\right)=a_{0}t^{\frac{1}{2}}$.
Furthermore, stationary point $\bar{C}_{1}$ is a source and describes the
radiation epoch, $w_{tot}\left(\bar{C}_{1}\right)=\frac{1}{3}$; on the other
hand, at point $\bar{C}_{2}$ the exact solution is that of a de Sitter
universe, and the point is real for $\lambda<0$. Finally, points
$\bar{D}_{1}^{\pm}$ describe the unstable scaling solutions corresponding to
the matter dominated era, that is, $w_{tot}\left(\bar{D}_{1}^{\pm}\right)=0$.
In Figs. 8 and 9, the evolution of the physical variables
$\left\\{w_{tot},w_{\phi},w_{\psi},\Omega_{\phi},\Omega_{\psi}\right\\}$ is
presented for the specific model for $\lambda=-4$ and $\lambda=-2$ and for
different initial conditions for the integration of the dynamical system
(20)-(24). Recall that the de Sitter point $\bar{C}_{2}$ is a source; however,
it admits a four-dimensional stable manifold when $\mu\rightarrow 0$. We
observe that at the de Sitter point the physical parameters
$\Omega_{\phi},~{}\Omega_{\psi}$ are not zero, which means that all the parts
of the potential $V\left(\phi,\psi\right)$ contribute to the cosmological
fluid. The initial conditions have been chosen so as to describe a wide range
of solutions and different behaviours. The large number of stationary points
is reflected in the behaviour of $w_{tot}$, which has various maxima before
reaching the de Sitter point. Similarly, from the diagram of
$\left\\{\Omega_{\phi},~{}\Omega_{\psi}\right\\}$, we observe that there is an
alternation between the domination of the two fields.
Figure 8: Evolution of the physical variables
$\left\\{w_{tot},w_{\phi},w_{\psi},\Omega_{\phi},\Omega_{\psi}\right\\}$ for
numerical solutions of the field equations with
$\kappa=2,~{}\sigma=\frac{1}{2}$ and $\lambda=-4$. The plots are for different
initial conditions
$\left(x\left(0\right),y\left(0\right),z\left(0\right),u\left(0\right),\mu\left(0\right)\right)$
where $\mu\left(0\right)$ has been chosen near zero, such that the de Sitter
point $\bar{C}_{2}$ is an attractor. Figure 9: Evolution of the physical
variables
$\left\\{w_{tot},w_{\phi},w_{\psi},\Omega_{\phi},\Omega_{\psi}\right\\}$ for
numerical solutions of the field equations with
$\kappa=2,~{}\sigma=\frac{1}{2}$ and $\lambda=-2$. The plots are for different
initial conditions
$\left(x\left(0\right),y\left(0\right),z\left(0\right),u\left(0\right),\mu\left(0\right)\right)$
where $\mu\left(0\right)$ has been chosen near zero, such that the de Sitter
point $\bar{C}_{2}$ is an attractor.
Consider now the cosmographic parameters $q,~{}j$ and $s$, which are defined
as in cowei
$q\left(x,y,z,u,\mu;\lambda,\kappa,\sigma\right)=-1-\frac{\dot{H}}{H^{2}}$
(48)
$j\left(x,y,z,u,\mu;\lambda,\kappa,\sigma\right)=\frac{\ddot{H}}{H^{3}}-3q-2$
(49)
$s\left(x,y,z,u,\mu;\lambda,\kappa,\sigma\right)=\frac{H^{\left(3\right)}}{H^{4}}+4j+3q(q+4)+6$
(50)
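Given any Hubble function $H(t)$, definitions (48)-(50) can be evaluated mechanically. As a hedged illustration, the following sympy sketch computes $q$, $j$ and $s$ for an assumed power-law background with $H=p/t$ (chosen purely for checkability; it is not one of the solutions above). The de Sitter limit $p\rightarrow\infty$ recovers $q=-1$, $j=1$, $s=1$.

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)
H = p / t   # assumed Hubble rate of a power-law scale factor a ~ t**p

q = -1 - sp.diff(H, t) / H**2
j = sp.diff(H, t, 2) / H**3 - 3*q - 2
s = sp.diff(H, t, 3) / H**4 + 4*j + 3*q*(q + 4) + 6

print(sp.simplify(q))   # equals -1 + 1/p
print(sp.simplify(j))   # equals 1 - 3/p + 2/p**2
print(sp.simplify(s))   # equals 1 - 6/p + 11/p**2 - 6/p**3
```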
In Fig. 10 we present the evolution of the cosmographic parameters for the
application considered in this example, as well as for additional values of
the free parameters $\left\\{\lambda,\kappa,\sigma\right\\}$; all the plots
are for the same initial conditions. Here we present the qualitative evolution
of these parameters; however, the cosmographic parameters, as well as the free
parameters of the theory, can be constrained by observations cob1 .
Figure 10: Qualitative evolution of the cosmographic parameters
$\left\\{q,j,s\right\\}$ for various values of the free parameters
$\left\\{\lambda,\kappa,\sigma\right\\}$. The plots of the first row are for
$\left\\{\lambda,\kappa,\sigma\right\\}=\left\\{-4,\pm 2,\frac{1}{2}\right\\}$
while the plots of the second row are for
$\left\\{\lambda,\kappa,\sigma\right\\}=\left\\{-2,\pm
2,\frac{1}{2}\right\\}$. From the figure we observe that for the future
attractor to be a de Sitter point, $\kappa>0$ is required.
In the following section we continue our analysis by presenting analytic
solutions for the model of our study.
## VI Analytic solution
We consider the point-like Lagrangian
$\mathcal{L}\left(a,\dot{a},\phi,\dot{\phi},\psi,\dot{\psi}\right)=-3a\dot{a}^{2}+\frac{1}{2}a^{3}\left(\dot{\phi}^{2}+e^{\kappa\phi}\dot{\psi}^{2}\right)-a^{3}\left(V_{0}e^{\lambda\phi}+U_{0}\psi^{\frac{1}{\sigma}}e^{\kappa\phi}\right).$
(51)
Analytic solutions of the form of Lagrangian (51) were presented before in
2sfand . By using the results and the analysis of 2sfand we present an
analytic solution for specific values of the parameters
$\left\\{\lambda,\kappa,\sigma\right\\}$ in order to support the results of
the previous section. Specifically, for the free variables we select
$\left(\lambda,\kappa,\sigma\right)=\left(-\frac{\sqrt{6}}{2},-\frac{\sqrt{6}}{2},\frac{1}{2}\right)$.
These values are not random: from the results of 2sfand it follows that for
these specific values the field equations admit conservation laws and form a
Liouville integrable dynamical system, such that the field equations can be
solved by quadratures.
In order to simplify the field equations and write the analytic solution by
using closed-form functions, we apply the point transformation
$a=\left(xz-\frac{3}{8}y^{2}\right)^{\frac{1}{3}}~{},~{}\phi=-2\sqrt{\frac{2}{3}}\ln\left(\frac{x}{\sqrt{\left(xz-\frac{3}{8}y^{2}\right)}}\right)~{},~{}\psi=\frac{y}{x}$
(52)
such that Lagrangian (51) is written as
$\mathcal{L}\left(x,\dot{x},y,\dot{y},z,\dot{z}\right)=-\frac{4}{3}\dot{x}\dot{z}-V_{0}x^{2}+\frac{1}{2}\dot{y}^{2}-U_{0}y^{2}.$
(53)
In the new coordinates the field equations are
$\ddot{x}=0~{},~{}\ddot{y}+2U_{0}y=0~{},~{}\ddot{z}-\frac{3}{2}V_{0}x=0,$ (54)
with constraint equation
$-\frac{4}{3}\dot{x}\dot{z}+V_{0}x^{2}+\frac{1}{2}\dot{y}^{2}+U_{0}y^{2}=0.$
(55)
Easily, we find the exact solution
$x=x_{1}t+x_{0}~{},~{}z=\frac{1}{4}V_{0}x_{1}t^{3}+\frac{3}{4}V_{0}x_{0}t^{2}+z_{1}t+z_{0}~{},$
(56)
$y\left(t\right)=y_{1}\cos\left(\sqrt{2U_{0}}t\right)+y_{2}\sin\left(\sqrt{2U_{0}}t\right)$
(57)
with the constraint condition
$V_{0}x_{0}^{2}-\frac{4}{3}x_{1}z_{1}+U_{0}\left(y_{1}^{2}+y_{2}^{2}\right)=0$.
For $x_{0}=z_{0}=y_{1}=0$, the scale factor is written
$a\left(t\right)=\left(\frac{x_{1}^{2}}{4}V_{0}t^{4}+x_{1}z_{1}t^{2}-\frac{3}{8}y_{2}^{2}\sin^{2}\left(\sqrt{2U_{0}}t\right)\right)^{\frac{1}{3}}.$
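The solution above can be verified symbolically. The following sympy sketch (our own check) substitutes (56)-(57) into the field equations (54) and the constraint (55): the three residuals vanish identically, and the constraint collapses to the algebraic condition on the integration constants.

```python
import sympy as sp

t = sp.symbols('t', real=True)
x1, x0, z1, z0, y1, y2, V0 = sp.symbols('x1 x0 z1 z0 y1 y2 V0', real=True)
U0 = sp.symbols('U0', positive=True)

x = x1*t + x0
z = sp.Rational(1, 4)*V0*x1*t**3 + sp.Rational(3, 4)*V0*x0*t**2 + z1*t + z0
y = y1*sp.cos(sp.sqrt(2*U0)*t) + y2*sp.sin(sp.sqrt(2*U0)*t)

# Residuals of the field equations (54): each should simplify to zero.
print(sp.simplify(sp.diff(x, t, 2)))
print(sp.simplify(sp.diff(y, t, 2) + 2*U0*y))
print(sp.simplify(sp.diff(z, t, 2) - sp.Rational(3, 2)*V0*x))

# Constraint (55): should collapse (up to rearrangement) to
# V0*x0**2 - (4/3)*x1*z1 + U0*(y1**2 + y2**2).
C = (-sp.Rational(4, 3)*sp.diff(x, t)*sp.diff(z, t)
     + V0*x**2 + sp.Rational(1, 2)*sp.diff(y, t)**2 + U0*y**2)
print(sp.simplify(C))
```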
It is easy to observe that the present analytic solution does not provide any
de Sitter point. This is in agreement with the results of the previous
section, since for $\lambda=\kappa$ the de Sitter point $C_{2}$ does not
exist. For more general solutions with expansion eras and de Sitter phases we
refer the reader to 2sfand .
## VII With a matter source
Let us now assume the presence of an additional pressureless matter source in
the field equations with energy density $\rho_{m}$, and let us discuss the
existence of additional stationary points. For a pressureless fluid source the
dimensionless field equations (20)-(25) remain the same, while the constraint
equation (28) becomes
$\Omega_{m}=1-x^{2}-y^{2}-z^{2}-u^{2}$ (58)
where $\Omega_{m}=\frac{\rho_{m}}{3H^{2}}$, and $0\leq\Omega_{m}\leq 1$.
For this model, the stationary points found before still exist and give
$\Omega_{m}=0$, while for $\Omega_{m}\neq 0$ the following additional points
exist:
$E_{1}=\left(-\sqrt{\frac{3}{2}}\frac{1}{\lambda},\sqrt{\frac{3}{2}}\frac{1}{\lambda},0,0,0\right)~{},~{}E_{2}=\left(-\sqrt{\frac{3}{2}}\frac{1}{\kappa},0,0,\sqrt{\frac{3}{2}}\frac{1}{\kappa},0\right)$
(59)
$E_{3}=\left(-\sqrt{\frac{3}{2}}\frac{1}{\kappa},0,z,\sqrt{\frac{3}{2}+\kappa^{2}z^{2}}\frac{1}{\kappa},0\right)$
(60)
Point $E_{1}$ is physically accepted when
$\left|\lambda\right|>\sqrt{\frac{3}{2}}$ and describes the tracking solution
with $\Omega_{m}\left(E_{1}\right)=1-\frac{3}{\lambda^{2}}$ where the field
$\phi$ mimics the ideal gas $\rho_{m},~{}$that is,
$w_{\phi}\left(E_{1}\right)=0,$ while the second field $\psi$ does not
contribute, i.e. $z\left(E_{1}\right)=u\left(E_{1}\right)=0$.
For $E_{2}$ we find
$\left(w_{tot}\left(E_{2}\right),w_{\phi}\left(E_{2}\right),w_{\psi}\left(E_{2}\right),\Omega_{\phi}\left(E_{2}\right),\Omega_{\psi}\left(E_{2}\right)\right)=\left(0,1,-1,\frac{3}{2\kappa^{2}},\frac{3}{2\kappa^{2}}\right)$,
which means that it is another tracking solution with
$\Omega_{m}=1-\frac{3}{\kappa^{2}}$; the point is physically accepted when
$\left|\kappa\right|\geq\sqrt{\frac{3}{2}}$.
$E_{3}$ does not describe one point, but a family of points on the surface
$u\left(z\right)=$ $\sqrt{\frac{3}{2}+\kappa^{2}z^{2}}$ , for
$x\left(E_{3}\right)=-\sqrt{\frac{3}{2}}\frac{1}{\kappa},$
$y\left(E_{3}\right)=\mu\left(E_{3}\right)=0$. It describes a tracking
solution, that is $w_{tot}\left(E_{3}\right)=0$, with physical parameters
$\left(w_{tot}\left(E_{3}\right),w_{\phi}\left(E_{3}\right),w_{\psi}\left(E_{3}\right),\Omega_{\phi}\left(E_{3}\right),\Omega_{\psi}\left(E_{3}\right)\right)=\left(0,1,-\frac{3}{4+3\kappa^{2}z^{2}},\frac{3}{2\kappa^{2}},2z^{2}+\frac{3}{2\kappa^{2}}\right),$
(61)
while the point is physically accepted when
$\left|\kappa\right|\geq\sqrt{\frac{3}{2}}$ and
$\left|z\right|\leq\frac{1}{2}\sqrt{2-\frac{3}{\kappa^{2}}}$. When
$z\left(E_{3}\right)=0$, then $E_{3}$ reduces to $E_{2}$. It is important to
mention that the stability analysis for all the previous points changes, since
we made use of the constraint equation (28).
## VIII Conclusions
In this work we performed a detailed study of the dynamics of a two scalar
field model with a mixed potential term, known as the Chiral model. The
purpose of our analysis was to study the cosmological evolution of this
specific model, its cosmological viability, and which epochs of the
cosmological evolution can be described by the Chiral model.
For the scalar field potential we assumed that it is of the form
$V\left(\phi,\psi\right)=V_{0}e^{\lambda\phi}+U_{0}\psi^{\frac{1}{\sigma}}e^{\kappa\phi}$.
For this consideration, and without assuming the existence of an additional
matter source, we found four families of stationary points which provide nine
different cosmological solutions. Eight of the cosmological solutions are
scaling solutions which describe spacetimes with a perfect fluid with a
constant equation of state parameter $w\left(P\right)$. One of the scaling
solutions describes a universe with stiff matter, $w\left(P\right)=1$,
another scaling solution corresponds to a universe with a pressureless fluid
source, $w\left(P\right)=0$, while for the remaining six scaling solutions
$w\left(P\right)=w\left(P,\lambda,\kappa,\sigma\right)$, which can describe
accelerated eras for specific values of the free parameters
$\left\\{\lambda,\kappa,\sigma\right\\}$. Moreover, the ninth exact
cosmological solution found from the analysis of the stationary
points describes a de Sitter universe.
As far as the stability of the exact solutions at the stationary points is
concerned, seven of the points are always unstable, while only the set of
points $B_{2}^{\pm}~{}$can be stable. Point $C_{2}$, which describes the de
Sitter universe, has one zero eigenvalue while the rest of the eigenvalues
have negative real parts. Consequently, by applying the center manifold
theorem we found the internal surface where the point $C_{2}$ is a source.
Moreover, in the presence of an additional matter source only additional
tracking solutions follow, similarly to the quintessence model.
From the above results we observe that this specific Chiral cosmological model
can describe the major eras of the cosmological history, that is, the late
expansion era, an unstable matter dominated era, and two scaling solutions
which describe the radiation dominated era and the early acceleration epoch;
therefore, in terms of its dynamics the model is cosmologically viable.
From this analysis it is clear that the Chiral cosmological model can be used
as a dark energy candidate. In future work we plan to use cosmological
observations to constrain the theory.
## References
* (1) A. G. Riess, et al., Astron J. 116, 1009 (1998)
* (2) S. Perlmutter, et al., Astrophys. J. 517, 565 (1998)
* (3) P. Astier et al., Astrophys. J. 659, 98 (2007)
* (4) N. Suzuki et al., Astrophys. J. 746, 85 (2012)
* (5) G. Hinshaw et al. [WMAP Collaboration], Astrophys. J. Suppl. 208, 19 (2013)
* (6) P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A13 (2016)
* (7) A. Guth, Phys. Rev. D 23, 347 (1981)
* (8) B. Ratra and P.J.E. Peebles, Phys. Rev. D 37, 3406 (1988)
* (9) G.W. Hordenski, Int. J. Theor. Phys. 10, 363 (1975)
* (10) C. Deffayet, G. Esposito-Farese and A. Vikman, Phys. Rev. D 79, 084003 (2009)
* (11) A.A. Coley and R.J. van den Hoogen, Phys. Rev. D 62, 023517 (2000)
* (12) T. Clifton, P.G. Ferreira, A. Padilla and C. Skordis, Phys. Rept. 513, 1 (2012)
* (13) J. Klusoň, Class. Quantum Grav. 28, 125025 (2011)
* (14) D. Sáez-Gómez, Phys. Rev. D 85, 023009 (2012)
* (15) J.D. Barrow and A.A.H. Graham, Phys. Rev. D 91, 083513 (2015)
* (16) D.N. Page, Class. Quant. Grav. 1, 417 (1984)
* (17) S. Basilakos and G. Luke-Gerakopoulos, Phys. Rev. D 78, 083509 (2008)
* (18) J.D. Barrow, Phys. Rev. D 48, 1585 (1993)
* (19) A. Muslimov, Class. Quant. Grav. 7, 231 (1990)
* (20) G.F.R. Ellis and M.S. Madsen, Class. Quant. Grav. 8, 667 (1991)
* (21) J.D. Barrow and P. Saich, Class. Quant. Grav. 10, 279 (1993)
* (22) R. de Ritis, G. Marmo, G. Platania, C. Rubano, P. Scudellaro and C. Stornaiolo, Phys. Rev. D. 42 1091 (1990)
* (23) S. Basilakos, M. Tsamparlis and A. Paliathanasis, Phys. Rev. D 83, 103512 (2011)
* (24) J.D. Barrow and A. Paliathanasis, Phys. Rev. D 94, 083518 (2016)
* (25) E.J. Copeland, M. Sami and S. Tsujikawa, IJMPD 15, 1753 (2006)
* (26) G. Leon and F.O. Franz Silva, Generalized scalar field cosmologies, arXiv:1912.09856
* (27) V. Faraoni, Cosmology in Scalar-Tensor Gravity, Springer, Dordrecht (2004)
* (28) V. Sivanesan, Phys. Rev. D 90, 104006 (2014)
* (29) V. Gorini, A. Yu. Kamenshchik, U. Moschella and V. Pasquier, Phys. Rev. D 69, 123512 (2004)
* (30) N. Chow and J. Khoury, Phys. Rev. D 80, 024037 (2009)
* (31) G. Leon and E.N. Saridakis, JCAP 1303, 025 (2013)
* (32) L.P. Chimento, M. Forte, R. Lazkoz and M.G. Richarte, Phys. Rev. D 043502 (2009)
* (33) J. Socorro and E.O. Nunez, Eur. Phys. J. Plus 132, 168 (2017)
* (34) A. Giacomini, G. Leon, A. Paliathanais and S. Pan, EPJC 80, 184 (2020)
* (35) A. Paliathanasis, Gen. Rel. Grav. 51, 101 (2019)
* (36) D. Benisty and E.I. Guendelman, Class. Quantum Grav. 36, 095001 (2019)
* (37) A.D. Linde, Phys. Rev. D 49, 784 (1994)
* (38) E.J. Copeland, A.R. Liddle, D.H. Lyth, E.W. Steward and D. Wands, Phys. Rev. D 49, 6410 (1994)
* (39) S.A. Kim and A.R. Liddle, Phys. Rev. D 74, 023513 (2006)
* (40) D. Wands, Lect. Notes Phys. 738, 275 (2008)
* (41) P. Carrilho, D. Mulryne, J. Ronaye and T. Tenkanen, JCAP 06, 032 (2018)
* (42) P. Christodoulidis, D. Roest and E.I. Sfakianakis, JCAP 11, 002 (2019)
* (43) W. Hu, Phys. Rev. D 71, 047301 (2005)
* (44) Y.-F. Cai, E.N. Saridakis, M.R. Setare and J.-Q. Xia, Phys. Rept. 493, 1 (2010)
* (45) R. Lazkoz, G. Leon and I. Quiros, Phys. Lett. B 649, 103 (2007)
* (46) S.V. Chervon, Quantum Matter 2, 71 (2013)
* (47) I.V. Fomin, J. Phys.: Conf. Ser. 918, 012009 (2017)
* (48) S. V. Ketov, Quantum Non-linear Sigma Models, Springer-Verlag, Berlin, (2000).
* (49) N. Dimakis, A. Paliathanasis, P.A. Terzis and T. Christodoulakis, EPJC 79, 618 (2019)
* (50) P. Christodoulidis, D. Roest and E.I. Sfakianakis, JCAP 12, 059 (2019)
* (51) A. Paliathanasis and M. Tsamparlis, Phys. Rev. D 90, 043529 (2014)
* (52) L. Amendola, G. Camargo Campos and R. Rosenfeld, Phys. Rev. D 75, 083506 (2007)
* (53) D. Pavón and B. Wang, Gen. Rel. Grav. 41, 1 (2009)
* (54) L. P. Chimento, Phys. Rev. D 81, 043525 (2010)
* (55) F. Arevalo, A. P. R. Bacalhau and W. Zimdahl, Class. Quant. Grav. 29, 235001 (2012)
* (56) A. Paliathanasis, S. Pan and W. Yang, IJMPD 28, 1950161 (2019)
* (57) G. Papagiannopoulos, P. Tsiami, S. Basilakos and A. Paliathanasis, EPJC 80, 55 (2020)
* (58) D. Begue, C. Stahl and S.-S. Xue, Nucl. Phys. B 940, 312 (2019)
* (59) M. Szydlowski, T. Stachowiak and R. Wojtak, Phys. Rev. D 73, 063516 (2006)
* (60) W. Yang, S. Pan and A. Paliathanasis, MNRAS 482, 1007 (2019)
* (61) S. Pan, W. Yang and A. Paliathanasis, to appear in MNRAS (DOI:10.1093/mnras/staa213) (2020)
* (62) L. Amendola, D. Polarski and S. Tsujikawa, IJMPD 16, 1555 (2007)
* (63) G. Leon and E.N. Saridakis, JCAP 1504, 031 (2015)
* (64) G. Leon, IJMPE 20, 19 (2011)
* (65) T. Gonzales, G. Leon and I. Quiros, Class. Quantum Grav. 23, 3165 (2006)
* (66) A. Giacomini, S. Jamal, G. Leon, A. Paliathanasis and J. Saveedra, Phys. Rev. D 95, 124060 (2017)
* (67) G. Chee and Y. Guo, Class. Quantum Grav. 29, 235022 (2012) [Corrigendum: Class. Quantum Grav. 33, 209501 (2016)]
* (68) S. Mishra and S. Chakraborty, EPJC 79, 328 (2019)
* (69) H. Farajollahi and A. Salehi, JCAP 07, 036 (2011)
* (70) M. Kerachian, G. Acquaviva and G. Lukes-Gerakopoulos, Phys. Rev. D 101, 043535 (2020)
* (71) S. Weinberg, Gravitation and cosmology: Principles and applications of the general theory of relativity, Wiley, New York, (1972)
* (72) J.-Q. Xia, V. Vitagliano, S. Liberati, M. Viel, Phys. Rev. D. 85, 043520 (2012)
|
2024-09-04T02:54:59.224615 | 2020-02-25T14:44:12 | 2003.05370 | {
"authors": "Ernesto Jim\\'enez-Ruiz, Asan Agibetov, Jiaoyan Chen, Matthias Samwald,\n Valerie Cross",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26168",
"submitter": "Ernesto Jimenez-Ruiz",
"url": "https://arxiv.org/abs/2003.05370"
} | arxiv-papers | # Dividing the Ontology Alignment Task
with Semantic Embeddings and Logic-based Modules††Accepted to the 24th
European Conference on Artificial Intelligence (ECAI 2020)
Ernesto Jiménez-Ruiz (City, University of London, UK, and Department of
Informatics, University of Oslo, Norway; email: ernesto.jimenez-<EMAIL_ADDRESS>),
Asan Agibetov (Section for Artificial Intelligence and Decision Support,
Medical University of Vienna, Vienna, Austria), Jiaoyan Chen (Department of
Computer Science, University of Oxford, UK), Matthias Samwald (Section for
Artificial Intelligence and Decision Support, Medical University of Vienna,
Vienna, Austria), Valerie Cross (Miami University, Oxford, OH 45056, United
States)
###### Abstract
Large ontologies still pose serious challenges to state-of-the-art ontology
alignment systems. In this paper we present an approach that combines a neural
embedding model and logic-based modules to accurately divide an input ontology
matching task into smaller and more tractable matching (sub)tasks. We have
conducted a comprehensive evaluation using the datasets of the Ontology
Alignment Evaluation Initiative. The results are encouraging and suggest that
the proposed method is adequate in practice and can be integrated within the
workflow of systems unable to cope with very large ontologies.
## 1 Introduction
The problem of (semi-)automatically computing an alignment between
independently developed ontologies has been extensively studied in the last
years. As a result, a number of sophisticated ontology alignment systems
currently exist [44, 15].222Ontology matching surveys and approaches:
http://ontologymatching.org/ The Ontology Alignment Evaluation
Initiative333OAEI evaluation campaigns: http://oaei.ontologymatching.org/
(OAEI) [3, 4] has played a key role in the benchmarking of these systems by
facilitating their comparison on the same basis and the reproducibility of the
results. The OAEI includes different tracks organised by different research
groups. Each track contains one or more matching tasks involving small-size
(e.g., conference), medium-size (e.g., anatomy), large (e.g., phenotype) or
very large (e.g., largebio) ontologies. Some tracks only involve matching at
the terminological level (e.g., concepts and properties) while other tracks
also expect an alignment at the assertional level (i.e., instance data).
Large ontologies still pose serious challenges to ontology alignment systems.
For example, several systems participating in the _largebio track_ were unable
to complete the largest tasks during the latest OAEI campaigns (largebio
track: http://www.cs.ox.ac.uk/isg/projects/SEALS/oaei/). These systems typically
use advanced alignment methods and are able to cope with small and medium size
ontologies with competitive results, but fail to complete large tasks in a
given time frame or with the available resources such as memory.
There have been several efforts in the literature to divide the ontology
alignment task (e.g., [20, 22]). These approaches, however, have not been
successfully evaluated with very large ontologies, failing to scale or
producing partitions of the ontologies leading to information loss [42]. In
this paper we propose a novel method to accurately divide the matching task
into several independent, smaller and manageable (sub)tasks, so as to scale
systems that cannot cope with very large ontologies (a preliminary version of
this work has been published in arXiv [25] and in the Ontology Matching
workshop [26]). Unlike state-of-the-art approaches, our method: (i) preserves
the coverage of the relevant ontology alignments while keeping manageable
matching subtasks; (ii) provides a formal notion of matching subtask and
semantic context; (iii) uses neural embeddings to compute an accurate
division by learning semantic similarities between words and ontology entities
according to the ontology alignment task at hand; (iv) computes self-
contained (logical) modules to guarantee the inclusion of the (semantically)
relevant information required by an alignment system; and (v) has been
successfully evaluated with very large ontologies.
## 2 Preliminaries
A _mapping_ (also called match) between entities of two ontologies
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ is typically represented as a 4-tuple
$\langle e_{1},\allowbreak e_{2},\allowbreak r,\allowbreak c\rangle$ where
$e_{1}$ and $e_{2}$ are entities of $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$,
respectively; $r$ is a semantic relation, typically one of
$\\{\sqsubseteq,\sqsupseteq,\equiv\\}$; and $c$ is a confidence value, usually
a real number within the interval $\left(0,1\right]$. In this work we accept
any input ontology in the OWL 2 language [18], and we refer to (OWL 2)
concepts, properties and individuals as entities. For simplicity,
we refer to a mapping as a pair $\langle e_{1},\allowbreak e_{2}\rangle$. An
ontology _alignment_ is a set of mappings $\mathcal{M}$ between two ontologies
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$.
An ontology _matching task_ $\mathcal{M}\mathcal{T}$ is composed of a pair of
ontologies $\mathcal{O}_{1}$ (typically called source) and $\mathcal{O}_{2}$
(typically called target) and possibly an associated _reference alignment_
$\mathcal{M}^{RA}$. The objective of a matching task is to discover an
overlapping of $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ in the form of an
alignment $\mathcal{M}$. The _size_ or _search space_ of a matching task is
typically bound to the size of the Cartesian product between the entities of
the input ontologies: $\lvert Sig(\mathcal{O}_{1})\rvert\times\lvert
Sig(\mathcal{O}_{2})\rvert$, where $Sig(\mathcal{O})$ denotes the signature
(i.e., entities) of $\mathcal{O}$ and $\lvert\cdot\lvert$ denotes the size of
a set.
An ontology _matching system_ is a program that, given as input a matching
task $\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$,
generates an ontology alignment $\mathcal{M}^{S}$ (typically automatically,
although there are systems that also allow human interaction [32]). The
standard evaluation measures for an alignment $\mathcal{M}^{S}$ are
_precision_ (P), _recall_ (R) and _f-measure_ (F) computed against a reference
alignment $\mathcal{M}^{RA}$ as follows:
$P=\frac{\lvert\mathcal{M}^{S}\cap\mathcal{M}^{RA}\rvert}{\lvert\mathcal{M}^{S}\rvert},~{}R=\frac{\lvert\mathcal{M}^{S}\cap\mathcal{M}^{RA}\rvert}{\lvert\mathcal{M}^{RA}\rvert},~{}F=2\cdot\frac{P\cdot
R}{P+R}$ (1)
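For concreteness, the following minimal Python sketch (our own illustration; the entity names are borrowed from the LexI example table later in the paper purely for readability) computes Equation (1) for alignments represented as sets of entity pairs.

```python
def evaluate(system, reference):
    """Precision, recall and f-measure (Equation 1) for alignments
    given as sets of (e1, e2) mapping pairs."""
    tp = len(system & reference)
    p = tp / len(system) if system else 0.0
    r = tp / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy example: three reference mappings, two found by the system.
ref = {("O1:Disorder_of_pregnancy", "O2:Pregnancy_Disorder"),
       ("O1:Basaloid_carcinoma", "O2:Basaloid_Carcinoma"),
       ("O1:Follicular_thyroid_carcinoma", "O2:Follicular_Thyroid_carcinoma")}
sys_out = {("O1:Disorder_of_pregnancy", "O2:Pregnancy_Disorder"),
           ("O1:Basaloid_carcinoma", "O2:Basaloid_Carcinoma"),
           ("O1:Disorder_of_stomach", "O2:Pregnancy_Disorder")}
print(evaluate(sys_out, ref))   # P = R = F = 2/3
```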
Figure 1: Pipeline to divide a given matching task
$\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$.
### 2.1 Problem definition and quality measures
We denote _division_ of an ontology matching task $\mathcal{M}\mathcal{T}$,
composed by the ontologies $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$, as the
process of finding $n$ matching subtasks
$\mathcal{M}\mathcal{T}_{i}=\langle\mathcal{O}_{1}^{i},\mathcal{O}_{2}^{i}\rangle$
(with $i$=$1$,…,$n$), where $\mathcal{O}_{1}^{i}\subset\mathcal{O}_{1}$ and
$\mathcal{O}_{2}^{i}\subset\mathcal{O}_{2}$.
Size of the division. The size of each matching subtask is smaller than the
original task and thus reduces the search space. Let
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}=\\{\mathcal{M}\mathcal{T}_{1},\ldots,\mathcal{M}\mathcal{T}_{n}\\}$
be the division of a matching task $\mathcal{M}\mathcal{T}$ into $n$ subtasks.
The _size ratio_ of the subtasks $\mathcal{M}\mathcal{T}_{i}$ and
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ with respect to the original
matching task size is computed as follows:
$\mathsf{SizeRatio}(\mathcal{M}\mathcal{T}_{i},\mathcal{M}\mathcal{T})=\frac{\lvert
Sig(\mathcal{O}_{1}^{i})\rvert\times\lvert
Sig(\mathcal{O}_{2}^{i})\rvert}{\lvert Sig(\mathcal{O}_{1})\rvert\times\lvert
Sig(\mathcal{O}_{2})\rvert}$ (2)
$\mathsf{SizeRatio}(\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n},\mathcal{M}\mathcal{T})=\sum_{i=1}^{n}\mathsf{SizeRatio}(\mathcal{M}\mathcal{T}_{i},\mathcal{M}\mathcal{T})$
(3)
The ratio
$\mathsf{SizeRatio}(\mathcal{M}\mathcal{T}_{i},\mathcal{M}\mathcal{T})$ is
less than $1.0$, while the aggregation
$\sum_{i=1}^{n}\mathsf{SizeRatio}(\mathcal{M}\mathcal{T}_{i},\mathcal{M}\mathcal{T})$,
where $n$ is the number of matching subtasks, can be greater than $1.0$ since
the matching subtasks depend on the division technique and may overlap.
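Equations (2) and (3) translate directly into code. In the sketch below, which is our own simplification, a (sub)task is modelled simply as a pair of entity-signature sets.

```python
def size_ratio(subtask, task):
    """SizeRatio (Equation 2); a (sub)task is modelled here as a pair
    (sig1, sig2) of entity-signature sets."""
    (s1, s2), (o1, o2) = subtask, task
    return (len(s1) * len(s2)) / (len(o1) * len(o2))

def division_size_ratio(division, task):
    """Aggregated SizeRatio of a division (Equation 3); it may exceed
    1.0 because the subtasks are allowed to overlap."""
    return sum(size_ratio(subtask, task) for subtask in division)
```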
Alignment coverage. The division of the matching task aims at preserving the
target outcomes of the original matching task. The _coverage_ is calculated
with respect to a relevant alignment $\mathcal{M}$, possibly the reference
alignment $\mathcal{M}^{RA}$ of the matching task if it exists, and indicates
whether that alignment can still be (potentially) discovered with the matching
subtasks. The formal notion of coverage is given in Definitions 1 and 2.
###### Definition 1 (Coverage of a matching task)
Let $\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$ be
a matching task and $\mathcal{M}$ an alignment. We say that a mapping
$m=\langle e_{1},\allowbreak e_{2}\rangle\in\mathcal{M}$ is covered by the
matching task if $e_{1}\in Sig(\mathcal{O}_{1})$ and $e_{2}\in
Sig(\mathcal{O}_{2})$. The coverage of $\mathcal{M}\mathcal{T}$ w.r.t.
$\mathcal{M}$ (denoted as
$\mathsf{Coverage}(\mathcal{M}\mathcal{T},\mathcal{M})$) represents the set of
mappings $\mathcal{M}^{\prime}\subseteq\mathcal{M}$ covered by
$\mathcal{M}\mathcal{T}$.
###### Definition 2 (Coverage of the matching task division)
Let
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}=\\{\mathcal{M}\mathcal{T}_{1},\ldots,\mathcal{M}\mathcal{T}_{n}\\}$
be the result of dividing a matching task $\mathcal{M}\mathcal{T}$ and
$\mathcal{M}$ an alignment. We say that a mapping $m\in\mathcal{M}$ is covered
by $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ if $m$ is at least covered by
one of the matching subtask $\mathcal{M}\mathcal{T}_{i}$ (with $i$=$1$,…,$n$)
as in Definition 1. The coverage of $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$
w.r.t. $\mathcal{M}$ (denoted as
$\mathsf{Coverage}(\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n},\mathcal{M})$)
represents the set of mappings $\mathcal{M}^{\prime}\subseteq\mathcal{M}$
covered by $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$. The coverage is given
as a ratio with respect to the (covered) alignment:
$\mathsf{CoverageRatio}(\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n},\mathcal{M})=\frac{\lvert\mathsf{Coverage}(\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n},\mathcal{M})\rvert}{\lvert\mathcal{M}\rvert}$
(4)
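Definitions 1 and 2 and Equation (4) can likewise be implemented in a few lines, again modelling each subtask as a pair of signature sets; this sketch is illustrative only.

```python
def coverage(division, alignment):
    """Mappings covered by at least one subtask (Definitions 1 and 2);
    each subtask is a pair (sig1, sig2) of entity sets."""
    return {(e1, e2) for (e1, e2) in alignment
            if any(e1 in s1 and e2 in s2 for (s1, s2) in division)}

def coverage_ratio(division, alignment):
    """CoverageRatio (Equation 4)."""
    return len(coverage(division, alignment)) / len(alignment)
```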
## 3 Methods
In this section we present our approach to compute a division
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}=\\{\mathcal{M}\mathcal{T}_{1},\ldots,\mathcal{M}\mathcal{T}_{n}\\}$
given a matching task
$\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$ and the
number of target subtasks $n$. We rely on locality ontology modules to extract
self-contained modules of the input ontologies. The module extraction and task
division is tailored to the ontology alignment task at hand by embedding the
contextual semantics of a (combined) inverted index of the ontologies in the
matching task.
Figure 1 shows an overview of our approach. (i) The ontologies
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ are indexed using the lexical index
LexI (see Section 3.2); (ii) LexI is divided into clusters based on the
semantic embeddings of its entries (see Section 3.4); (iii) entries in those
clusters derive potential mapping sets (see Section 3.3); and (iv) the context
of these mapping sets lead to matching subtasks (see Sections 3.1 and 3.3).
Next, we elaborate on the methods behind these steps.
### 3.1 Locality modules and context
Logic-based module extraction techniques compute ontology fragments that
capture the meaning of an input signature (e.g., set of entities) with respect
to a given ontology. That is, a module contains the context (i.e., sets of
_semantically related_ entities) of the input signature. In this paper we rely
on bottom-locality modules [13, 29], which will be referred to as locality-
modules or simply as modules. These modules include the ontology axioms
required to describe the entities in the signature. Locality-modules compute
self-contained ontologies and are tailored to tasks that require reusing a
fragment of an ontology. Please refer to [13, 29] for further details.
Locality-modules play a key role in our approach as they provide the context
for the entities in a given mapping or set of mappings as formally presented
in Definition 3.
###### Definition 3 (Context of a mapping and an alignment)
Let $m=\langle e_{1},\allowbreak e_{2}\rangle$ be a mapping between two
ontologies $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$. We define the context of
$m$ (denoted as $\mathsf{Context}(m,\mathcal{O}_{1},\mathcal{O}_{2})$) as a
pair of locality modules $\mathcal{O}_{1}^{\prime}\subseteq\mathcal{O}_{1}$
and $\mathcal{O}_{2}^{\prime}\subseteq\mathcal{O}_{2}$, where
$\mathcal{O}_{1}^{\prime}$ and $\mathcal{O}_{2}^{\prime}$ include the
semantically related entities to $e_{1}$ and $e_{2}$, respectively. Similarly,
the _context_ for an alignment $\mathcal{M}$ between two ontologies
$\mathcal{O}_{1}$ and $\mathcal{O}_{2}$ is denoted as
$\mathsf{Context}(\mathcal{M},\mathcal{O}_{1},\mathcal{O}_{2})=\langle\mathcal{O}_{1}^{\prime},\mathcal{O}_{2}^{\prime}\rangle$,
where $\mathcal{O}_{1}^{\prime}$ and $\mathcal{O}_{2}^{\prime}$ are modules
including the semantically related entities for the entities $e_{1}\in
Sig(\mathcal{O}_{1})$ and $e_{2}\in Sig(\mathcal{O}_{2})$ in each mapping
$m=\langle e_{1},\allowbreak e_{2}\rangle\in\mathcal{M}$.
Intuitively, as the context of an alignment (i.e.,
$\mathsf{Context}(\mathcal{M},\mathcal{O}_{1},\mathcal{O}_{2})=\langle\mathcal{O}_{1}^{\prime},\mathcal{O}_{2}^{\prime}\rangle$)
semantically characterises the entities involved in that alignment, a matching
task $\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$
can be reduced to the task
$\mathcal{M}\mathcal{T}^{\mathcal{M}}_{\mathcal{O}_{1}\text{-}\mathcal{O}_{2}}=\langle\mathcal{O}_{1}^{\prime},\mathcal{O}_{2}^{\prime}\rangle$
without information loss in terms of finding $\mathcal{M}$ (i.e.,
$\mathsf{Coverage}(\mathcal{M}\mathcal{T}^{\mathcal{M}}_{\mathcal{O}_{1}\text{-}\mathcal{O}_{2}},\mathcal{M})=\mathcal{M}$).
For example, in the small OAEI _largebio_ tasks [3, 4] systems are given the
context of the reference alignment as a (reduced) matching task (e.g.,
$\mathcal{M}\mathcal{T}^{RA}_{\text{fma-
nci}}=\mathsf{Context}(\mathcal{M}^{RA}_{\text{fma-nci}},\
\mathcal{O}_{\text{FMA}},\mathcal{O}_{\text{NCI}})=\langle\mathcal{O}_{\text{FMA}}^{\prime},\mathcal{O}_{\text{NCI}}^{\prime}\rangle$),
instead of the whole FMA and NCI ontologies.
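To convey the shape of this computation, the sketch below uses the ancestor closure of a plain class hierarchy as a crude stand-in for a locality module; genuine locality modules are computed over the ontology axioms [13, 29], so this toy version is an assumption-laden simplification.

```python
def toy_module(hierarchy, signature):
    """Crude stand-in for a locality module: the signature together with
    all of its ancestors in a hierarchy given as a child -> parents dict.
    (Real locality modules are computed over the ontology axioms.)"""
    module, stack = set(), list(signature)
    while stack:
        entity = stack.pop()
        if entity not in module:
            module.add(entity)
            stack.extend(hierarchy.get(entity, []))
    return module

def context(alignment, hier1, hier2):
    """Context of an alignment (Definition 3) under the toy module notion."""
    sig1 = {e1 for e1, _ in alignment}
    sig2 = {e2 for _, e2 in alignment}
    return toy_module(hier1, sig1), toy_module(hier2, sig2)
```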
Table 1: Inverted lexical index LexI. For readability, index values have been split into elements of $\mathcal{O}_{1}$ and $\mathcal{O}_{2}$. ‘-’ indicates that the ontology does not contain entities for that entry.
# | Index key | Index value: Entities $\mathcal{O}_{1}$ | Index value: Entities $\mathcal{O}_{2}$
---|---|---|---
1 | $\\{$ disorder $\\}$ | $\mathcal{O}_{1}$:Disorder_of_pregnancy, $\mathcal{O}_{1}$:Disorder_of_stomach | $\mathcal{O}_{2}$:Pregnancy_Disorder
2 | $\\{$ disorder, pregnancy $\\}$ | $\mathcal{O}_{1}$:Disorder_of_pregnancy | $\mathcal{O}_{2}$:Pregnancy_Disorder
3 | $\\{$ carcinoma, basaloid $\\}$ | $\mathcal{O}_{1}$:Basaloid_carcinoma | $\mathcal{O}_{2}$:Basaloid_Carcinoma, $\mathcal{O}_{2}$:Basaloid_Lung_Carcinoma
4 | $\\{$ follicul, thyroid, carcinom $\\}$ | $\mathcal{O}_{1}$:Follicular_thyroid_carcinoma | $\mathcal{O}_{2}$:Follicular_Thyroid_carcinoma
5 | $\\{$ hamate, lunate $\\}$ | $\mathcal{O}_{1}$:Lunate_facet_of_hamate | -
### 3.2 Indexing the ontology vocabulary
We rely on a semantic inverted index (we will refer to this index as LexI).
This index maps sets of words to the entities where these words appear. LexI
encodes the labels of all entities of the input ontologies $\mathcal{O}_{1}$
and $\mathcal{O}_{2}$, including their lexical variations (e.g., preferred
labels, synonyms), in the form of _key-value_ pairs where the key is a set of
words and the value is a set of entities such that the set of words of the key
appears in (one of) the entity labels. Similar indexes are commonly used in
information retrieval applications [11], Entity Resolution systems [40], and
also exploited in ontology alignment systems (e.g., LogMap [27], ServOMap [14]
and AML [16]) to reduce the search space and enhance the matching process.
Table 1 shows a few example entries of LexI for two input ontologies.
LexI is created as follows. (i) Each label associated to an ontology entity
is split into a set of words; for example, the label “Lunate facet of hamate”
is split into the set {“lunate”, “facet”, “of”, “hamate”}. (ii) Stop-words
are removed from the set of words. (iii) Stemming techniques are applied to
each word (i.e., {“lunat”, “facet”, “hamat”}). (iv) Combinations of subsets
of words also serve as keys in LexI; for example, {“lunat”, “facet”},
{“hamat”, “lunat”} and so on (in order to avoid a combinatorial blow-up, the
number of computed subsets of words is limited). (v) Entities leading to the
same (sub)set of words are associated to the same key in LexI; for example,
{“disorder”} is associated with three entities. Finally, (vi) entries in LexI
pointing to entities of only one ontology or associated to a number of
entities larger than $\alpha$ are not considered (in the experiments we used
$\alpha=60$). Note that a single entity label may lead to several entries in
LexI, and each entry in LexI points to one or more entities.
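Steps (i)-(vi) can be prototyped compactly. The sketch below is a simplified LexI builder of our own: it tokenizes labels on underscores, uses a toy stop-word list and a crude six-character prefix in place of a real stemmer, all assumptions made for brevity.

```python
from itertools import combinations

STOP = {"of", "the", "and"}            # toy stop-word list (assumption)
stem = lambda w: w[:6]                 # crude stand-in for a real stemmer

def build_lexi(labels1, labels2, alpha=60, max_key_size=3):
    """Simplified LexI builder (steps i-vi): maps frozensets of stemmed
    words to the pair of entity sets whose labels contain those words.
    Inputs are dicts entity -> list of labels; tokenization is naive."""
    index = {}
    for side, labels in enumerate((labels1, labels2)):
        for entity, entity_labels in labels.items():
            for label in entity_labels:
                words = sorted({stem(w) for w in label.lower().split("_")
                                if w not in STOP})
                for k in range(1, min(len(words), max_key_size) + 1):
                    for key in combinations(words, k):
                        entry = index.setdefault(frozenset(key),
                                                 (set(), set()))
                        entry[side].add(entity)
    # step (vi): keep entries hitting both ontologies and few entities
    return {key: (s1, s2) for key, (s1, s2) in index.items()
            if s1 and s2 and len(s1) + len(s2) <= alpha}

o1 = {"O1:Disorder_of_pregnancy": ["Disorder_of_pregnancy"],
      "O1:Disorder_of_stomach": ["Disorder_of_stomach"]}
o2 = {"O2:Pregnancy_Disorder": ["Pregnancy_Disorder"]}
print(build_lexi(o1, o2))   # keys include {'disord'} and {'disord','pregna'}
```

Running it on the two “disorder” labels of Table 1 reproduces entries analogous to rows 1 and 2.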
### 3.3 Covering matching subtasks
Each entry (i.e., a _key-value_ pair) in LexI is a source of candidate
mappings. For instance, the example in Table 1 suggests that there is a
candidate mapping
$m=\langle\mathsf{\mathcal{O}_{1}\negthickspace:\negthickspace
Disorder\\_of\\_stomach},\allowbreak\mathsf{\mathcal{O}_{2}\negthickspace:\negthickspace
Pregnancy\\_disorder}\rangle$ since these entities are associated to the
{“disorder”} entry in LexI. These mappings are not necessarily correct but
will link lexically-related entities, that is, those entities sharing at least
one word among their labels (e.g., “disorder”). Given a subset of entries or
rows of LexI (i.e., $l\subseteq\textsf{LexI}$), the function
$\mathsf{Mappings}(l)=\mathcal{M}^{l}$ provides the set of mappings derived
from $l$. We refer to the set of all (potential) mappings suggested by LexI
(i.e., $\mathsf{Mappings}(\textsf{LexI})$) as $\mathcal{M}^{\textsf{LexI}}$.
$\mathcal{M}^{\textsf{LexI}}$ represents a manageable subset of the Cartesian
product between the entities of the input ontologies. For example, LexI
suggests around $2\times 10^{4}$ potential mappings for the matching task
$\mathcal{M}\mathcal{T}_{\text{fma-
nci}}=\langle\mathcal{O}_{\text{FMA}},\mathcal{O}_{\text{NCI}}\rangle$, while
the Cartesian product between $\mathcal{O}_{\text{FMA}}$ and
$\mathcal{O}_{\text{NCI}}$ involves more than $5\times 10^{9}$ mappings.
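Operationally, $\mathsf{Mappings}(l)$ is a union of small Cartesian products, as the following sketch (our own illustration over entry 1 of Table 1) shows.

```python
from itertools import product

def mappings(lexi_entries):
    """Mappings(l): all O1 x O2 entity pairs sharing a LexI key."""
    return {(e1, e2)
            for s1, s2 in lexi_entries.values()
            for e1, e2 in product(s1, s2)}

# Entry 1 of Table 1: key {"disorder"} links two O1 entities to one O2 entity.
toy = {frozenset({"disorder"}): ({"O1:Disorder_of_pregnancy",
                                  "O1:Disorder_of_stomach"},
                                 {"O2:Pregnancy_Disorder"})}
print(mappings(toy))
# {('O1:Disorder_of_pregnancy', 'O2:Pregnancy_Disorder'),
#  ('O1:Disorder_of_stomach', 'O2:Pregnancy_Disorder')}
```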
Since standard ontology alignment systems rarely discover mappings outside
$\mathcal{M}^{\textsf{LexI}}$, the context of $\mathcal{M}^{\textsf{LexI}}$
(recall Definition 3) can be seen as a reduced matching task
$\mathcal{M}\mathcal{T}^{\textsf{LexI}}=\mathsf{Context}(\mathcal{M}^{\textsf{LexI}},\mathcal{O}_{1},\mathcal{O}_{2})=\langle\mathcal{O}_{1}^{\textsf{LexI}},\mathcal{O}_{2}^{\textsf{LexI}}\rangle$
of the original task
$\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$.
However, the modules $\mathcal{O}_{1}^{\textsf{LexI}}$ and
$\mathcal{O}_{2}^{\textsf{LexI}}$, although smaller than $\mathcal{O}_{1}$ and
$\mathcal{O}_{2}$, can still be challenging for many ontology matching
systems. A solution is to divide or cluster the entries in LexI to lead to
several tasks involving smaller ontology modules.
###### Definition 4 (Matching subtasks from LexI)
Let $\mathcal{M}\mathcal{T}=\langle\mathcal{O}_{1},\mathcal{O}_{2}\rangle$ be
a matching task, _LexI_ the inverted index of the ontologies $\mathcal{O}_{1}$
and $\mathcal{O}_{2}$, and $\\{l_{1},\ldots,l_{n}\\}$ a set of $n$ clusters of
entries in _LexI_. We denote the set of matching subtasks from _LexI_ as
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}=\\{\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{1},\ldots,\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{n}\\}$
where each cluster $l_{i}$ leads to the matching subtask
$\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{i}=\langle\mathcal{O}_{1}^{i},\mathcal{O}_{2}^{i}\rangle$,
such that $\mathsf{Mappings}(l_{i})=\mathcal{M}^{\textsf{LexI}}_{i}$ is the
set of mappings suggested by the _LexI_ entries in $l_{i}$ (i.e., _key-value_
pairs) and $\mathcal{O}_{1}^{i}$ and $\mathcal{O}_{2}^{i}$ represent the
context of $\mathcal{M}^{\textsf{LexI}}_{i}$ w.r.t. $\mathcal{O}_{1}$ and
$\mathcal{O}_{2}$.
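A minimal sketch of Definition 4 follows; the locality-module extractor is treated as an assumed black box here (the paper relies on the modules of [13]), and `mappings` is the illustrative sketch given above.

```python
# A minimal sketch of Definition 4: each cluster l_i of LexI entries
# yields one matching subtask <O1^i, O2^i>. `extract_module` is an
# assumed black box computing a locality module for a seed signature.
def subtasks_from_lexi_clusters(clusters, O1, O2, extract_module):
    """clusters: a list of sub-indices l_1..l_n of LexI."""
    subtasks = []
    for l in clusters:
        m_i = mappings(l)               # mappings suggested by l_i
        sig1 = {e1 for e1, _ in m_i}    # seed signature in O1
        sig2 = {e2 for _, e2 in m_i}    # seed signature in O2
        # context of M_i w.r.t. O1 and O2: modules for the seeds
        subtasks.append((extract_module(O1, sig1),
                         extract_module(O2, sig2)))
    return subtasks
```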
Quality of the matching subtasks. The matching subtasks in Definition 4 rely
on LexI and the notion of context; thus, it is expected that the tasks in
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ will cover most of the mappings
$\mathcal{M}^{S}$ that a matching system can compute, that is
$\mathsf{CoverageRatio}(\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n},\mathcal{M}^{S})$
will be close to $1.0$. Furthermore, the use of locality modules to compute
the context guarantees the extraction of matching subtasks that are suitable
to ontology alignment systems in terms of preservation of the logical
properties of the given signature.
Intuitively each cluster of LexI will lead to a smaller matching task
$\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{i}$ (with respect to both
$\mathcal{M}\mathcal{T}^{\textsf{LexI}}$ and $\mathcal{M}\mathcal{T}$) in
terms of search space. Hence
$\mathsf{SizeRatio}(\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{i},\mathcal{M}\mathcal{T})$
will be smaller than $1.0$. The overall aggregation of ratios (cf. Equation 3)
depends on the clustering strategy of the entries in LexI and it is also
expected to be smaller than $1.0$.
Reducing the search space in each matching subtask
$\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{i}$ has the potential of enabling the
evaluation of systems that cannot cope with the original matching task
$\mathcal{M}\mathcal{T}$ in a given time-frame or with (limited) computational
resources.
Table 2: Matching tasks. AMA: Adult Mouse Anatomy. DOID: Human Disease Ontology. FMA: Foundational Model of Anatomy. HPO: Human Phenotype Ontology. MP: Mammalian Phenotype. NCI: National Cancer Institute Thesaurus. NCIA: Anatomy fragment of NCI. ORDO: Orphanet Rare Disease Ontology. SNOMED CT: Systematized Nomenclature of Medicine – Clinical Terms. Phenotype ontologies downloaded from BioPortal. For all tracks we use the consensus with vote=3 as system mappings $\mathcal{M}^{S}$. The Phenotype track does not have a gold standard, so a consensus alignment with vote=2 is used as reference.

OAEI track | Source of $\mathcal{M}^{RA}$ | Source of $\mathcal{M}^{S}$ | Task | Ontology | Version | Size (classes)
---|---|---|---|---|---|---
Anatomy | Manually created [10] | Consensus (vote=3) | AMA-NCIA | AMA | v.2007 | 2,744
 | | | | NCIA | v.2007 | 3,304
Largebio | UMLS-Metathesaurus [28] | Consensus (vote=3) | FMA-NCI | FMA | v.2.0 | 78,989
 | | | FMA-SNOMED | NCI | v.08.05d | 66,724
 | | | SNOMED-NCI | SNOMED CT | v.2009 | 306,591
Phenotype | Consensus alignment (vote=2) [21] | Consensus (vote=3) | HPO-MP | HPO | v.2016 | 11,786
 | | | | MP | v.2016 | 11,721
 | | | DOID-ORDO | DOID | v.2016 | 9,248
 | | | | ORDO | v.2016 | 12,936
### 3.4 Semantic embeddings
We use a _semantic embedding_ approach to identify, given $n$, a set of
clusters of entries $\\{l_{1},\ldots,l_{n}\\}$ from LexI. As in Definition 4,
these clusters lead to the set of matching subtasks
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}=\\{\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{1},\ldots,\mathcal{M}\mathcal{T}^{\textsf{LexI}}_{n}\\}$.
The _semantic embeddings_ aim at representing into the same (vector) space the
features about the relationships among words and ontology entities that occur
in LexI. Hence, words and entities that belong to similar semantic contexts
will typically have similar vector representations.
Embedding model. Our approach currently relies on the StarSpace
toolkit (https://github.com/facebookresearch/StarSpace) and its
neural embedding model [49] to learn _embeddings_ for the words and ontology
entities in LexI. We adopt the _TagSpace_ [48] training setting of StarSpace.
Applied to our setting, StarSpace learns associations between a set of words
(i.e., keys in LexI) and a set of relevant ontology entities (i.e., values in
LexI). The StarSpace model is trained by assigning a $d$-dimensional vector to
each of the relevant features (e.g., the individual words and the ontology
entities in LexI). Ultimately, the look-up matrix (the matrix of embeddings -
latent vectors) is learned by minimising the loss function in Equation 5.
$\sum_{(w,e)\in E^{+},\,e^{-}\in E^{-}}L^{batch}\big(sim(\bm{v}_{w},\bm{v}_{e}),\,sim(\bm{v}_{w},\bm{v}_{e_{1}^{-}}),\ldots,sim(\bm{v}_{w},\bm{v}_{e_{j}^{-}})\big)$ (5)
In this loss function we compare positive samples with negative samples. Hence
we need to indicate the generator of positive pairs $(w,e)\in E^{+}$ (in our
setting those are _word-entity_ pairs from LexI) and the generator of negative
entries $e^{-}\in E^{-}$ (in our case we sample from the list of entities in
the values of LexI). StarSpace follows the strategy by Mikolov et al. [36] and
selects a random subset of $j$ negative examples for each batch update. Note
that we tailor the generators to the alignment task by sampling from LexI. The
similarity function $sim$ operates on $d$-dimensional vectors (e.g.,
$\bm{v}_{w}$, $\bm{v}_{e}$ and $\bm{v}_{e^{-}}$); in our case we use the
standard dot product in Euclidean space.
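As one concrete reading of Equation 5, the sketch below evaluates a hinge-style ranking loss for a single positive _word-entity_ pair against $j$ sampled negatives under dot-product similarity; the margin value and the exact batch aggregation used by StarSpace are assumptions here, not details given in the text.

```python
# A minimal numpy sketch, in the spirit of Equation 5, of a hinge-style
# ranking loss: each sampled negative entity should score at least
# `margin` below the positive pair (margin value is an assumption).
import numpy as np

def batch_loss(v_w, v_e_pos, V_e_neg, margin=0.05):
    """Sum of margin violations under dot-product similarity."""
    pos = v_w @ v_e_pos          # sim(v_w, v_e)
    negs = V_e_neg @ v_w         # sim(v_w, v_e_k^-), k = 1..j
    return np.sum(np.maximum(0.0, margin - pos + negs))

rng = np.random.default_rng(0)
d = 64                           # embedding dimension used in Section 4
v_w, v_e = rng.normal(size=d), rng.normal(size=d)
V_neg = rng.normal(size=(5, d))  # j = 5 sampled negative entities
print(batch_loss(v_w, v_e, V_neg))
```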
Clustering strategy. The semantic embedding of each entry
$\varepsilon=(K,V)\in$ LexI is calculated by concatenating (i) the mean
vector representation of the vectors associated to each word in the key $K$,
with (ii) the mean vector of the vectors of the ontology entities in the
value $V$, as in Equation 6, where $\oplus$ represents the concatenation of
two vectors, $\bm{v}_{w}$ and $\bm{v}_{e}$ represent $d$-dimensional vector
embeddings learnt by StarSpace, and $\bm{v}_{\varepsilon}$ is a
($2d$)-dimensional vector.
$\bm{v}_{\varepsilon}=\frac{1}{|K|}\sum_{w\in
K}\bm{v}_{w}\oplus\frac{1}{|V|}\sum_{e\in V}\bm{v}_{e}$ (6)
Based on the embeddings $\bm{v}_{\varepsilon}$ we then perform standard
clustering with the K-means algorithm to obtain the clusters of LexI entries
$\\{l_{1},\ldots,l_{n}\\}$. For example, following our approach, in the
example of Table 1 entries in rows $1$ and $2$ (respectively $3$ and $4$)
would belong to the same cluster.
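The following sketch implements Equation 6 and the K-means step; the embedding lookups are assumed to come from a trained StarSpace model and are represented here as plain dictionaries of numpy vectors.

```python
# A minimal sketch of Equation 6 and the clustering strategy: each LexI
# entry is embedded as the concatenation of the mean word vector of its
# key and the mean entity vector of its value, then K-means groups the
# entries into n clusters of LexI entries {l_1, ..., l_n}.
import numpy as np
from sklearn.cluster import KMeans

def embed_entry(key_words, value_entities, word_vec, ent_vec):
    """v_eps = mean of word vectors (+) mean of entity vectors."""
    vw = np.mean([word_vec[w] for w in key_words], axis=0)
    ve = np.mean([ent_vec[e] for e in value_entities], axis=0)
    return np.concatenate([vw, ve])          # a (2*d)-dim vector

def cluster_lexi(entries, word_vec, ent_vec, n):
    """entries: list of (key_words, value_entities) pairs from LexI."""
    X = np.stack([embed_entry(k, v, word_vec, ent_vec)
                  for k, v in entries])
    labels = KMeans(n_clusters=n, n_init=10,
                    random_state=0).fit_predict(X)
    clusters = [[] for _ in range(n)]
    for entry, lab in zip(entries, labels):
        clusters[lab].append(entry)
    return clusters
```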
Suitability of the embedding model. Although we could have followed other
embedding strategies, we opted to learn new entity embeddings with
StarSpace for the following reasons: (i) ontologies, particularly in the
biomedical domain, may use specialised vocabulary that is not fully covered
by precomputed word embeddings; (ii) we embed not only words but also
concepts of both ontologies; and (iii) we obtain embeddings tailored to the
ontology alignment task (i.e., similarities among words and concepts that
depend on the task). StarSpace provides the required functionalities to
embed the semantics of LexI and identify accurate clusters. Precise clusters
will lead to smaller matching tasks, and thus, to a reduced global size of the
computed division of the matching task
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ (cf. Equation 3).
## 4 Evaluation
In this section we provide empirical evidence to support the suitability of
the proposed method to divide the ontology alignment task. We rely on the
datasets of the Ontology Alignment Evaluation Initiative (OAEI) [3, 4], more
specifically, on the matching tasks provided in the _anatomy_ , _largebio_ and
_phenotype_ tracks. Table 2 provides an overview of these OAEI tasks and the
related ontologies and mapping sets.
The methods have been implemented in Java
(https://github.com/ernestojimenezruiz/logmap-matcher) and Python
(https://github.com/plumdeq/neuro-onto-part, the neural embedding strategy),
and tested on an Ubuntu laptop with an Intel Core i9-8950HK CPU and up to
$25$ Gb of allocated RAM. Datasets, matching subtasks, computed mappings
and other supporting resources are available in the _Zenodo_ repository [24].
For all of our experiments we used the following StarSpace hyperparameters:
-trainMode 0 -similarity dot --epoch 100 --dim 64.
Figure 2: Quality measures of $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ with
respect to the number of matching subtasks $n$: (a) $\mathsf{CoverageRatio}$
of $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ over $\mathcal{M}^{RA}$;
(b) $\mathsf{CoverageRatio}$ of $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$
over $\mathcal{M}^{S}$; (c) $\mathsf{SizeRatio}$ of
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$; (d) module sizes of
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ for FMA-NCI.
### 4.1 Adequacy of the division approach
We have evaluated the adequacy of our division strategy in terms of coverage
(as in Equation 4) and size (as in Equation 3) of the resulting division
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ for each of the matching tasks in
Table 2.
Coverage ratio. Figures 2(a) and 2(b) show the coverage of the different
divisions $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ with respect to the
reference alignment and the system-computed mappings, respectively. As system
mappings we have used the consensus alignment with vote=3, that is, mappings
that have been voted by at least $3$ systems in the last OAEI campaigns. The
overall coverage results are encouraging: (i) the divisions
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ cover over $94\%$ of the reference
alignments for all tasks, with the exception of the SNOMED-NCI case where
coverage ranges from $0.94$ to $0.90$; (ii) when considering system mappings,
the coverage for all divisions is over $0.98$ with the exception of AMA-NCIA,
where it ranges from $0.956$ to $0.974$; (iii) increasing the number of
divisions $n$ tends to slightly decrease the coverage in some of the test
cases; this is expected behaviour, as the computed divisions include
different semantic contexts (i.e., locality modules) and some relevant
entities may fall outside the division; finally, (iv) as shown in [42], the
coverage results of state-of-the-art partitioning methods (e.g.,
[22, 20]) are very low for the OAEI _largebio_ track ($0.76$, $0.59$ and
$0.67$ as the best results for FMA-NCI, FMA-SNOMED and SNOMED-NCI,
respectively), which makes the obtained results even more valuable.
Size ratio. The results in terms of the size (i.e., search space) of the
selected divisions $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ are presented in
Figure 2(c). The search space is improved with respect to the original
$\mathcal{M}\mathcal{T}$ for all the cases, getting as low as $5\%$ of the
original matching task size for the FMA-NCI and FMA-SNOMED cases. The gain in
the reduction of the search space gets relatively stable after a given
division size; this result is expected since the context provided by locality
modules guarantees that each module contains the necessary semantically
related entities. The scatter plot in Figure 2(d) visualises the size of the
source modules against the size of the target modules for the FMA-NCI
matching subtasks with divisions of size $n\in\{5,20,50,100\}$. For instance,
the (blue) circles
represent points $\big{(}\lvert Sig(\mathcal{O}_{1}^{i})\rvert,\lvert
Sig(\mathcal{O}_{2}^{i})\rvert\big{)}$ being $\mathcal{O}_{1}^{i}$ and
$\mathcal{O}_{2}^{i}$ the source and target modules (with $i$=$1$,…,$5$) in
the matching subtasks of $\mathcal{D}_{\mathcal{M}\mathcal{T}}^{5}$. It can be
noted that, on average, the size of source and target modules decreases as the
size of the division increases. For example, the largest task in
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{20}$ is represented in point
$(6754,9168)$, while the largest task in
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{100}$ is represented in point
$(2657,11842)$.
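For reference, the two quality measures can be computed as in the sketch below; the exact definitions are Equations 3 and 4, given earlier in the paper, and the formulas used here are a natural reading of those measures, stated as assumptions.

```python
# A minimal sketch (assumed reading of CoverageRatio and SizeRatio): a
# mapping counts as covered if both of its entities fall inside the
# signatures of one and the same matching subtask.
def coverage_ratio(subtask_sigs, mappings):
    """subtask_sigs: list of (Sig(O1^i), Sig(O2^i)) pairs."""
    covered = sum(1 for e1, e2 in mappings
                  if any(e1 in s1 and e2 in s2
                         for s1, s2 in subtask_sigs))
    return covered / len(mappings)

def size_ratio(subtask_sigs, n1, n2):
    """Aggregated search-space ratio of the division with respect to
    the original task of |O1| = n1 by |O2| = n2 entities."""
    return sum(len(s1) * len(s2)
               for s1, s2 in subtask_sigs) / (n1 * n2)
```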
Table 3: Evaluation of systems that failed to complete OAEI tasks in the 2015-2018 campaigns. Times reported in seconds (s).

Tool | Task | Matching subtasks | P | R | F | Min (s) | Max (s) | Total (s)
---|---|---|---|---|---|---|---|---
MAMBA (v.2015) | AMA-NCIA | 5 | 0.870 | 0.624 | 0.727 | 73 | 785 | 1,981
 | | 10 | 0.885 | 0.623 | 0.731 | 41 | 379 | 1,608
 | | 50 | 0.897 | 0.623 | 0.735 | 8 | 154 | 1,377
FCA-Map (v.2016) | FMA-NCI | 20 | 0.656 | 0.874 | 0.749 | 39 | 340 | 2,934
 | | 50 | 0.625 | 0.875 | 0.729 | 19 | 222 | 3,213
 | FMA-SNOMED | 50 | 0.599 | 0.251 | 0.354 | 6 | 280 | 3,455
 | | 100 | 0.569 | 0.253 | 0.350 | 5 | 191 | 3,028
 | SNOMED-NCI | 150 | 0.704 | 0.629 | 0.664 | 5 | 547 | 16,822
 | | 200 | 0.696 | 0.630 | 0.661 | 5 | 395 | 16,874
SANOM (v.2017) | FMA-NCI | 20 | 0.475 | 0.720 | 0.572 | 40 | 1,467 | 9,374
 | | 50 | 0.466 | 0.726 | 0.568 | 15 | 728 | 7,069
 | FMA-SNOMED | 100 | 0.145 | 0.210 | 0.172 | 3 | 1,044 | 13,073
 | | 150 | 0.143 | 0.209 | 0.170 | 3 | 799 | 10,814
POMap++ (v.2018) | FMA-NCI | 20 | 0.697 | 0.732 | 0.714 | 24 | 850 | 5,448
 | | 50 | 0.701 | 0.748 | 0.724 | 11 | 388 | 4,041
 | FMA-SNOMED | 50 | 0.520 | 0.209 | 0.298 | 4 | 439 | 5,879
 | | 100 | 0.522 | 0.209 | 0.298 | 3 | 327 | 4,408
ALOD2vec (v.2018) | FMA-NCI | 20 | 0.697 | 0.813 | 0.751 | 115 | 2,141 | 13,592
 | | 50 | 0.698 | 0.813 | 0.751 | 48 | 933 | 12,162
 | FMA-SNOMED | 100 | 0.702 | 0.183 | 0.290 | 9 | 858 | 12,688
 | | 150 | 0.708 | 0.183 | 0.291 | 7 | 581 | 10,449
Computation times. The time to compute the divisions of the matching task is
tied to the number of locality modules to extract, which can be computed in
polynomial time relative to the size of the input ontology [13]. The creation
of LexI does not add significant overhead, while the training of the neural
embedding model ranges from $21s$ in AMA-NCI to $224s$ in SNOMED-NCI. Overall,
for example, the required time to compute the division with $100$ matching
subtasks ranges from $23s$ (AMA-NCIA) to approx. $600s$ (SNOMED-NCI).
### 4.2 Evaluation of OAEI systems
In this section we show that the division of the alignment task enables
systems that, given some computational constraints, were unable to complete an
OAEI task. We have selected the following five systems from the latest OAEI
campaigns, which include novel alignment techniques but failed to scale to
very large matching tasks: MAMBA (v.2015) [35], FCA-Map (v.2016) [52], SANOM
(v.2017) [37], ALOD2vec (v.2018) [43] and POMap++ (v.2018) [30]. MAMBA failed
to complete the anatomy track, while FCA-Map, SANOM, ALOD2vec and POMap++
could not complete the largest tasks in the largebio track. MAMBA and SANOM
threw an out-of-memory exception with $25Gb$, whereas FCA-Map, ALOD2vec and
POMap++ did not complete the tasks within a $6$ hours time-frame. We have used
the SEALS infrastructure to conduct the evaluation [3, 4].
Table 3 shows the obtained results in terms of precision, recall, f-measure,
and computation times (time for the easiest and the hardest task, and total
time for all tasks) over different divisions
$\mathcal{D}_{\mathcal{M}\mathcal{T}}^{n}$ computed using our strategy. For
example, FCA-Map was run over divisions with 20 and 50 matching subtasks
(i.e., $n\in\\{20,50\\}$) in the FMA-NCI case. Note that for each matching
subtask a system generates a partial alignment $\mathcal{M}^{S}_{i}$, the
final alignment for the (original) matching task is computed as the union of
all partial alignments
($\mathcal{M}^{S}=\bigcup_{i=1}^{n}\mathcal{M}^{S}_{i}$). The results are
encouraging and can be summarised as follows:
1.) We enabled several systems to produce results even for the largest OAEI
test case (e.g., FCA-Map with SNOMED-NCI).
2.) The computation times are also very good, falling under the $6$-hour
time frame, especially given that the (independent) matching subtasks were
run sequentially without parallelization.
3.) Increasing the size of the divisions is, with the exception of FCA-Map,
beneficial in terms of total computation time.
4.) Increasing the number of matching subtasks is positive or neutral for
MAMBA, POMap++ and ALOD2vec in terms of f-measure, while it slightly reduces
the f-measure of FCA-Map and SANOM.
5.) Global f-measure results are lower than those of top OAEI systems;
nevertheless, since the above systems could not be evaluated at all without
the divisions, these results were obtained without any fine-tuning of their
parameters.
6.) The computation time of the hardest task is also reduced as $n$
increases. This has a positive impact on the monitoring of alignment
systems, as the hardest task is completed in a reasonable time.
## 5 Related work
Partitioning and blocking. Partitioning and modularization techniques have
been extensively used within the Semantic Web to improve the efficiency when
solving the task at hand (e.g., visualization [45, 1], reuse [29], debugging
[47], classification [7]). Partitioning or blocking has also been widely used
to reduce the complexity of the ontology alignment task [16]. In the
literature there are two major categories of partitioning techniques, namely:
_independent_ and _dependent_. Independent techniques typically use only the
structure of the ontologies and are not concerned with the ontology alignment
task when performing the partitioning, whereas dependent partitioning methods
rely on both the structure of the ontology and the ontology alignment task at
hand. Although our approach does not compute (non-overlapping) partitions of
the ontologies, it can be considered a dependent technique.
Prominent examples of ontology alignment systems including partitioning
techniques are Falcon-AO [22], GOMMA [19], COMA++ [5] and TaxoMap [20].
Falcon-AO, GOMMA and COMA++ perform independent partitioning where the
clusters of the source and target ontologies are independently extracted. Then
pairs of similar clusters (i.e., matching subtasks) are aligned using standard
techniques. TaxoMap [20] implements a dependent technique where the
partitioning is combined with the matching process. TaxoMap proposes two
methods, namely: PAP (partition, anchor, partition) and APP (anchor,
partition, partition). The main difference of these methods is the order of
extraction of (preliminary) anchors to discover pairs of partitions to be
matched (i.e., matching subtasks). SeeCOnt [2] presents a seeding-based
clustering technique to discover independent clusters in the input ontologies.
Their approach has been evaluated with the Falcon-AO system by replacing its
native PBM (Partition-based Block Matching) module [23]. Laadhar et al. [30]
have recently integrated within the system POMap++ a hierarchical
agglomerative clustering algorithm to divide an ontology into a set of
partitions.
The above approaches, although they presented interesting ideas, did not
provide guarantees about the size and coverage of the discovered partitions
or divisions. Furthermore, they have not been successfully evaluated on very
large ontologies. On the one hand, as reported by Pereira et al. [42], the
results in terms of coverage of the PBM method of Falcon-OA, and the PAP and
APP methods of TaxoMap are very low for the OAEI largebio track. On the other
hand, as discussed in Section 4, POMap++ fails to scale with the largest
largebio tasks.
Note that the recent work in [31] has borrowed from our workshop paper [26]
the quality measures presented in Section 2.1. They obtain competitive
coverage results for medium size ontologies; however, their approach, as in
POMap++, does not scale for large ontologies.
Blocking techniques are also extensively used in Entity Resolution (see [40]
for a survey). Although related, the problem of blocking in ontologies is
different as the logically related axioms for a seed signature play an
important role when computing the blocks.
Our dependent approach, unlike traditional partitioning and blocking methods,
computes overlapping self-contained modules (i.e., locality modules [13]).
Locality modules guarantee the extraction of all semantically related entities
for a given signature. This capability enhances the coverage results and
enables the inclusion of the (semantically) relevant information required by
an alignment system. It is worth mentioning that the need of self-contained
and covering modules, although not thoroughly studied, was also highlighted in
a preliminary work by Paulheim [41].
Embedding and clustering. Recently, machine learning techniques such as
semantic embedding [12] have been investigated for ontology alignment. They
often first learn vector representations of the entities and then predict the
alignment [9, 51, 46]. However, most of them focus on alignment of ontology
individuals (i.e., ABox) without considering the ontology concepts and axioms
at the terminological level (i.e., TBox). Nkisi-Orji et al. [39] predict the
alignment between ontology concepts with a random forest, but incorporate the
embeddings of words alone, without any other semantic components as in our
work. Furthermore, these approaches focus on predicting the alignment, while
our work aims at boosting an existing alignment system. Our framework could
potentially be adopted in systems like in [39] if facing scalability problems
for large ontologies.
Another piece of related work is the clustering of semantic components, using
the canopy clustering algorithm [33] where objects are grouped into canopies
and each object can be a member of multiple canopies. For example, Wu et al.
[50] first extracted canopies (i.e., mentions) from a knowledge base, and then
grouped the entities accordingly so as to find the entities with the
same semantics (i.e., canonicalization). Since we focus on a different
task, ontology alignment, the context that can be used, such as the
embeddings of the words and ontology entities in LexI, differs from that of
these works, which leads to a different clustering method.
## 6 Conclusions and future work
We have developed a novel framework to split the ontology alignment task into
several matching subtasks based on a semantic inverted index, locality
modules, and a neural embedding model. We have performed a comprehensive
evaluation which suggests that the obtained divisions are suitable in practice
in terms of both coverage and size. The division of the matching task allowed
us to obtain results for five systems which failed to complete these tasks in
the past. We have focused on systems failing to complete a task, but a
suitable adoption and integration of the presented framework within the
pipeline of any ontology alignment system has the potential to improve the
results in terms of computation times.
Opportunities. Reducing the ontology matching task into smaller and more
manageable tasks may also bring opportunities to enhance (i) user interaction
[32], (ii) reasoning and repair [34], (iii) benchmarking and monitoring [3,
4], and (iv) parallelization. The computed independent matching subtasks can
potentially be run in parallel in evaluation platforms like the HOBBIT [38].
The current evaluation was conducted sequentially as (i) the SEALS instance
only allows running one task at a time, and (ii) the evaluated systems were
not designed to run several tasks in parallel; for instance, we managed to run
MAMBA outside SEALS, but it relies on a MySQL database and raised a concurrent
access exception.
Impact on the f-measure. As shown in Section 4.2, the impact of the number of
divisions on the f-measure depends on the evaluated systems. In the near
future we aim at conducting an extensive evaluation of our framework over OAEI
systems able to deal with the largest tasks in order to obtain more insights
about the impact on the f-measure. In [25] we reported a preliminary
evaluation where YAM-Bio [6] and AML [17] kept similar f-measure values, while
LogMap [27] had a reduction in f-measure, as the number of divisions
increased.
Number of subdivisions. Currently our strategy requires the number of
matching subtasks or divisions as input. The required number of matching
subtasks may be known beforehand if, for example, the matching tasks are to
be run in parallel on a number of available CPUs. For the cases where the
resources are
limited or where a matching system is known to cope with small ontologies, we
plan to design an algorithm to estimate the number of divisions so that the
size of the matching subtasks in the computed divisions is appropriate to the
system and resource constraints.
Dealing with a limited or large lexicon. The construction of LexI shares a
limitation with state-of-the-art systems when the input ontologies are
lexically disparate or in different languages. In such cases, LexI can be
enriched with general-purpose lexicons (e.g., WordNet), more specialised
background knowledge (e.g., UMLS Metathesaurus) or with translated labels
using online services (e.g., Google). On the other hand, a large lexicon may
also have an important impact on the computation times. Our
evaluation shows, however, that we can cope with very large ontologies with a
rich lexicon (e.g., NCI Thesaurus).
Notion of context. Locality-based modules are typically much smaller than the
whole ontology and they have led to very good results in terms of size and
coverage. We plan, however, to study different notions of _context_ of an
alignment (e.g., the tailored modules proposed in [8]) to further improve the
results in terms of size while keeping the same level of coverage.
This work was supported by the SIRIUS Centre for Scalable Data Access (Norges
forskningsråd), the AIDA project (Alan Turing Institute), Samsung Research UK,
Siemens AG, and the EPSRC projects AnaLOG, OASIS and UK FIRES. We would also
like to thank the anonymous reviewers that helped us improve this work.
## References
* [1] Asan Agibetov, Giuseppe Patanè, and Michela Spagnuolo, ‘Grontocrawler: Graph-Based Ontology Exploration’, in STAG, (2015).
* [2] Alsayed Algergawy, Samira Babalou, Mohammad J. Kargar, and S. Hashem Davarpanah, ‘SeeCOnt: A New Seeding-Based Clustering Approach for Ontology Matching’, in ADBIS, (2015).
* [3] Alsayed Algergawy et al., ‘Results of the Ontology Alignment Evaluation Initiative 2018’, in 13th Int’l Workshop on Ontology Matching, (2018).
* [4] Alsayed Algergawy et al., ‘Results of the Ontology Alignment Evaluation Initiative 2019’, in Int’l Workshop on Ontology Matching, (2019).
* [5] Alsayed Algergawy, Sabine Massmann, and Erhard Rahm, ‘A clustering-based approach for large-scale ontology matching’, in ADBIS, pp. 415–428, (2011).
* [6] Amina Annane, Zohra Bellahsene, Faiçal Azouaou, and Clément Jonquet, ‘YAM-BIO: results for OAEI 2017’, in 12th Int’l Workshop on Ontology Matching, (2017).
* [7] Ana Armas Romero, Bernardo Cuenca Grau, and Ian Horrocks, ‘MORe: Modular Combination of OWL Reasoners for Ontology Classification’, in Int’l Sem. Web Conf., (2012).
* [8] Ana Armas Romero, Mark Kaminski, Bernardo Cuenca Grau, and Ian Horrocks, ‘Module Extraction in Expressive Ontology Languages via Datalog Reasoning’, J. Artif. Intell. Res., 55, (2016).
* [9] Michael Azmy, Peng Shi, Jimmy Lin, and Ihab F Ilyas, ‘Matching entities across different knowledge graphs with graph embeddings’, CoRR, abs/1903.06607, (2019).
* [10] Olivier Bodenreider, Terry F. Hayamizu, Martin Ringwald, Sherri de Coronado, and Songmao Zhang, ‘Of mice and men: Aligning mouse and human anatomies’, in AMIA Symposium, (2005).
* [11] Stefan Büttcher, Charles L. A. Clarke, and Gordon V. Cormack, Information Retrieval - Implementing and Evaluating Search Engines, MIT Press, 2010.
* [12] Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang, ‘A comprehensive survey of graph embedding: Problems, techniques, and applications’, IEEE Trans. on Know. and Data Eng., 30(9), (2018).
* [13] Bernardo Cuenca Grau, Ian Horrocks, Yevgeny Kazakov, and Ulrike Sattler, ‘Modular reuse of ontologies: Theory and practice’, J. Artif. Intell. Res., 31, (2008).
* [14] Gayo Diallo, ‘An effective method of large scale ontology matching’, J. Biomedical Semantics, 5, 44, (2014).
* [15] Jérôme Euzenat and Pavel Shvaiko, Ontology Matching, Second Edition, Springer, 2013.
* [16] Daniel Faria, Catia Pesquita, Isabela Mott, Catarina Martins, Francisco M. Couto, and Isabel F. Cruz, ‘Tackling the challenges of matching biomedical ontologies’, J. Biomedical Semantics, 9(1), (2018).
* [17] Daniel Faria, Catia Pesquita, Emanuel Santos, Matteo Palmonari, Isabel F. Cruz, and Francisco M. Couto, ‘The AgreementMakerLight Ontology Matching System’, in OTM-ODBASE Conference, (2013).
* [18] Bernardo Cuenca Grau, Ian Horrocks, Boris Motik, Bijan Parsia, Peter F. Patel-Schneider, and Ulrike Sattler, ‘OWL 2: The next step for OWL’, J. Web Semantics, 6(4), (2008).
* [19] Anika Groß, Michael Hartung, Toralf Kirsten, and Erhard Rahm, ‘On matching large life science ontologies in parallel’, in Data Integration in the Life Sciences (DILS), (2010).
* [20] Fayçal Hamdi, Brigitte Safar, Chantal Reynaud, and Haïfa Zargayouna, ‘Alignment-based partitioning of large-scale ontologies’, in Advances in Knowledge Discovery and Management, (2009).
* [21] Ian Harrow, Ernesto Jiménez-Ruiz, et al., ‘Matching disease and phenotype ontologies in the ontology alignment evaluation initiative’, J. Biomedical Semantics, 8(1), (2017).
* [22] Wei Hu, Yuzhong Qu, and Gong Cheng, ‘Matching large ontologies: A divide-and-conquer approach’, Data Knowl. Eng., 67, 140–160, (2008).
* [23] Wei Hu, Yuanyuan Zhao, and Yuzhong Qu, ‘Partition-Based Block Matching of Large Class Hierarchies’, in Asian Sem. Web Conf., (2006).
* [24] Ernesto Jiménez-Ruiz, Asan Agibetov, Jiaoyan Chen, Matthias Samwald, and Valerie Cross. Dividing the Ontology Alignment Task [Data set], 2019. Zenodo. https://doi.org/10.5281/zenodo.3547888.
* [25] Ernesto Jiménez-Ruiz, Asan Agibetov, Matthias Samwald, and Valerie Cross, ‘Breaking-down the ontology alignment task with a lexical index and neural embeddings’, CoRR, abs/1805.12402, (2018).
* [26] Ernesto Jiménez-Ruiz, Asan Agibetov, Matthias Samwald, and Valerie Cross, ‘We divide, you conquer: from large-scale ontology alignment to manageable subtasks with a lexical index and neural embeddings’, in 13th Int’l Workshop on Ontology Matching, (2018).
* [27] Ernesto Jiménez-Ruiz and Bernardo Cuenca Grau, ‘LogMap: Logic-Based and Scalable Ontology Matching’, in Int’l Sem. Web Conf., (2011).
* [28] Ernesto Jiménez-Ruiz, Bernardo Cuenca Grau, Ian Horrocks, and Rafael Berlanga Llavori, ‘Logic-based assessment of the compatibility of UMLS ontology sources’, J. Biomedical Semantics, 2, (2011).
* [29] Ernesto Jiménez-Ruiz, Bernardo Cuenca Grau, Ulrike Sattler, Thomas Schneider, and Rafael Berlanga, ‘Safe and Economic Re-Use of Ontologies: A Logic-Based Methodology and Tool Support’, in European Sem. Web Conf., (2008).
* [30] Amir Laadhar, Faïza Ghozzi, Imen Megdiche, Franck Ravat, Olivier Teste, and Faïez Gargouri, ‘OAEI 2018 results of POMap++’, in 13th Int’l Workshop on Ontology Matching, (2018).
* [31] Amir Laadhar, Faïza Ghozzi, Imen Megdiche, Franck Ravat, Olivier Teste, and Faïez Gargouri, ‘Partitioning and local matching learning of large biomedical ontologies’, in 34th ACM/SIGAPP Symposium on Applied Computing SAC, (2019).
* [32] Huanyu Li, Zlatan Dragisic, Daniel Faria, Valentina Ivanova, Ernesto Jiménez-Ruiz, Patrick Lambrix, and Catia Pesquita, ‘User validation in ontology alignment: functional assessment and impact’, The Knowledge Engineering Review, 34, (2019).
* [33] Andrew McCallum, Kamal Nigam, and Lyle H Ungar, ‘Efficient clustering of high-dimensional data sets with application to reference matching’, in 6th ACM SIGKDD, (2000).
* [34] Christian Meilicke, Alignment incoherence in ontology matching, Ph.D. dissertation, University of Mannheim, 2011.
* [35] Christian Meilicke, ‘MAMBA - results for the OAEI 2015’, in 10th Int’l Workshop on Ontology Matching, (2015).
* [36] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean, ‘Distributed representations of words and phrases and their compositionality’, arXiv, (2013).
* [37] Majid Mohammadi, Amir Ahooye Atashin, Wout Hofman, and Yao-Hua Tan, ‘SANOM results for OAEI 2017’, in 12th Int’l Workshop on Ontology Matching, (2017).
* [38] Axel-Cyrille Ngonga Ngomo, Alejandra García-Rojas, and Irini Fundulaki, ‘HOBBIT: holistic benchmarking of big linked data’, ERCIM News, 2016(105), (2016).
* [39] Ikechukwu Nkisi-Orji, Nirmalie Wiratunga, Stewart Massie, Kit-Ying Hui, and Rachel Heaven, ‘Ontology alignment based on word embedding and random forest classification’, in ECML/PKDD, (2018).
* [40] George Papadakis, Dimitrios Skoutas, Emmanouil Thanos, and Themis Palpanas, ‘A Survey of Blocking and Filtering Techniques for Entity Resolution’, CoRR, abs/1905.06167, (2019).
* [41] Heiko Paulheim, ‘On Applying Matching Tools to Large-scale Ontologies’, in 3rd Int’l Workshop on Ontology Matching, (2008).
* [42] Sunny Pereira, Valerie Cross, and Ernesto Jiménez-Ruiz, ‘On partitioning for ontology alignment’, in Int’l Sem. Web Conf. (Poster), (2017).
* [43] Jan Portisch and Heiko Paulheim, ‘ALOD2Vec matcher’, in 13th Int’l Workshop on Ontology Matching, (2018).
* [44] Pavel Shvaiko and Jérôme Euzenat, ‘Ontology matching: State of the art and future challenges’, IEEE Trans. Knowl. Data Eng., 25(1), (2013).
* [45] Heiner Stuckenschmidt and Anne Schlicht, ‘Structure-based partitioning of large ontologies’, in Modular Ontologies: Concepts, Theories and Techniques for Knowledge Modularization, Springer, (2009).
* [46] Zequn Sun, Jiacheng Huang, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu, ‘TransEdge: Translating Relation-Contextualized Embeddings for Knowledge Graphs’, in Int’l Sem. Web Conf. (ISWC), (2019).
* [47] Boontawee Suntisrivaraporn, Guilin Qi, Qiu Ji, and Peter Haase, ‘A modularization-based approach to finding all justifications for OWL DL entailments’, in Asian Sem. Web Conf., (2008).
* [48] Jason Weston, Sumit Chopra, and Keith Adams, ‘#tagspace: Semantic embeddings from hashtags’, in Conference on Empirical Methods in Natural Language Processing (EMNLP), (2014).
* [49] Ledell Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston, ‘StarSpace: Embed All The Things!’, arXiv, (2017).
* [50] Tien-Hsuan Wu, Zhiyong Wu, Ben Kao, and Pengcheng Yin, ‘Towards practical open knowledge base canonicalization’, in 27th ACM Int’l Conf. on Inf. and Knowledge Management, (2018).
* [51] Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu, ‘Multi-view knowledge graph embedding for entity alignment’, in 28th Int’l Joint Conf. on Art. Intell. (IJCAI), (2019).
* [52] Mengyi Zhao and Songmao Zhang, ‘FCA-Map results for OAEI 2016’, in 11th Int’l Workshop on Ontology Matching, (2016).
# Stratification of polymer-colloid mixtures via fast nonequilibrium evaporation

Kyoungmun Lee and Siyoung Q. Choi

Department of Chemical and Biomolecular Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Korea
###### Abstract
In drying liquid films of polymer-colloid mixtures, the stratification in
which polymers are placed on top of larger colloids is studied. It is often
presumed that the formation of segregated polymer-colloid layers is solely
due to the size disparity at fast evaporation, as in binary colloid mixtures.
By comparing experiments with a theoretical model, we found that the
transition in viscosity near the drying interface was another important
parameter for controlling the formation of stratified layers in polymer-
colloid mixtures. At high evaporation rates, increased polymer concentrations
near the surface lead to a phase transition from the semidilute to the
concentrated regime, in which colloidal particles are kinetically arrested.
Stratification
only occurs if the formation of a stratified layer precedes the evolution to
the concentrated regime near the drying interfaces. Otherwise, the colloids
will be trapped by the polymers in the concentrated regime before forming a
segregated layer. Also, no stratification is observed if the initial polymer
concentration is too low to form a sufficiently high polymer concentration
gradient within a short period of time. Our findings are relevant for
developing solution-cast polymer composites for paints and for antifouling
and antireflective coatings.
## I Introduction
Solution-cast polymer composite films composed of polymer matrices containing
colloidal particles have been widely studied for many applications, including
paints [1], coatings [2,3], and cosmetics [4,5] because they provide highly
improved macroscopic properties relative to the pure polymer [6], through a
simple manufacturing process. The enhanced properties of the dried films are
largely dependent on the spatial distribution of the polymer and colloid
[7-10]. In particular, stratified layers consisting of a polymer layer on a
colloidal layer have exhibited highly improved antifouling performance
[11,12], and photoactive properties [13].
Several previous studies have demonstrated ways of controlling the segregated
layers of polymer-colloid mixtures in an equilibrium state [14-16]. However,
relatively little is known about how polymer-colloid mixtures can be
stratified during the simple, fast and inexpensive nonequilibrium solvent
evaporation process. Although solvent casting is one of the simplest
manufacturing methods, from coffee ring stains [17] to many industrial
applications [1-5], the inherent nonequilibrium nature of drying has made it
difficult to clarify the underlying mechanism.
As a solvent evaporates, the spatial distribution of the solutes in liquid
films is determined by two competing factors: diffusion [18] and receding
drying interfaces. Solutes tend to distribute uniformly in drying films with a
diffusion constant _D_ , while the nonuniform concentration gradient is
developed by the downward velocity of the interface _$v_{ev}$_. Which of the
two phenomena dominates can be quantified by the dimensionless _Péclet_ number
_Pe =_ _$v_{ev}$__$z_{0}/D$_ , where _$z_{0}$_ is the initial film thickness.
If _Pe_ $>$ 1, the solutes cannot diffuse uniformly within the time of
evaporation, and they accumulate near the top of the film. On the other hand,
the drying film shows almost uniform distribution if _Pe_ $<$ 1.
In binary colloid mixtures, it was recently shown that stratifications with
smaller colloids placed on large colloids can be realized if _Pe_ is larger
than 1 [19-22]. This occurs when the concentration gradient of both the large
and smaller particles increases near the liquid/air interface. Fortini _et
al._ [20] proposed that the inverted stratification was caused by an imbalance
in the osmotic pressure between the larger and smaller colloids. Zhou _et al._
[21] suggested that the stratification phenomenon could be explained
quantitatively using a diffusion model, with cross-interaction between the
colloids. Sear and Warren [22] argued that diffusiophoretic motion induced by
the concentration gradient of the smaller components can exclude the larger
colloids from the drying interfaces.
In a way similar to binary colloid mixtures, it has been proposed that a
polymer-colloid mixture can yield the same stratified layers if the _Pe_ of
both the polymer and colloid are larger than 1 [23,24]. However, these results
have only been demonstrated by simulation and modeling studies, and few
experimental studies have been made on polymer-colloid stratification.
Although polymers and colloids can show similar behaviors at very dilute
concentrations [24,25], they might behave much differently at the high
concentrations that any drying solutions must experience for the complete
drying [26,27]. The obvious difference is viscosity. It rapidly increases at
relatively low concentrations in the polymer solution, slowing the motions of
the species [27-29]. In contrast, the viscosity of the colloidal suspension
increases relatively slowly [30]. Thus, the growth in viscosity near the
interface, which can kinetically arrest larger colloids [31-33], needs to be
considered differently for polymer and colloidal systems, but no appropriate
studies have been performed yet.
In this work, we experimentally show that the formation of stratified layers,
where a small polymer layer is placed on larger colloids, can be predicted
using two competing time scales: the time at which the colloid begins to
stratify (_$t_{s}^{*}$_) and the time the colloid is arrested by the
transitions of viscosity near the interface (_$t_{c}^{*}$_).
We consider that the colloid starts to be arrested near the drying interfaces
when the polymer concentration reaches a concentrated regime where the polymer
chains are densely packed [29]. The stratification can be observed only if
_$t_{s}^{*}$_ precedes _$t_{c}^{*}$_ , or _$t_{c}^{*}$_ /_$t_{s}^{*}$_ $>$ 1\.
Otherwise, the viscosity near the drying interface grows rapidly within a
very short time and the colloids are kinetically trapped before the large
colloids develop a sufficient downward velocity away from the surface. In
addition, when the initial polymer concentration is too low, stratification
again does not occur, because the polymer concentration gradient, and hence
the additional migration velocity of the larger colloids, remains
insufficient before evaporation ends.
For the predictive analysis of _$t_{s}^{*}$_ and _$t_{c}^{*}$_ , we propose a
simple model modified from previous work [22]. We observed excellent
agreement between the final film morphologies predicted by the model and
those obtained experimentally. Our comprehensive study predicts the spatial
distribution of polymers and colloids in the final dried film, based on the
experimental system and drying conditions.
## II Result and discussion
### II.1 Structure of dried films of polymer-colloid mixtures

Mixtures of an aqueous polystyrene (PS) suspension with a mean diameter
_$d_{c}$_ = 1 _$\mu$m_ and poly(ethylene glycol) (PEG) or poly(vinyl alcohol)
(PVA) were used as a model system for stratification. The molecular weights
of the polymers, PEG _$M_{n}$_ (number-average molecular weight) 6,000
gmol-1, PEG _$M_{n}$_ 20,000 gmol-1, PVA _MW_ 6,000 gmol-1, and PVA
_$M_{w}$_ (weight-average molecular weight) 13,000-23,000 gmol-1 (PVA
_$M_{w}$_ 18,000), were chosen such that the radius of the colloid
(_$R_{colloid}$_) is much larger than the radius of the polymer
(_$R_{polymer}$_). Before drying, the film solutions contained an initial
volume fraction of _$\phi_{i,p}$_ = 0.01 or 0.04 for the polymer and
_$\phi_{i,c}$_ = 0.67 _$\phi_{i,p}$_ for the colloid, respectively. The
mixture solutions were deposited on glass substrates with an initial film
thickness of _$z_{0}$_ = 1.25 mm.
The evaporation was performed at ambient temperature and a relative humidity
of 23 %, resulting in an initial polymer _Péclet_ number _$Pe_{i,p}$_ $>$ 1
(See Supplemental Material). All of the experimental systems are summarized in
Table I. When the evaporation was completed, the final film morphologies were
analyzed by scanning electron microscopy (SEM) and ImageJ image analysis.
Table 1: Various polymer-colloid systems that were tested. Colloid was fixed as PS to exclude gravitational effects during drying ($\rho_{PS}$ $\approx$ $\rho_{water}$). A total of 8 systems were experimentally tested. _$R_{g}$_ values were determined as described in the Supplemental Material.

Colloid | Polymer | | _$R_{g}$_ (nm) | _$\phi_{i,p}$_ | _$\phi_{i,p}:\phi_{i,c}$_ | _$h_{0}$_ (mm) | Relative humidity | _$Pe_{i,p}$_ (_$\phi_{i,p}$_ = 0.01) | _$Pe_{i,p}$_ (_$\phi_{i,p}$_ = 0.04)
---|---|---|---|---|---|---|---|---|---
PS (r = 500 nm) | PEG | _$M_{n}$_ 6,000 | 3.6 | 0.01 or 0.04 | 3 : 2 | 1.25 | 23 % | 4 | 7
 | | _$M_{n}$_ 20,000 | 7.4 | | | | | 9 | 22
 | PVA | _MW_ 6,000 | 3.5 | | | | | 4 | 9
 | | _$M_{w}$_ 18,000 | 6.8 | | | | | 8 | 24
After complete drying, the polymers were enriched at the top of the films in
PEG _$M_{n}$_ 6,000 gmol-1 (_$\phi_{i,p}$_ = 0.04) [Fig. 1(a)] and PVA _MW_
6,000 gmol-1 (_$\phi_{i,p}$_ = 0.04) [Fig. 1(c)], while the other six dried
films in Figs. 1(b), 1(d) and 2(a)-2(d) were not segregated but randomly
distributed. Although the stratified layers in Figs. 1(a) and 1(c) showed
different degrees of stratification, there was a clear distinction between
the stratified layers [Fig. 1(e)] and the nonstratified layers [Fig. 1(f),
Fig. 2(e), and Fig. 2(f)].
Figure 1: Cross sectional SEM images of dried films of polymer-colloid
mixtures (_$\phi_{i,p}=0.04$_ , _$\phi_{i,p}:\phi_{i,c}=3:2$_). The upper row
shows various polymer-colloid distributions according to the polymer types and
molecular weights (a) PEG _$M_{n}$_ 6,000, (b) PEG _$M_{n}$_ 20,000, (c) PVA
_MW_ 6,000, (d) PVA _$M_{w}$_ 18,000. The yellow lines represent the
boundaries of the stratified layers; if there is no clear boundary, nothing
is marked. The lower row shows the estimated relative volume fractions of
polymer _$\phi_{p}$_ (red circles) and colloid _$\phi_{c}$_ (blue triangles)
for two representatives: (e)
PEG _$M_{n}$_ 6,000 and (f) PEG _$M_{n}$_ 20,000. The colloidal volume
fractions were obtained from SEM images through ImageJ analysis. The
remaining volume fraction was taken as the polymer volume fraction
_$\phi_{p}=1-\phi_{c}$_.

Figure 2: SEM images of dried films formed from
polymer-colloid mixtures (_$\phi_{i,p}=0.01$_ ,
_$\phi_{i,p}:\phi_{i,c}=3:2$_). The upper row shows the distributions of
polymer and colloid depending on the polymer types and molecular weights:
(a) PEG _$M_{n}$_ 6,000, (b) PEG _$M_{n}$_ 20,000, (c) PVA _MW_ 6,000, (d)
PVA _$M_{w}$_ 18,000. There was no clear stratified layer in any of the four
images. The
volume fractions of polymer _$\phi_{p}$_ (red circles) and colloid
_$\phi_{c}$_ (blue triangles) of the two dried films were obtained from SEM
image analysis: (e) PEG _$M_{n}$_ 6,000 and (f) PEG _$M_{n}$_ 20,000. The
volume fractions of colloids are estimated by ImageJ analysis, and the polymer
volume fraction was determined by _$\phi_{p}=1-\phi_{c}$_.
### II.2 Modified theoretical model of dynamic stratification
As the solvent evaporated at _Pe_ $>$ 1 for both polymer and colloid, the
descending air/water interface _$z_{interface}$_ compressed the polymer and
colloid, and they accumulated near the drying interface. From previous studies
[22,34], the evolution of the polymer concentration in the drying film
_$\phi_{p}(z,t)$_ can be written as

$\displaystyle\phi_{p}(z,t^{*})\approx\phi_{i,p}\left(1+Pe_{p}t^{*}\exp\left[-\frac{|z-z_{interface}|}{D_{p}/v_{ev}}\right]\right),$ (1)

$\displaystyle z_{interface}(t^{*})=z_{0}-v_{ev}t=(1-t^{*})z_{0}$ (2)

if the _Péclet_ number of the polymer _$Pe_{p}$_ $\gg$ 1, where
_$t^{*}=tv_{ev}/z_{0}$_ (0 $\leq$ _$t^{*}$_ $\leq$ 1) is the dimensionless
time. Here, _$Pe_{p}$_ and the polymer diffusion coefficient _$D_{p}$_ can be
expressed as functions of drying time when _$Pe_{p}$_ and _$D_{p}$_ vary
slowly. Since the viscosity growth caused by the increased polymer
concentration can be accompanied by the kinetic arrest of the colloidal
particles, _$t_{c}^{*}$_ can be determined by the time when the volume
fraction of polymer reaches the concentrated regime
_$\phi_{p}=\phi_{p}^{**}$_. We consider that the colloidal particles at the
drying interface (_$z=z_{interface}$_) are kinetically arrested when the
polymer fraction reaches _$\phi_{p}^{**}$_ at _$z=z_{interface}-r_{colloid}$_
$\displaystyle\phi_{p}(z_{interface}-r_{colloid},t_{c}^{*})=\phi_{p}^{**}.$
(3)
Meanwhile, the increasing concentration gradient of the small polymers also
creates a diffusiophoretic drift velocity of the larger colloids
_$v_{diffusiophoresis}$_ [35,36]
$\displaystyle v_{diffusiophoresis}=-\frac{9}{4}D_{p}\nabla\phi_{p}$ (4)
under the condition of _$R_{colloid}$_ $\gg$ _$R_{polymer}$_. From the simple
1D diffusion model, the polymer concentration gradient at the interface is
_$\nabla\phi_{p}=-v_{ev}\phi_{interface}/D_{p}$_ [37]. This gives the
diffusiophoretic velocity of interfacial colloids with the combination of
_$\phi_{interface}=\phi_{i,p}(1+Pe_{p}t^{*})$_ originating from Eq. (1) at
_$z=z_{interface}$_ ,
$\displaystyle
v_{colloid,interface}\approx\frac{9}{4}v_{ev}\phi_{i,p}(1+Pe_{p}t^{*}).$ (5)
The time at which the colloid begins to stratify during the evaporation
process (_$t_{s}^{*}$_) is determined by comparing _$v_{colloid,interface}$_
and _$v_{ev}$_. Near the time when evaporation begins, the gradient of polymer
concentration is not too large and _$v_{colloid,interface}$_ does not overcome
_$v_{ev}$_. At this stage, both the polymer and colloid simply accumulate at
the drying interface. If the polymer concentration gradient becomes large
enough to generate a higher colloidal diffusiophoretic velocity,
however, _$v_{colloid,interface}$_ exceeds _$v_{ev}$_ and it starts to
create stratified layers in the drying film. We consider the time
_$t_{s}^{*}$_ when _$v_{colloid,interface}=v_{ev}$_ , resulting in
$\displaystyle v_{colloid,interface}(t_{s}^{*})=v_{ev}.$ (6)
The final morphologies of the drying polymer-colloid mixtures are determined
by the two competing time scales _$t_{s}^{*}$_ and _$t_{c}^{*}$_. There are
three regimes for the predictive analysis of the stratification of polymer-
colloid mixtures. The first is _$t_{c}^{*}/t_{s}^{*}$_ $>$ 1, where the
downward motion of the colloidal particles appears before
_$\phi_{p}(z_{interface}-r_{colloid},t_{c}^{*})=\phi_{p}^{**}$_. The second is
_$t_{c}^{*}/t_{s}^{*}$_ $<$ 1, where the polymer volume fraction reaches
_$\phi_{p}^{**}$_ before the evolution of
_$v_{colloid,interface}(t_{s}^{*})=v_{ev}$_. The third is _$t_{s}^{*}\approx
1$_ , where _$t_{s}^{*}$_ only reaches the time at which evaporation ends
(_$t^{*}=1$_), even though _$t_{s}^{*}$_ precedes _$t_{c}^{*}$_.
### II.3 Comparison of experimental results and theoretical model
As described above, polymer-colloid stratification can be predicted from the
competition between _$t_{c}^{*}$_ and _$t_{s}^{*}$_. To obtain the
time-dependent volume fraction of the polymer in the drying films, the
evaporation rates were determined by measuring mass reduction (Fig. SM3). To
calculate the time-dependent (or concentration-dependent) polymer diffusion
coefficient _$D_{p}$_ , the average volume fraction of polymer in the drying
film was used (see Supplemental Material). The transition volume fractions to
the semidilute entangled regime, _$\phi_{e}$_ , and to the concentrated
regime, _$\phi^{**}$_ , in good solvent were determined from the slope
transitions of the specific viscosity _$\eta_{sp}$_ [27,28,38] shown in
Fig. 3. From the slope transition of semidilute
unentangled (_$\eta_{sp}$_ $\sim$ _$\phi_{p}^{1.3}$_) to semidilute entangled
(_$\eta_{sp}$_ $\sim$ _$\phi_{p}^{3.9}$_), _$\phi_{e}$_ of the polymer in good
solvent was measured. Similarly, _$\phi^{**}$_ can be estimated using the
slope transition point between the semidilute entangled regime (_$\eta_{sp}$_
$\sim$ _$\phi_{p}^{3.9}$_) and the concentrated regime (_$\eta_{sp}$_ $\sim$
_$\phi_{p}^{\alpha}$_ , where $\alpha$ $>$ 3.9).
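As an illustration of this procedure, the sketch below locates the crossover from the local log-log slope of _$\eta_{sp}(\phi)$_ ; the synthetic data, the exponent above the crossover, and the threshold value are assumptions for demonstration only (the paper's values come from the measurements in Fig. 3).

```python
# A minimal sketch estimating phi** from the local log-log slope of the
# specific viscosity: below phi** the entangled-regime slope is ~3.9,
# above it the slope is larger. All numbers here are illustrative.
import numpy as np

def local_loglog_slope(phi, eta_sp):
    """Finite-difference slope d(log eta_sp)/d(log phi)."""
    return np.gradient(np.log(eta_sp), np.log(phi))

def crossover_phi(phi, eta_sp, slope_threshold=4.5):
    """First phi where the slope clearly exceeds the entangled-regime
    exponent 3.9 (threshold is an assumption), taken as phi**."""
    slopes = local_loglog_slope(phi, eta_sp)
    above = np.where(slopes > slope_threshold)[0]
    return phi[above[0]] if above.size else None

# Synthetic data: slope 3.9 below phi = 0.3 and slope 6.0 above it,
# mimicking the crossover behaviour seen in Fig. 3.
phi = np.logspace(-2, -0.2, 200)
eta_sp = np.where(phi < 0.3, phi**3.9, 0.3**3.9 * (phi / 0.3)**6.0)
print("estimated phi** ~", crossover_phi(phi, eta_sp))
```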
Figure 3: Specific viscosity of four polymer solutions as a function of
polymer volume fraction. The polymer volume fraction at which the solution
enters the concentrated regime, _$\phi^{**}$_ , is estimated from the point
where the slope increases beyond 3.9. In the case of PEG _$M_{n}$_ 6,000,
_$\phi^{**}$_ is taken as the maximum solubility ($\approx$ 630 mg/ml at
20 °C). As the PEG _$M_{n}$_ 6,000 solution exceeds the maximum solubility,
it shows an abrupt increase in specific viscosity (empty red triangles).
In drying films of polymer-colloid mixtures, the final film morphology can be
predicted using the three regimes in the (_$t_{s}^{*}$_ , _$t_{c}^{*}$_)
plane. Regime 1 with _$t_{c}^{*}$_ /_$t_{s}^{*}$_ $>$ 1 indicates clearly
stratified layers in the dried films. Regime 2 represents nonsegregated
layers, because _$t_{c}^{*}$_ appears before _$t_{s}^{*}$_. Regime 3 also
shows nonstratified layers in the final morphology of the completely dried
polymer-colloid mixtures, since _$t_{s}^{*}$_ appears very close to 1
(_$t_{s}^{*}$_ $\approx$ 1).
The theoretical predictions based on Eq. (3), Eq. (6) and the experimental
stratification results from 8 different systems are presented in Fig. 4. There
is excellent agreement between the model predictions and the experimental
results except for the PVA _MW_ 6,000 (_$\phi_{i,p}=0.04$_) system, which
also appears to be closest to _$t_{c}^{*}/t_{s}^{*}$_ = 1. This might be due
to the air/water interfacial activity of PVA _MW_ 6,000 (Fig. SM4), which can
shorten _$t_{s}^{*}$_ under real drying conditions, but cannot bring
_$t_{c}^{*}$_ forward, since _$t_{c}^{*}$_ is related to
_$z=z_{interface}-r_{colloid}$_ , not _$z=z_{interface}$_. To reduce the
interfacial activity effect of PVA _MW_ 6,000 (_$\phi_{i,p}=0.04$_) on
stratification, we moved the point to deviate from _$t_{c}^{*}/t_{s}^{*}$_ $=$
1 in our theoretical model by changing _$v_{ev}$_. As it deviates from
_$t_{c}^{*}/t_{s}^{*}=1$_ , the theoretical prediction becomes consistent with
the experimental result for PVA _MW_ 6,000 (_$\phi_{i,p}=0.04$_) (Fig. 5).
Figure 4: State diagram on the (_$t_{s}^{*}$_ ,_$t_{c}^{*}$_) plane. The
dotted line corresponds to _$t_{c}^{*}/t_{s}^{*}=1$_. Theoretical predictions
of 8 different systems are denoted as symbols in the diagram, and the
experimental results are represented by colors. Blue indicates regime 1
(_$t_{c}^{*}/t_{s}^{*}$_ $>$ 1), where a stratified layer is expected, and
red shows regime 2 (_$t_{c}^{*}/t_{s}^{*}$_ $<$ 1). Orange designates regime
3 (_$t_{s}^{*}\approx 1$_) (Fig. SM5). Green indicates the intermediate state
where a stratified layer is observed in experiments while the model
prediction places the system in regime 2. All data points show overall
agreement with one exception, the filled green triangle, which also appears
close to _$t_{c}^{*}/t_{s}^{*}=1$_.

Figure 5: State diagram of PVA _MW_ 6,000
(_$\phi_{i,p}$_ = 0.04) on the (_$t_{s}^{*}$_ ,_$t_{c}^{*}$_) plane. The
dotted line corresponds to _$t_{c}^{*}/t_{s}^{*}=1$_. The filled green
triangle was moved away from _$t_{c}^{*}/t_{s}^{*}=1$_ in the theoretical
model only by increasing _$v_{ev}$_. As it moves away from
_$t_{c}^{*}/t_{s}^{*}=1$_ , the intermediate morphology, where a stratified
layer is observed in experiments while the model places the system in regime
2, becomes consistent with the model prediction. (a) SEM image of PVA _MW_
6,000 (_$\phi_{i,p}$_ = 0.04)
at fast evaporation. (b) Top of the cross-sectional SEM image (a). The
evaporation rate was controlled by convective flow of air with a relative
humidity of 23 % at ambient temperature.
### II.4 Conditions for polymer-colloid stratification
To analyze the general conditions for polymer-colloid stratification, we
expressed _$t_{c}^{*}$_ and _$t_{s}^{*}$_ in terms of other experimental
parameters.
As mentioned above, the polymer-on-top structure can be formed when the two
conditions, both _$t_{s}^{*}$_ $<$ 1 and _$t_{c}^{*}/t_{s}^{*}$_ $>$ 1, are
satisfied. From Eq. (3) and Eq. (6), _$t_{c}^{*}$_ and _$t_{s}^{*}$_ are (See
Supplemental Material)
$\displaystyle t_{c}^{*}\approx\frac{\frac{\phi^{**}}{\phi_{i,p}}-1}{Pe_{p}(t_{c}^{*})},$ (7)

$\displaystyle t_{s}^{*}\approx\frac{\frac{4}{9}\frac{1}{\phi_{i,p}}-1}{Pe_{p}(t_{s}^{*})},$ (8)
where _$Pe_{p}(t_{c}^{*})$_ and _$Pe_{p}(t_{s}^{*})$_ are _Pe_ of the polymer
at dimensionless times _$t^{*}=t_{c}^{*}$_ and _$t^{*}=t_{s}^{*}$_ ,
respectively. The first condition for stratification to happen,
_$t_{s}^{*}$_ $<$ 1, is
$\displaystyle Pe_{p}(t_{s}^{*})\phi_{i,p}>\frac{4}{9}-\phi_{i,p}.$ (9)
This follows a condition similar to that for the inverted stratification
of binary colloidal mixtures [21,33].
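Under the simplifying assumption of a constant _$Pe_{p}$_ (i.e., neglecting the concentration dependence of _$D_{p}$_), Eqs. (7) and (8) admit closed forms, and the regime classification can be sketched as follows; the value of _$\phi^{**}$_ used here is illustrative.

```python
# A minimal sketch (constant polymer Péclet number assumed) evaluating
# Eqs. (7) and (8) and classifying the drying regime.
def t_c_star(phi_conc, phi_ip, Pe_p):
    """Eq. (7): time at which phi_p reaches phi** one colloid radius
    below the interface (the exponential factor is taken as ~1)."""
    return (phi_conc / phi_ip - 1.0) / Pe_p

def t_s_star(phi_ip, Pe_p):
    """Eq. (8): time at which the diffusiophoretic velocity of the
    interfacial colloids first matches v_ev."""
    return (4.0 / (9.0 * phi_ip) - 1.0) / Pe_p

def regime(phi_conc, phi_ip, Pe_p):
    tc, ts = t_c_star(phi_conc, phi_ip, Pe_p), t_s_star(phi_ip, Pe_p)
    if ts >= 1.0:
        return "regime 3: no stratification (t_s* ~ 1)"
    return ("regime 1: stratified" if tc / ts > 1.0
            else "regime 2: kinetically arrested")

# Note that tc/ts is then independent of Pe_p, consistent with the
# drying-rate independence discussed below; phi** = 0.5 is illustrative.
for Pe in (5, 10, 20):
    print(Pe, regime(phi_conc=0.5, phi_ip=0.04, Pe_p=Pe))
```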
The second requirement for stratified layers in polymer-colloid mixtures,
_$t_{c}^{*}/t_{s}^{*}$_ $>$ 1, can be expressed as
$\displaystyle\frac{t_{c}^{*}}{t_{s}^{*}}\approx\frac{(\frac{\phi^{**}}{\phi_{i,p}}-1)}{(\frac{4}{9}\frac{1}{\phi_{i,p}}-1)}\frac{Pe_{p}(t_{s}^{*})}{Pe_{p}(t_{c}^{*})}>1,$
(10)
$\displaystyle\frac{t_{c}^{*}}{t_{s}^{*}}\approx\frac{(\frac{\phi^{**}}{\phi_{i,p}}-1)}{(\frac{4}{9}\frac{1}{\phi_{i,p}}-1)}\frac{\eta(t_{s}^{*})}{\eta(t_{c}^{*})}>1.$
(11)
Since _$t_{c}^{*}$_ and _$t_{s}^{*}$_ occur while the polymer solution is in
the semidilute entangled regime (close to _$\phi_{p}=\phi^{**}$_),
_$\eta(t^{*})$_ is
$\displaystyle\eta(t^{*})=(\frac{1-t_{e}^{*}}{1-t^{*}})^{3.9}(\eta_{e}-\eta_{s})+\eta_{s}$
(12)
from Eq. (14) of Supplemental Material, where _$t_{e}^{*}$_ is the
dimensionless time when _$\eta=\eta_{e}$_ (viscosity when
_$\phi_{p}=\phi_{e}$_) from Eq. (10) of Supplemental Material. If we neglect
the last term in Eq. (12),
$\displaystyle\frac{t_{c}^{*}}{t_{s}^{*}}\approx\frac{(\frac{\phi^{**}}{\phi_{i,p}}-1)}{(\frac{4}{9}\frac{1}{\phi_{i,p}}-1)}(\frac{1-t_{c}^{*}}{1-t_{s}^{*}})^{3.9},$
(13)
$\displaystyle\frac{t_{c}^{*}}{t_{s}^{*}}(\frac{1-t_{s}^{*}}{1-t_{c}^{*}})^{3.9}\approx\frac{(\frac{\phi^{**}}{\phi_{i,p}}-1)}{(\frac{4}{9}\frac{1}{\phi_{i,p}}-1)}.$
(14)
To satisfy the condition of _$t_{c}^{*}/t_{s}^{*}$_ $>$ 1 for polymer-colloid
stratification,
$\displaystyle\frac{(\frac{\phi^{**}}{\phi_{i,p}}-1)}{(\frac{4}{9}\frac{1}{\phi_{i,p}}-1)}>1,$
(15) $\displaystyle\phi^{**}-\phi_{i,p}>\frac{4}{9}-\phi_{i,p},$ (16)
$\displaystyle\phi^{**}>\frac{4}{9}.$ (17)
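In practice, the two simplified criteria, Eq. (9) for _$t_{s}^{*}<1$_ and Eq. (17) for _$t_{c}^{*}/t_{s}^{*}>1$_, can be evaluated directly for a given formulation. The following minimal Python sketch illustrates this; the function name and the numerical inputs are purely illustrative and not taken from the experiments above.
def stratification_expected(phi_ip, phi_double_star, Pe_p_ts):
    """Evaluate the simplified stratification criteria of Eqs. (9) and (17).
    phi_ip          -- initial polymer volume fraction phi_{i,p}
    phi_double_star -- crossover volume fraction phi** of the polymer
    Pe_p_ts         -- polymer Peclet number Pe_p evaluated at t* = t_s*
    (all inputs are illustrative placeholders)
    """
    segregation_first = Pe_p_ts * phi_ip > 4.0 / 9.0 - phi_ip  # Eq. (9): t_s* < 1
    no_early_arrest = phi_double_star > 4.0 / 9.0              # Eq. (17)
    return segregation_first and no_early_arrest
# hypothetical example: fast drying (Pe >> 1) with phi** above 4/9
print(stratification_expected(phi_ip=0.04, phi_double_star=0.5, Pe_p_ts=30.0))  # True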
It is interesting to note that the predicted stratification of the polymer-
colloid mixtures does not depend on the drying rate _$v_{ev}$_ , or _Pe_ , as
long as _$Pe\gg 1$_. This tendency can also be seen in Fig. 6, which shows the
theoretical predictions of the 8 systems above with varied _$v_{ev}$_ values.
Ignoring the data points with _$Pe_{i,p}\leq 5$_ , which fail to satisfy
the assumption _$Pe\gg 1$_ , all the other points remain in the
same regime once the polymer type and initial volume fraction are determined.
This is quite plausible, since the increase in polymer concentration near the
drying interface advances both _$t_{c}^{*}$_ and _$t_{s}^{*}$_ to a similar
degree. Thus, it may be hard to create stratified layers in polymer-colloid
mixtures by varying only the evaporation rate _$v_{ev}$_ , or _Pe_. Altering
other properties that can raise _$t_{c}^{*}/t_{s}^{*}$_ above 1,
such as the interfacial activity of the polymer in Fig. 5 or the gravitational
velocity from the density difference in Fig. SM6, could be another route to
stratified layers in polymer-colloid mixtures.
Figure 6: Theoretical prediction of the stratification of 8 different systems
on the (_$t_{s}^{*}$_ ,_$t_{c}^{*}$_) plane with controlled _$v_{ev}$_ (or
_$Pe_{i,p}$_) (a) PEG _$M_{n}$_ 6,000, (b) PEG _$M_{n}$_ 20,000, (c) PVA _MW_
6,000, (d) PVA _$M_{w}$_ 18,000. As _$Pe_{i,p}$_ increases, both _$t_{s}^{*}$_
and _$t_{c}^{*}$_ decrease and data points go to left bottom side on the
(_$t_{s}^{*}$_ ,_$t_{c}^{*}$_) plane. Regardless of the polymer type or
molecular weight, most of the data points remain in the same regime once the
type of polymer and initial volume fraction are determined, except those at
relatively slow drying rates (_$Pe_{i,p}\leq 5$_ , red circles).
## III Conclusion
In summary, we demonstrated that dynamic stratification of polymer-colloid
mixtures can be achieved by controlling viscosity near the drying interface,
which results from increasing polymer concentration. When the polymer-colloid
solution evaporates, the polymer starts to increase the solution viscosity
near the air/water interface within a relatively short time, unlike
colloidal suspensions. Since the transition in viscosity due to the polymer
can cause the kinetic arrest of colloidal particles, which hinders the
diffusiophoretic downward motion of colloids, stratified layers are only
observed if the formation of a stratified layer precedes the transition in
viscosity near the liquid/air interfaces.
Our model calculations for _$t_{c}^{*}$_ and _$t_{s}^{*}$_ , inspired by the
previous study [22], show that the segregation of polymer-colloid mixtures can
only occur under the condition of _$t_{c}^{*}/t_{s}^{*}$_ $>$ 1, unless the
solute fraction of the polymer is sufficiently low. The requirement for
stratification, _$t_{c}^{*}/t_{s}^{*}$_ $>$ 1, implies that the stratification
of polymer-colloid mixtures may not rely on the drying rate if _$Pe\gg 1$_ , since
both _$t_{c}^{*}$_ and _$t_{s}^{*}$_ vary to a similar degree as _$v_{ev}$_
changes. Our model calculations are further supported by the consistency
between the model predictions and the final experimental film morphologies.
In more general terms, the consistent results of the experiments and model
prediction may shed light on methods of controlling surface enrichment in
general solution-cast polymer composites. The ability to predict morphology in
a simple nonequilibrium solvent evaporation process is highly desirable for
preparing materials whose surface properties are crucial to performance, such
as antireflective coatings or organic photovoltaics. Our insights into how polymer
concentration affects colloidal dynamics and stratification can be exploited
to control segregated layers in solution-cast polymer-colloid mixtures.
###### Acknowledgements.
This work was supported by the Basic Science Research Program through the
National Research Foundation of Korea (Grants NRF-2015R1C1A1A01054180, and
NRF-2019R1F1A1059587).
## References
* van der Kooij and Sprakel [2015] H. M. van der Kooij and J. Sprakel, Watching paint dry; more exciting than it seems, Soft Matter 11, 6353 (2015).
* Beaugendre _et al._ [2017] A. Beaugendre, S. Degoutin, S. Bellayer, C. Pierlot, S. Duquesne, M. Casetta, and M. Jimenez, Self-stratifying coatings: A review, Prog. Org. Coat. 110, 210 (2017).
* Padget [1994] J. C. Padget, Polymers for water-based coatings-a systematic overview, J. Coat. Technol. 66, 89 (1994).
* Márquez _et al._ [2016] A. G. Márquez, T. Hidalgo, H. Lana, D. Cunha, M. J. Blanco-Prieto, C. Álvarez-Lorenzo, C. Boissiére, C. Sánchez, C. Serre, and P. Horcajada, Biocompatible polymer–metal–organic framework composite patches for cutaneous administration of cosmetic molecules, J. Mater. Chem. B 4, 7031 (2016).
* Wissing and Műller [2001] S. A. Wissing and R. H. Műller, A novel sunscreen system based on tocopherol acetate incorporated into solid lipid nanoparticles, Int. J. Cosmet. Sci. 23, 233 (2001).
* Moniruzzaman and Winey [2006] M. Moniruzzaman and K. I. Winey, Polymer nanocomposites containing carbon nanotubes, Macromolecules 39, 5194 (2006).
* Anderson and Zukoski [2008] B. J. Anderson and C. F. Zukoski, Rheology and microstructure of an unentangled polymer nanocomposite melt, Macromolecules 41, 9326 (2008).
* Anderson and Zukoski [2009] B. J. Anderson and C. F. Zukoski, Rheology and microstructure of entangled polymer nanocomposite melts, Macromolecules 42, 8370 (2009).
* Jancar _et al._ [2010] J. Jancar, J. F. Douglas, F. W. Starr, S. K. Kumar, P. Cassagnau, A. J. Lesser, S. S. Sternstein, and M. J. Buehler, Current issues in research on structure–property relationships in polymer nanocomposites, Polymer 51, 3321 (2010).
* Cassagnau [2008] P. Cassagnau, Melt rheology of organoclay and fumed silica nanocomposites, Polymer 49, 2183 (2008).
* Yebra _et al._ [2004] D. M. Yebra, S. Kiil, and K. Dam-Johansen, Antifouling technology—past, present and future steps towards efficient and environmentally friendly antifouling coatings, Prog. Org. Coat. 50, 75 (2004).
* Banerjee _et al._ [2011] I. Banerjee, R. C. Pangule, and R. S. Kane, Antifouling coatings: Recent developments in the design of surfaces that prevent fouling by proteins, bacteria, and marine organisms, Advanced Materials 23, 690 (2011).
* van Franeker _et al._ [2015] J. J. van Franeker, D. Westhoff, M. Turbiez, M. M. Wienk, V. Schmidt, and R. A. J. Janssen, Controlling the dominant length scale of liquid–liquid phase separation in spin-coated organic semiconductor films, Adv. Func. Mater. 25, 855 (2015).
* Krishnan _et al._ [2006] R. S. Krishnan, M. E. Mackay, P. M. Duxbury, A. Pastor, C. J. Hawker, B. V. Horn, S. Asokan, and M. S. Wong, Self-assembled multilayers of nanocomponents, Nano Letters 7, 484 (2006).
* Wei _et al._ [2008] Q. Wei, T. Nishizawa, K. Tajima, and K. Hashimoto, Self-organized buffer layers in organic solar cells, Advanced Materials 20, 2211 (2008).
* McGarrity _et al._ [2008] E. S. McGarrity, A. L. Frischknecht, and M. E. Mackay, Phase behavior of polymer/nanoparticle blends near a substrate, J. Chem. Phys. 128, 154904 (2008).
* Deegan _et al._ [1997] R. D. Deegan, O. Bakajin, T. F. Dupont, G. Huber, S. R. Nagel, and T. A. Witten, Capillary flow as the cause of ring stains from dried liquid drops, Nature 389, 827 (1997).
* Brown [1828] R. Brown, Xxvii. a brief account of microscopical observations made in the months of june, july and august 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies, Philos. Mag. 4, 161 (1828).
* Howard _et al._ [2017a] M. P. Howard, A. Nikoubashman, and A. Z. Panagiotopoulos, Stratification dynamics in drying colloidal mixtures, Langmuir 33, 3685 (2017a).
* Fortini _et al._ [2016] A. Fortini, I. Martín-Fabiani, J. L. D. L. Haye, P.-Y. Dugas, M. Lansalot, F. D. Agosto, E. Bourgeat-Lami, J. L. Keddie, , and R. P. Sear, Dynamic stratification in drying films of colloidal mixtures, Phys. Rev. Lett. 116, 118301 (2016).
* Zhou _et al._ [2017] J. Zhou, Y. Jiang, and M. Doi, Cross interaction drives stratification in drying film of binary colloidal mixtures, Phys. Rev. Lett. 118, 108002 (2017).
* Sear and Warren [2017] R. P. Sear and P. B. Warren, Diffusiophoresis in nonadsorbing polymer solutions: The asakura-oosawa model and stratification in drying films, Phys. Rev. E 96, 62602 (2017).
* Howard _et al._ [2017b] M. P. Howard, A. Nikoubashman, and A. Z. Panagiotopoulos, Stratification in drying polymer-polymer and colloid-polymer mixtures, Langmuir 33, 11390 (2017b).
* Flory and Fox [1951] P. J. Flory and T. G. J. Fox, Treatment of intrinsic viscosities, J. Am. Chem. Soc. 73, 1909 (1951).
* Matsuoka and Cowman [2002] S. Matsuoka and M. K. Cowman, Equation of state of polymer solution, Polymer 43, 3447 (2002).
* de Gennes [1979] P.-G. de Gennes, _Scaling Concepts in Polymer Physics_ (Cornell University Press, 1979).
* Colby [2010] R. H. Colby, Structure and linear viscoelasticity of flexible polymer solutions: comparison of polyelectrolyte and neutral polymer solutions, Rheol. Acta 49, 425 (2010).
* Takahashi _et al._ [1985] Y. Takahashi, Y. Isono, I. Noda, and M. Nagasawa, Zero-shear viscosity of linear polymer solutions over a wide range of concentration, Macromolecules 18, 1002 (1985).
* Graessley [1980] W. W. Graessley, Polymer chain dimensions and the dependence of viscoelastic properties on concentration, molecular weight and solvent power, Polymer 21, 258 (1980).
* Krieger and Dougherty [1959] I. M. Krieger and T. J. Dougherty, A mechanism for non-newtonian flow in suspensions of rigid spheres, Trans. Soc. Rheol. 3, 137 (1959).
* Langevin and Rondelez [1978] D. Langevin and F. Rondelez, Sedimentation of large colloidal particles through semidilute polymer solutions, Polymer 19, 875 (1978).
* Chou _et al._ [2006] C. Y. Chou, B. C. Eng, and M. Robert, One-dimensional diffusion of colloids in polymer solutions, J. Chem. Phys. 124, 044902 (2006).
* Sear [2018] R. P. Sear, Stratification of mixtures in evaporating liquid films occurs only for a range of volume fractions of the smaller component, J. Chem. Phys. 148, 134909 (2018).
* Fedorchenko and Chernov [2003] A. I. Fedorchenko and A. A. Chernov, Exact solution of the problem of gas segregation in the process of crystallization, Int. J. Heat Mass Tran. 46, 915 (2003).
* Anderson _et al._ [1982] J. L. Anderson, M. E. Lowell, and D. C. Prieve, Motion of a particle generated by chemical gradients part 1. non-electrolytes, J. Fluid Mech. 117, 107 (1982).
* Anderson [1989] J. L. Anderson, Colloid transport by interfacial forces, Annun. Rev. Fluid Mech. 21, 61 (1989).
* Okuzono _et al._ [2006] T. Okuzono, K. Ozawa, and M. Doi, Simple model of skin formation caused by solvent evaporation in polymer solutions, Phys. Rev. Lett. 97, 136103 (2006).
* Rubinstein and Colby [2003] M. Rubinstein and R. H. Colby, _Polymer Physics_ (Oxford University Press, 2003).
|
2024-09-04T02:54:59.247362 | 2020-03-11T16:42:49 | 2003.05399 | {
"authors": "Mats Vermeeren",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26170",
"submitter": "Mats Vermeeren",
"url": "https://arxiv.org/abs/2003.05399"
} | arxiv-papers | Open Communications in Nonlinear Mathematical Physics ]ocnmp[ Vol.1 (2021) Article
††footnotetext: © The author(s). Distributed under a Creative Commons
Attribution 4.0 International License
Hamiltonian structures for integrable hierarchies of Lagrangian PDEs
Mats Vermeeren 1
1 School of Mathematics, University of Leeds, Leeds, LS2 9JT, UK.
<EMAIL_ADDRESS>
Received May 18, 2021; Accepted August 31, 2021
###### Abstract
Many integrable hierarchies of differential equations allow a variational
description, called a Lagrangian multiform or a pluri-Lagrangian structure.
The fundamental object in this theory is not a Lagrange function but a
differential $d$-form that is integrated over arbitrary $d$-dimensional
submanifolds. All such action integrals must be stationary for a field to be a
solution to the pluri-Lagrangian problem. In this paper we present a procedure
to obtain Hamiltonian structures from the pluri-Lagrangian formulation of an
integrable hierarchy of PDEs. As a prelude, we review a similar procedure for
integrable ODEs. We show that the exterior derivative of the Lagrangian
$d$-form is closely related to the Poisson brackets between the corresponding
Hamilton functions. In the ODE (Lagrangian 1-form) case we discuss as examples
the Toda hierarchy and the Kepler problem. As examples for the PDE (Lagrangian
2-form) case we present the potential and Schwarzian Korteweg-de Vries
hierarchies, as well as the Boussinesq hierarchy.
## 1 Introduction
Some of the most powerful descriptions of integrable systems use the
Hamiltonian formalism. In mechanics, Liouville-Arnold integrability means
having as many independent Hamilton functions as the system has degrees of
freedom, such that the Poisson bracket of any two of them vanishes. In the
case of integrable PDEs, which have infinitely many degrees of freedom,
integrability is often defined as having an infinite number of commuting
Hamiltonian flows, where again each two Hamilton functions have a zero Poisson
bracket. In addition, many integrable PDEs have two compatible Poisson
brackets. Such a bi-Hamiltonian structure can be used to obtain a recursion
operator, which in turn is an effective way to construct an integrable
hierarchy of PDEs.
In many cases, especially in mechanics, Hamiltonian systems have an equivalent
Lagrangian description. This raises the question whether integrability can be
described from a variational perspective too. Indeed, a Lagrangian theory of
integrable hierarchies has been developed over the last decade or so,
originating in the theory of integrable lattice equations (see for example
[14], [3], [12, Chapter 12]). It is called the theory of _Lagrangian
multiform_ systems, or, of _pluri-Lagrangian_ systems. The continuous version
of this theory, i.e. its application to differential equations, was developed
among others in [26, 28]. Recently, connections have been established between
pluri-Lagrangian systems and variational symmetries [18, 19, 22] as well as
Lax pairs [21].
Already in one of the earliest studies of continuous pluri-Lagrangian
structures [26], the pluri-Lagrangian principle for ODEs was shown to be
equivalent to the existence of commuting Hamiltonian flows (see also [24]). In
addition, the property that Hamilton functions are in involution can be
expressed in Lagrangian terms as closedness of the Lagrangian form. The main
goal of this work is to generalize this connection between pluri-Lagrangian
and Hamiltonian structures to the case of integrable PDEs.
A complementary approach to connecting pluri-Lagrangian structures to
Hamiltonian structures was recently taken in [6]. There, a generalisation of
covariant Hamiltonian field theory is proposed, under the name _Hamiltonian
multiform_ , as the Hamiltonian counterpart of Lagrangian multiform systems.
This yields a Hamiltonian framework where all independent variables are on the
same footing. In the present work we obtain classical Hamiltonian structures
where one of the independent variables is singled out as the common space
variable of all equations in a hierarchy.
We begin this paper with an introduction to pluri-Lagrangian systems in
Section 2. The exposition there relies mostly on examples, while proofs of the
main theorems can be found in Appendix A. Then we discuss how pluri-Lagrangian
systems generate Hamiltonian structures, using symplectic forms in
configuration space. In Section 3 we review this for ODEs (Lagrangian 1-form
systems) and in Section 4 we present the case of $(1+1)$-dimensional PDEs
(Lagrangian 2-form systems). In each section, we illustrate the results by
examples.
## 2 Pluri-Lagrangian systems
A hierarchy of commuting differential equations can be embedded in a higher-
dimensional space of independent variables, where each equation has its own
time variable. All equations share the same space variables (if any) and have
the same configuration manifold $Q$. We use coordinates
$t_{1},t_{2},\ldots,t_{N}$ in the _multi-time_ $M=\mathbb{R}^{N}$. In the case
of a hierarchy of $(1+1)$-dimensional PDEs, the first of these coordinates is
a common space coordinate, $t_{1}=x$, and we assume that for each $i\geq 2$
there is a PDE in the hierarchy expressing the $t_{i}$-derivative of a field
$u:M\rightarrow Q$ in terms of $u$ and its $x$-derivatives. Then the field $u$
is determined on the whole multi-time $M$ if initial values are prescribed on
the $x$-axis. In the case of ODEs, we assume that there is a differential
equation for each of the time directions. Then initial conditions at one point
in multi-time suffice to determine a solution.
We view a field $u:M\rightarrow Q$ as a smooth section of the trivial bundle
$M\times Q$, which has coordinates $(t_{1},\ldots,t_{N},u)$. The extension of
this bundle containing all partial derivatives of $u$ is called the _infinite
jet bundle_ and denoted by $M\times J^{\infty}$. Given a field $u$, we call
the corresponding section $\llbracket
u\rrbracket=(u,u_{t_{i}},u_{t_{i}t_{j}},\ldots)$ of the infinite jet bundle
the _infinite jet prolongation_ of $u$. (See e.g. [1] or [17, Sec. 3.5].)
In the pluri-Lagrangian context, the Lagrange function is replaced by a jet-
dependent $d$-form. More precisely we consider a fiber-preserving map
$\textstyle\mathcal{L}:M\times J^{\infty}\rightarrow\bigwedge^{d}(T^{*}M).$
Since a field $u:M\rightarrow Q$ defines a section of the infinite jet bundle,
$\mathcal{L}$ associates to it a section of $\bigwedge^{d}(T^{*}M)$, that is,
a $d$-form $\mathcal{L}\llbracket u\rrbracket$. We use the square brackets
$\llbracket u\rrbracket$ to denote dependence on the infinite jet prolongation
of $u$. We take $d=1$ if we are dealing with ODEs and $d=2$ if we are dealing
with $(1+1)$-dimensional PDEs. Higher-dimensional PDEs would correspond to
$d>2$, but are not considered in the present work. (An example of a Lagrangian
$3$-form system, the KP hierarchy, can be found in [22].) We write
$\mathcal{L}\llbracket u\rrbracket=\sum_{i}\mathcal{L}_{i}\llbracket
u\rrbracket\,\mbox{\rm d}t_{i}$
for 1-forms and
$\mathcal{L}\llbracket u\rrbracket=\sum_{i,j}\mathcal{L}_{ij}\llbracket
u\rrbracket\,\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j}$
for 2-forms.
###### Definition 2.1.
A field $u:M\rightarrow Q$ is a solution to the _pluri-Lagrangian problem_ for
the jet-dependent $d$-form $\mathcal{L}$, if for every $d$-dimensional
submanifold $\Gamma\subset M$ the action $\int_{\Gamma}\mathcal{L}\llbracket
u\rrbracket$ is critical with respect to variations of the field $u$, i.e.
$\frac{\mbox{\rm d}}{\mbox{\rm
d}\varepsilon}\int_{\Gamma}\mathcal{L}\llbracket u+\epsilon
v\rrbracket\bigg{|}_{\varepsilon=0}=0$
for all variations $v:M\rightarrow Q$ such that $v$ and all its partial
derivatives are zero on $\partial\Gamma$.
Some authors include in the definition that the Lagrangian $d$-form must be
closed when evaluated on solutions. That is equivalent to requiring that the
action is not just critical on every $d$-submanifold, but also takes the same
value on every $d$-submanifold (with the same boundary and topology). In this
perspective, one can take variations of the submanifold $\Gamma$ as well as of
the fields. We choose not to include the closedness in our definition, because
slightly weaker property can be obtained as a consequence Definition 2.1 (see
Proposition A.2 in the Appendix). Most of the authors that include closedness
in the definition use the term “Lagrangian multiform” (e.g. [14, 12, 32, 33,
22]), whereas those that do not tend to use “pluri-Lagrangian” (e.g. [4, 5,
27]). Whether or not it is included in the definition, closedness of the
Lagrangian $d$-form is an important property. As we will see in Sections 3.4
and 4.4, it is the Lagrangian counterpart to vanishing Poisson brackets
between Hamilton functions.
Clearly the pluri-Lagrangian principle is stronger than the usual variational
principle for the individual coefficients $\mathcal{L}_{i}$ or
$\mathcal{L}_{ij}$ of the Lagrangian form. Hence the classical Euler-Lagrange
equations are only a part of the system equations characterizing a solution to
the pluri-Lagrangian problem. This system, which we call the _multi-time
Euler-Lagrange equations_ , was derived in [28] for Lagrangian 1- and 2-forms
by approximating an arbitrary given curve or surface $\Gamma$ by _stepped_
curves or surfaces, which are piecewise flat with tangent spaces spanned by
coordinate directions. In Appendix A we give a more intrinsic proof that the
multi-time Euler-Lagrange equations imply criticality in the pluri-Lagrangian
sense. Yet another proof can be found in [23].
In order to write the multi-time Euler-Lagrange equations in a convenient
form, we will use the multi-index notation for (mixed) partial derivatives.
Let $I$ be an $N$-index, i.e. an $N$-tuple of non-negative integers. We denote
by $u_{I}$ the mixed partial derivative of $u:\mathbb{R}^{N}\rightarrow Q$,
where the number of derivatives with respect to each $t_{i}$ is given by the
entries of $I$. Note that if $I=(0,\ldots,0)$, then $u_{I}=u$. We will often
denote a multi-index suggestively by a string of $t_{i}$-variables, but it
should be noted that this representation is not always unique. For example,
$t_{1}=(1,0,\ldots,0),\qquad t_{N}=(0,\ldots,0,1),\qquad
t_{1}t_{2}=t_{2}t_{1}=(1,1,0,\ldots,0).$
In this notation we will also make use of exponents to compactify the
expressions, for example
$t_{2}^{3}=t_{2}t_{2}t_{2}=(0,3,0,\ldots,0).$
The notation $It_{j}$ should be interpreted as concatenation in the string
representation, hence it denotes the multi-index obtained from $I$ by
increasing the $j$-th entry by one. Finally, if the $j$-th entry of $I$ is
nonzero we say that $I$ contains $t_{j}$, and write $I\ni t_{j}$.
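These operations on multi-indices are straightforward to encode. The following minimal Python sketch (the function names are ours, purely illustrative) mirrors the concatenation $It_{j}$ and the containment relation $I\ni t_{j}$:
def concat(I, j):
    """Return the multi-index I t_j (j is 1-based, as in t_1, ..., t_N)."""
    J = list(I)
    J[j - 1] += 1
    return tuple(J)
def contains(I, j):
    """Return True if I contains t_j, i.e. the j-th entry of I is nonzero."""
    return I[j - 1] > 0
N = 3
t1 = concat((0,) * N, 1)                       # (1, 0, 0)
t1t2 = concat(t1, 2)                           # (1, 1, 0)
assert t1t2 == concat(concat((0,) * N, 2), 1)  # t_1 t_2 = t_2 t_1
assert contains(t1t2, 2) and not contains(t1t2, 3)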
### 2.1 Lagrangian 1-forms
###### Theorem 2.2 ([28]).
Consider the Lagrangian 1-form
$\mathcal{L}\llbracket u\rrbracket=\sum_{j=1}^{N}\mathcal{L}_{j}\llbracket
u\rrbracket\,\mbox{\rm d}t_{j},$
depending on an arbitrary number of derivatives of $u$. A field $u$ is
critical in the pluri-Lagrangian sense if and only if it satisfies the multi-
time Euler-Lagrange equations
$\displaystyle\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{I}}}=0$
$\displaystyle\forall I\not\ni t_{j},$ (1)
$\displaystyle\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{It_{j}}}}-\frac{\delta_{1}{\mathcal{L}_{1}}}{\delta{u_{It_{1}}}}=0$
$\displaystyle\forall I,$ (2)
for all indices $j\in\\{1,\ldots,N\\}$, where
$\frac{\delta_{j}{}}{\delta{u_{I}}}$ denotes the variational derivative in the
direction of $t_{j}$ with respect to $u_{I}$,
$\frac{\delta_{j}{}}{\delta{u_{I}}}=\frac{\partial{}}{\partial{u_{I}}}-\partial_{j}\frac{\partial{}}{\partial{u_{It_{j}}}}+\partial_{j}^{2}\frac{\partial{}}{\partial{u_{It_{j}t_{j}}}}-\cdots,$
and $\partial_{j}=\frac{\mbox{\rm d}}{\mbox{\rm d}t_{j}}$.
Note that the derivative $\partial_{j}$ equals the total derivative
$\sum_{I}u_{It_{j}}\frac{\partial{}}{\partial{u_{I}}}$ if it is applied to a
function $f\llbracket u\rrbracket$ that only depends on $t_{j}$ through $u$.
Using the total derivative has the advantage that calculations can be done on
an algebraic level, where the $u_{I}$ are formal symbols that do not
necessarily have an analytic interpretation as a derivative.
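To make this concrete, the total derivative can be implemented on formal jet symbols. The following SymPy sketch uses a truncated jet with illustrative symbol names; the truncation is an artifact of the sketch, not of the theory.
import sympy as sp
# formal jet coordinates u_I for N = 2, with I = (a, b): a derivatives in t_1, b in t_2
jet = {(a, b): sp.Symbol(f"u_{a}_{b}") for a in range(5) for b in range(3)}
def D(f, j):
    """Total derivative d/dt_j = sum_I u_{I t_j} d/d u_I on the truncated jet."""
    shift = (1, 0) if j == 1 else (0, 1)
    return sum(jet[(a + shift[0], b + shift[1])] * sp.diff(f, uI)
               for (a, b), uI in jet.items()
               if (a + shift[0], b + shift[1]) in jet)
u1, u11 = jet[(1, 0)], jet[(2, 0)]
print(D(u1**2 / 2, 1))    # u_1 * u_11, i.e. u_1_0*u_2_0
print(D(jet[(0, 0)], 2))  # u_0_1, the formal symbol for the t_2-derivative of u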
###### Example 2.3.
The Toda lattice describes $N$ particles on a line with an exponential
nearest-neighbor interaction. We denote the displacement from equilibrium of
the particles by
$u=(q^{\scriptscriptstyle[1]},\ldots,q^{\scriptscriptstyle[N]})$. We impose
either periodic boundary conditions (formally
$q^{\scriptscriptstyle[0]}=q^{\scriptscriptstyle[N]}$ and
$q^{\scriptscriptstyle[N+1]}=q^{\scriptscriptstyle[1]}$) or open-ended
boundary conditions (formally $q^{\scriptscriptstyle[0]}=\infty$ and
$q^{\scriptscriptstyle[N+1]}=-\infty$). We will use
$q^{\scriptscriptstyle[k]}_{j}$ as shorthand notation for the derivative
$q^{\scriptscriptstyle[k]}_{t_{j}}=\frac{\mbox{\rm
d}q^{\scriptscriptstyle[k]}}{\mbox{\rm d}t_{j}}$. Consider the hierarchy
consisting of the Newtonian equation for the Toda lattice,
$q^{\scriptscriptstyle[k]}_{11}=\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)-\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right),$ (3)
along with its variational symmetries,
$\begin{split}q^{\scriptscriptstyle[k]}_{2}&=\left(q^{\scriptscriptstyle[k]}_{1}\right)^{2}+\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)+\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right),\\\
q^{\scriptscriptstyle[k]}_{3}&=\left(q^{\scriptscriptstyle[k]}_{1}\right)^{3}+q^{\scriptscriptstyle[k+1]}_{1}\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)+q^{\scriptscriptstyle[k-1]}_{1}\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\\\
&\quad+2q^{\scriptscriptstyle[k]}_{1}\left(\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)+\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\right),\\\
&\mathmakebox[\widthof{{}={}}][c]{\vdots}\end{split}$ (4)
The hierarchy (3)–(4) has a Lagrangian 1-form with coefficients
$\displaystyle\mathcal{L}_{1}$
$\displaystyle=\sum_{k}\left(\frac{1}{2}\left(q^{\scriptscriptstyle[k]}_{1}\right)^{2}-\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\right),$
$\displaystyle\mathcal{L}_{2}$
$\displaystyle=\sum_{k}\left(q^{\scriptscriptstyle[k]}_{1}q^{\scriptscriptstyle[k]}_{2}-\frac{1}{3}\left(q^{\scriptscriptstyle[k]}_{1}\right)^{3}-\left(q^{\scriptscriptstyle[k]}_{1}+q^{\scriptscriptstyle[k-1]}_{1}\right)\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\right),$
$\displaystyle\mathcal{L}_{3}$
$\displaystyle=\sum_{k}\bigg{(}-\frac{1}{4}\left(q^{\scriptscriptstyle[k]}_{1}\right)^{4}-\left(\left(q^{\scriptscriptstyle[k+1]}_{1}\right)^{2}+q^{\scriptscriptstyle[k+1]}_{1}q^{\scriptscriptstyle[k]}_{1}+\left(q^{\scriptscriptstyle[k]}_{1}\right)^{2}\right)\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)$
$\displaystyle\hskip
42.67912pt+q^{\scriptscriptstyle[k]}_{1}q^{\scriptscriptstyle[k]}_{3}-\exp\\!\left(q^{\scriptscriptstyle[k+2]}-q^{\scriptscriptstyle[k]}\right)-\frac{1}{2}\exp\\!\left(2(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]})\right)\bigg{)},$
$\displaystyle\mathmakebox[\widthof{{}={}}][c]{\vdots}$
See [18, 29] for constructions of this pluri-Lagrangian structure. The
classical Euler-Lagrange equations of these Lagrangian coefficients are
$\displaystyle\frac{\delta_{1}{\mathcal{L}_{1}}}{\delta{q^{\scriptscriptstyle[k]}}}=0\quad$
$\displaystyle\Leftrightarrow\quad
q^{\scriptscriptstyle[k]}_{11}=e^{q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}}-e^{q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}},$
$\displaystyle\frac{\delta_{2}{\mathcal{L}_{2}}}{\delta{q^{\scriptscriptstyle[k]}}}=0\quad$
$\displaystyle\Leftrightarrow\quad
q^{\scriptscriptstyle[k]}_{12}=\left(q^{\scriptscriptstyle[k]}_{1}+q^{\scriptscriptstyle[k+1]}_{1}\right)e^{q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}}-\left(q^{\scriptscriptstyle[k-1]}_{1}+q^{\scriptscriptstyle[k]}_{1}\right)e^{q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}},$
$\displaystyle\mathmakebox[\widthof{{}\Rightarrow{}}][c]{\vdots}$
We recover Equation (3), but for the other equations of the hierarchy we only
get a differentiated form. However, we do get their evolutionary form, as in
Equation (4), from the multi-time Euler-Lagrange equations
$\frac{\delta_{2}{\mathcal{L}_{2}}}{\delta{q^{\scriptscriptstyle[k]}_{1}}}=0,\qquad\frac{\delta_{3}{\mathcal{L}_{3}}}{\delta{q^{\scriptscriptstyle[k]}_{1}}}=0,\qquad\cdots.$
The multi-time Euler-Lagrange equations of type (2) are trivially satisfied in
this case:
$\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{q^{\scriptscriptstyle[k]}_{j}}}=q^{\scriptscriptstyle[k]}_{1}$
for all $j$.
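This reduction can be checked mechanically. In the following SymPy sketch (a periodic three-particle chain; the symbol names v and w for $q^{\scriptscriptstyle[k]}_{1}$ and $q^{\scriptscriptstyle[k]}_{2}$ are illustrative), $\frac{\delta_{2}{\mathcal{L}_{2}}}{\delta{q^{\scriptscriptstyle[k]}_{1}}}$ is a plain partial derivative because $\mathcal{L}_{2}$ contains no $t_{2}$-derivatives of $q_{1}$, and solving it for $q^{\scriptscriptstyle[k]}_{2}$ reproduces the second flow in Eq. (4).
import sympy as sp
n = 3
q = sp.symbols(f"q0:{n}")   # displacements q^[k]
v = sp.symbols(f"v0:{n}")   # q^[k]_1
w = sp.symbols(f"w0:{n}")   # q^[k]_2
# periodic boundary conditions come for free from Python's negative indexing
L2 = sum(v[k]*w[k] - sp.Rational(1, 3)*v[k]**3
         - (v[k] + v[k-1])*sp.exp(q[k] - q[k-1]) for k in range(n))
for k in range(n):
    # delta_2 L_2 / delta q^[k]_1 = dL_2/dv[k]; solve it for w[k] = q^[k]_2
    print(f"q2_{k} =", sp.solve(sp.diff(L2, v[k]), w[k])[0])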
### 2.2 Lagrangian 2-forms
###### Theorem 2.4 ([28]).
Consider the Lagrangian 2-form
$\mathcal{L}\llbracket u\rrbracket=\sum_{i<j}\mathcal{L}_{ij}\llbracket
u\rrbracket\,\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j},$
depending on an arbitrary number of derivatives of $u$. A field $u$ is
critical in the pluri-Lagrangian sense if and only if it satisfies the multi-
time Euler-Lagrange equations
$\displaystyle\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{I}}}=0$
$\displaystyle\forall I\not\ni t_{i},t_{j},$ (5)
$\displaystyle\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}-\frac{\delta_{ik}{\mathcal{L}_{ik}}}{\delta{u_{It_{k}}}}=0$
$\displaystyle\forall I\not\ni t_{i},$ (6)
$\displaystyle\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}+\frac{\delta_{jk}{\mathcal{L}_{jk}}}{\delta{u_{It_{j}t_{k}}}}+\frac{\delta_{ki}{\mathcal{L}_{ki}}}{\delta{u_{It_{k}t_{i}}}}=0$
$\displaystyle\forall I,$ (7)
for all triples $(i,j,k)$ of distinct indices, where
$\frac{\delta_{ij}{}}{\delta{u_{I}}}=\sum_{\alpha,\beta=0}^{\infty}(-1)^{\alpha+\beta}\partial_{i}^{\alpha}\partial_{j}^{\beta}\frac{\partial{}}{\partial{u_{It_{i}^{\alpha}t_{j}^{\beta}}}}.$
###### Example 2.5.
A Lagrangian 2-form for the potential KdV hierarchy was first given in [28].
It is instructive to look at just two of the equations embedded in
$\mathbb{R}^{3}$. Then the Lagrangian 2-form has three coefficients,
$\mathcal{L}=\mathcal{L}_{12}\,\mbox{\rm d}t_{1}\wedge\mbox{\rm
d}t_{2}+\mathcal{L}_{13}\,\mbox{\rm d}t_{1}\wedge\mbox{\rm
d}t_{3}+\mathcal{L}_{23}\,\mbox{\rm d}t_{2}\wedge\mbox{\rm d}t_{3},$
where $t_{1}$ is viewed as the space variable. We can take
$\displaystyle\mathcal{L}_{12}$
$\displaystyle=-u_{1}^{3}-\frac{1}{2}u_{1}u_{111}+\frac{1}{2}u_{1}u_{2},$
$\displaystyle\mathcal{L}_{13}$
$\displaystyle=-\frac{5}{2}u_{1}^{4}+5u_{1}u_{11}^{2}-\frac{1}{2}u_{111}^{2}+\frac{1}{2}u_{1}u_{3},$
where $u_{i}$ is a shorthand notation for the partial derivative $u_{t_{i}}$,
and similar notations are used for higher derivatives. These are the classical
Lagrangians of the potential KdV hierarchy. However, their classical Euler-
Lagrange equations give the hierarchy only in a differentiated form,
$\displaystyle u_{12}$ $\displaystyle=6u_{1}u_{11}+u_{1111},$ $\displaystyle
u_{13}$
$\displaystyle=30u_{1}^{2}u_{11}+20u_{11}u_{111}+10u_{1}u_{1111}+u_{111111}.$
The Lagrangian 2-form also contains a coefficient
$\displaystyle\mathcal{L}_{23}$
$\displaystyle=3u_{1}^{5}-\frac{15}{2}u_{1}^{2}u_{11}^{2}+10u_{1}^{3}u_{111}-5u_{1}^{3}u_{2}+\frac{7}{2}u_{11}^{2}u_{111}+3u_{1}u_{111}^{2}-6u_{1}u_{11}u_{1111}$
$\displaystyle\quad+\frac{3}{2}u_{1}^{2}u_{11111}+10u_{1}u_{11}u_{12}-\frac{5}{2}u_{11}^{2}u_{2}-5u_{1}u_{111}u_{2}+\frac{3}{2}u_{1}^{2}u_{3}-\frac{1}{2}u_{1111}^{2}$
$\displaystyle\quad+\frac{1}{2}u_{111}u_{11111}-u_{111}u_{112}+\frac{1}{2}u_{1}u_{113}+u_{1111}u_{12}-\frac{1}{2}u_{11}u_{13}-\frac{1}{2}u_{11111}u_{2}$
$\displaystyle\quad+\frac{1}{2}u_{111}u_{3}$
which does not have a classical interpretation, but contributes meaningfully
in the pluri-Lagrangian formalism. In particular, the multi-time Euler-
Lagrange equations
$\frac{\delta_{12}{\mathcal{L}_{12}}}{\delta{u_{1}}}+\frac{\delta_{23}{\mathcal{L}_{23}}}{\delta{u_{3}}}=0\qquad\text{and}\qquad\frac{\delta_{13}{\mathcal{L}_{13}}}{\delta{u_{1}}}-\frac{\delta_{23}{\mathcal{L}_{23}}}{\delta{u_{3}}}=0$
yield the potential KdV equations in their evolutionary form,
$\displaystyle u_{2}$ $\displaystyle=3u_{1}^{2}+u_{111},$ $\displaystyle
u_{3}$ $\displaystyle=10u_{1}^{3}+5u_{11}^{2}+10u_{1}u_{111}+u_{11111}.$
All other multi-time Euler-Lagrange equations are consequences of these
evolutionary equations.
This example can be extended to contain an arbitrary number of equations from
the potential KdV hierarchy. The coefficients $\mathcal{L}_{1j}$ will be
Lagrangians of the individual equations, whereas the $\mathcal{L}_{ij}$ for
$i,j>1$ do not appear in the traditional Lagrangian picture.
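The differentiated form of the first flow can be verified mechanically. Here is a minimal sketch using SymPy's euler_equations (a recent SymPy is assumed; the computation, not the code, is from the text):
import sympy as sp
from sympy.calculus.euler import euler_equations
t1, t2 = sp.symbols("t1 t2")
u = sp.Function("u")(t1, t2)
# the classical coefficient L_12 of the potential KdV 2-form
L12 = (-u.diff(t1)**3 - sp.Rational(1, 2)*u.diff(t1)*u.diff(t1, 3)
       + sp.Rational(1, 2)*u.diff(t1)*u.diff(t2))
eq = euler_equations(L12, [u], [t1, t2])[0]
print(sp.expand(eq.lhs))  # proportional to u_12 - 6 u_1 u_11 - u_1111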
###### Example 2.6.
The Boussinesq equation
$u_{22}=12u_{1}u_{11}-3u_{1111}$ (8)
is of second order in its time $t_{2}$, but the higher equations of its
hierarchy are of first order in their respective times, beginning with
$u_{3}=-6u_{1}u_{2}+3u_{112}.$ (9)
A Lagrangian 2-form for this system has coefficients
$\displaystyle\mathcal{L}_{12}$
$\displaystyle=\frac{1}{2}u_{2}^{2}-2u_{1}^{3}-\frac{3}{2}u_{11}^{2},$
$\displaystyle\mathcal{L}_{13}$
$\displaystyle=u_{2}u_{3}+6u_{1}^{4}+27u_{1}u_{11}^{2}-6uu_{12}u_{2}+\frac{9}{2}u_{111}^{2}+\frac{3}{2}u_{12}^{2},$
$\displaystyle\mathcal{L}_{23}$
$\displaystyle=24u_{1}^{3}u_{2}+18u_{1}u_{11}u_{12}+9u_{11}^{2}u_{2}-18u_{1}u_{111}u_{2}-2u_{2}^{3}-6uu_{2}u_{22}$
$\displaystyle\quad+6u_{1}^{2}u_{3}+9u_{111}u_{112}+3u_{11}u_{13}+3u_{12}u_{22}-3u_{111}u_{3}.$
They can be found in [30] with a different scaling of $\mathcal{L}$ and a
different numbering of the time variables. Equation (8) is equivalent to the
Euler-Lagrange equation
$\frac{\delta_{12}{\mathcal{L}_{12}}}{\delta{u}}=0$
and Equation (9) to
$\frac{\delta_{13}{\mathcal{L}_{13}}}{\delta{u_{2}}}=0.$
All other multi-time Euler-Lagrange equations are differential consequences of
Equations (8) and (9). As in the previous example, it is possible to extend
this 2-form to represent an arbitrary number of equations from the hierarchy.
Further examples of pluri-Lagrangian 2-form systems can be found in [21, 22,
29, 30].
## 3 Hamiltonian structure of Lagrangian 1-form systems
A connection between Lagrangian 1-form systems and Hamiltonian or symplectic
systems was found in [26], both in the continuous and the discrete case. Here
we specialize that result to the common case where one coefficient of the
Lagrangian 1-form is a mechanical Lagrangian and all others are linear in
their respective time-derivatives. We formulate explicitly the underlying
symplectic structures, which will provide guidance for the case of Lagrangian
2-form systems. Since some of the coefficients of the Lagrangian form will be
linear in velocities, it is helpful to first have a look at the Hamiltonian
formulation for Lagrangians of this type, independent of a pluri-Lagrangian
structure.
### 3.1 Lagrangians that are linear in velocities
Let the configuration space be a finite-dimensional real vector space
$Q=\mathbb{R}^{N}$ and consider a Lagrangian
$\mathcal{L}:TQ\rightarrow\mathbb{R}$ of the form
$\mathcal{L}(q,q_{t})=p(q)^{T}q_{t}-V(q),$ (10)
where
$\det\left(\frac{\partial{p}}{\partial{q}}-\left(\frac{\partial{p}}{\partial{q}}\right)^{T}\right)\neq
0.$ (11)
Note that $p$ denotes a function of the position $q$; later on we will use
$\pi$ to denote the momentum as an element of cotangent space. If $Q$ is a
manifold, the arguments of this subsection will still apply if there exist
local coordinates in which the Lagrangian is of the form (10). The Euler-
Lagrange equations are first order ODEs:
$\dot{q}=\left(\left(\frac{\partial{p}}{\partial{q}}\right)^{T}-\frac{\partial{p}}{\partial{q}}\right)^{-1}\nabla
V,$ (12)
where $\nabla V=\left(\frac{\partial{V}}{\partial{q}}\right)^{T}$ is the
gradient of $V$.
Note that Equation (11) implies that $Q$ is even-dimensional, hence $Q$ admits
a (local) symplectic structure. Instead of a symplectic form on $T^{*}Q$, the
Lagrangian system preserves a symplectic form on $Q$ itself [2, 20]:
$\displaystyle\omega=\sum_{i}-\mbox{\rm d}p_{i}(q)\wedge\mbox{\rm d}q_{i}$
$\displaystyle=\sum_{i,j}-\frac{\partial{p_{i}}}{\partial{q_{j}}}\,\mbox{\rm
d}q_{j}\wedge\mbox{\rm d}q_{i}$ (13)
$\displaystyle=\sum_{i<j}\left(\frac{\partial{p_{i}}}{\partial{q_{j}}}-\frac{\partial{p_{j}}}{\partial{q_{i}}}\right)\mbox{\rm
d}q_{i}\wedge\mbox{\rm d}q_{j},$
which is non-degenerate by virtue of Equation (11).
###### Proposition 3.1.
The Euler-Lagrange equation (12) of the Lagrangian (10) corresponds to a
Hamiltonian vector field with respect to the symplectic structure $\omega$,
with Hamilton function $V$.
* Proof.
The Hamiltonian vector field
$X=\sum_{i}X_{i}\frac{\partial{}}{\partial{q_{i}}}$ of the Hamilton function
$V$ with respect to $\omega$ satisfies
$\iota_{X}\omega=\mbox{\rm d}V,$
where
$\iota_{X}\omega=\sum_{i}\sum_{j\neq
i}\left(\frac{\partial{p_{j}}}{\partial{q_{i}}}-\frac{\partial{p_{i}}}{\partial{q_{j}}}\right)X_{j}\,\mbox{\rm
d}q_{i}$
and
$\mbox{\rm d}V=\sum_{i}\frac{\partial{V}}{\partial{q_{i}}}\,\mbox{\rm
d}q_{i}.$
Hence
$X=\left(\left(\frac{\partial{p}}{\partial{q}}\right)^{T}-\frac{\partial{p}}{\partial{q}}\right)^{-1}\nabla
V,$
which is the vector field corresponding to the Euler-Lagrange equation (12). ∎
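A toy instance of Proposition 3.1 and Equation (12) can be worked out symbolically. In the following SymPy sketch, $p(q)$ and $V(q)$ on $Q=\mathbb{R}^{2}$ are chosen purely for illustration:
import sympy as sp
a, b = sp.symbols("q1 q2")
q = sp.Matrix([a, b])
p = sp.Matrix([-b/2, a/2])   # satisfies the nondegeneracy condition (11)
V = (a**2 + b**2)/2
Jp = p.jacobian(q)
A = Jp.T - Jp                # the matrix appearing in Eqs. (11)-(12)
qdot = A.inv() * sp.Matrix([V]).jacobian(q).T
print(qdot.T)                # Matrix([[-q2, q1]]): the Hamiltonian flow of V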
### 3.2 From pluri-Lagrangian to Hamiltonian systems
On a finite-dimensional real vector space $Q$, consider a Lagrangian 1-form
$\mathcal{L}=\sum_{i}\mathcal{L}_{i}\,\mbox{\rm d}t_{i}$ consisting of a
mechanical Lagrangian
$\mathcal{L}_{1}(q,q_{1})=\frac{1}{2}|q_{1}|^{2}-V_{1}(q),$ (14)
where $|q_{1}|^{2}=q_{1}^{T}q_{1}$, and additional coefficients of the form
$\mathcal{L}_{i}(q,q_{1},q_{i})=q_{1}^{T}q_{i}-V_{i}(q,q_{1})\qquad\text{for
}i\geq 2,$ (15)
where the indices of $q$ denote partial derivatives,
$q_{i}=q_{t_{i}}=\frac{\mbox{\rm d}q}{\mbox{\rm d}t_{i}}$, whereas the indices
of $\mathcal{L}$ and $V$ are labels. We have chosen the Lagrangian
coefficients such that they share a common momentum $p=q_{1}$, which is forced
upon us by the multi-time Euler-Lagrange equation (2). Note that for each $i$,
the coefficient $\mathcal{L}_{i}$ contains derivatives of $q$ with respect to
$t_{1}$ and $t_{i}$ only. Many Lagrangian 1-forms are of this form, including
the Toda hierarchy, presented in Example 2.3.
The nontrivial multi-time Euler-Lagrange equations are
$\frac{\delta_{1}{\mathcal{L}_{1}}}{\delta{q}}=0\quad\Leftrightarrow\quad
q_{11}=-\frac{\partial{V_{1}}}{\partial{q}},$
and
$\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q_{1}}}=0\quad\Leftrightarrow\quad
q_{i}=\frac{\partial{V_{i}}}{\partial{q_{1}}}\qquad\qquad\text{for }i\geq 2,$
with the additional condition that
$\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q}}=0\quad\Leftrightarrow\quad
q_{1i}+\frac{\partial{V_{i}}}{\partial{q}}=0.$
Hence the multi-time Euler-Lagrange equations are overdetermined. Only for
particular choices of $V_{i}$ will the last equation be a differential
consequence of the other multi-time Euler-Lagrange equations. The existence of
suitable $V_{i}$ for a given hierarchy could be taken as a definition of its
integrability.
Note that there is no multi-time Euler-Lagrange equation involving the
variational derivative
$\frac{\delta_{1}{\mathcal{L}_{i}}}{\delta{q}}=\frac{\partial{V_{i}}}{\partial{q}}-\frac{\mbox{\rm
d}}{\mbox{\rm d}t_{1}}\frac{\partial{V_{i}}}{\partial{q_{1}}}$
because of the mismatch between the time direction $t_{1}$ in which the
variational derivative acts and the index $i$ of the Lagrangian coefficient.
The multi-time Euler-Lagrange equations of the type
$\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q_{i}}}=\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{q_{j}}}$
all reduce to the trivial equation $q_{1}=q_{1}$, expressing the fact that all
$\mathcal{L}_{i}$ yield the same momentum.
Since $\mathcal{L}_{1}$ is regular, i.e.
$\det\left(\frac{\partial^{2}\mathcal{L}_{1}}{\partial q_{1}^{2}}\right)\neq
0$, we can find a canonical Hamiltonian for the first equation by Legendre
transformation,
$H_{1}(q,\pi)=\frac{1}{2}|\pi|^{2}+V_{1}(q),$
where we use $\pi$ to denote the cotangent space coordinate and
$|\pi|^{2}=\pi^{T}\pi$.
For $i\geq 2$ we consider $r=q_{1}$ as a second dependent variable. In other
words, we double the dimension of the configuration space, which now has
coordinates $(q,r)=(q,q_{1})$. The Lagrangians
$\mathcal{L}_{i}(q,r,q_{i},r_{i})=rq_{i}-V_{i}(q,r)$ are linear in velocities.
We have $p(q,r)=r$, hence the symplectic form (13) is
$\omega=\mbox{\rm d}r\wedge\mbox{\rm d}q.$
This is the canonical symplectic form, with the momentum replaced by
$r=q_{1}$. Hence we can consider $r$ as momentum, thus identifying the
extended configuration space spanned by $q$ and $r$ with the phase space
$T^{*}Q$.
Applying Proposition 3.1, we arrive at the following result:
###### Theorem 3.2.
The multi-time Euler-Lagrange equations of a 1-form with coefficients
(14)–(15) are equivalent, under the identification $\pi=q_{1}$, to a system of
Hamiltonian equations with respect to the canonical symplectic form
$\omega=\mbox{\rm d}\pi\wedge\mbox{\rm d}q$, with Hamilton functions
$H_{1}(q,\pi)=\frac{1}{2}|\pi|^{2}+V_{1}(q)\qquad\text{and}\qquad
H_{i}(q,\pi)=V_{i}(q,\pi)\quad\text{for }i\geq 2.$
###### Example 3.3.
From the Lagrangian 1-form for the Toda lattice given in Example 2.3 we find
$\displaystyle H_{1}$
$\displaystyle=\sum_{k}\left(\frac{1}{2}\left(\pi^{[k]}\right)^{2}+\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\right),$
$\displaystyle H_{2}$
$\displaystyle=\sum_{k}\left(\frac{1}{3}\left(\pi^{[k]}\right)^{3}+\left(\pi^{[k]}+\pi^{[k-1]}\right)\exp\\!\left(q^{\scriptscriptstyle[k]}-q^{\scriptscriptstyle[k-1]}\right)\right),$
$\displaystyle H_{3}$
$\displaystyle=\sum_{k}\bigg{(}\frac{1}{4}\left(\pi^{[k]}\right)^{4}+\left(\left(\pi^{[k+1]}\right)^{2}+\pi^{[k+1]}\pi^{[k]}+\left(\pi^{[k]}\right)^{2}\right)\exp\\!\left(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]}\right)$
$\displaystyle\hskip
42.67912pt+\exp\\!\left(q^{\scriptscriptstyle[k+2]}-q^{\scriptscriptstyle[k]}\right)+\frac{1}{2}\exp\\!\left(2(q^{\scriptscriptstyle[k+1]}-q^{\scriptscriptstyle[k]})\right)\bigg{)},$
$\displaystyle\mathmakebox[\widthof{{}={}}][c]{\vdots}$
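Anticipating Section 3.4, the involutivity of these Hamiltonians can be checked directly. The following SymPy sketch computes the canonical Poisson bracket of $H_{1}$ and $H_{2}$ for a periodic three-particle chain and should simplify to zero (the sign convention of the bracket is immaterial for this check):
import sympy as sp
n = 3
q = sp.symbols(f"q0:{n}")
p = sp.symbols(f"p0:{n}")
def pb(F, G):
    """Canonical Poisson bracket on T*R^n."""
    return sum(sp.diff(F, q[k])*sp.diff(G, p[k]) - sp.diff(F, p[k])*sp.diff(G, q[k])
               for k in range(n))
H1 = sum(p[k]**2/2 + sp.exp(q[k] - q[k-1]) for k in range(n))
H2 = sum(p[k]**3/3 + (p[k] + p[k-1])*sp.exp(q[k] - q[k-1]) for k in range(n))
print(sp.simplify(pb(H1, H2)))  # 0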
We have limited the discussion in this section to the case where
$\mathcal{L}_{1}$ is quadratic in the velocity. There are some interesting
examples that do not fall into this category, like the Volterra lattice, which
has a Lagrangian linear in velocities, and the relativistic Toda lattice,
which has a Lagrangian with a more complicated dependence on velocities (see
e.g. [25] and the references therein). The discussion above can be adapted to
other types of Lagrangian 1-forms if one of its coefficients $\mathcal{L}_{i}$
has an invertible Legendre transform, or if they are collectively Legendre-
transformable as described in [26].
### 3.3 From Hamiltonian to Pluri-Lagrangian systems
The procedure from Section 3.2 can be reversed to construct a Lagrangian
1-form from a number of Hamiltonians.
###### Theorem 3.4.
Consider Hamilton functions $H_{i}:T^{*}Q\rightarrow\mathbb{R}$, with
$H_{1}(q,\pi)=\frac{1}{2}|\pi|^{2}+V_{1}(q)$. Then the multi-time Euler-
Lagrange equations of the Lagrangian 1-form
$\sum_{i}\mathcal{L}_{i}\,\mbox{\rm d}t_{i}$ with
$\displaystyle\mathcal{L}_{1}$ $\displaystyle=\frac{1}{2}|q_{1}|^{2}-V_{1}(q)$
$\displaystyle\mathcal{L}_{i}$
$\displaystyle=q_{1}q_{i}-H_{i}(q,q_{1})\qquad\text{for }i\geq 2$
are equivalent to the Hamiltonian equations under the identification
$\pi=q_{1}$.
* Proof.
Identifying $\pi=q_{1}$, the multi-time Euler-Lagrange equations of the type
(1) are
$\displaystyle\frac{\delta_{1}{\mathcal{L}_{1}}}{\delta{q}}$
$\displaystyle=0\quad\Leftrightarrow\quad
q_{11}=-\frac{\partial{V_{1}(q)}}{\partial{q}},$
$\displaystyle\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q_{1}}}$
$\displaystyle=0\quad\Leftrightarrow\quad
q_{i}=\frac{\partial{H_{i}(q,\pi)}}{\partial{\pi}},$
$\displaystyle\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q}}$
$\displaystyle=0\quad\Leftrightarrow\quad\pi_{i}=-\frac{\partial{H_{i}(q,\pi)}}{\partial{q}}.$
The multi-time Euler-Lagrange equations of the type (2) are trivially
satisfied because
$\frac{\delta_{i}{\mathcal{L}_{i}}}{\delta{q_{i}}}=q_{1}$
for all $i$. ∎
Note that the statement of Theorem 3.4 does not require the Hamiltonian
equations to commute, i.e. it is not imposed that the Hamiltonian vector
fields $X_{H_{i}}$ associated to the Hamilton functions $H_{i}$ satisfy
$[X_{H_{i}},X_{H_{j}}]=0$. However, if they do not commute then for a generic
initial condition $(q_{0},\pi_{0})$ there will be no solution
$(q,\pi):\mathbb{R}^{N}\rightarrow T^{*}Q$ to the equations
$\displaystyle\frac{\partial{}}{\partial{t_{i}}}(q(t_{1},\ldots
t_{N}),\pi(t_{1},\ldots t_{N}))=X_{H_{i}}(q(t_{1},\ldots
t_{N}),\pi(t_{1},\ldots t_{N}))\qquad(i=1,\ldots,N),$
$\displaystyle(q(0,\ldots,0),\pi(0,\ldots,0))=(q_{0},\pi_{0}).$
Hence the relevance of Theorem 3.4 lies almost entirely in the case of
commuting Hamiltonian equations. If they do not commute then it is an (almost)
empty statement because neither the system of Hamiltonian equations nor the
multi-time Euler-Lagrange equations will have solutions for generic initial
data.
###### Example 3.5.
The Kepler Problem, describing the motion of a point mass around a
gravitational center, is one of the classic examples of a completely
integrable system. It possesses Poisson-commuting Hamiltonians
$H_{1},H_{2},H_{3}:T^{*}\mathbb{R}^{3}\rightarrow\mathbb{R}$ given by
$\displaystyle H_{1}(q,\pi)$
$\displaystyle=\frac{1}{2}|\pi|^{2}-|q|^{-1},\quad$ the energy, the Hamiltonian
of the physical motion, $\displaystyle H_{2}(q,\pi)$
$\displaystyle=(q\times\pi)\cdot\mathsf{e}_{z},$ the 3rd component of the
angular momentum, and $\displaystyle H_{3}(q,\pi)$
$\displaystyle=|q\times\pi|^{2},$ the squared magnitude of the angular
momentum,
where $q=(x,y,z)$ and $\mathsf{e}_{z}$ is the unit vector in the
$z$-direction. The corresponding coefficients of the Lagrangian 1-form are
$\displaystyle\mathcal{L}_{1}$
$\displaystyle=\frac{1}{2}|q_{1}|^{2}+|q|^{-1},$
$\displaystyle\mathcal{L}_{2}$ $\displaystyle=q_{1}\cdot q_{2}-(q\times
q_{1})\cdot\mathsf{e}_{z},$ $\displaystyle\mathcal{L}_{3}$
$\displaystyle=q_{1}\cdot q_{3}-|q\times q_{1}|^{2}.$
The multi-time Euler-Lagrange equations are
$\frac{\delta_{1}{\mathcal{L}_{1}}}{\delta{q}}=0\quad\Rightarrow\quad
q_{11}=-\frac{q}{|q|^{3}},$
the physical equations of motion,
$\displaystyle\frac{\delta_{2}{\mathcal{L}_{2}}}{\delta{q_{1}}}=0$
$\displaystyle\quad\Rightarrow\quad q_{2}=\mathsf{e}_{z}\times q,$
$\displaystyle\frac{\delta_{2}{\mathcal{L}_{2}}}{\delta{q}}=0$
$\displaystyle\quad\Rightarrow\quad q_{12}=-q_{1}\times\mathsf{e}_{z},\ $
describing a rotation around the $z$-axis, and
$\displaystyle\frac{\delta_{3}{\mathcal{L}_{3}}}{\delta{q_{1}}}=0$
$\displaystyle\quad\Rightarrow\quad q_{3}=2(q\times q_{1})\times q,$
$\displaystyle\frac{\delta_{3}{\mathcal{L}_{3}}}{\delta{q}}=0$
$\displaystyle\quad\Rightarrow\quad q_{13}=2(q\times q_{1})\times q_{1},$
describing a rotation around the angular momentum vector.
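That the three Hamilton functions Poisson-commute can also be confirmed symbolically; a short SymPy sketch (the sign convention of the bracket is again immaterial, since all brackets vanish):
import sympy as sp
x, y, z, px, py, pz = sp.symbols("x y z px py pz", real=True)
q = sp.Matrix([x, y, z])
P = sp.Matrix([px, py, pz])
def pb(F, G):
    return sum(sp.diff(F, q[k])*sp.diff(G, P[k]) - sp.diff(F, P[k])*sp.diff(G, q[k])
               for k in range(3))
H1 = P.dot(P)/2 - 1/sp.sqrt(q.dot(q))
L = q.cross(P)
H2 = L[2]        # third component of the angular momentum
H3 = L.dot(L)    # squared magnitude of the angular momentum
for F, G in [(H1, H2), (H1, H3), (H2, H3)]:
    print(sp.simplify(pb(F, G)))  # 0, 0, 0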
### 3.4 Closedness and involutivity
In the pluri-Lagrangian theory, the exterior derivative $\mbox{\rm
d}\mathcal{L}$ is constant on solutions (see Proposition A.2 in the Appendix).
In many cases this constant is zero, i.e. the Lagrangian 1-form is closed on
solutions. Here we relate this property to the vanishing of Poisson brackets
between the Hamilton functions.
###### Proposition 3.6 ([26, Theorem 3]).
Consider a Lagrangian 1-form $\mathcal{L}$ as in Section 3.2 and the
corresponding Hamilton functions $H_{i}$. On solutions to the multi-time
Euler-Lagrange equations, and identifying
$\pi=p(q,q_{1})=\frac{\partial{\mathcal{L}_{i}}}{\partial{q_{i}}}$, there
holds
$\begin{split}\frac{\mbox{\rm d}\mathcal{L}_{j}}{\mbox{\rm
d}t_{i}}-\frac{\mbox{\rm d}\mathcal{L}_{i}}{\mbox{\rm
d}t_{j}}&=p_{j}q_{i}-p_{i}q_{j}\\\ &=\\{H_{j},H_{i}\\},\end{split}$ (16)
where $\\{\cdot,\cdot\\}$ denotes the canonical Poisson bracket and $p_{j}$
and $q_{j}$ are shorthand for $\frac{\mbox{\rm d}p}{\mbox{\rm d}t_{j}}$ and
$\frac{\mbox{\rm d}q}{\mbox{\rm d}t_{j}}$.
* Proof.
On solutions of the multi-time Euler-Lagrange equations there holds
$\displaystyle\frac{\mbox{\rm d}\mathcal{L}_{j}}{\mbox{\rm d}t_{i}}$
$\displaystyle=\frac{\partial{\mathcal{L}_{j}}}{\partial{q}}q_{i}+\frac{\partial{\mathcal{L}_{j}}}{\partial{q_{1}}}q_{1i}+\frac{\partial{\mathcal{L}_{j}}}{\partial{q_{j}}}q_{ij}$
$\displaystyle=\left(\frac{\mbox{\rm d}}{\mbox{\rm
d}t_{j}}\frac{\partial{\mathcal{L}_{j}}}{\partial{q}}\right)q_{i}+\frac{\partial{\mathcal{L}_{j}}}{\partial{q_{j}}}q_{ij}$
$\displaystyle=p_{j}q_{i}+pq_{ij}.$
Hence
$\frac{\mbox{\rm d}\mathcal{L}_{j}}{\mbox{\rm d}t_{i}}-\frac{\mbox{\rm
d}\mathcal{L}_{i}}{\mbox{\rm d}t_{j}}=p_{j}q_{i}-p_{i}q_{j}.$ (17)
Alternatively, we can calculate this expression using the Hamiltonian
formalism. We have
$\displaystyle\frac{\mbox{\rm d}\mathcal{L}_{j}}{\mbox{\rm
d}t_{i}}-\frac{\mbox{\rm d}\mathcal{L}_{i}}{\mbox{\rm d}t_{j}}$
$\displaystyle=\frac{\mbox{\rm d}}{\mbox{\rm
d}t_{i}}(pq_{j}-H_{j})-\frac{\mbox{\rm d}}{\mbox{\rm d}t_{j}}(pq_{i}-H_{i})$
$\displaystyle=p_{i}q_{j}-p_{j}q_{i}+2\\{H_{j},H_{i}\\}.$
Combined with Equation (17), this implies Equation (16). ∎
As a corollary we have:
###### Theorem 3.7.
The Hamiltonians $H_{i}$ from Theorem 3.2 are in involution if and only if
$\mbox{\rm d}\mathcal{L}=0$ on solutions.
All examples of Lagrangian 1-forms discussed so far satisfy $\mbox{\rm
d}\mathcal{L}=0$ on solutions. This need not be the case.
###### Example 3.8.
Let us consider a system of commuting equations that is not Liouville
integrable. Fix a constant $c\neq 0$ and consider the 1-form
$\mathcal{L}=\mathcal{L}_{1}\,\mbox{\rm d}t_{1}+\mathcal{L}_{2}\,\mbox{\rm
d}t_{2}$ with
$\mathcal{L}_{1}\llbracket
r,\theta\rrbracket=\frac{1}{2}r^{2}\theta_{1}^{2}+\frac{1}{2}r_{1}^{2}-V(r)-c\theta,$
which for $c=0$ would describe a central force in the plane governed by the
potential $V$, and
$\mathcal{L}_{2}\llbracket
r,\theta\rrbracket=r^{2}\theta_{1}(\theta_{2}-1)+r_{1}r_{2}.$
Its multi-time Euler-Lagrange equations are
$\displaystyle r_{11}=-V^{\prime}(r)+r\theta_{1}^{2},$
$\displaystyle\frac{\mbox{\rm d}}{\mbox{\rm d}t_{1}}(r^{2}\theta_{1})=-c,$
$\displaystyle r_{2}=0,$ $\displaystyle\theta_{2}=1,$
and consequences thereof. Notably, we have
$\frac{\mbox{\rm d}\mathcal{L}_{2}}{\mbox{\rm d}t_{1}}-\frac{\mbox{\rm
d}\mathcal{L}_{1}}{\mbox{\rm d}t_{2}}=c$
on solutions, hence $\mbox{\rm d}\mathcal{L}$ is nonzero.
By Theorem 3.2 the multi-time Euler-Lagrange equations are equivalent to the
canonical Hamiltonian systems with
$\displaystyle H_{1}(r,\theta,\pi,\sigma)$
$\displaystyle=\frac{1}{2}\frac{\sigma^{2}}{r^{2}}+\frac{1}{2}\pi^{2}+V(r)+c\theta$
$\displaystyle H_{2}(r,\theta,\pi,\sigma)$ $\displaystyle=\sigma,$
where $\pi$ and $\sigma$ are the conjugate momenta to $r$ and $\theta$. The
Hamiltonians are not in involution, but rather
$\\{H_{2},H_{1}\\}=c=\frac{\mbox{\rm d}\mathcal{L}_{2}}{\mbox{\rm
d}t_{1}}-\frac{\mbox{\rm d}\mathcal{L}_{1}}{\mbox{\rm d}t_{2}}.$
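A direct computation confirms the nonvanishing bracket. The following SymPy sketch uses the textbook bracket convention, which differs from the convention fixed by Equation (16) at most by an overall sign:
import sympy as sp
r, theta, pi_, sigma, c = sp.symbols("r theta pi sigma c")
V = sp.Function("V")
H1 = sigma**2/(2*r**2) + pi_**2/2 + V(r) + c*theta
H2 = sigma
bracket = (sp.diff(H1, r)*sp.diff(H2, pi_) - sp.diff(H1, pi_)*sp.diff(H2, r)
           + sp.diff(H1, theta)*sp.diff(H2, sigma) - sp.diff(H1, sigma)*sp.diff(H2, theta))
print(bracket)  # c: nonzero, so the Hamiltonians are not in involution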
## 4 Hamiltonian structure of Lagrangian 2-form systems
In order to generalize the results from Section 3 to the case of 2-forms, we
need to carefully examine the relevant geometric structure. A useful tool for
this is the variational bicomplex, which is also used in Appendix A to study
the multi-time Euler-Lagrange equations.
### 4.1 The variational bicomplex
To facilitate the variational calculus in the pluri-Lagrangian setting, it is
useful to consider the variation operator $\delta$ as an exterior derivative,
acting in the fiber $J^{\infty}$ of the infinite jet bundle. We call $\delta$
the _vertical exterior derivative_ and d, which acts in the base manifold $M$,
the _horizontal exterior derivative_. Together they provide a double grading
of the space $\Omega(M\times J^{\infty})$ of differential forms on the jet
bundle. The space of _$(a,b)$ -forms_ is generated by those $(a+b)$-forms
structured as
$f\llbracket u\rrbracket\,\delta u_{I_{1}}\wedge\ldots\wedge\delta
u_{I_{a}}\wedge\mbox{\rm d}t_{j_{1}}\ldots\wedge\mbox{\rm d}t_{j_{b}}.$
We denote the space of $(a,b)$-forms by
$\Omega^{(a,b)}\subset\Omega^{a+b}(M\times J^{\infty})$. We call elements of
$\Omega^{(0,b)}$ horizontal forms and elements of $\Omega^{(a,0)}$ vertical
forms. The Lagrangian is a horizontal $d$-form,
$\mathcal{L}\in\Omega^{(0,d)}$.
The horizontal and vertical exterior derivatives are characterized by the
anti-derivation property,
$\displaystyle\mbox{\rm
d}\left(\omega_{1}^{p_{1},q_{1}}\wedge\omega_{2}^{p_{2},q_{2}}\right)$
$\displaystyle=\mbox{\rm
d}\omega_{1}^{p_{1},q_{1}}\wedge\omega_{2}^{p_{2},q_{2}}+(-1)^{p_{1}+q_{1}}\,\omega_{1}^{p_{1},q_{1}}\wedge\mbox{\rm
d}\omega_{2}^{p_{2},q_{2}},$
$\displaystyle\delta\left(\omega_{1}^{p_{1},q_{1}}\wedge\omega_{2}^{p_{2},q_{2}}\right)$
$\displaystyle=\delta\omega_{1}^{p_{1},q_{1}}\wedge\omega_{2}^{p_{2},q_{2}}+(-1)^{p_{1}+q_{1}}\,\omega_{1}^{p_{1},q_{1}}\wedge\delta\omega_{2}^{p_{2},q_{2}},$
where the upper indices denote the type of the forms, and by the way they act
on $(0,0)$-forms, and basic $(1,0)$ and $(0,1)$-forms:
$\displaystyle\mbox{\rm d}f\llbracket u\rrbracket$
$\displaystyle=\sum_{j}\partial_{j}f\llbracket u\rrbracket\,\mbox{\rm
d}t_{j},$ $\displaystyle\delta f\llbracket u\rrbracket$
$\displaystyle=\sum_{I}\frac{\partial{f\llbracket
u\rrbracket}}{\partial{u_{I}}}\delta u_{I},$ $\displaystyle\mbox{\rm d}(\delta
u_{I})$ $\displaystyle=-\sum_{j}\delta u_{Ij}\wedge\mbox{\rm d}t_{j},\hskip
56.9055pt$ $\displaystyle\delta(\delta u_{I})$ $\displaystyle=0,$
$\displaystyle\mbox{\rm d}(\mbox{\rm d}t_{j})$ $\displaystyle=0,$
$\displaystyle\delta(\mbox{\rm d}t_{j})$ $\displaystyle=0.$
One can verify that $\mbox{\rm
d}+\delta:\Omega^{a+b}\rightarrow\Omega^{a+b+1}$ is the usual exterior
derivative and that
$\delta^{2}=\mbox{\rm d}^{2}=\delta\mbox{\rm d}+\mbox{\rm d}\delta=0.$
Time-derivatives $\partial_{j}$ act on vertical forms as $\partial_{j}(\delta
u_{I})=\delta u_{Ij}$, on horizontal forms as $\partial_{j}(\mbox{\rm
d}t_{k})=0$, and obey the Leibniz rule with respect to the wedge product. As a
simple but important example, note that
$\mbox{\rm d}(f\llbracket u\rrbracket\,\delta
u_{I})=\sum_{j=1}^{N}\partial_{j}f\llbracket u\rrbracket\,\mbox{\rm
d}t_{j}\wedge\delta u_{I}-f\llbracket u\rrbracket\,\delta
u_{It_{j}}\wedge\mbox{\rm d}t_{j}=\sum_{j=1}^{N}-\partial_{j}(f\llbracket
u\rrbracket\,\delta u_{I})\wedge\mbox{\rm d}t_{j}.$
The spaces $\Omega^{(a,b)}$, for $a\geq 0$ and $0\leq b\leq N$, related to
each other by the maps d and $\delta$, are collectively known as the
_variational bicomplex_ [8, Chapter 19]. A slightly different version of the
variational bicomplex, using contact 1-forms instead of vertical forms, is
presented in [1]. We will not discuss the rich algebraic structure of the
variational bicomplex here.
For a horizontal $(0,d)$-form $\mathcal{L}\llbracket u\rrbracket$, the
variational principle
$\delta\int_{\Gamma}\mathcal{L}\llbracket
u\rrbracket=\delta\int_{\Gamma}\sum_{i_{1}<\ldots<i_{d}}\mathcal{L}_{i_{1},\ldots,i_{d}}\llbracket
u\rrbracket\,\mbox{\rm d}t_{i_{1}}\wedge\ldots\wedge\mbox{\rm d}t_{i_{d}}=0$
can be understood as follows. Every vertical vector field
$V=v(t_{1},\ldots,t_{a})\frac{\partial}{\partial u}$, such that its
_prolongation_
$\operatorname{pr}V=\sum_{I}v_{I}\frac{\partial}{\partial u_{I}}$
vanishes on the boundary $\partial\Gamma$, must satisfy
$\int_{\Gamma}\iota_{\operatorname{pr}V}\delta\mathcal{L}=\int_{\Gamma}\sum_{i_{1}<\ldots<i_{d}}\iota_{\operatorname{pr}V}(\delta\mathcal{L}_{i_{1},\ldots,i_{d}}\llbracket
u\rrbracket)\,\mbox{\rm d}t_{i_{1}}\wedge\ldots\wedge\mbox{\rm d}t_{i_{d}}=0.$
Note that the integrand is a horizontal form, so the integration takes place
on $\Gamma\subset M$, independent of the bundle structure.
### 4.2 The space of functionals and its pre-symplectic structure
In the rest of our discussion, we will single out the variable $t_{1}=x$ and
view it as the space variable, as opposed to the time variables
$t_{2},\ldots,t_{N}$. For ease of presentation we will limit the discussion
here to real scalar fields, but it is easily extended to complex or vector-
valued fields. We consider functions
$u:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto u(x)$ as fields at a fixed time.
Let $J^{\infty}$ be the fiber of the corresponding infinite jet bundle, where
the prolongation of $u$ has coordinates $[u]=(u,u_{x},u_{xx},\ldots)$.
Consider the space of functions of the infinite jet of $u$,
$\mathcal{V}=\left\\{v:J^{\infty}\rightarrow\mathbb{R}\right\\}.$
Note that the domain $J^{\infty}$ is the fiber of the jet bundle, hence the
elements $v\in\mathcal{V}$ depend on $x$ only through $u$. We will be dealing
with integrals $\int v\,\mbox{\rm d}x$ of elements $v\in\mathcal{V}$. In order
to avoid convergence questions, we understand the symbol $\int v\,\mbox{\rm
d}x$ as a _formal integral_ , defined as the equivalence class of $v$ modulo
space-derivatives. In other words, we consider the space of functionals
$\mathcal{F}=\mathcal{V}\big{/}\partial_{x}\\!\mathcal{V},$
where
$\partial_{x}=\frac{\mbox{\rm d}}{\mbox{\rm
d}x}=\sum_{I}u_{Ix}\frac{\partial{}}{\partial{u_{I}}}.$
The variation of an element of $\mathcal{F}$ is computed as
$\delta\int v\,\mbox{\rm d}x=\int\frac{\delta{v}}{\delta{u}}\,\delta
u\wedge\mbox{\rm d}x,$ (18)
where
$\frac{\delta{}}{\delta{u}}=\sum_{\alpha=0}^{\infty}(-1)^{\alpha}\partial_{x}^{\alpha}\frac{\partial{}}{\partial{u_{x^{\alpha}}}}.$
Equation (18) is independent of the choice of representative $v\in\mathcal{V}$
because the variational derivative of a full $x$-derivative is zero.
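Since everything in this subsection is a mechanical computation with differential polynomials, the claims can be checked symbolically. The following sketch is self-contained; the truncation order `K` and the helper names `Dx` and `var_der` are ad hoc choices, not notation from this paper. It implements the total $x$-derivative and the variational derivative on a truncated jet and confirms on an example that $\frac{\delta{}}{\delta{u}}$ annihilates full $x$-derivatives:

```python
# Minimal symbolic sketch of the jet-space calculus above.  The symbols
# u[0], u[1], ... stand for u, u_x, u_xx, ..., truncated at order K.
import sympy as sp

K = 9
u = sp.symbols(f'u0:{K}')

def Dx(f):
    # Total x-derivative: sum_k u_{k+1} * df/du_k (valid while orders stay below K).
    return sum(u[k + 1] * sp.diff(f, u[k]) for k in range(K - 1))

def var_der(f):
    # Variational derivative: sum_k (-1)^k Dx^k (df/du_k).
    out = 0
    for k in range(K):
        term = sp.diff(f, u[k])
        for _ in range(k):
            term = Dx(term)
        out += (-1) ** k * term
    return sp.simplify(out)

v = u[0]**2 * u[1] + sp.sin(u[2])    # an arbitrary jet function
assert var_der(Dx(v)) == 0           # delta/delta u kills total x-derivatives
```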
Since $\mathcal{V}$ is a linear space, its tangent spaces can be identified
with $\mathcal{V}$ itself. In turn, every $v\in\mathcal{V}$ can be identified
with a vector field $v\frac{\partial}{\partial u}$. We will define Hamiltonian
vector fields in terms of $\mathcal{F}$-valued forms on $\mathcal{V}$. An
$\mathcal{F}$-valued 1-form $\theta$ can be represented as the integral of a
$(1,1)$-form in the variational bicomplex,
$\theta=\int\sum_{k}a_{k}[u]\,\delta u_{x^{k}}\wedge\mbox{\rm d}x$
and defines a map
$\mathcal{V}\rightarrow\mathcal{F}:v\mapsto\iota_{v}\theta=\int\sum_{k}a_{k}[u]\,\partial_{x}^{k}v[u]\,\mbox{\rm
d}x.$
This amounts to pairing the 1-form with the infinite jet prolongation of the
vector field $v\frac{\partial}{\partial u}$. Note that $\mathcal{F}$-valued
forms are defined modulo $x$-derivatives:
$\int\partial_{x}\theta\wedge\mbox{\rm d}x=0$ because its pairing with any
vector field in $\mathcal{V}$ will yield a full $x$-derivative, which
represents the zero functional in $\mathcal{F}$. Hence the space of
$\mathcal{F}$-valued 1-forms is $\Omega^{(1,1)}/\partial_{x}\Omega^{(1,1)}$.
An $\mathcal{F}$-valued 2-form
$\omega=\int\sum_{k,\ell}a_{k,\ell}[u]\,\delta u_{x^{k}}\wedge\delta
u_{x^{\ell}}\wedge\mbox{\rm d}x$
defines a skew-symmetric map
$\mathcal{V}\times\mathcal{V}\rightarrow\mathcal{F}:(v,w)\mapsto\iota_{w}\iota_{v}\omega=\int\sum_{k,\ell}a_{k,\ell}[u]\left(\partial_{x}^{k}v[u]\,\partial_{x}^{\ell}w[u]-\partial_{x}^{k}w[u]\,\partial_{x}^{\ell}v[u]\right)\mbox{\rm
d}x$
as well as a map from vector fields to $\mathcal{F}$-valued 1-forms
$\mathcal{V}\rightarrow\Omega^{(1,1)}/\partial_{x}\Omega^{(1,1)}:v\mapsto\iota_{v}\omega=\int\sum_{k,\ell}a_{k,\ell}[u]\left(\partial_{x}^{k}v[u]\,\delta
u_{x^{\ell}}-\partial_{x}^{\ell}v[u]\,\delta u_{x^{k}}\right)\wedge\mbox{\rm
d}x.$
###### Definition 4.1.
A closed $(2,1)$-form $\omega$ on $\mathcal{V}$ is called _pre-symplectic_.
Equivalently we can require the form to be vertically closed, i.e. closed with
respect to $\delta$. Since the horizontal space is 1-dimensional ($x$ is the
only independent variable) every $(a,1)$-form is closed with respect to the
horizontal exterior derivative d, so only vertical closedness is a nontrivial
property.
We choose to work with pre-symplectic forms instead of symplectic forms,
because the non-degeneracy required of a symplectic form is a subtle issue in
the present context. Consider for example the pre-symplectic form
$\omega=\int\delta u\wedge\delta u_{x}\wedge\mbox{\rm d}x$. It is degenerate
because
$\int\iota_{v}\omega=\int(v\,\delta u_{x}-v_{x}\,\delta u)\wedge\mbox{\rm
d}x=\int-2v_{x}\,\delta u\wedge\mbox{\rm d}x,$
which is zero whenever $v[u]$ is constant. However, if we restrict our
attention to compactly supported fields, then a constant must be zero, so the
restriction of $\omega$ to the space of compactly supported fields is non-
degenerate.
###### Definition 4.2.
A _Hamiltonian vector field_ with Hamilton functional
${\textstyle\int}H\,\mbox{\rm d}x$ is an element $v\in\mathcal{V}$ satisfying
the relation
$\int\iota_{v}\omega=\int\delta H\wedge\mbox{\rm d}x.$
Note that if $\omega$ is degenerate, we cannot guarantee existence or
uniqueness of a Hamiltonian vector field in general.
### 4.3 From pluri-Lagrangian to Hamiltonian systems
We will consider two different types of Lagrangian 2-forms. The first type
consists of those where for every $j$ the coefficient $\mathcal{L}_{1j}$ is linear in
$u_{t_{j}}$. This is the case for the 2-form for the potential KdV hierarchy
from Example 2.5 and for the Lagrangian 2-forms of many other hierarchies like
the AKNS hierarchy [21] and the modified KdV, Schwarzian KdV and Krichever-
Novikov hierarchies [30]. The second type satisfies the same property for $j>2$,
but has a coefficient $\mathcal{L}_{12}$ that is quadratic in $u_{t_{2}}$, as
is the case for the Boussinesq hierarchy from Example 2.6.
#### 4.3.1 When all $\mathcal{L}_{1j}$ are linear in $u_{t_{j}}$
Consider a Lagrangian 2-form $\mathcal{L}\llbracket
u\rrbracket=\sum_{i<j}\mathcal{L}_{ij}\llbracket u\rrbracket\,\mbox{\rm
d}t_{i}\wedge\mbox{\rm d}t_{j}$, where for all $j$ the variational derivative
$\frac{\delta_{1}{\mathcal{L}_{1j}}}{\delta{u_{t_{j}}}}$ does not depend on
any $t_{j}$-derivatives, hence we can write
$\frac{\delta_{1}{\mathcal{L}_{1j}}}{\delta{u_{t_{j}}}}=p[u]$
for some function $p[u]$ depending on an arbitrary number of space
derivatives, but not on any time-derivatives. We use single square brackets
$[\cdot]$ to indicate dependence on space derivatives only. Note that $p$ does
not depend on the index $j$. This is imposed on us by the multi-time Euler-
Lagrange equation stating that
$\frac{\delta_{1}{\mathcal{L}_{1j}}}{\delta{u_{t_{j}}}}$ is independent of
$j$.
Starting from these assumptions and possibly adding a full $x$-derivative
(recall that $x=t_{1}$) we find that the coefficients $\mathcal{L}_{1j}$ are
of the form
$\mathcal{L}_{1j}\llbracket u\rrbracket=p[u]u_{j}-h_{j}[u],$ (19)
where $u_{j}$ is shorthand notation for the derivative $u_{t_{j}}$.
Coefficients of this form appear in many prominent examples, like the
potential KdV hierarchy and several hierarchies related to it [28, 29, 30] as
well as the AKNS hierarchy [21]. Their Euler-Lagrange equations are
$\mathcal{E}_{p}u_{j}-\frac{\delta_{1}{h_{j}[u]}}{\delta{u}}=0,$ (20)
where $\mathcal{E}_{p}$ is the differential operator
$\mathcal{E}_{p}=\sum_{k=0}^{\infty}\left((-1)^{k}\partial_{x}^{k}\frac{\partial{p}}{\partial{u_{x^{k}}}}-\frac{\partial{p}}{\partial{u_{x^{k}}}}\partial_{x}^{k}\right).$
We can also write $\mathcal{E}_{p}=\mathsf{D}_{p}^{*}-\mathsf{D}_{p}$, where
$\mathsf{D}_{p}$ is the Fréchet derivative of $p$ and $\mathsf{D}_{p}^{*}$ its
adjoint [17, Eqs (5.32) resp. (5.79)].
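Since $\mathcal{E}_{p}=\mathsf{D}_{p}^{*}-\mathsf{D}_{p}$, the operator is skew-adjoint, so $\phi\,\mathcal{E}_{p}\psi+\psi\,\mathcal{E}_{p}\phi$ is always a total $x$-derivative. This can be tested symbolically via the Euler criterion (a differential polynomial is a total $x$-derivative if and only if its variational derivative vanishes). The sketch below is self-contained, with ad hoc helper names and arbitrarily chosen $p$, $\phi$, $\psi$:

```python
# Sanity check that E_p = D_p^* - D_p is skew-adjoint modulo x-derivatives.
import sympy as sp

K = 10
u = sp.symbols(f'u0:{K}')            # u[k] = k-th x-derivative of u

def Dx(f):
    return sum(u[k + 1] * sp.diff(f, u[k]) for k in range(K - 1))

def Dxn(f, n):
    for _ in range(n):
        f = Dx(f)
    return f

def var_der(f):
    return sp.expand(sum((-1) ** k * Dxn(sp.diff(f, u[k]), k) for k in range(K)))

def E_op(p, chi):
    # E_p chi = sum_k [ (-1)^k Dx^k( (dp/du_k) chi ) - (dp/du_k) Dx^k chi ].
    out = 0
    for k in range(K):
        pk = sp.diff(p, u[k])
        if pk == 0:
            continue
        out += (-1) ** k * Dxn(pk * chi, k) - pk * Dxn(chi, k)
    return sp.expand(out)

p = u[1]**2 + u[0] * u[2]            # an arbitrary p[u]
phi, psi = u[0] * u[1], u[2] + u[0]**3
# phi*E_p(psi) + psi*E_p(phi) is a total x-derivative, hence killed by delta/delta u:
assert var_der(phi * E_op(p, psi) + psi * E_op(p, phi)) == 0
```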
Consider the pre-symplectic form
$\begin{split}\omega&=-\delta p[u]\wedge\delta u\wedge\mbox{\rm d}x\\\
&=-\sum_{k=1}^{\infty}\frac{\partial{p}}{\partial{u_{x^{k}}}}\delta
u_{x^{k}}\wedge\delta u\wedge\mbox{\rm d}x.\end{split}$ (21)
Inserting the vector field $X=\chi\frac{\partial{}}{\partial{u}}$ we find
$\displaystyle\int\iota_{X}\omega$
$\displaystyle=\int\sum_{k=0}^{\infty}\left(\frac{\partial{p}}{\partial{u_{x^{k}}}}\chi\,\delta
u_{x^{k}}\wedge\mbox{\rm
d}x-\frac{\partial{p}}{\partial{u_{x^{k}}}}\chi_{x^{k}}\,\delta
u\wedge\mbox{\rm d}x\right)$
$\displaystyle=\int\sum_{k=0}^{\infty}\left((-1)^{k}\partial_{x}^{k}\\!\left(\frac{\partial{p}}{\partial{u_{x^{k}}}}\chi\right)-\frac{\partial{p}}{\partial{u_{x^{k}}}}\chi_{x^{k}}\right)\delta
u\wedge\mbox{\rm d}x$ $\displaystyle=\int\mathcal{E}_{p}\chi\,\delta
u\wedge\mbox{\rm d}x.$
From the Hamiltonian equation of motion
$\int\iota_{X}\omega=\int\delta h_{j}[u]\wedge\mbox{\rm d}x$
we now obtain that the Hamiltonian vector field
$X=\chi\frac{\partial{}}{\partial{u}}$ associated to $h_{j}$ satisfies
$\mathcal{E}_{p}\chi=\frac{\delta_{1}{h_{j}}}{\delta{u}},$
which corresponds to the Euler-Lagrange equation (20) by identifying
$\chi=u_{t_{j}}$. This observation was made previously in the context of loop
spaces in [16, Section 1.3].
The Poisson bracket associated to the symplectic operator $\mathcal{E}_{p}$ is
formally given by
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x,{\textstyle\int}g\,\mbox{\rm
d}x\right\\}=-\int\frac{\delta{f}}{\delta{u}}\,\mathcal{E}_{p}^{-1}\,\frac{\delta{g}}{\delta{u}}\,\mbox{\rm
d}x.$ (22)
If the pre-symplectic form is degenerate, then $\mathcal{E}_{p}$ will not be
invertible. In this case $\mathcal{E}_{p}^{-1}$ can be considered as a pseudo-
differential operator and the Poisson bracket is called _non-local_ [16, 7].
Note that $\\{\cdot,\cdot\\}$ does not satisfy the Leibniz rule because there
is no multiplication on the space $\mathcal{F}$ of formal integrals. However,
we can recover the Leibniz rule in one entry by introducing
$[f,g]=-\sum_{k=0}^{\infty}\frac{\partial{f}}{\partial{u_{x^{k}}}}\partial_{x}^{k}\,\mathcal{E}_{p}^{-1}\,\frac{\delta{g}}{\delta{u}}.$
Then we have
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x,{\textstyle\int}g\,\mbox{\rm
d}x\right\\}=\int[f,g]\,\mbox{\rm d}x$
and
$[fg,h]=f[g,h]+[f,h]g.$
In summary, we have the following result:
###### Theorem 4.3.
Assume that $\frac{\delta_{1}{h_{j}[u]}}{\delta{u}}$ is in the image of
$\mathcal{E}_{p}$ and has a unique preimage (possibly in some equivalence
class) for each $j$. Then the evolutionary PDEs
$u_{j}=\mathcal{E}_{p}^{-1}\frac{\delta_{1}{h_{j}[u]}}{\delta{u}},$
which imply the Euler-Lagrange equations (20) of the Lagrangians (19), are
Hamiltonian with respect to the symplectic form (21) and the Poisson bracket
(22), with Hamilton functions $h_{j}$.
This theorem applies without assuming any kind of consistency of the system of
multi-time Euler-Lagrange equations. Of course we are mostly interested in the
case where the multi-time Euler-Lagrange equations are equivalent to an
integrable hierarchy. In almost all known examples (see e.g. [28, 21, 30]) the
multi-time Euler-Lagrange equations consist of an integrable hierarchy in its
evolutionary form and differential consequences thereof. Hence the general
picture suggested by these examples is that the multi-time Euler-Lagrange
equations are equivalent to the equations
$u_{j}=\mathcal{E}_{p}^{-1}\frac{\delta_{1}{h_{j}[u]}}{\delta{u}}$ from
Theorem 4.3. In light of these observations, we emphasize the following
consequence of Theorem 4.3.
###### Corollary 4.4.
If the multi-time Euler-Lagrange equations are evolutionary, then they are
Hamiltonian.
###### Example 4.5.
The pluri-Lagrangian structure for the potential KdV hierarchy, given in
Example 2.5, has $p=\frac{1}{2}u_{x}$. Hence
$\mathcal{E}_{p}=-\partial_{x}\frac{\partial{p}}{\partial{u_{x}}}-\frac{\partial{p}}{\partial{u_{x}}}\partial_{x}=-\partial_{x}$
and
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x,{\textstyle\int}g\,\mbox{\rm
d}x\right\\}=\int\frac{\delta{f}}{\delta{u}}\partial_{x}^{-1}\frac{\delta{g}}{\delta{u}}\,\mbox{\rm
d}x.$
Here we assume that $\frac{\delta{g}}{\delta{u}}$ is in the image of
$\partial_{x}$. Then $\partial_{x}^{-1}\frac{\delta{g}}{\delta{u}}$ is
uniquely defined by the convention that it does not contain a constant term.
If $f$ and $g$ depend only on derivatives of $u$, not on $u$ itself, this
becomes the Gardner bracket [10]
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x,{\textstyle\int}g\,\mbox{\rm
d}x\right\\}=\int\left(\partial_{x}\frac{\delta{f}}{\delta{u_{x}}}\right)\frac{\delta{g}}{\delta{u_{x}}}\,\mbox{\rm
d}x.$
The Hamilton functions are
$\displaystyle h_{2}[u]$
$\displaystyle=\frac{1}{2}u_{x}u_{t_{2}}-\mathcal{L}_{12}=u_{x}^{3}+\frac{1}{2}u_{x}u_{xxx},$
$\displaystyle h_{3}[u]$
$\displaystyle=\frac{1}{2}u_{x}u_{t_{3}}-\mathcal{L}_{13}=\frac{5}{2}u_{x}^{4}-5u_{x}u_{xx}^{2}+\frac{1}{2}u_{xxx}^{2},$
$\displaystyle\mathmakebox[\widthof{{}={}}][c]{\vdots}$
A related derivation of the Gardner bracket from the multi-symplectic
perspective was given in [11]. It can also be obtained from the Lagrangian
structure by Dirac reduction [15].
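In this example, the relation $\mathcal{E}_{p}\chi=\frac{\delta_{1}{h_{2}}}{\delta{u}}$ with $\chi=u_{t_{2}}$ can be confirmed symbolically. The sketch below is self-contained (the helper names are ad hoc); it uses the flow $u_{t_{2}}=3u_{x}^{2}+u_{xxx}$, which is what $\mathcal{E}_{p}^{-1}\frac{\delta_{1}{h_{2}}}{\delta{u}}$ evaluates to for the displayed $h_{2}$:

```python
# Check E_p u_{t_2} = delta_1 h_2 / delta u for potential KdV, where E_p = -d/dx.
import sympy as sp

K = 9
u = sp.symbols(f'u0:{K}')            # u[k] = k-th x-derivative of u

def Dx(f):
    return sum(u[k + 1] * sp.diff(f, u[k]) for k in range(K - 1))

def var_der(f):
    out = 0
    for k in range(K):
        term = sp.diff(f, u[k])
        for _ in range(k):
            term = Dx(term)
        out += (-1) ** k * term
    return sp.expand(out)

h2 = u[1]**3 + sp.Rational(1, 2) * u[1] * u[3]   # h_2 = u_x^3 + (1/2) u_x u_xxx
chi = 3 * u[1]**2 + u[3]                         # first potential KdV flow u_{t_2}
assert sp.expand(-Dx(chi)) == var_der(h2)        # E_p chi = delta_1 h_2 / delta u
```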
###### Example 4.6.
The Schwarzian KdV hierarchy,
$\displaystyle u_{2}$ $\displaystyle=-\frac{3u_{11}^{2}}{2u_{1}}+u_{111},$
$\displaystyle u_{3}$
$\displaystyle=-\frac{45u_{11}^{4}}{8u_{1}^{3}}+\frac{25u_{11}^{2}u_{111}}{2u_{1}^{2}}-\frac{5u_{111}^{2}}{2u_{1}}-\frac{5u_{11}u_{1111}}{u_{1}}+u_{11111},$
$\displaystyle\mathmakebox[\widthof{{}={}}][c]{\vdots}$
has a pluri-Lagrangian structure with coefficients [29]
$\displaystyle\mathcal{L}_{12}$
$\displaystyle=\frac{u_{3}}{2u_{1}}-\frac{u_{11}^{2}}{2u_{1}^{2}},$
$\displaystyle\mathcal{L}_{13}$
$\displaystyle=\frac{u_{5}}{2u_{1}}-\frac{3u_{11}^{4}}{8u_{1}^{4}}+\frac{u_{111}^{2}}{2u_{1}^{2}},$
$\displaystyle\mathcal{L}_{23}$
$\displaystyle=-\frac{45u_{11}^{6}}{32u_{1}^{6}}+\frac{57u_{11}^{4}u_{111}}{16u_{1}^{5}}-\frac{19u_{11}^{2}u_{111}^{2}}{8u_{1}^{4}}+\frac{7u_{111}^{3}}{4u_{1}^{3}}-\frac{3u_{11}^{3}u_{1111}}{4u_{1}^{4}}-\frac{3u_{11}u_{111}u_{1111}}{2u_{1}^{3}}$
$\displaystyle\quad+\frac{u_{1111}^{2}}{2u_{1}^{2}}+\frac{3u_{11}^{2}u_{11111}}{4u_{1}^{3}}-\frac{u_{111}u_{11111}}{2u_{1}^{2}}+\frac{u_{111}u_{113}}{u_{1}^{2}}-\frac{3u_{11}^{3}u_{13}}{2u_{1}^{4}}+\frac{2u_{11}u_{111}u_{13}}{u_{1}^{3}}$
$\displaystyle\quad-\frac{u_{1111}u_{13}}{u_{1}^{2}}+\frac{u_{11}u_{15}}{u_{1}^{2}}-\frac{27u_{11}^{4}u_{3}}{16u_{1}^{5}}+\frac{17u_{11}^{2}u_{111}u_{3}}{4u_{1}^{4}}-\frac{7u_{111}^{2}u_{3}}{4u_{1}^{3}}-\frac{3u_{11}u_{1111}u_{3}}{2u_{1}^{3}}$
$\displaystyle\quad+\frac{u_{11111}u_{3}}{2u_{1}^{2}}+\frac{u_{11}^{2}u_{5}}{4u_{1}^{3}}-\frac{u_{111}u_{5}}{2u_{1}^{2}},$
$\displaystyle\mathmakebox[\widthof{{}={}}][c]{\vdots}$
In this example we have $p=\frac{1}{2u_{x}}$, hence
$\mathcal{E}_{p}=-\partial_{x}\frac{\partial{p}}{\partial{u_{x}}}-\frac{\partial{p}}{\partial{u_{x}}}\partial_{x}=\frac{1}{u_{x}^{2}}\partial_{x}-\frac{u_{xx}}{u_{x}^{3}}=\frac{1}{u_{x}}\partial_{x}\frac{1}{u_{x}}$
and
$\mathcal{E}_{p}^{-1}=u_{x}\partial_{x}^{-1}u_{x}.$
This nonlocal operator seems to be the simplest Hamiltonian operator for the
SKdV equation, see for example [9, 31]. The Hamilton functions for the first
two equations of the hierarchy are
$h_{2}=\frac{u_{11}^{2}}{2u_{1}^{2}}\qquad\text{and}\qquad
h_{3}=\frac{3u_{11}^{4}}{8u_{1}^{4}}-\frac{u_{111}^{2}}{2u_{1}^{2}}.$
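The analogous check goes through for the Schwarzian hierarchy, applying $\mathcal{E}_{p}=\frac{1}{u_{x}}\partial_{x}\frac{1}{u_{x}}$ directly to the flow $u_{2}$ displayed above (a self-contained sketch with ad hoc helper names):

```python
# Check E_p u_{t_2} = delta_1 h_2 / delta u for Schwarzian KdV.
import sympy as sp

K = 9
u = sp.symbols(f'u0:{K}')            # u[k] = k-th x-derivative of u

def Dx(f):
    return sum(u[k + 1] * sp.diff(f, u[k]) for k in range(K - 1))

def var_der(f):
    out = 0
    for k in range(K):
        term = sp.diff(f, u[k])
        for _ in range(k):
            term = Dx(term)
        out += (-1) ** k * term
    return out

h2 = u[2]**2 / (2 * u[1]**2)                     # h_2 from this example
u_t2 = u[3] - 3 * u[2]**2 / (2 * u[1])           # first Schwarzian KdV flow
lhs = Dx(u_t2 / u[1]) / u[1]                     # E_p u_{t_2} = (1/u_x) D_x (u_{t_2}/u_x)
assert sp.simplify(lhs - var_der(h2)) == 0
```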
#### 4.3.2 When $\mathcal{L}_{12}$ is quadratic in $u_{t_{2}}$
Consider a Lagrangian 2-form $\mathcal{L}\llbracket
u\rrbracket=\sum_{i<j}\mathcal{L}_{ij}\llbracket u\rrbracket\,\mbox{\rm
d}t_{i}\wedge\mbox{\rm d}t_{j}$ with
$\mathcal{L}_{12}=\frac{1}{2}\alpha[u]u_{2}^{2}-V[u],$ (23)
and, for all $j\geq 3$, $\mathcal{L}_{1j}$ of the form
$\mathcal{L}_{1j}\llbracket u\rrbracket=\alpha[u]u_{2}u_{j}-h_{j}[u,u_{2}],$
(24)
where $[u,u_{2}]=(u,u_{2},u_{1},u_{12},u_{11},u_{112},\ldots)$ since the
single bracket $[\cdot]$ denotes dependence on the fields and their
$x$-derivatives only (recall that $x=t_{1}$). To write down the full set of
multi-time Euler-Lagrange equations we need to specify all $\mathcal{L}_{ij}$,
but for the present discussion it is sufficient to consider the equations
$\frac{\delta_{12}{\mathcal{L}_{12}}}{\delta{u}}=0\quad\Leftrightarrow\quad\alpha[u]u_{22}=-\frac{\mbox{\rm
d}\alpha[u]}{\mbox{\rm
d}t_{2}}u_{2}+\frac{1}{2}\sum_{k=0}^{\infty}(-1)^{k}\partial_{x}^{k}\left(\frac{\partial{\alpha[u]}}{\partial{u_{x^{k}}}}u_{2}^{2}\right)-\frac{\delta_{1}{V[u]}}{\delta{u}}$
and
$\frac{\delta_{1j}{\mathcal{L}_{1j}}}{\delta{u_{2}}}=0\quad\Leftrightarrow\quad\alpha[u]u_{j}=\frac{\delta_{1}{h_{j}[u,u_{2}]}}{\delta{u_{2}}}.$
We assume that all other multi-time Euler-Lagrange equations are consequences
of these.
Since $\mathcal{L}_{12}$ is non-degenerate, the Legendre transform is
invertible and allows us to identify $\pi=\alpha[u]u_{2}$. Consider the
canonical symplectic form on formal integrals, where now the momentum $\pi$
enters as a second field,
$\omega=\delta\pi\wedge\delta u\wedge\mbox{\rm d}x.$
This defines the Poisson bracket
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x,{\textstyle\int}g\,\mbox{\rm
d}x\right\\}=-\int\left(\frac{\delta{f}}{\delta{\pi}}\frac{\delta{g}}{\delta{u}}-\frac{\delta{f}}{\delta{u}}\frac{\delta{g}}{\delta{\pi}}\right)\mbox{\rm
d}x.$ (25)
The coefficients $\mathcal{L}_{1j}\llbracket
u\rrbracket=\alpha[u]u_{2}u_{j}-h_{j}[u,u_{2}]$ are linear in their velocities
$u_{j}$, hence they are Hamiltonian with respect to the pre-symplectic form
$\delta(\alpha[u]u_{2})\wedge\delta u\wedge\mbox{\rm d}x,$
which equals $\omega$ if we identify $\pi=\alpha[u]u_{2}$. Hence we find the
following result.
###### Theorem 4.7.
A hierarchy described by a Lagrangian 2-form with coefficients of the form
(23)–(24) is Hamiltonian with respect to the canonical Poisson bracket (25),
with Hamilton functions
$H_{2}[u,\pi]=\frac{1}{2}\frac{\pi^{2}}{\alpha[u]}+V[u]$
and
$H_{j}[u,\pi]=h_{j}\\!\left[u,\frac{\pi}{\alpha[u]}\right]$
for $j\geq 3$.
###### Example 4.8.
The Lagrangian 2-form for the Boussinesq hierarchy from Example 2.6 leads to
$\displaystyle H_{2}$
$\displaystyle=\frac{1}{2}\pi^{2}+2u_{1}^{3}+\frac{3}{2}u_{11}^{2},$
$\displaystyle H_{3}$
$\displaystyle=-6u_{1}^{4}-27u_{1}u_{11}^{2}+6u\pi_{1}\pi-\frac{9}{2}u_{111}^{2}-\frac{3}{2}\pi_{1}^{2},$
where the Legendre transform identifies $\pi=u_{2}$. Indeed we have
$\displaystyle\left\\{{\textstyle\int}H_{2}\,\mbox{\rm
d}x,{\textstyle\int}u\,\mbox{\rm d}x\right\\}$
$\displaystyle={\textstyle\int}\pi\,\mbox{\rm
d}x={\textstyle\int}u_{2}\,\mbox{\rm d}x,$
$\displaystyle\left\\{{\textstyle\int}H_{2}\,\mbox{\rm
d}x,{\textstyle\int}\pi\,\mbox{\rm d}x\right\\}$
$\displaystyle={\textstyle\int}(12u_{1}u_{11}-3u_{111})\,\mbox{\rm
d}x={\textstyle\int}\pi_{2}\,\mbox{\rm d}x,$
and
$\displaystyle\left\\{{\textstyle\int}H_{3}\,\mbox{\rm
d}x,{\textstyle\int}u\,\mbox{\rm d}x\right\\}$
$\displaystyle={\textstyle\int}(-6u_{1}\pi+3\pi_{11})\,\mbox{\rm
d}x={\textstyle\int}u_{3}\,\mbox{\rm d}x,$
$\displaystyle\left\\{{\textstyle\int}H_{3}\,\mbox{\rm
d}x,{\textstyle\int}\pi\,\mbox{\rm d}x\right\\}$
$\displaystyle={\textstyle\int}\left(-72u_{1}^{2}u_{11}+108u_{11}u_{111}+54u_{1}u_{1111}-6\pi\pi_{1}-9u_{111111}\right)\mbox{\rm
d}x$ $\displaystyle={\textstyle\int}\pi_{3}\,\mbox{\rm d}x.$
### 4.4 Closedness and involutivity
Let us now have a look at the relation between the closedness of the
Lagrangian 2-form and the involutivity of the corresponding Hamiltonians.
###### Proposition 4.9.
On solutions of the multi-time Euler-Lagrange equations of a Lagrangian 2-form
with coefficients $\mathcal{L}_{1j}$ given by Equation (19), there holds
$\\{h_{i},h_{j}\\}=\int\left(p_{i}u_{j}-p_{j}u_{i}\right)\mbox{\rm
d}x=\int\left(\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}-\frac{\mbox{\rm d}\mathcal{L}_{1j}}{\mbox{\rm
d}t_{i}}\right)\mbox{\rm d}x,$ (26)
where the Poisson bracket is given by Equation (22).
* Proof.
On solutions of the Euler-Lagrange equations we have
$\displaystyle\int\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}\,\mbox{\rm d}x$
$\displaystyle=\int\left(\frac{\delta_{1}{\mathcal{L}_{1i}}}{\delta{u}}u_{j}+\frac{\partial{\mathcal{L}_{1i}}}{\partial{u_{i}}}u_{ij}\right)\mbox{\rm
d}x$ $\displaystyle=\int\left(\left(\frac{\mbox{\rm d}}{\mbox{\rm
d}t_{i}}\frac{\delta_{1i}{\mathcal{L}_{1i}}}{\delta{u_{i}}}\right)u_{j}+\frac{\partial{\mathcal{L}_{1i}}}{\partial{u_{i}}}u_{ij}\right)\mbox{\rm
d}x$ $\displaystyle=\int\left(p_{i}u_{j}+pu_{ij}\right)\mbox{\rm d}x.$
It follows that
$\int\left(\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}-\frac{\mbox{\rm d}\mathcal{L}_{1j}}{\mbox{\rm
d}t_{i}}\right)\mbox{\rm d}x=\int\left(p_{i}u_{j}-p_{j}u_{i}\right)\mbox{\rm
d}x.$ (27)
On the other hand we have that
$\displaystyle\int\left(\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}-\frac{\mbox{\rm d}\mathcal{L}_{1j}}{\mbox{\rm
d}t_{i}}\right)\mbox{\rm d}x$ $\displaystyle=\int\left(\frac{\mbox{\rm
d}}{\mbox{\rm d}t_{j}}(pu_{i}-h_{i})-\frac{\mbox{\rm d}}{\mbox{\rm
d}t_{i}}(pu_{j}-h_{j})\right)\mbox{\rm d}x$
$\displaystyle=-\int\left(p_{i}u_{j}-p_{j}u_{i}\right)\mbox{\rm
d}x+2\\{h_{i},h_{j}\\}.$
Combined with Equation (27), this implies both identities in Equation (26). ∎
###### Proposition 4.10.
On solutions of the multi-time Euler-Lagrange equations of a Lagrangian 2-form
with coefficients $\mathcal{L}_{1j}$ given by Equations (23)–(24), there holds
$\\{H_{i},H_{j}\\}=\int\left(\pi_{i}u_{j}-\pi_{j}u_{i}\right)\mbox{\rm
d}x=\int\left(\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}-\frac{\mbox{\rm d}\mathcal{L}_{1j}}{\mbox{\rm
d}t_{i}}\right)\mbox{\rm d}x,$
where the Poisson bracket is given by Equation (25) and the Hamilton functions
$H_{j}$ are given in Theorem 4.7.
* Proof.
Analogous to the proof of Proposition 4.9, with $p[u]$ replaced by the field
$\pi$. ∎
###### Theorem 4.11.
Let $\mathcal{L}$ be a Lagrangian 2-form with coefficients $\mathcal{L}_{1j}$
given by Equation (19) or by Equations (23)–(24). Consider the corresponding
Hamiltonian structures, given by $H_{1j}=h_{j}$ or $H_{1j}=H_{j}$, as in
Theorems 4.3 and 4.7 respectively. There holds $\\{H_{1i},H_{1j}\\}=0$ if and
only if
$\int\left(\frac{\mbox{\rm d}\mathcal{L}_{ij}}{\mbox{\rm
d}t_{1}}-\frac{\mbox{\rm d}\mathcal{L}_{1j}}{\mbox{\rm
d}t_{i}}+\frac{\mbox{\rm d}\mathcal{L}_{1i}}{\mbox{\rm
d}t_{j}}\right)\mbox{\rm d}x=0$
on solutions of the multi-time Euler-Lagrange equations.
* Proof.
Recall that $t_{1}=x$, hence $\frac{\mbox{\rm d}}{\mbox{\rm
d}t_{1}}=\partial_{x}$. By definition of the formal integral as an equivalence
class, we have $\int\partial_{x}\mathcal{L}_{ij}\,\mbox{\rm d}x=0$. Hence the
claim follows from Proposition 4.9 or Proposition 4.10. ∎
It is known that $\mbox{\rm d}\mathcal{L}\llbracket u\rrbracket$ is constant
on the set of solutions $u$ to the multi-time Euler-Lagrange equations (see
Proposition A.2). In most examples, one can verify using a trivial solution
that this constant is zero.
###### Corollary 4.12.
If a Lagrangian 2-form, with coefficients $\mathcal{L}_{1j}\llbracket
u\rrbracket$ given by Equation (19) or by Equations (23)–(24), is closed for a
solution $u$ to the pluri-Lagrangian problem, then $\\{H_{1i},H_{1j}\\}=0$ for
all $i,j$.
All examples of Lagrangian 2-forms discussed so far satisfy $\mbox{\rm
d}\mathcal{L}=0$ on solutions. We now present a system where this is not the
case.
###### Example 4.13.
Consider a perturbation of the Boussinesq Lagrangian, obtained by adding $cu$
for some constant $c\in\mathbb{R}$,
$\mathcal{L}_{12}=\frac{1}{2}u_{2}^{2}-2u_{1}^{3}-\frac{3}{2}u_{11}^{2}+cu,$
combined with the Lagrangian coefficients
$\displaystyle\mathcal{L}_{13}$ $\displaystyle=u_{2}(u_{3}-1)$
$\displaystyle\mathcal{L}_{23}$
$\displaystyle=(6u_{1}^{2}-3u_{111})(u_{3}-1).$
The corresponding multi-time Euler-Lagrange equations consist of a perturbed
Boussinesq equation,
$u_{22}=12u_{1}u_{11}-3u_{1111}+c$
and
$u_{3}=1.$
We have
$\frac{\mbox{\rm d}\mathcal{L}_{12}}{\mbox{\rm d}t_{3}}-\frac{\mbox{\rm
d}\mathcal{L}_{13}}{\mbox{\rm d}t_{2}}+\frac{\mbox{\rm
d}\mathcal{L}_{23}}{\mbox{\rm d}t_{1}}=c$
on solutions, hence $\mbox{\rm d}\mathcal{L}$ is nonzero.
The multi-time Euler-Lagrange equations are equivalent to the canonical
Hamiltonian systems with
$\displaystyle H_{2}$
$\displaystyle=\frac{1}{2}\pi^{2}+2u_{1}^{3}+\frac{3}{2}u_{11}^{2}-cu$
$\displaystyle H_{3}$ $\displaystyle=\pi.$
They are not in involution, but rather
$\left\\{{\textstyle\int}H_{2}\,\mbox{\rm d}x,{\textstyle\int}H_{3}\,\mbox{\rm
d}x\right\\}=\int\left(12u_{11}u_{1}-3u_{1111}+c\right)\mbox{\rm d}x=\int
c\,\mbox{\rm d}x.$
Note that if we allowed the fields in $\mathcal{V}$ to depend explicitly
on $x$, then we would find ${\textstyle\int}c\,\mbox{\rm
d}x={\textstyle\int}\partial_{x}(cx)\,\mbox{\rm d}x=0$. This is not
a property of the Lagrangian form, but of the function space we work in.
Allowing fields that depend on $x$ affects the definition of the formal
integral ${\textstyle\int}(\cdot)\,\mbox{\rm d}x$ as an equivalence class
modulo $x$-derivatives. If dependence on $x$ is allowed, then there is no such
thing as a nonzero constant functional in this equivalence class. However, in
our definition of $\mathcal{V}$, fields can only depend on $x$ through $u$,
hence $c$ is not an $x$-derivative and ${\textstyle\int}c\,\mbox{\rm d}x$ is
not the zero element of $\mathcal{F}$.
### 4.5 Additional (nonlocal) Poisson brackets
Even though the closedness property in Section 4.4 involves all coefficients
of a Lagrangian 2-form $\mathcal{L}$, so far we have only used the first row
of coefficients $\mathcal{L}_{1j}$ to construct Hamiltonian structures. A
similar procedure can be carried out for other $\mathcal{L}_{ij}$, but the
results are not entirely satisfactory. In particular, it will not lead to true
bi-Hamiltonian structures. Because of this slightly disappointing outcome, we
will make no effort to present the most general statement possible. Instead we
make some convenient assumptions on the form of the coefficients
$\mathcal{L}_{ij}$.
Consider a Lagrangian 2-form $\mathcal{L}$ such that for all $i<j$ the
coefficient $\mathcal{L}_{ij}$ only contains derivatives with respect to
$t_{1}$, $t_{i}$ and $t_{j}$ (no “alien derivatives” in the terminology of
[29]). In addition, assume that $\mathcal{L}_{ij}$ can be written as the sum
of terms that each contain at most one derivative with respect to $t_{i}$ (if
$i>1$) or $t_{j}$. In particular, $\mathcal{L}_{ij}$ does not contain higher
derivatives with respect to $t_{i}$ (if $i>1$) or $t_{j}$, but mixed
derivatives with respect to $t_{1}$ and $t_{i}$ or $t_{1}$ and $t_{j}$ are
allowed. There is no restriction on the amount of $t_{1}$-derivatives.
To get a Hamiltonian description of the evolution along the time direction
$t_{j}$ from the Lagrangian $\mathcal{L}_{ij}$, we should consider both
$t_{1}$ and $t_{i}$ as space coordinates. Hence we will work on the space
$\mathcal{V}\big{/}\left(\partial_{1}\\!\mathcal{V}+\partial_{i}\\!\mathcal{V}\right).$
For $i>1$, consider the momenta
$p^{[i]}[u]=\frac{\delta_{1i}{\mathcal{L}_{ij}}}{\delta{u_{j}}}.$
From the assumption that each term of $\mathcal{L}_{ij}$ contains at most one
time-derivative it follows that $p^{[i]}$ only depends on $u$ and its
$x$-derivatives. Note that $p^{[i]}$ is independent of $j$ because of the
multi-time Euler-Lagrange equation (6). The variational derivative in the
definition of $p^{[i]}$ is in the directions $1$ and $i$, corresponding to the
formal integral, whereas the Lagrangian coefficient has indices $i$ and $j$.
However, we can also write
$p^{[i]}[u]=\frac{\delta_{1}{\mathcal{L}_{ij}}}{\delta{u_{j}}}$
because of the assumption on the derivatives that occur in $\mathcal{L}_{ij}$,
which excludes mixed derivatives with respect to $t_{i}$ and $t_{j}$.
As Hamilton function we can take
$H_{ij}=p^{[i]}u_{j}-\mathcal{L}_{ij}.$
Its formal integral $\int H_{ij}\,\mbox{\rm d}x\wedge\mbox{\rm d}t_{i}$ does
not depend on any $t_{j}$-derivatives. Since we are working with 2-dimensional
integrals, we should take a $(2,2)$-form as symplectic form. In analogy to
Equation (13) we take
$\omega_{i}=-\delta p^{[i]}\wedge\delta u\wedge\mbox{\rm d}x\wedge\mbox{\rm
d}t_{i}.$
A Hamiltonian vector field $X=\chi\frac{\partial{}}{\partial{u}}$ satisfies
$\int\iota_{X}\omega_{i}=\int\delta H_{ij}\wedge\mbox{\rm d}x\wedge\mbox{\rm
d}t_{i}$
hence
$\mathcal{E}_{p^{[i]}}\chi=\frac{\delta_{1i}{H_{ij}}}{\delta{u}},$
where $\mathcal{E}_{p^{[i]}}$ is the differential operator
$\mathcal{E}_{p^{[i]}}=\sum_{k=0}^{\infty}\left((-1)^{k}\partial_{x}^{k}\frac{\partial{p^{[i]}}}{\partial{u_{x^{k}}}}-\frac{\partial{p^{[i]}}}{\partial{u_{x^{k}}}}\partial_{x}^{k}\right).$
The corresponding (nonlocal) Poisson bracket is
$\left\\{{\textstyle\int}f\,\mbox{\rm d}x\wedge\mbox{\rm
d}t_{i},{\textstyle\int}g\,\mbox{\rm d}x\wedge\mbox{\rm
d}t_{i}\right\\}_{i}=-\int\frac{\delta_{1i}{f}}{\delta{u}}\,\mathcal{E}_{p^{[i]}}^{-1}\,\frac{\delta_{1i}{g}}{\delta{u}}\,\mbox{\rm
d}x\wedge\mbox{\rm d}t_{i}.$
Note that $H$ is not skew-symmetric, $H_{ij}\neq H_{ji}$.
The space of functionals
$\mathcal{V}\big{/}\left(\partial_{1}\\!\mathcal{V}+\partial_{i}\\!\mathcal{V}\right)$,
on which the Poisson bracket $\\{\cdot,\cdot\\}_{i}$ is defined, depends on
$i$ and is different from the space of functionals for the bracket
$\\{\cdot,\cdot\\}$ from Equation (25). Hence no pair of these brackets are
compatible with each other in the sense of a bi-Hamiltonian system.
As before, we can relate Poisson brackets between the Hamilton functionals to
coefficients of $\mbox{\rm d}\mathcal{L}$.
###### Proposition 4.14.
Assume that for all $i,j>1$, $\mathcal{L}_{ij}$ does not depend on any second
or higher derivatives with respect to $t_{i}$ and $t_{j}$. On solutions of the
Euler-Lagrange equations there holds that, for $i,j,k>1$,
$\begin{split}\int\left(\frac{\mbox{\rm d}\mathcal{L}_{ij}}{\mbox{\rm
d}t_{k}}-\frac{\mbox{\rm d}\mathcal{L}_{ik}}{\mbox{\rm
d}t_{j}}\right)\mbox{\rm d}x\wedge\mbox{\rm
d}t_{i}&=\int\left(p^{[i]}_{j}u_{k}-p^{[i]}_{k}u_{j}\right)\mbox{\rm
d}x\wedge\mbox{\rm d}t_{i}\\\ &=\left\\{{\textstyle\int}H_{ij}\,\mbox{\rm
d}x\wedge\mbox{\rm d}t_{i},{\textstyle\int}H_{ik}\,\mbox{\rm
d}x\wedge\mbox{\rm d}t_{i}\right\\}_{i}.\end{split}$ (28)
* Proof.
Analogous to the proof of Proposition 4.9. ∎
###### Example 4.15.
For the potential KdV equation (see Example 2.5) we have
$p^{[2]}=\frac{\delta_{1}{\mathcal{L}_{23}}}{\delta{u_{3}}}=\frac{3}{2}u_{111}+\frac{3}{2}u_{1}^{2},$
hence
$\displaystyle\mathcal{E}_{p^{[2]}}$
$\displaystyle=-\partial_{1}\frac{\partial{p^{[2]}}}{\partial{u_{1}}}-\partial_{1}^{3}\frac{\partial{p^{[2]}}}{\partial{u_{111}}}-\frac{\partial{p^{[2]}}}{\partial{u_{1}}}\partial_{1}-\frac{\partial{p^{[2]}}}{\partial{u_{111}}}\partial_{1}^{3}$
$\displaystyle=-3\partial_{1}u_{1}-\frac{3}{2}\partial_{1}^{3}-3u_{1}\partial_{1}-\frac{3}{2}\partial_{1}^{3}$
$\displaystyle=-3\partial_{1}^{3}-6u_{1}\partial_{1}-3u_{11}.$
We have
$\displaystyle H_{23}$ $\displaystyle=p^{[2]}u_{3}-\mathcal{L}_{23}$
$\displaystyle=-3u_{1}^{5}+\frac{15}{2}u_{1}^{2}u_{11}^{2}-10u_{1}^{3}u_{111}+5u_{1}^{3}u_{3}-\frac{7}{2}u_{11}^{2}u_{111}-3u_{1}u_{111}^{2}+6u_{1}u_{11}u_{1111}$
$\displaystyle\quad-\frac{3}{2}u_{1}^{2}u_{11111}-10u_{1}u_{11}u_{12}+\frac{5}{2}u_{11}^{2}u_{2}+5u_{1}u_{111}u_{2}+\frac{1}{2}u_{1111}^{2}-\frac{1}{2}u_{111}u_{11111}$
$\displaystyle\quad+\frac{1}{2}u_{111}u_{112}-\frac{1}{2}u_{1}u_{113}-u_{1111}u_{12}+\frac{1}{2}u_{11}u_{13}+\frac{1}{2}u_{11111}u_{2}+u_{111}u_{3},$
where the terms involving $t_{3}$-derivatives cancel out when the Hamiltonian
is integrated. Its variational derivative is
$\displaystyle\frac{\delta_{12}{H_{23}}}{\delta{u}}$
$\displaystyle=60u_{1}^{3}u_{11}+75u_{11}^{3}+300u_{1}u_{11}u_{111}+75u_{1}^{2}u_{1111}-30u_{1}^{2}u_{12}-30u_{1}u_{11}u_{2}$
$\displaystyle\quad+120u_{111}u_{1111}+72u_{11}u_{11111}+24u_{1}u_{111111}-30u_{1}u_{1112}-45u_{11}u_{112}$
$\displaystyle\quad-25u_{111}u_{12}-5u_{1111}u_{2}+2u_{11111111}-5u_{111112}.$
On solutions this simplifies to
$\displaystyle\frac{\delta_{12}{H_{23}}}{\delta{u}}$
$\displaystyle=-210u_{1}^{3}u_{11}-195u_{11}^{3}-690u_{1}u_{11}u_{111}-150u_{1}^{2}u_{1111}-210u_{111}u_{1111}$
$\displaystyle\quad-123u_{11}u_{11111}-36u_{1}u_{111111}-3u_{11111111}$
$\displaystyle=\mathcal{E}_{p^{[2]}}\left(10u_{1}^{3}+5u_{11}^{2}+10u_{1}u_{111}+u_{11111}\right)$
$\displaystyle=\mathcal{E}_{p^{[2]}}u_{3}.$
Hence
$\frac{\mbox{\rm d}}{\mbox{\rm d}t_{3}}\int u\,\mbox{\rm d}x\wedge\mbox{\rm
d}t_{2}=\int\mathcal{E}_{p^{[2]}}^{-1}\frac{\delta_{12}{H_{23}}}{\delta{u}}\,\mbox{\rm
d}x\wedge\mbox{\rm d}t_{2}=\left\\{{\textstyle\int}H_{23}\,\mbox{\rm
d}x\wedge\mbox{\rm d}t_{2},{\textstyle\int}u\,\mbox{\rm d}x\wedge\mbox{\rm
d}t_{2}\right\\}_{2}.$
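The closed-form expansion of $\mathcal{E}_{p^{[2]}}$ obtained above can be confirmed by applying the defining sum directly (a self-contained sketch; the helper names and the particular test function are ad hoc choices):

```python
# Verify E_{p^[2]} = -3 d^3 - 6 u_1 d - 3 u_11 on a test function, with d = d/dx.
import sympy as sp

K = 12
u = sp.symbols(f'u0:{K}')            # u[k] = k-th x-derivative of u

def Dx(f):
    return sum(u[k + 1] * sp.diff(f, u[k]) for k in range(K - 1))

def Dxn(f, n):
    for _ in range(n):
        f = Dx(f)
    return f

def E_op(p, chi):
    # E_p chi = sum_k [ (-1)^k Dx^k( (dp/du_k) chi ) - (dp/du_k) Dx^k chi ].
    out = 0
    for k in range(K):
        pk = sp.diff(p, u[k])
        if pk == 0:
            continue
        out += (-1) ** k * Dxn(pk * chi, k) - pk * Dxn(chi, k)
    return sp.expand(out)

p2 = sp.Rational(3, 2) * u[3] + sp.Rational(3, 2) * u[1]**2   # p^[2] above
chi = u[0]**2 * u[1] + u[2]                                   # arbitrary test function
claimed = sp.expand(-3 * Dxn(chi, 3) - 6 * u[1] * Dx(chi) - 3 * u[2] * chi)
assert E_op(p2, chi) == claimed
```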
### 4.6 Comparison with the covariant approach
In Section 4.5 we derived Poisson brackets $\\{\cdot,\cdot\\}_{i}$, associated
to each time variable $t_{i}$. This was somewhat cumbersome because we had a
priori assigned $x=t_{1}$ as a distinguished variable. The recent work [6]
explores the relation of pluri-Lagrangian structures to covariant Hamiltonian
structures. The meaning of “covariant” here is that all variables are on the
same footing; there is no distinguished $x$ variable. More details on
covariant field theory, and its connection to the distinguished-variable (or
“instantaneous”) perspective, can be found in [13]. The main objects in the
covariant Hamiltonian formulation of [6] are:
* •
A “symplectic multiform” $\Omega$, which can be expanded as
$\Omega=\sum_{j}\omega_{j}\wedge\mbox{\rm d}t_{j},$
where each $\omega_{j}$ is a vertical 2-form in the variational bicomplex.
* •
A “Hamiltonian multiform” $\mathcal{H}=\sum_{i<j}\mathtt{H}_{ij}\,\mbox{\rm
d}t_{i}\wedge\mbox{\rm d}t_{j}$ which gives the equations of motion through
$\delta\mathcal{H}=\sum_{j}\mbox{\rm d}t_{j}\wedge\xi_{j}\lrcorner\,\Omega,$
(29)
where $\delta$ is the vertical exterior derivative in the variational
bicomplex, $\xi_{j}$ denotes the vector field corresponding to the
$t_{j}$-flow, and $\lrcorner$ denotes the interior product. This equation
should be understood as a covariant version of the instantaneous Hamiltonian
equation $\delta H=\xi\lrcorner\,\omega$. On the equations of motion there
holds $\mbox{\rm d}\mathcal{H}=0$ if and only if $\mbox{\rm d}\mathcal{L}=0$.
Since the covariant Hamiltonian equation (29) is of a different form than the
instantaneous Hamiltonian equation we use, the coefficients $\mathtt{H}_{ij}$
of the Hamiltonian multiform $\mathcal{H}$ are also different from the
$H_{ij}$ we found in Sections 4.3–4.5. Our $H_{ij}$ are instantaneous
Hamiltonians where $t_{1}$ and $t_{i}$ are considered as space variables and
the Legendre transformation has been applied with respect to $t_{j}$.
* •
A “multi-time Poisson bracket” $\\{|\cdot,\cdot|\\}$ which defines a pairing
between functions or (a certain type of) horizontal one-forms, defined by
$\\{|F,G|\\}=(-1)^{r}\xi_{F}\delta G,$
where $\xi_{F}$ is the Hamiltonian (multi-)vector field associated to $F$, and
$r$ is the horizontal degree of $F$ (which is either 0 or 1). The equations of
motion can be written as
$\mbox{\rm d}F=\sum_{i<j}\\{|\mathtt{H}_{ij},F|\\}\,\mbox{\rm
d}t_{i}\wedge\mbox{\rm d}t_{j}.$
Single-time Poisson brackets are obtained in [6] by expanding the multi-time
Poisson bracket as
$\left\\{\left|{\textstyle\sum_{j}}F_{j}\,\mbox{\rm
d}t_{j},{\textstyle\sum_{j}}G_{j}\,\mbox{\rm
d}t_{j}\right|\right\\}=\sum_{j}\\{F_{j},G_{j}\\}_{j}\,\mbox{\rm d}t_{j}$
where
$\\{f,g\\}_{j}=-\xi_{f}^{j}\lrcorner\,\delta
g\qquad\text{and}\qquad\xi_{f}^{j}\lrcorner\,\omega_{j}=\delta f.$ (30)
These are fundamentally different from the Poisson brackets of Sections
4.3–4.5 because they act on different function spaces. Equation (30) assumes
that $\delta f$ lies in the image of $\omega_{j}$ (considered as a map from
vertical vector fields to vertical one-forms). For example, for the potential
KdV hierarchy one has $\omega_{1}=\delta v\wedge\delta v_{1}$, hence the
Poisson bracket $\\{\cdot,\cdot\\}_{1}$ can only be applied to functions of
$v$ and $v_{1}$, not to functions depending on any higher derivatives. Similar
conditions on the function space apply to the higher Poisson brackets
corresponding to $\omega_{j}$, $j\geq 2$. On the other hand, the Poisson
brackets of Sections 4.3–4.5 are defined on an equivalence class of functions
modulo certain derivatives, without further restrictions on the functions in
this class.
In summary, the single-time Poisson brackets of [6] are constructed with a
certain elegance in a covariant way, but they are defined only in a restricted
function space. They are different from our Poisson brackets of Section
4.3–4.5, which have no such restrictions, but break covariance already in the
definition of the function space as an equivalence class. It is not clear how
to pass from one picture to the other, or if their respective benefits can be
combined into a single approach.
## 5 Conclusions
We have established a connection between pluri-Lagrangian systems and
integrable Hamiltonian hierarchies. In the case of ODEs, where the pluri-
Lagrangian structure is a 1-form, this connection was already obtained in
[26]. Our main contribution is its generalization to the case of 2-dimensional
PDEs, described by Lagrangian 2-forms. Presumably, this approach extends to
Lagrangian $d$-forms of any dimension $d$, but the details of this are
postponed to future work.
A central property in the theory of pluri-Lagrangian systems is that the
Lagrangian form is (almost) closed on solutions. We showed that closedness is
equivalent to the corresponding Hamilton functions being in involution.
Although one can obtain several Poisson brackets (and corresponding Hamilton
functions) from one Lagrangian 2-form, these do not form a bi-Hamiltonian
structure and it is not clear if a recursion operator can be obtained from
them. Hence it remains an open question to find a fully variational
description of bi-Hamiltonian hierarchies.
### Acknowledgements
The author would like to thank Frank Nijhoff for his inspiring questions and
comments, Matteo Stoppato for helpful discussions about the covariant
Hamiltonian approach, Yuri Suris for his constructive criticism on early
drafts of this paper, and the anonymous referees for their thoughtful
comments.
The author is funded by DFG Research Fellowship VE 1211/1-1. Part of the work
presented here was done at TU Berlin, supported by the SFB Transregio 109
“Discretization in Geometry and Dynamics”.
## Appendix A Pluri-Lagrangian systems and the variational bicomplex
In this appendix we study the pluri-Lagrangian principle using the variational
bicomplex, described in Section 4.1. We provide proofs that the multi-time
Euler-Lagrange equations from Section 2 are sufficient conditions for
criticality. Alternative proofs of this fact can be found in [28] and [23,
Appendix A].
###### Proposition A.1.
The field $u$ is a solution to the pluri-Lagrangian problem of a $d$-form
$\mathcal{L}\llbracket u\rrbracket$ if locally there exists a $(1,d-1)$-form
$\Theta$ such that $\delta\mathcal{L}\llbracket u\rrbracket=\mbox{\rm
d}\Theta$.
* Proof.
Consider a field $u$ for which such a $(1,d-1)$-form $\Theta$ exists. Consider
any $d$-manifold $\Gamma$ and any variation $v$ that vanishes (along with all
its derivatives) on the boundary $\partial\Gamma$. Note that the horizontal
exterior derivative d anti-commutes with the interior product operator
$\iota_{V}$, where $V$ is the prolonged vertical vector field
$V=\operatorname{pr}(v\partial/\partial_{u})$ defined by the variation $v$. It
follows that
$\int_{\Gamma}\iota_{V}\delta\mathcal{L}=-\int_{\Gamma}\mbox{\rm
d}\left(\iota_{V}\Theta\right)=-\int_{\partial\Gamma}\iota_{V}\Theta=0,$
hence $u$ solves the pluri-Lagrangian problem. ∎
If we are dealing with a classical Lagrangian problem from mechanics,
$\mathcal{L}=L(u,u_{t})\,\mbox{\rm d}t$, we have
$\Theta=-\frac{\partial{L}}{\partial{u_{t}}}\delta u$, which is the pullback
to the tangent bundle of the canonical 1-form $\sum_{i}p_{i}\,\mbox{\rm
d}q_{i}$ on the cotangent bundle.
Often we want the Lagrangian form to be closed when evaluated on solutions. As
we saw in Theorems 3.7 and 4.11, this implies that the corresponding
Hamiltonians are in involution. We did not include this in the definition of a
pluri-Lagrangian system, because our definition already implies a slightly
weaker property.
###### Proposition A.2.
The horizontal exterior derivative $\mbox{\rm d}\mathcal{L}$ of a pluri-
Lagrangian form is constant on connected components of the set of critical
fields for $\mathcal{L}$.
* Proof.
Critical points satisfy locally
$\delta\mathcal{L}=\mbox{\rm d}\Theta\qquad\Rightarrow\qquad\mbox{\rm
d}\delta\mathcal{L}=0\qquad\Rightarrow\qquad\delta\mbox{\rm d}\mathcal{L}=0.$
Hence for any variation $v$ the Lie derivative of $\mbox{\rm d}\mathcal{L}$
along its prolongation $V=\operatorname{pr}(v\partial/\partial_{u})$ is
$\iota_{V}\delta(\mbox{\rm d}\mathcal{L})=0$. Therefore, if a solution $u$ can
be continuously deformed into another solution $\bar{u}$, then $\mbox{\rm
d}\mathcal{L}\llbracket u\rrbracket=\mbox{\rm
d}\mathcal{L}\llbracket\bar{u}\rrbracket$. ∎
Now let us prove the sufficiency of the multi-time Euler-Lagrange equations
for 1-forms and 2-forms, as given in Theorems 2.2 and 2.4. For different
approaches to the multi-time Euler-Lagrange equations, including proofs of
necessity, see [28] and [23].
* Proof of sufficiency in Theorem 2.2.
We calculate the vertical exterior derivative $\delta\mathcal{L}$ of the
Lagrangian 1-form, modulo the multi-time Euler-Lagrange Equations (1) and (2).
We have
$\displaystyle\delta\mathcal{L}$
$\displaystyle=\sum_{j=1}^{N}\sum_{I}\frac{\partial{\mathcal{L}_{j}}}{\partial{u_{I}}}\,\delta
u_{I}\wedge\mbox{\rm d}t_{j}$
$\displaystyle=\sum_{j=1}^{N}\sum_{I}\left(\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{I}}}+\partial_{j}\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{It_{j}}}}\right)\delta
u_{I}\wedge\mbox{\rm d}t_{j}.$
Rearranging this sum, we find
$\delta\mathcal{L}=\sum_{j=1}^{N}\left[\sum_{I\not\ni
t_{j}}\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{I}}}\delta
u_{I}\wedge\mbox{\rm
d}t_{j}+\sum_{I}\left(\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{It_{j}}}}\delta
u_{It_{j}}\wedge\mbox{\rm
d}t_{j}+\left(\partial_{j}\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{It_{j}}}}\right)\delta
u_{I}\wedge\mbox{\rm d}t_{j}\right)\right].$
On solutions of Equation (2), we can define the generalized momenta
$p^{I}=\frac{\delta_{j}{\mathcal{L}_{j}}}{\delta{u_{It_{j}}}}.$
Using Equations (1) and (2) it follows that
$\displaystyle\delta\mathcal{L}$
$\displaystyle=\sum_{j=1}^{N}\sum_{I}\left(p^{I}\delta
u_{It_{j}}\wedge\mbox{\rm d}t_{j}+\left(\partial_{j}p^{I}\right)\delta
u_{I}\wedge\mbox{\rm d}t_{j}\right)=-\mbox{\rm d}\left(\sum_{I}p^{I}\delta u_{I}\right).$
This implies by Proposition A.1 that $u$ solves the pluri-Lagrangian problem.
∎
* Proof of sufficiency in Theorem 2.4.
We calculate the vertical exterior derivative $\delta\mathcal{L}$,
$\displaystyle\delta\mathcal{L}$
$\displaystyle=\sum_{i<j}\sum_{I}\frac{\partial{\mathcal{L}_{ij}}}{\partial{u_{I}}}\,\delta
u_{I}\wedge\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j}$
$\displaystyle=\sum_{i<j}\sum_{I}\left(\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{I}}}+\partial_{i}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}}}}+\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}+\partial_{i}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}\right)\delta
u_{I}\wedge\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j}$ (31)
We will rearrange this sum according to the times occurring in the multi-index
$I$. We have
$\displaystyle\sum_{I}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{I}}}\,\delta
u_{I}=\sum_{I\not\ni
t_{i},t_{j}}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{I}}}\,\delta
u_{I}+\sum_{I\not\ni
t_{j}}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}}}}\,\delta
u_{It_{i}}$ $\displaystyle\hskip 85.35826pt+\sum_{I\not\ni
t_{i}}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}\,\delta
u_{It_{j}}+\sum_{I}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}\,\delta
u_{It_{i}t_{j}},$
$\displaystyle\sum_{I}\partial_{i}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}}}}\,\delta
u_{I}=\sum_{I\not\ni
t_{j}}\partial_{i}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}}}}\,\delta
u_{I}+\sum_{I}\partial_{i}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}\,\delta
u_{It_{j}},$
$\displaystyle\sum_{I}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}\,\delta
u_{I}=\sum_{I\not\ni
t_{i}}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}\,\delta
u_{I}+\sum_{I}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}\,\delta
u_{It_{i}}.$
Modulo the multi-time Euler-Lagrange equations (5)–(7), we can write these
expressions as
$\displaystyle\sum_{I}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{I}}}\,\delta
u_{I}=\sum_{I\not\ni t_{j}}p_{j}^{I}\,\delta u_{It_{i}}-\sum_{I\not\ni
t_{i}}p_{i}^{I}\,\delta u_{It_{j}}+\sum_{I}(n_{j}^{I}-n_{i}^{I})\,\delta
u_{It_{i}t_{j}},$
$\displaystyle\sum_{I}\partial_{i}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}}}}\,\delta
u_{I}=\sum_{I\not\ni t_{j}}\partial_{i}p_{j}^{I}\,\delta
u_{I}+\sum_{I}\partial_{i}(n_{j}^{I}-n_{i}^{I})\,\delta u_{It_{j}},$
$\displaystyle\sum_{I}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{j}}}}\,\delta
u_{I}=\sum_{I\not\ni t_{i}}-\partial_{j}p_{i}^{I}\,\delta
u_{I}+\sum_{I}\partial_{j}(n_{j}^{I}-n_{i}^{I})\,\delta u_{It_{i}},$
$\displaystyle\sum_{I}\partial_{i}\partial_{j}\frac{\delta_{ij}{\mathcal{L}_{ij}}}{\delta{u_{It_{i}t_{j}}}}\,\delta
u_{I}=\sum_{I}\partial_{i}\partial_{j}(n_{j}^{I}-n_{i}^{I})\delta u_{I}.$
where
$\displaystyle
p_{j}^{I}=\frac{\delta_{1j}{\mathcal{L}_{1j}}}{\delta{u_{It_{1}}}}\qquad\text{for
}I\not\ni t_{j},$ $\displaystyle
n_{j}^{I}=\frac{\delta_{1j}{\mathcal{L}_{1j}}}{\delta{u_{It_{1}t_{j}}}}.$
Note that here the indices of $p$ and $n$ are labels, not derivatives. Hence
on solutions to equations (5)–(7), Equation (31) is equivalent to
$\displaystyle\delta\mathcal{L}$
$\displaystyle=\sum_{i<j}\Bigg{[}\sum_{I\not\ni t_{j}}\left(p_{j}^{I}\,\delta
u_{It_{i}}+\partial_{i}p_{j}^{I}\,\delta u_{I}\right)-\sum_{I\not\ni
t_{i}}\left(p_{i}^{I}\,\delta u_{It_{j}}+\partial_{j}p_{i}^{I}\,\delta
u_{I}\right)$ $\displaystyle\hskip
42.67912pt+\sum_{I}\Big{(}(n_{j}^{I}-n_{i}^{I})\,\delta
u_{It_{i}t_{j}}+\partial_{j}(n_{j}^{I}-n_{i}^{I})\,\delta u_{It_{i}}$
$\displaystyle\hskip 85.35826pt+\partial_{i}(n_{j}^{I}-n_{i}^{I})\,\delta
u_{It_{j}}+\partial_{i}\partial_{j}(n_{j}^{I}-n_{i}^{I})\,\delta
u_{I}\Big{)}\Bigg{]}\wedge\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j}.$
Using the anti-symmetry of the wedge product, we can write this as
$\displaystyle\delta\mathcal{L}$
$\displaystyle=\sum_{i,j=1}^{N}\Bigg{[}\sum_{I\not\ni
t_{j}}\left(p_{j}^{I}\,\delta u_{It_{i}}+\partial_{i}p_{j}^{I}\,\delta
u_{I}\right)$ $\displaystyle\hskip 42.67912pt+\sum_{I}\Big{(}n_{j}^{I}\,\delta
u_{It_{i}t_{j}}+\partial_{j}n_{j}^{I}\,\delta
u_{It_{i}}+\partial_{i}n_{j}^{I}\,\delta
u_{It_{j}}+\partial_{i}\partial_{j}n_{j}^{I}\,\delta
u_{I}\Big{)}\Bigg{]}\wedge\mbox{\rm d}t_{i}\wedge\mbox{\rm d}t_{j}$
$\displaystyle=\sum_{j=1}^{N}\Bigg{[}\sum_{I\not\ni t_{j}}-\mbox{\rm
d}\left(p_{j}^{I}\,\delta u_{I}\wedge\mbox{\rm
d}t_{j}\right)+\sum_{I}-\mbox{\rm d}\left(n_{j}^{I}\,\delta u_{It_{j}}\wedge\mbox{\rm
d}t_{j}+\partial_{j}n_{j}^{I}\,\delta u_{I}\wedge\mbox{\rm
d}t_{j}\right)\Bigg{]}.$
It now follows by Proposition A.1 that $u$ is a critical field. ∎
## References
* Anderson [1992] Anderson I. M. Introduction to the variational bicomplex. In Gotay M., Marsden J. & Moncrief V., editors, _Mathematical Aspects of Classical Field Theory_ , pages 51–73. AMS, 1992.
* Bergvelt and De Kerf [1985] Bergvelt M. J. & De Kerf E. A. Poisson brackets for Lagrangians linear in the velocity. Letters in Mathematical Physics, 10 : 13–19, 1985.
* Bobenko and Suris [2010] Bobenko A. I. & Suris Yu. B. On the Lagrangian structure of integrable quad-equations. Letters in Mathematical Physics, 92 : 17–31, 2010.
* Bobenko and Suris [2015] Bobenko A. I. & Suris Yu. B. Discrete pluriharmonic functions as solutions of linear pluri-Lagrangian systems. Communications in Mathematical Physics, 336 : 199–215, 2015.
* Boll et al. [2014] Boll R., Petrera M. & Suris Yu. B. What is integrability of discrete variational systems? Proceedings of the Royal Society A, 470 : 20130550, 2014.
* Caudrelier and Stoppato [2020] Caudrelier V. & Stoppato M. Hamiltonian multiform description of an integrable hierarchy. Journal of Mathematical Physics, 61 : 123506, 2020.
* De Sole and Kac [2013] De Sole A. & Kac V. G. Non-local Poisson structures and applications to the theory of integrable systems. Japanese Journal of Mathematics, 8 : 233–347, 2013.
* Dickey [2003] Dickey L. A. Soliton Equations and Hamiltonian Systems. World Scientific, 2nd edition, 2003.
* Dorfman [1987] Dorfman I. Y. Dirac structures of integrable evolution equations. Physics Letters A, 125 : 240–246, 1987.
* Gardner [1971] Gardner C. S. Korteweg-de Vries equation and generalizations. IV. the Korteweg-de Vries equation as a Hamiltonian system. Journal of Mathematical Physics, 12 : 1548–1551, 1971.
* Gotay [1988] Gotay M. J. A multisymplectic approach to the KdV equation. In _Differential Geometrical Methods in Theoretical Physics_ , pages 295–305. Springer, 1988.
* Hietarinta et al. [2016] Hietarinta J., Joshi N. & Nijhoff F. W. Discrete Systems and Integrability. Cambridge University Press, Cambridge, 2016.
* Kanatchikov [1998] Kanatchikov I. V. Canonical structure of classical field theory in the polymomentum phase space. Reports on Mathematical Physics, 41 : 49–90, 1998.
* Lobb and Nijhoff [2009] Lobb S. & Nijhoff F. Lagrangian multiforms and multidimensional consistency. Journal of Physics A: Mathematical and Theoretical, 42 : 454013, 2009.
* Macfarlane [1982] Macfarlane A. J. Equations of Korteweg-de Vries type I: Lagrangian and Hamiltonian formalism. Technical Report TH-3289, CERN. http://cds.cern.ch/record/137678, 1982.
* Mokhov [1998] Mokhov O. I. Symplectic and Poisson structures on loop spaces of smooth manifolds, and integrable systems. Russian Mathematical Surveys, 53 : 515, 1998.
* Olver [2000] Olver P. J. Applications of Lie Groups to Differential Equations. Volume 107 of _Graduate Texts in Mathematics_. Springer, 2nd edition, 2000.
* Petrera and Suris [2017] Petrera M. & Suris Yu. B. Variational symmetries and pluri-Lagrangian systems in classical mechanics. Journal of Nonlinear Mathematical Physics, 24 (Sup. 1) : 121–145, 2017.
* Petrera and Vermeeren [2020] Petrera M. & Vermeeren M. Variational symmetries and pluri-Lagrangian structures for integrable hierarchies of PDEs. European Journal of Mathematics, 2020.
* Rowley and Marsden [2002] Rowley C. W. & Marsden J. E. Variational integrators for degenerate Lagrangians, with application to point vortices. In _Proceedings of the 41st IEEE Conference on Decision and Control, 2002._ , pages 1521–1527. IEEE, 2002.
* Sleigh et al. [2019a] Sleigh D., Nijhoff F. & Caudrelier V. A variational approach to Lax representations. Journal of Geometry and Physics, 142 : 66–79, 2019a.
* Sleigh et al. [2019b] Sleigh D., Nijhoff F. & Caudrelier V. Variational symmetries and Lagrangian multiforms. Letters in Mathematical Physics : 1–22, 2019b.
* Sleigh et al. [2020] Sleigh D., Nijhoff F. & Caudrelier V. Lagrangian multiforms for Kadomtsev-Petviashvili (KP) and the Gelfand-Dickey hierarchy. arXiv:2011.04543, 2020.
* Sridhar and Suris [2019] Sridhar A. & Suris Yu. B. Commutativity in Lagrangian and Hamiltonian mechanics. Journal of Geometry and Physics, 137 : 154–161, 2019.
* Suris [2003] Suris Yu. B. The Problem of Integrable Discretization: Hamiltonian Approach. Birkhäuser, 2003.
* Suris [2013] Suris Yu. B. Variational formulation of commuting Hamiltonian flows: Multi-time Lagrangian 1-forms. Journal of Geometric Mechanics, 5 : 365–379, 2013.
* Suris [2016] Suris Yu. B. Variational symmetries and pluri-Lagrangian systems. In _Dynamical Systems, Number Theory and Applications: A Festschrift in Honor of Armin Leutbecher’s 80th Birthday_ , pages 255–266. World Scientific, 2016.
* Suris and Vermeeren [2016] Suris Yu. B. & Vermeeren M. On the Lagrangian structure of integrable hierarchies. In Bobenko A. I., editor, _Advances in Discrete Differential Geometry_ , pages 347–378. Springer, 2016.
* Vermeeren [2019a] Vermeeren M. Continuum limits of pluri-Lagrangian systems. Journal of Integrable Systems, 4 : xyy020, 2019a.
* Vermeeren [2019b] Vermeeren M. A variational perspective on continuum limits of ABS and lattice GD equations. SIGMA, 15 : 044, 2019b.
* Wilson [1988] Wilson G. On the quasi-Hamiltonian formalism of the KdV equation. Physics Letters A, 132 : 445–450, 1988.
* Xenitidis et al. [2011] Xenitidis P., Nijhoff F. & Lobb S. On the Lagrangian formulation of multidimensionally consistent systems. Proceedings of the Royal Society A, 467 : 3295–3317, 2011.
* Yoo-Kong et al. [2011] Yoo-Kong S., Lobb S. & Nijhoff F. Discrete-time Calogero-Moser system and Lagrangian 1-form structure. Journal of Physics A: Mathematical and Theoretical, 44 : 365203, 2011.
# FuDGE: A Method to Estimate a Functional Differential Graph in a High-Dimensional Setting
Boxin Zhao<EMAIL_ADDRESS>
Booth School of Business
The University of Chicago
Chicago, IL 60637, USA
Y. Samuel Wang<EMAIL_ADDRESS>
Department of Statistics and Data Science
Cornell University
Ithaca, NY 14853, USA
Mladen Kolar<EMAIL_ADDRESS>
Booth School of Business
The University of Chicago
Chicago, IL 60637, USA
###### Abstract
We consider the problem of estimating the difference between two functional
undirected graphical models with shared structures. In many applications, data
are naturally regarded as a vector of random functions rather than a vector of
scalars. For example, electroencephalography (EEG) data are more appropriately
treated as functions of time. In such a problem, not only can the number of
functions measured per sample be large, but each function is itself an
infinite dimensional object, making estimation of model parameters
challenging. This is further complicated by the fact that the curves are
usually only observed at discrete time points. We first define a functional
differential graph that captures the differences between two functional
graphical models and formally characterize when the functional differential
graph is well defined. We then propose a method, FuDGE, that directly
estimates the functional differential graph without first estimating each
individual graph. This is particularly beneficial in settings where the
individual graphs are dense, but the differential graph is sparse. We show
that FuDGE consistently estimates the functional differential graph even in a
high-dimensional setting for both fully observed and discretely observed
function paths. We illustrate the finite sample properties of our method
through simulation studies. We also propose a competing method, the Joint
Functional Graphical Lasso, which generalizes the Joint Graphical Lasso to the
functional setting. Finally, we apply our method to EEG data to uncover
differences in functional brain connectivity between a group of individuals
with alcohol use disorder and a control group.
Keywords: differential graph estimation, functional data analysis,
multivariate functional data, probabilistic graphical models, structure
learning
## 1 Introduction
We consider a setting where we observe two samples of multivariate functional
data, $X_{i}(t)$ for $i=1,\ldots,n_{X}$ and $Y_{i}(t)$ for $i=1,\ldots,n_{Y}$.
The primary goal is to determine if and how the underlying
populations—specifically their conditional dependency structures—differ. As a
motivating example, consider electroencephalography (EEG) data where the
electrical activity of multiple regions of the brain can be measured
simultaneously across a period of time. Given samples from the general
population, fitting a graphical model to the observed measurements would allow
a researcher to determine which regions of the brain are dependent after
conditioning on all other regions. The EEG data analyzed in Section 6.2
consists of two samples: one from a control group and the other from a group
of individuals with alcohol use disorder (AUD). Using this data, researchers
may be interested in explicitly comparing the two groups and investigating the
complex question of how brain functional connectivity patterns in the AUD
group differ from those in the control group.
The conditional independence structure within multivariate data is commonly
represented by a graphical model (Lauritzen, 1996). Let $G=\\{V,E\\}$ denote
an undirected graph where $V$ is the set of vertices with $|V|=p$ and
$E\subset V^{2}$ is the set of edges. At times, we also denote $V$ as
$[p]=\\{1,2,\dots,p\\}$. When the data consist of random vectors
$X=(X_{1},\dots,X_{p})^{\top}$, we say that $X$ satisfies the pairwise Markov
property with respect to $G$ if $X_{v}\centernot\perp\\!\\!\\!\perp
X_{w}\mid\\{X_{u}\\}_{u\in V\setminus\\{v,w\\}}$ holds if and only if
$\\{v,w\\}\in E$. When $X$ follows a multivariate Gaussian distribution with
covariance $\Sigma=\Theta^{-1}$, then $\Theta_{vw}\neq 0$ if and only if
$\\{v,w\\}\in E$. Thus, recovering the structure of an undirected graph from
multivariate Gaussian data is equivalent to estimating the support of the
precision matrix, $\Theta$.
When the primary interest is in characterizing the difference between the
conditional independence structure of two populations, the object of interest
may be the _differential graph_ , $G_{\Delta}=\\{V,E_{\Delta}\\}$. When $X$
and $Y$ follow multivariate normal distributions with covariance matrices
$\Sigma^{X}$ and $\Sigma^{Y}$, let $\Delta=\Theta^{X}-\Theta^{Y}$, where
$\Theta^{X}=(\Sigma^{X})^{-1}$ and $\Theta^{Y}=(\Sigma^{Y})^{-1}$ are the
precision matrices of $X$ and $Y$ respectively. The differential graph is then
defined by letting $E_{\Delta}=\left\\{\\{v,w\\}\,:\,\Delta_{v,w}\neq
0\right\\}$. This type of differential model for vector-valued data has been
adopted in Zhao et al. (2014), Xu and Gu (2016), and Cai (2017).
In the motivating example of EEG data, the electrical activity is observed
over a period of time. When measurements smoothly vary over time, it may be
more natural to consider the observations as arising from an underlying
function. This is particularly true when data from different subjects are
observed at different time points. Furthermore, when characterizing
conditional independence, it is likely that the activity of each region
depends not only on what is occurring simultaneously in the other regions, but
also on what has previously occurred in other regions; this suggests that a
functional graphical model might be appropriate.
In this paper, we define a differential graph for functional data that we
refer to as a functional differential graphical model. Similar to differential
graphs for vector-valued data, functional differential graphical models
characterize the differences in the conditional dependence structures of two
distributions of multivariate curves. We build on the functional graphical
model developed in Qiao et al. (2019). However, while Qiao et al. (2019)
required that the observed functions lie in a finite-dimensional space in
order for the functional graphical model to be well defined, the functional
differential graphical models may be well defined even in certain cases where
the observed functions live in an infinite-dimensional space.
We propose an algorithm called FuDGE to estimate the differential graph and
show that this procedure enjoys many benefits, similar to differential graph
estimation in the vector-valued setting. Most notably, we show that under
suitable conditions, the proposed method can consistently recover the
differential graph even in the high-dimensional setting where $p$, the number
of observed variables, may be larger than $n$, the number of observed samples.
A conference version of this paper was presented at the Conference on Neural
Information Processing Systems (Zhao et al., 2019). Compared to the conference
version, this paper includes the following new results:
* •
We give a new definition for a differential graph for functional data, which
allows us to circumvent the unnatural assumption made in the previous version
and take a truly functional approach. Specifically, instead of defining the
differential graph based on the difference between conditional covariance
functions, we use the limit of the norm of the difference between finite-
dimensional precision matrices.
* •
We include new theoretical guarantees for discretely observed curves. In
practice, we can only observe the functions at discrete time points, so this
extends the theoretical guarantees to a practical estimation procedure.
Discrete observations bring an additional source of error when the estimated
curves are used in functional PCA. In Theorem 4, we give an error bound for
estimating the covariance matrix of the PCA score vectors under mild
conditions.
* •
We introduce the Joint Functional Graphical Lasso, which is a generalization
of the Joint Graphical Lasso (Danaher et al., 2014) to the functional data
setting. Empirically, we show that the procedure performs competitively in
some settings, but is generally outperformed by the FuDGE procedure.
The software implementation can be found at https://github.com/boxinz17/FuDGE.
The repository also contains the code to reproduce the simulation results.
### 1.1 Related Work
The work we develop lies at the intersection of two different lines of
literature: graphical models for functional data and direct estimation of
differential graphs.
There are many previous works studying the structure estimation of a static
undirected graphical model (Chow and Liu, 1968; Yuan and Lin, 2007; Cai et
al., 2011; Meinshausen and Bühlmann, 2006; Yu et al., 2016, 2019; Vogel and
Fried, 2011). Previous methods have also been proposed for characterizing
conditional independence for multivariate observations recorded over time. For
example, Talih and Hengartner (2005), Xuan and Murphy (2007), Ahmed and Xing
(2009), Song et al. (2009a), Song et al. (2009b), Kolar et al. (2010b), Kolar
et al. (2009), Kolar and Xing (2009), Zhou et al. (2010), Yin et al. (2010),
Kolar et al. (2010a), Kolar and Xing (2011), Kolar and Xing (2012), Wang and
Kolar (2014), Lu et al. (2018) studied methods for dynamic graphical models
that assume the data are independently sampled at different time points, but
generated by related distributions. In these works, the authors proposed
procedures to estimate a series of graphs which represent the conditional
independence structure at each time point; however, they assume the observed
data does not encode “longitudinal” dependence. In contrast, Qiao et al.
(2019); Zhu et al. (2016); Li and Solea (2018); Zhang et al. (2018) considered
the setting where the data are multivariate random functions. Most
similar to the setting we consider, Qiao et al. (2019) assumed that the data
are distributed as a multivariate Gaussian process (MGP) and use a graphical
lasso type procedure on the estimated functional principal component scores.
Zhu et al. (2016) also assumed an MGP, but proposed a Bayesian procedure.
Crucially, however, both required that the covariance kernel can essentially
be represented by a finite dimensional object. Zapata et al. (2019) showed
that under various notions of separability—roughly when the covariance kernel
can be decomposed into covariance across time and covariance across nodes—the
conditional independence of the MGP is well defined even when the functional
data are truly infinite dimensional and that the conditional independence
graph can be recovered by the union of a countable (potentially infinite)
number of graphs over finite dimensional objects. In a different approach, Li
and Solea (2018) did not assume that the random functions are Gaussian, and
instead used the notion of additive conditional independence to define a
graphical model for the random functions. Finally, Qiao et al. (2020) also
assumed that the data are random functions, but also allowed for the
dependency structure to change smoothly across time—similar to a dynamic
graphical model.
We also draw on recent literature which has shown that when the object of
interest is the difference between two distributions, directly estimating the
difference can provide improvements over first estimating each distribution
and then taking the difference. Most notably, when estimating the difference
in graphs in the high-dimensional setting, even if each individual graph does
not satisfy the appropriate sparsity conditions, the differential graph may
still be recovered consistently. Zhao et al. (2014) considered data drawn from
two Gaussian graphical models, and they showed that even if both underlying
graphs are dense, if the difference between the precision matrices of each
distribution is sparse, the differential graph can still be recovered in the
high-dimensional setting. Liu et al. (2014) proposed a procedure based on KLIEP
(Sugiyama et al., 2008) that estimates the differential graph by directly
modeling the ratio of two densities. They did not assume Gaussianity, but
required that both distributions lie in some exponential family. Fazayeli and
Banerjee (2016) extended this idea to estimate the differences in Ising
models. Wang et al. (2018) and Ghoshal and Honorio (2019) also proposed direct
difference estimators for directed graphs when the data are generated by
linear structural equation models that share a common topological ordering.
### 1.2 Notation
Let $|\cdot|_{p}$ denote the vector $p$-norm and $\|\cdot\|_{p}$ denote the
matrix/operator $p$-norm. For example, for a $p\times 1$ vector
$a=(a_{1},a_{2},\dots,a_{p})^{\top}$, we have $|a|_{1}=\sum_{j}|a_{j}|$,
$|a|_{2}=(\sum_{j}a^{2}_{j})^{1/2}$ and $|a|_{\infty}=\max_{j}|a_{j}|$. For
a $p\times{q}$ matrix $A$ with entries $a_{jk}$, $|A|_{1}=\sum_{j,k}|a_{jk}|$,
$\|A\|_{1}=\max_{k}\sum_{j}|a_{jk}|$, $|A|_{\infty}=\max_{j,k}|a_{jk}|$, and
$\|A\|_{\infty}=\max_{j}\sum_{k}|a_{jk}|$. Let $\left\lVert
A\right\rVert_{\text{F}}=(\sum_{j,k}a^{2}_{jk})^{1/2}$ be the Frobenius norm
of $A$. When $A$ is symmetric, let $\mathrm{tr}(A)=\sum_{j}a_{jj}$ denote the
trace of A. Let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum
and maximum eigenvalues, respectively. Let $a_{n}\asymp{b_{n}}$ denote that
$0<C_{1}\leq{\inf_{n}|a_{n}/b_{n}|}\leq{\sup_{n}|a_{n}/b_{n}|}\leq
C_{2}<\infty$ for some positive constants $C_{1}$ and $C_{2}$.
We assume that all random functions belong to a separable Hilbert space
$\mathbb{H}$. For any two functions $f_{1},f_{2}\in\mathbb{H}$, we define
their inner product as $\langle f_{1},f_{2}\rangle=\int f_{1}(t)f_{2}(t)dt$.
The induced norm is $\|f_{1}\|=\|f_{1}\|_{\mathcal{L}^{2}}=\\{\int
f_{1}^{2}(t)dt\\}^{1/2}$.
For a function vector $f(t)=(f_{1}(t),f_{2}(t),\dots,f_{p}(t))^{\top}$, we let
$\|f\|_{\mathcal{L}^{2},2}=(\sum^{p}_{j=1}\|f_{j}\|^{2})^{1/2}$ denote its
$\mathcal{L}^{2},2$-norm. For a bivariate function $g(s,t)$, we define the
Hilbert-Schmidt norm of $g(s,t)$ as
$\|g\|_{\text{HS}}=\\{\int\int g^{2}(s,t)\,ds\,dt\\}^{1/2}$. Typically, we will use
$f(\cdot)$ (and similarly $g(\cdot,*)$) to denote the entire function $f$,
while we use $f(t)$ (and similarly $g(s,t)$) to mean the value of $f$
evaluated at $t$.
For a vector space $\mathbb{V}$, we use $\mathbb{V}^{\bot}$ to denote its
orthogonal complement. For $v_{1},\ldots,v_{K}\in\mathbb{V}$, and
$v=(v_{1},\ldots,v_{K})^{\top}$, we use ${\rm
Span}\left\\{v_{1},v_{2},\dots,v_{K}\right\\}={\rm Span}\left(v\right)$ to
denote the vector subspace spanned by $v_{1},\ldots,v_{K}$.
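Since the entrywise norms $|\cdot|_{p}$ and the operator norms $\|\cdot\|_{p}$ are easy to confuse, the following minimal numpy snippet (our illustration, not part of the paper's methodology) evaluates each of them on a small matrix.

```python
import numpy as np

# Illustrating the notation above on a small matrix.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

entrywise_l1  = np.abs(A).sum()              # |A|_1   = sum_{j,k} |a_jk| = 10
operator_l1   = np.abs(A).sum(axis=0).max()  # ||A||_1   = max column abs sum = 6
entrywise_max = np.abs(A).max()              # |A|_inf = max_{j,k} |a_jk| = 4
operator_inf  = np.abs(A).sum(axis=1).max()  # ||A||_inf = max row abs sum = 7
frobenius     = np.sqrt((A ** 2).sum())      # ||A||_F   = sqrt(30)
```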
## 2 Functional Differential Graphical Models
In this section, we give a review of functional graphical models and introduce
the notion of a functional differential graphical model.
### 2.1 Functional Graphical Model
Suppose
$X_{i}(\cdot)=\left(X_{i1}(\cdot),X_{i2}(\cdot),\dots,X_{ip}(\cdot)\right)^{\top}$
is a $p$-dimensional _multivariate Gaussian process (MGP)_ with mean zero and
common domain $\mathcal{T}$, where $\mathcal{T}$ is a closed interval of the
real line with length $\lvert\mathcal{T}\rvert$. (We assume mean zero and a
common domain $\mathcal{T}$ to simplify the notation, but the methodology and
theory generalize to non-zero means and different time domains.) The
observations $X_{i}(\cdot)$, $i=1,2,\ldots,n$, are i.i.d. In addition, assume that for
$j\in V$, $X_{ij}(\cdot)$ is a random element of a separable Hilbert space
$\mathbb{H}$. Qiao et al. (2019) define the conditional cross-covariance
function for $X_{i}(\cdot)$ as
${}C^{X}_{jl}(s,t)\;=\;\mathrm{Cov}\left(X_{ij}(s),X_{il}(t)\,\mid\,\\{X_{ik}(\cdot)\\}_{k\neq
j,l}\right).$ (1)
If $C^{X}_{jl}(s,t)=0$ for all $s,t\in\mathcal{T}$, then the random functions
$X_{j}(\cdot)$ and $X_{l}(\cdot)$ are conditionally independent given the
other random functions, and the graph $G_{X}=\\{V,E_{X}\\}$ represents the
pairwise Markov properties of $X_{i}(\cdot)$ if
$E_{X}=\left\\{(j,l)\,:\,j<l\text{ and }\|C^{X}_{jl}\|_{\text{HS}}\neq
0\right\\}.$ (2)
In general, we cannot directly estimate (2), since $X_{i}(\cdot)$ may be an
infinite dimensional object. Thus, before applying a statistical estimation
procedure, dimension reduction is typically required. Qiao et al. (2019) used
_functional principal component analysis_ (FPCA) to project each observed
function onto an orthonormal function basis defined by a finite number of
eigenfunctions. Their procedure then estimates the conditional independence
structure from the “projection scores” of this basis. We outline their
approach below. However, in contrast to Qiao et al. (2019), we do not restrict
ourselves to dimension reduction by projecting onto the FPCA basis, and in our
discussion we instead consider a general function subspace.
Let $\mathbb{V}^{M_{j}}_{j}\subseteq\mathbb{H}$ be a subspace of a separable
Hilbert space $\mathbb{H}$ with dimension $M_{j}\in\mathbb{N}^{+}$ for all
$j=1,2,\dots,p$. Our theory easily generalizes to the setting where $M_{j}$
may differ, but to simplify notation, we assume $M_{j}=M$ for all $j$ and
simply write $\mathbb{V}^{M}_{j}$ instead of $\mathbb{V}^{M_{j}}_{j}$. Let
$\mathbb{V}^{M}_{[p]}\coloneqq\mathbb{V}^{M}_{1}\otimes\mathbb{V}^{M}_{2}\otimes\dots\otimes\mathbb{V}^{M}_{p}$.
For any function $g(\cdot)\in\mathbb{H}$ and a subspace
$\mathbb{F}\subseteq\mathbb{H}$, let $\pi(g(\cdot);\mathbb{F})\in\mathbb{F}$
denote the projection of the function $g(\cdot)$ onto the subspace
$\mathbb{F}$, and let
$\pi(X_{i}(\cdot);\mathbb{V}^{M}_{[p]})=\left(\pi(X_{i1}(\cdot);\mathbb{V}^{M}_{1}),\pi(X_{i2}(\cdot);\mathbb{V}^{M}_{2}),\dots,\pi(X_{ip}(\cdot);\mathbb{V}^{M}_{p})\right)^{\top}.$
When the choice of the subspace is clear from the context, we will use the
following shorthand notation:
$X^{\pi}_{ij}(\cdot)=\pi(X_{ij}(\cdot);\mathbb{V}^{M}_{j})$, $j=1,2,\dots,p$,
and $X^{\pi}_{i}(\cdot)=\pi(X_{i}(\cdot);\mathbb{V}^{M}_{[p]})$.
Similar to the definitions in (1) and (2), we define the conditional
independence graph of $X^{\pi}(\cdot)$ as
$E^{\pi}_{X}=\left\\{\\{j,l\\}\,:\,j<l\text{ and
}\|C^{X,\pi}_{jl}\|_{\text{HS}}\neq 0\right\\},$ (3)
where
${}C^{X,\pi}_{jl}(s,t)\;=\;\mathrm{Cov}\left(X^{\pi}_{ij}(s),X^{\pi}_{il}(t)\,\mid\,\\{X^{\pi}_{ik}(\cdot)\\}_{k\neq
j,l}\right).$ (4)
Note that $E^{\pi}_{X}$ depends on the choice of $\mathbb{V}^{M}_{[p]}$
through the projection operator $\pi$, and as we discuss below, $E_{X}^{\pi}$
may be recovered from the observed samples.
When the data arise from an MGP, we can estimate the projected graphical
structure by studying the precision matrix of projection score vectors
(defined below) with _any_ orthonormal function basis of the subspace
$\mathbb{V}^{M}_{[p]}$. Let
$e^{M}_{j}=(e_{j1}(\cdot),e_{j2}(\cdot),\dots,e_{jM}(\cdot))^{\top}$ be any
orthonormal function basis of $\mathbb{V}^{M}_{j}$ and let
$e^{M}(\cdot)=\\{e^{M}_{j}\\}^{p}_{j=1}$ be an orthonormal function basis of
$\mathbb{V}^{M}_{[p]}$. Let
$a^{X}_{ijk}=\int_{\mathcal{T}}X_{ij}(t)e_{jk}(t)dt$
denote the projection score of $X_{ij}(\cdot)$ onto $e_{jk}(\cdot)$ and let
$\displaystyle
a^{X,M}_{ij}=(a^{X}_{ij1},a^{X}_{ij2},\dots,a^{X}_{ijM})^{\top}\;\text{ and
}\;a^{X,M}_{i}=((a^{X,M}_{i1})^{\top},\ldots,(a^{X,M}_{ip})^{\top})^{\top}\in{\mathbb{R}^{pM}}.$
Since $X_{i}(\cdot)$ is a $p$-dimensional MGP, $a^{X,M}_{i}$ follows a
multivariate Gaussian distribution and we denote the covariance matrix of that
distribution as $\Sigma^{X,M}=(\Theta^{X,M})^{-1}\in\mathbb{R}^{pM\times pM}$.
Each function $X_{ij}(\cdot)$ is associated with $M$ rows and columns of
$\Sigma^{X,M}$ corresponding to $a_{ij}^{X,M}$. We use $\Theta_{jl}^{X,M}$ to
refer to the $M\times M$ sub-matrix of $\Theta^{X,M}$ corresponding to
functions $X_{ij}(\cdot)$ and $X_{il}(\cdot)$. Lemma 1, from Qiao et al.
(2019), shows that the conditional independence structure of the projected
functional data can be obtained from the block sparsity of $\Theta^{X,M}$.
###### Lemma 1
[Qiao et al. (2019)] Let $\Theta^{X,M}$ denote the inverse covariance of the
projection scores. Then, $X^{\pi}_{ij}(s)\perp\\!\\!\\!\perp
X^{\pi}_{il}(t)\mid\\{X^{\pi}_{ik}(\cdot)\\}_{k\neq j,l}$ for all
$s,t\in{\cal T}$ if and only if $\Theta_{jl}^{X,M}\equiv 0$. (More precisely,
the conditional independence only needs to hold for all $s,t\in{\cal T}$
except for a subset of $\mathcal{T}^{2}$ with zero measure.) This implies that
$E^{\pi}_{X}$—as defined in (3)—can be equivalently defined as
$E^{\pi}_{X}\;=\;\left\\{\\{j,l\\}\,:\,j<l\text{ and
}\|\Theta^{X,M}_{jl}\|_{F}\neq 0\right\\}.$ (5)
While Qiao et al. (2019) only considered projections onto the span of the FPCA
basis (that is, the eigenfunctions of $X_{ij}(\cdot)$ corresponding to the $M$
largest eigenvalues), the result trivially extends to the more general case of
_any subspace_ and _any orthonormal function basis_ of that subspace.
Although $\Theta^{X,M}$ depends on the specific basis onto which
$X_{i}(\cdot)$ is projected, the edge set $E^{\pi}_{X}$ only depends on the
subspace $\mathbb{V}^{M}_{[p]}$, that is, the span of the basis onto which
$X_{i}(\cdot)$ is projected. Thus, Lemma 1 implies that although the entries
of $\Theta^{X,M}$ may change when using different orthonormal function bases
to represent $\mathbb{V}^{M}_{[p]}$, the block sparsity pattern of
$\Theta^{X,M}$ only depends on the span of the selected basis.
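To make the construction of the projection scores concrete, the following minimal sketch (ours; it assumes fully observed curves on a shared uniform grid, and the helper name `projection_scores` is hypothetical) approximates $a^{X}_{ijk}=\int_{\mathcal{T}}X_{ij}(t)e_{jk}(t)dt$ with trapezoidal quadrature weights.

```python
import numpy as np

def projection_scores(X, basis, t):
    """Approximate a_{ijk} = int X_{ij}(t) e_{jk}(t) dt with trapezoidal weights.

    X     : (n, p, T) curves evaluated on the uniform grid t
    basis : (p, M, T) orthonormal basis functions e_{jk} evaluated on t
    t     : (T,) uniform grid over the domain
    """
    dt = t[1] - t[0]
    w = np.full(t.shape, dt)
    w[0] = w[-1] = dt / 2.0          # trapezoidal quadrature weights
    return np.einsum('njt,jkt,t->njk', X, basis, w)

# Toy example: p = 3 nodes, M = 4 sine basis functions, n = 5 curves.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)
sines = np.stack([np.sqrt(2) * np.sin((k + 1) * np.pi * t) for k in range(4)])
basis = np.stack([sines] * 3)        # same basis for every node: (3, 4, 201)
X = rng.normal(size=(5, 3, 1)) * np.sin(np.pi * t)   # smooth toy curves
a = projection_scores(X, basis, t)   # (5, 3, 4); stack to get a_i in R^{pM}
```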
When $X_{i}(\cdot)\neq X^{\pi}_{i}(\cdot)$, $E^{\pi}_{X}$ may not be the same
as $E_{X}$; furthermore, it may not be the case that $E^{\pi}_{X}\subseteq
E_{X}$ or $E_{X}\subseteq E^{\pi}_{X}$. Thus, Condition 2 of Qiao et al.
(2019) requires a finite $M^{\star}<\infty$ such that $X_{ij}$ lies in
$\mathbb{V}^{M^{\star}}_{j}$ almost surely. When $M=M^{\star}$, then
$X_{i}(\cdot)=X_{i}^{\pi}(\cdot)$ and $E^{\pi}_{X}=E_{X}$. Under this
assumption, to estimate $E^{\pi}_{X}=E_{X}$, Qiao et al. (2019) proposed the
functional graphical lasso estimator (fglasso), which solves the following
objective:
$\hat{\Theta}^{X,M}=\operatorname*{arg\,max}_{\Theta^{X,M}}{\left\\{\log{\text{det}\left(\Theta^{X,M}\right)}-\mathrm{tr}\left(S^{X,M}\Theta^{X,M}\right)-\gamma_{n}\sum_{j\neq
l}{\left\lVert\Theta^{X,M}_{jl}\right\rVert_{\text{F}}}\right\\}}.$ (6)
In (6), $\Theta^{X,M}$ is a symmetric positive definite matrix,
$\Theta^{X,M}_{jl}\in\mathbb{R}^{M\times M}$ corresponds to the $(j,l)$ sub-
matrix of $\Theta^{X,M}$, $\gamma_{n}$ is a non-negative tuning parameter, and
$S^{X,M}$ is an estimator of $\Sigma^{X,M}$. The matrix $S^{X,M}$ is obtained
by using FPCA on the empirical covariance functions (see Section 2.3 for
details). The resulting estimated edge set for the functional graph is
$\hat{E}_{X}^{\pi}=\left\\{\\{j,l\\}\,:\,j<l\text{ and
}\left\lVert\hat{\Theta}^{X,M}_{jl}\right\rVert_{\text{F}}>0\right\\}.$ (7)
We also note that the objective in (6) was earlier used in Kolar et al. (2013)
and Kolar et al. (2014) for estimation of graphical models from multi-
attribute data.
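For illustration only, the thresholding step in (7)—reading the estimated edge set off the block structure of $\hat{\Theta}^{X,M}$—can be sketched as follows. This is not an implementation of the fglasso objective (6) itself; any group-penalized solver can supply the estimated precision matrix, and the helper name is ours.

```python
import numpy as np

def block_edge_set(Theta_hat, p, M, tol=1e-8):
    """Edge set (7): {j, l} is an edge iff the Frobenius norm of the
    (j, l)-th M x M block of the estimated precision matrix exceeds tol."""
    return [(j, l)
            for j in range(p) for l in range(j + 1, p)
            if np.linalg.norm(Theta_hat[j*M:(j+1)*M, l*M:(l+1)*M], 'fro') > tol]
```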
However, the requirement that $X_{i}(\cdot)$ lies in a subspace with finite
dimension may be violated in many practical applications and negates one of
the primary benefits of considering the observations as functions.
Unfortunately, the extension to infinite-dimensional data is nontrivial, and
indeed Condition 2 in Qiao et al. (2019) requires that the observed functional
data lies within a finite-dimensional span. To see why, we first note that
$\Sigma^{X,M^{\star}}$ is always a compact operator on
$\mathbb{R}^{pM^{\star}}$. Thus, as $M^{\star}\to\infty$, the smallest
eigenvalue of $\Sigma^{X,M^{\star}}$ will go to zero. As a consequence,
$\Sigma^{X,M^{\star}}$ becomes increasingly ill-conditioned, and
$\Theta^{X,M^{\star}}$, the inverse of $\Sigma^{X,M^{\star}}$ will become ill-
defined when $M^{\star}=\infty$. This behaviour makes the estimation of a
functional graphical model—at least through the basis expansion approach
proposed by Qiao et al. (2019)—generally infeasible for truly infinite-
dimensional functional data. When the data are truly infinite-dimensional, the
best we can do is to estimate a finite-dimensional approximation and hope that
it captures the relevant information.
### 2.2 Functional Differential Graphical Models: Finite Dimensional Setting
In this paper, instead of estimating the conditional independence structure of
a single MGP, we are interested in characterizing the difference between two
MGPs, $X$ and $Y$. For brevity, we will typically only explicitly define the
notation for $X$; however, the reader should infer that all notation for $Y$
is defined analogously. As described in the introduction, Li et al. (2007) and
Zhao et al. (2014) consider the setting where $X$ and $Y$ are multivariate
Gaussian vectors, and define the differential graph
$G_{\Delta}=\\{V,E_{\Delta}\\}$ by letting
$E_{\Delta}=\left\\{(v,w)\,:\,v<w\text{ and }\Delta_{vw}\neq 0\right\\}$ (8)
where $\Delta=(\Sigma^{X})^{-1}-(\Sigma^{Y})^{-1}$ and $\Sigma^{X},\Sigma^{Y}$
are the covariance matrices of $X$ and $Y$.
We extend this definition to the functional data setting and define functional
differential graphical models. To develop intuition, we start by
defining the differential graph with respect to their finite-dimensional
projections, that is, with respect to $X^{\pi}_{i}(t)$ and $Y^{\pi}_{i}(t)$
for some choice of $\mathbb{V}^{M}_{[p]}$. As implied by Lemma 1, in the
functional graphical model setting, the $M\times M$ blocks of the precision
matrix of the projection scores play a similar role to the individual entries
of a precision matrix in the vector-valued Gaussian graphical model setting.
Thus, we also define a functional differential graphical model by the
difference of the precision matrices of the projection scores. Note that for
each $j\in V$, we require that both $a^{X}_{ij}$ and $a^{Y}_{ij}$ are computed
by the same function basis of $\mathbb{V}^{M}_{j}$. Let
$\Theta^{X,M}=\left(\Sigma^{X,M}\right)^{-1}$ and
$\Theta^{Y,M}=\left(\Sigma^{Y,M}\right)^{-1}$ be the precision matrices for
the projection scores for $X$ and $Y$, respectively, where the inverse should
be understood as the pseudo-inverse when $\Sigma^{X,M}$ or $\Sigma^{Y,M}$ are
not invertible. The functional differential graphical model is defined as
$\Delta^{M}=\Theta^{X,M}-\Theta^{Y,M}.$ (9)
Let $\Delta^{M}_{jl}$ be the $(j,l)$-th $M\times M$ block of $\Delta^{M}$ and
define the edges of the functional differential graph of the projected data
as:
$E^{\pi}_{\Delta}\,=\,\left\\{(j,l)\,:\,j<l\text{ and
}\,\|\Delta^{M}_{jl}\|_{F}>0\right\\}.$ (10)
While the entries of $\Delta^{M}$ depend on the choice of orthonormal function
basis, the definition of $E^{\pi}_{\Delta}$ is invariant to the particular
basis and only depends on the span. The following lemma formally states this
result.
###### Lemma 2
Suppose that ${\rm span}(e^{M}(\cdot))={\rm span}(\tilde{e}^{M}(\cdot))$ for
two orthonormal bases $e^{M}(\cdot)$ and $\tilde{e}^{M}(\cdot)$. Let
$E_{\Delta}^{\pi}$ and $E_{\Delta}^{\tilde{\pi}}$ be defined by (10) when
projecting $X$ and $Y$ onto $e^{M}(\cdot)$ and $\tilde{e}^{M}(\cdot)$,
respectively. Then, $E_{\Delta}^{\pi}=E_{\Delta}^{\tilde{\pi}}$.
Proof See Appendix B.1.
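Lemma 2 is also easy to check numerically: changing the orthonormal basis of each $\mathbb{V}^{M}_{j}$ transforms the score precision matrices by a block-diagonal orthogonal congruence, which leaves every block Frobenius norm of $\Delta^{M}$ unchanged. A small sketch (ours, on synthetic covariances rather than estimates):

```python
import numpy as np

def block_norms(Delta, p, M):
    """p x p table of Frobenius norms of the M x M blocks of Delta."""
    return np.array([[np.linalg.norm(Delta[j*M:(j+1)*M, l*M:(l+1)*M], 'fro')
                      for l in range(p)] for j in range(p)])

rng = np.random.default_rng(1)
p, M = 3, 2
A = rng.normal(size=(p*M, p*M)); Sigma_X = A @ A.T + p*M*np.eye(p*M)
B = rng.normal(size=(p*M, p*M)); Sigma_Y = B @ B.T + p*M*np.eye(p*M)
Delta = np.linalg.inv(Sigma_X) - np.linalg.inv(Sigma_Y)

# A change of orthonormal basis for node j acts on its scores as a_j -> Q_j a_j,
# so Delta transforms by the block-diagonal orthogonal congruence Q Delta Q^T.
Q = np.zeros((p*M, p*M))
for j in range(p):
    Qj, _ = np.linalg.qr(rng.normal(size=(M, M)))   # random orthogonal M x M
    Q[j*M:(j+1)*M, j*M:(j+1)*M] = Qj

assert np.allclose(block_norms(Q @ Delta @ Q.T, p, M), block_norms(Delta, p, M))
```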
We have several comments regarding $E^{\pi}_{\Delta}$ defined in (10).
##### Projecting $X$ and $Y$ onto different subspaces:
While we project both $X$ and $Y$ onto the same subspace
$\mathbb{V}^{M}_{[p]}$, our definition can be easily generalized to a setting
where we project $X$ onto $\mathbb{V}^{X,M}_{[p]}$ and $Y$ onto
$\mathbb{V}^{Y,M}_{[p]}$, with
$\mathbb{V}^{X,M}_{[p]}\neq\mathbb{V}^{Y,M}_{[p]}$. For instance, naively
following the procedure of Qiao et al. (2019), we could perform FPCA on $X$
and $Y$ separately, and subsequently we could use the difference between the
precision matrices of projection scores to define the functional differential
graph. Although defining the functional differential graph using this
alternative approach may be suitable for some applications, it may result in
the undesirable case where $(j,l)\in E_{\Delta}^{\pi}$ even though
$C_{jl}^{X,\pi}(\cdot,*)=C_{jl}^{Y,\pi}(\cdot,*)$,
$C_{jj}^{X,\pi}(\cdot,*)=C_{jj}^{Y,\pi}(\cdot,*)$, and $C_{ll}^{\setminus
j,X,\pi}(\cdot,*)=C_{ll}^{\setminus j,Y,\pi}(\cdot,*)$. Therefore, we restrict
our discussion to the setting where both $X$ and $Y$ are projected onto the
same subspace.
##### Connection to Multi-Attribute Graphical Models:
The selection of a specific functional subspace is connected to multi-
attribute graphical models (Kolar et al., 2014). If we treat the random
function $X_{ij}(\cdot)$ as representing an infinite number of attributes,
then $X^{\pi}_{ij}(\cdot)$ will be an approximation using $M$ attributes. The
chosen attributes are given by the subspace $\mathbb{V}^{M}_{j}$. While we
allow different nodes to choose different attributes by allowing
$\mathbb{V}^{M}_{j}$ to vary across $j$, we require that the same attributes
are used to represent both $X$ and $Y$ by restricting $\mathbb{V}^{M}_{[p]}$
to be the same for $X$ and $Y$. The specific choice of $\mathbb{V}^{M}_{[p]}$
can extract different attributes from the data. For instance, using the
subspace spanned by the Fourier basis can be viewed as extracting frequency
information, while using the subspace spanned by the eigenfunctions—as
introduced in the next section—can be viewed as extracting the dominant modes
of variation.
Given definition (10) and Lemma 2, there are two main questions to be
answered: First, how do we choose $\mathbb{V}^{M}_{[p]}$? Second, what happens
when $X$ and $Y$ are infinite-dimensional? We answer the first question in
Section 2.3 and the second question in Section 2.4.
### 2.3 Choosing Functional Subspace via FPCA
As discussed in Section 2.2, the choice of $\mathbb{V}^{M}_{[p]}$ in
(10) determines—roughly speaking—the attributes or dimensions in which
we compare the conditional independence structures of $X$ and $Y$. In some
applications, we may have very good prior knowledge about this choice.
However, in many cases we may not have strong prior knowledge. In this
section, we describe our recommended “default choice” that uses FPCA on the
combined $X$ and $Y$ observations. In particular, suppose there exist
subspaces $\\{\mathbb{V}^{M^{\star}}_{j}\\}_{j\in V}$ such that
$\mathbb{V}^{M^{\star}}_{j}$ has dimension $M^{\star}<\infty$ and
$X_{ij}(t),Y_{ij}(t)\in\mathbb{V}^{M^{\star}}_{j}$ for all $j\in V$. Then,
FPCA—when given population values—recovers this subspace.
Similar to the way principal component analysis provides the $L_{2}$ optimal
lower dimensional representation of vector-valued data, FPCA provides the
$L_{2}$ optimal finite dimensional representation of functional data. Let
$K^{X}_{jj}(t,s)=\mathrm{Cov}(X_{ij}(t),X_{ij}(s))$ denote the covariance
function for $X_{ij}$ where $j\in V$. Then, there exist orthonormal
eigenfunctions and eigenvalues
$\\{\phi^{X}_{jk}(t),\lambda^{X}_{jk}\\}_{k\in\mathbb{N}}$ such that
$\int_{\mathcal{T}}K^{X}_{jj}(s,t)\phi_{jk}^{X}(t)dt=\lambda_{jk}^{X}\phi_{jk}^{X}(s)$
for all $k\in\mathbb{N}$ (Hsing and Eubank, 2015). Since $K^{X}_{jj}(s,t)$ is
symmetric and non-negative definite, we assume, without loss of generality,
that $\\{\lambda^{X}_{jk}\\}_{k\in\mathbb{N}^{+}}$ is non-negative and non-
increasing. By the Karhunen-Loève expansion (Hsing and Eubank, 2015,
Theorem 7.3.5), $X_{ij}(t)$ can be expressed as
$X_{ij}(t)=\sum_{k=1}^{\infty}a^{X}_{ijk}\phi^{X}_{jk}(t)$, where the
principal component scores satisfy
$a^{X}_{ijk}=\int_{\mathcal{T}}X_{ij}(t)\phi^{X}_{jk}(t)dt$ and
$a^{X}_{ijk}\sim N(0,\lambda_{jk}^{X})$ with $E(a^{X}_{ijk}a^{X}_{ijl})=0$ if
$k\neq l$. Because the eigenfunctions are orthonormal, the $L_{2}$ projection
of $X_{ij}$ onto the span of the first $M$ eigenfunctions is
$X^{M}_{ij}(t)=\sum_{k=1}^{M}a^{X}_{ijk}\phi^{X}_{jk}(t)$. Similarly, we can
define $K^{Y}_{jj}(t,s)$,
$\\{\phi^{Y}_{jk}(t),\lambda^{Y}_{jk}\\}_{k\in\mathbb{N}}$ and
$Y^{M}_{ij}(t)$. Let $K_{jj}(s,t)=K^{X}_{jj}(s,t)+K^{Y}_{jj}(s,t)$ and let
$\\{\phi_{jk}(t),\lambda_{jk}\\}_{k\in\mathbb{N}}$ be the eigenfunction-
eigenvalue pairs of $K_{jj}(s,t)$.
Lemma 3 implies that $X_{ij}(\cdot)$ and $Y_{ij}(\cdot)$ lie within the span
of the eigenfunctions corresponding to the non-zero eigenvalues of $K_{jj}$.
Furthermore, this subspace is minimal in the sense that no subspace with
smaller dimension contains $X_{ij}(\cdot)$ and $Y_{ij}(\cdot)$ almost surely.
Thus, the FPCA basis of $K_{jj}$ provides a good default choice for dimension
reduction.
###### Lemma 3
Let $|\mathbb{V}|$ denote the dimension of a subspace $\mathbb{V}$ and suppose
$M^{\prime}_{j}=\inf\\{|\mathbb{V}|:\mathbb{V}\subseteq\mathbb{H},X_{ij}(\cdot),Y_{ij}(\cdot)\in\mathbb{V}\,\text{almost
surely}\\}.$
Let $\\{\phi_{jk}(t),\lambda_{jk}\\}_{k\in\mathbb{N}}$ be the eigenfunction-
eigenvalue pairs of $K_{jj}(s,t)$ and
$M^{\star}_{j}=\sup\\{M\in\mathbb{N}^{+}:\lambda_{jM}>0\\}.$
Then $M^{\prime}_{j}=M^{\star}_{j}$ and $X_{ij},Y_{ij}\in{\rm
Span}\\{\phi_{j1}(\cdot),\phi_{j2}(\cdot),\dots,\phi_{j,M^{\star}_{j}}(\cdot)\\}$
almost surely.
Proof See Appendix B.2.
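A discretised sketch of this default choice (ours; it assumes both samples are observed on a shared uniform grid, and the helper name is hypothetical): eigenvectors of the gridded kernel $K_{jj}=K^{X}_{jj}+K^{Y}_{jj}$, suitably rescaled, approximate the leading eigenfunctions.

```python
import numpy as np

def combined_fpca_basis(Xj, Yj, t, M):
    """Top-M eigenfunctions of K_jj = K^X_jj + K^Y_jj for one node j.

    Xj, Yj : (n_X, T) and (n_Y, T) arrays of curves on the uniform grid t
    returns: (M, T) array of approximately L2-orthonormal eigenfunctions
    """
    h = t[1] - t[0]                                      # grid spacing
    K = np.cov(Xj, rowvar=False) + np.cov(Yj, rowvar=False)
    evals, evecs = np.linalg.eigh(K * h)                 # discretised operator
    order = np.argsort(evals)[::-1]                      # eigenvalues, descending
    return evecs[:, order[:M]].T / np.sqrt(h)            # so int phi^2 dt ~ 1
```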
### 2.4 Infinite Dimensional Functional Data
In Section 2.2, we defined a functional differential graph for functional data
that have finite-dimensional representation. In this section, we present a
more general definition that also allows for infinite-dimensional functional
data.
As discussed in Section 2.1, when the data are infinite-dimensional,
estimating a functional graphical model is not straightforward because the
precision matrix of the scores does not have a well-defined limit as $M$, the
dimension of the projected data, increases to $\infty$. When estimating the
differential graph, however, although $\|\Theta^{X,M}\|_{\text{F}}\to\infty$
and $\|\Theta^{Y,M}\|_{\text{F}}\to\infty$ as $M\to\infty$, it is still
possible for $\|\Theta^{X,M}-\Theta^{Y,M}\|_{\text{F}}$ to be bounded as
$M\to\infty$. For instance, $x_{n},y_{n}\in\mathbb{R}$ may both tend to
infinity, but $\lim_{n}(x_{n}-y_{n})$ may still exist and be bounded.
Furthermore, even when
$\|\Theta^{X,M}-\Theta^{Y,M}\|_{\text{F}}\rightarrow\infty$, it is still
possible for the difference $\Theta^{X,M}-\Theta^{Y,M}$ to be informative.
This observation leads to Definition 1 below. To simplify notation, in the
rest of the paper, we assume that $X_{ij}(\cdot)$ and $Y_{ij}(\cdot)$ live in
an $M^{\star}$ dimensional space where $M^{\star}\leq\infty$. Recall that
$\\{\phi^{X}_{jk}(\cdot),\lambda^{X}_{jk}\\}_{k\in\mathbb{N}}$ and
$\\{\phi^{Y}_{jk}(\cdot),\lambda^{Y}_{jk}\\}_{k\in\mathbb{N}}$ denote the
eigenpairs of $K_{jj}^{X}$ and $K_{jj}^{Y}$ respectively.
###### Definition 1 (Differential Graph Matrix and Comparability)
The MGPs $X$ and $Y$ are comparable if, for all $j\in[p]$, $K_{jj}^{X}$ and
$K_{jj}^{Y}$ have $M^{\star}$ non-zero eigenvalues and
$\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)=\mathrm{span}\left(\\{\phi_{jk}^{Y}\\}_{k=1}^{M^{\star}}\right)$.
Furthermore, for every $(j,l)\in V^{2}$ and every projection subspace sequence
$\left\\{\mathbb{V}^{M}_{[p]}\right\\}_{M\geq 1}$ such that $\lim_{M\to
M^{\star}}\mathbb{V}^{M}_{j}=\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)$,
we have either:
$\lim_{M\to
M^{\star}}\|\Delta^{M}_{jl}\|_{\text{F}}=0\qquad\text{or}\qquad\lim\inf_{M\to
M^{\star}}\|\Delta^{M}_{jl}\|_{\text{F}}>0.$
In this case, we define the differential graph matrix (DGM)
$D=(D_{jl})_{(j,l)\in V^{2}}\in\mathbb{R}^{p\times p}$, where
$D_{jl}=\lim\inf_{M\to M^{\star}}\|\Delta^{M}_{jl}\|_{\text{F}}.$ (11)
We say that $X$ and $Y$ are incomparable if, for some $j$, $K_{jj}^{X}$ and
$K_{jj}^{Y}$ have a different number of non-zero eigenvalues, or if
$\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)\neq\mathrm{span}\left(\\{\phi_{jk}^{Y}\\}_{k=1}^{M^{\star}}\right)$,
or if there exists some $(j,l)$ such that given
$\left\\{\mathbb{V}^{M}_{[p]}\right\\}_{M\geq 1}$ such that $\lim_{M\to
M^{\star}}\mathbb{V}^{M}_{j}=\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)$,
we have
$\lim\inf_{M\to
M^{\star}}\|\Delta^{M}_{jl}\|_{\text{F}}=0,\qquad\text{but}\qquad\lim\sup_{M\to
M^{\star}}\|\Delta^{M}_{jl}\|_{\text{F}}>0.$
In Definition 1, $\lim_{M\to
M^{\star}}\mathbb{V}^{M}_{j}=\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)$
means the following: for any $\epsilon>0$ and all
$g\in\mathrm{span}\left(\\{\phi_{jk}^{X}\\}_{k=1}^{M^{\star}}\right)$, there
exists $M^{\prime}=M^{\prime}(\epsilon)<\infty$ such that
$\|g-g^{M}_{P}\|<\epsilon$ for all $M\geq M^{\prime}$, where $g^{M}_{P}$
denotes the projection of $g$ onto the subspace of $\mathbb{V}^{M}_{j}$.
When $M^{\star}<\infty$, the conditional independence structure in $X_{i}$ and
$Y_{i}$ can be completely captured by a finite dimensional representation.
When $M^{\star}=\infty$, as $M\to\infty$, $\Delta^{M}_{jl}$ approaches the
difference of two matrices with unbounded eigenvalues. Nonetheless, when $X$
and $Y$ are comparable, the limits are still informative. This would suggest
that by using a sufficiently large subspace, we can capture such a difference
arbitrarily well. On the other hand, if the MGPs are not comparable, then
using a larger subspace may not improve the approximation regardless of the
sample size. For this reason, in the rest of the paper, we only focus on the
setting where $X$ and $Y$ are comparable.
To our knowledge, there is no existing procedure to estimate a graphical model
for functional data when the functions are infinite-dimensional. Thus, it is
not straightforward to determine whether the comparability condition is
stronger or weaker than what might be required for estimating the graphs
separately and then comparing post hoc. Nonetheless, we hope to provide some
intuition for the reader.
Suppose $X$ and $Y$ are of the same dimension, $M^{\star}$. If
$M^{\star}<\infty$ and the functional graphical model for each sample could be
estimated separately (that is, $\|\Theta^{X,M}\|_{F}<\infty$ and
$\|\Theta^{Y,M}\|_{F}<\infty$), then $X$ and $Y$ are comparable when the
minimal basis which spans $X$ and $Y$ is the same. Thus, the functional
differential graph is also well defined. On the other hand, the conditions
required by Qiao et al. (2019, Condition 2) for consistent estimation are not
satisfied when $M^{\star}=\infty$, since
$\lim_{M\rightarrow\infty}\|\Theta^{X,M}\|_{F}=\infty$ due to the compactness
of the covariance operator. However, $X$ and $Y$ may still be comparable
depending on the limiting behavior of $\Theta^{X,M}$ and $\Theta^{Y,M}$. Thus,
there are settings where the differential graph may exist and be consistently
recovered even when each individual graph cannot be recovered (even when $p$
is fixed).
However, when one MGP is finite-dimensional and the other is infinite-
dimensional, then the MGPs are incomparable. To see this, without loss of
generality, we assume that MGP $X$ has infinite dimension
$M^{X}_{j}=M^{\star}_{X}=\infty$ for all $j\in V$ and MGP $Y$ has finite
dimension $M^{Y}_{j}=M^{\star}_{Y}<\infty$ for all $j\in V$. Then
$\Theta^{Y,M}$ is ill-defined when $M>M^{\star}_{Y}$ and recovering the
differential graph is not straightforward.
We now define the notion of a functional differential graph.
###### Definition 2
When two MGPs $X$ and $Y$ are comparable, we define their functional
differential graph as an undirected graph $G_{\Delta}=\\{V,E_{\Delta}\\}$,
where $E_{\Delta}$ is defined as
$E_{\Delta}=\left\\{\\{j,l\\}\,:\,j<l\text{ and }D_{jl}>0\right\\}.$ (12)
###### Remark 1
The functional graphical model defined by Qiao et al. (2019) uses the
conditional covariance function $C_{jl}^{X}(\cdot,*)$ given in (1). Thus, it
would be quite natural to use the conditional covariance functions directly to
define a differential graph where
$E_{\Delta}=\left\\{\\{j,l\\}\;:\;j<l\text{ and }C_{jl}^{X}(\cdot,*)\neq
C_{jl}^{Y}(\cdot,*)\right\\}.$ (13)
Unfortunately, this definition does not always coincide with the one we
propose in Definition 2. Nevertheless, the functional differential graph given
in Definition 2 has many nice statistical properties and retains important
features of the graph defined in (13).
The primary statistical benefit of the graph defined in Definition 2 is that
it can be directly estimated without estimating each conditional covariance
function, $C^{X}_{jl}(\cdot,*)$ and $C^{Y}_{jl}(\cdot,*)$. Similar to the
the vector-valued case considered by (Zhao et al., 2014), this allows for a
much lower sample complexity when each individual graph is dense but the
difference is sparse. In some settings, there may not be enough samples to
estimate each individual graph accurately, but the difference may still be
recovered. This result is demonstrated in Theorem 1.
The statistical advantages of our estimand unfortunately come at the cost of a
slightly less precise characterization of the difference in the conditional
covariance functions. However, many of the key characteristics are still
preserved. Suppose $X_{i}$ and $Y_{i}$ are both $M^{\star}$-dimensional with
$M^{\star}<\infty$ and further suppose that
$\\{\phi_{jm}(\cdot)\phi_{lm^{\prime}}(*)\\}_{(m,m^{\prime})\in[M^{\star}]\times[M^{\star}]}$
is a linearly independent set of functions. Suppose the conditional covariance
functions for $j,l\in V$ are unchanged so that
$C_{jj}^{X}(\cdot,*)=C_{jj}^{Y}(\cdot,*)$ and $C_{ll}^{\backslash
j,X}(\cdot,*)=C_{ll}^{\backslash j,Y}(\cdot,*)$, where
$C_{ll}^{\backslash j,X}(\cdot,*)\coloneqq{\rm
Cov}(X_{l}(\cdot),X_{l}(*)\,|\,X_{k}(\cdot),k\neq j,l)$
and $C_{ll}^{\backslash j,Y}(\cdot,*)$ is defined similarly; then,
$\Delta_{jl}=0$ if and only if $C_{jl}^{X}(\cdot,*)=C_{jl}^{Y}(\cdot,*)$.
When this holds for all pairs $j,l\in V$, then the definitions of a
differential graph in Definition 2 and (13) are equivalent. When the
conditional covariance functions may change so that $C_{jj}^{X}(\cdot,*)\neq
C_{jj}^{Y}(\cdot,*)$, then we still have that $\Delta_{jl}\neq 0$ if
$C_{jl}^{X,\pi}(\cdot,*)=0$ and $C_{jl}^{Y,\pi}(\cdot,*)\neq 0$ (or vice
versa). Thus, even in this more general setting, the functional differential
graph given in Definition 2 captures all qualitative differences between the
conditional covariance functions $C_{jl}^{X}(\cdot,*)$ and
$C_{jl}^{Y}(\cdot,*)$.
Our objective is to directly estimate $E_{\Delta}$ without first estimating
$E_{X}$ or $E_{Y}$. Since the functions we consider may be infinite-
dimensional objects, in practice, what we can directly estimate is actually
$E^{\pi}_{\Delta}$ defined in (10). We will use a sieve estimator to estimate
$\Delta^{M}$, where $M$ grows with the sample size $n$. When $M^{\star}=M$,
then $E^{\pi}_{\Delta}=E_{\Delta}$. When $M<M^{\star}\leq\infty$, then this is
generally not true; however, we would expect the graphs to be similar when $M$
is large enough compared with $M^{\star}$. Thus, by constructing a suitable
estimator of $\Delta^{M}$, we can still recover $E_{\Delta}$.
### 2.5 Illustration of comparability
We provide a few examples that illustrate the notion of comparability. In the
first two examples, the graphs are comparable, while in the third example, the
graphs are incomparable. First, we state a lemma that will be helpful in the
following discussions. The lemma follows directly from the properties of the
multivariate normal and the inverse of block matrices.
###### Lemma 4
Let $H^{X,M}_{jl}=\mathrm{Cov}(a^{X,M}_{ij},a^{X,M}_{il}\mid
a^{X,M}_{ik},k\neq j,l)$ and $H^{\backslash
l,X,M}_{jj}=\mathrm{Var}(a^{X,M}_{ij}\mid a^{X,M}_{ik},k\neq j,l)$. For any
$j\in V$, we have $\Theta^{X,M}_{jj}=(H^{X,M}_{jj})^{-1}$. For any $(j,l)\in
V^{2}$ and $j\neq l$, we have
$\Theta^{X,M}_{jl}=-(H^{X,M}_{jj})^{-1}H^{X,M}_{jl}(H^{\backslash
j,X,M}_{ll})^{-1}$.
Proof See Appendix B.3.
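Lemma 4 can also be verified numerically by comparing blocks of an inverted covariance with conditional covariances computed through Schur complements; the sketch below (ours, on a synthetic covariance) checks both identities.

```python
import numpy as np

def cond_cov(Sigma, A, B):
    """Conditional covariance Cov(a_A | a_B) = S_AA - S_AB S_BB^{-1} S_BA."""
    S_AA, S_AB, S_BB = Sigma[np.ix_(A, A)], Sigma[np.ix_(A, B)], Sigma[np.ix_(B, B)]
    return S_AA - S_AB @ np.linalg.solve(S_BB, S_AB.T)

rng = np.random.default_rng(2)
p, M = 3, 2
G = rng.normal(size=(p*M, p*M))
Sigma = G @ G.T + p*M*np.eye(p*M)           # synthetic score covariance
Theta = np.linalg.inv(Sigma)

j, l = 0, 1
J = list(range(j*M, (j+1)*M))               # coordinates of a_j
L = list(range(l*M, (l+1)*M))               # coordinates of a_l
rest = [k for k in range(p*M) if k not in J + L]

H_jj = cond_cov(Sigma, J, L + rest)         # Var(a_j | everything else)
JL = cond_cov(Sigma, J + L, rest)           # Cov((a_j, a_l) | a_k, k != j, l)
H_jl, H_ll_wo_j = JL[:M, M:], JL[M:, M:]    # H_jl and Var(a_l | a_k, k != j, l)

assert np.allclose(Theta[np.ix_(J, J)], np.linalg.inv(H_jj))
assert np.allclose(Theta[np.ix_(J, L)],
                   -np.linalg.inv(H_jj) @ H_jl @ np.linalg.inv(H_ll_wo_j))
```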
The following proposition follows directly from Lemma 4.
###### Proposition 1
Assume that for any $(j,l)\in V^{2}$ and $j\neq l$, we have
$a^{X}_{ijm}\perp\\!\\!\\!\perp a^{X}_{ijm^{\prime}}\mid a^{X,M}_{ik},k\neq
j\qquad\text{and}\qquad a^{X}_{ijm}\perp\\!\\!\\!\perp
a^{X}_{ijm^{\prime}}\mid a^{X,M}_{ik},k\neq j,l,$
for any $M$ and $1\leq m\neq m^{\prime}\leq M$. We then have
$\Theta^{X,M}_{jj}={\rm diag}\left(\frac{1}{{\rm Var}\left(a^{X}_{ij1}\mid
a^{X,M}_{ik},k\neq j\right)},\dots,\frac{1}{{\rm Var}\left(a^{X}_{ijM}\mid
a^{X,M}_{ik},k\neq j\right)}\right)$
and
$\Theta^{X,M}_{jl,mm^{\prime}}=-\frac{{\rm
Cov}\left(a^{X}_{ijm},a^{X}_{ilm^{\prime}}\mid a^{X,M}_{ik},k\neq
j,l\right)}{{\rm Var}\left(a^{X}_{ijm}\mid a^{X,M}_{ik},k\neq j\right){\rm
Var}\left(a^{X}_{ilm^{\prime}}\mid a^{X,M}_{ik},k\neq
j,l\right)}\overset{\Delta}{=}-\bar{v}^{X,jl,M}_{mm^{\prime}},$
for any $M$ and $1\leq m\neq m^{\prime}\leq M$. In addition, if
$a^{Y}_{ijm}\perp\\!\\!\\!\perp a^{Y}_{ijm^{\prime}}\mid a^{Y,M}_{ik},k\neq
j\quad\text{and}\quad a^{Y}_{ijm}\perp\\!\\!\\!\perp a^{Y}_{ijm^{\prime}}\mid
a^{Y,M}_{ik},k\neq j,l,$
for any $M$ and $1\leq m\neq m^{\prime}\leq M$, then
$\displaystyle\Theta^{X,M}_{jj}-\Theta^{Y,M}_{jj}$ $\displaystyle={\rm
diag}\left(\left\\{\frac{{\rm Var}\left(a^{Y}_{ijm}\mid a^{Y,M}_{ik},k\neq
j\right)-{\rm Var}\left(a^{X}_{ijm}\mid a^{X,M}_{ik},k\neq j\right)}{{\rm
Var}\left(a^{X}_{ijm}\mid a^{X,M}_{ik},k\neq j\right){\rm
Var}\left(a^{Y}_{ijm}\mid a^{Y,M}_{ik},k\neq
j\right)}\right\\}^{M}_{m=1}\right)$ $\displaystyle\overset{\Delta}{=}{\rm
diag}\left(\bar{w}^{j,M}_{1},\bar{w}^{j,M}_{2},\dots,\bar{w}^{j,M}_{M}\right)$
and
$\displaystyle\Theta^{X,M}_{jl,mm^{\prime}}-\Theta^{Y,M}_{jl,mm^{\prime}}$
$\displaystyle=\frac{{\rm Cov}\left(a^{Y}_{ijm},a^{Y}_{ilm^{\prime}}\mid
a^{Y,M}_{ik},k\neq j,l\right)}{{\rm Var}\left(a^{Y}_{ijm}\mid
a^{Y,M}_{ik},k\neq j\right){\rm Var}\left(a^{Y}_{ilm^{\prime}}\mid
a^{Y,M}_{ik},k\neq j,l\right)}$ $\displaystyle\qquad\qquad\qquad-\frac{{\rm
Cov}\left(a^{X}_{ijm},a^{X}_{ilm^{\prime}}\mid a^{X,M}_{ik},k\neq
j,l\right)}{{\rm Var}\left(a^{X}_{ijm}\mid a^{X,M}_{ik},k\neq j\right){\rm
Var}\left(a^{X}_{ilm^{\prime}}\mid a^{X,M}_{ik},k\neq j,l\right)}$
$\displaystyle=\bar{v}^{Y,jl,M}_{mm^{\prime}}-\bar{v}^{X,jl,M}_{mm^{\prime}}\overset{\Delta}{=}\bar{z}^{jl,M}_{mm^{\prime}},$
for any $M$ and $1\leq m\neq m^{\prime}\leq M$.
With the notation defined in Proposition 1, we have that
$\|\Delta^{M}_{jj}\|^{2}_{\text{F}}=\sum^{M}_{m=1}\left(\bar{w}^{j,M}_{m}\right)^{2}\qquad\text{and}\qquad\|\Delta^{M}_{jl}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{jl,M}_{mm^{\prime}}\right)^{2}.$
(14)
As a result, we have the following condition for comparability.
###### Proposition 2
Under the assumptions in Proposition 1, assume that MGPs $X$ and $Y$ are
$M^{\star}$-dimensional, with $1\leq M^{\star}\leq\infty$, and lie in the same
space. Then they are comparable if and only if for every $(j,l)\in V\times V$,
we have either
$\lim\inf_{M\to
M^{\star}}\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{jl,M}_{mm^{\prime}}\right)^{2}>0\qquad\text{or}\qquad\lim_{M\to
M^{\star}}\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{jl,M}_{mm^{\prime}}\right)^{2}=0,$
(15)
where $\bar{z}^{jl,M}_{mm^{\prime}}$ are defined in Proposition 1.
We now give an infinite-dimensional comparable example.
###### Example 1
Assume that $\\{\epsilon^{X}_{i1k}\\}_{k\geq 1}$,
$\\{\epsilon^{X}_{i2k}\\}_{k\geq 1}$, and $\\{\epsilon^{X}_{i3k}\\}_{k\geq 1}$
are all independent mean zero Gaussian variables with ${\rm
Var}(\epsilon^{X}_{ijk})=\sigma^{2}_{X,jk}$, $j=1,2,3$, $k\geq 1$ for all $i$.
For any $k\geq 1$, let
$a^{X}_{i1k}=a^{X}_{i2k}+\epsilon^{X}_{i1k},\quad
a^{X}_{i2k}=\epsilon^{X}_{i2k},\quad
a^{X}_{i3k}=a^{X}_{i2k}+\epsilon^{X}_{i3k}.$
Let $a^{X,M}_{ij}=(a^{X}_{ij1},\cdots,a^{X}_{ijM})^{\top}$, $j=1,2,3$. We then
define $X_{ij}(t)=\sum^{\infty}_{k=1}a^{X}_{ijk}b_{k}(t)$, $j=1,2,3$, where
$\\{b_{k}(t)\\}^{\infty}_{k=1}$ is some orthonormal function basis of
$\mathbb{H}$. We define $\\{\epsilon^{Y}_{ijk}\\}_{k\geq 1}$,
$\\{a^{Y}_{ijk}\\}_{k\geq 1}$, $a^{Y,M}_{ij}$, and $Y_{ij}(t)$, $j=1,2,3$,
similarly.
The graph structure of $X$ and $Y$ is shown in Figure 1. Since $a^{X,M}_{ij}$
follows a multivariate Gaussian distribution, for any $M\geq 2$, $1\leq
m,m^{\prime}\leq M$ and $m\neq m^{\prime}$:
$\displaystyle{\rm Var}\left(a^{X}_{i1m}\mid
a^{X,M}_{i2},a^{X,M}_{i3}\right)=\sigma^{2}_{X,1m},$ $\displaystyle{\rm
Var}\left(a^{X}_{i3m}\mid a^{X,M}_{i1},a^{X,M}_{i2}\right)=\sigma^{2}_{X,3m},$
$\displaystyle{\rm Var}\left(a^{X}_{i2m}\mid
a^{X,M}_{i1},a^{X,M}_{i3}\right)=\frac{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}\sigma^{2}_{X,3m}}{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}+\sigma^{2}_{X,1m}\sigma^{2}_{X,3m}+\sigma^{2}_{X,2m}\sigma^{2}_{X,3m}},$
and
$\displaystyle{\rm Var}\left(a^{X}_{i1m}\mid
a^{X,M}_{i2}\right)=\sigma^{2}_{X,1m},$ $\displaystyle{\rm
Var}\left(a^{X}_{i1m}\mid
a^{X,M}_{i3}\right)=\frac{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}+\sigma^{2}_{X,1m}\sigma^{2}_{X,3m}+\sigma^{2}_{X,2m}\sigma^{2}_{X,3m}}{\sigma^{2}_{X,2m}+\sigma^{2}_{X,3m}},$
$\displaystyle{\rm Var}\left(a^{X}_{i3m}\mid
a^{X,M}_{i2}\right)=\sigma^{2}_{X,3m},$ $\displaystyle{\rm
Var}\left(a^{X}_{i3m}\mid
a^{X,M}_{i1}\right)=\frac{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}+\sigma^{2}_{X,1m}\sigma^{2}_{X,3m}+\sigma^{2}_{X,2m}\sigma^{2}_{X,3m}}{\sigma^{2}_{X,1m}+\sigma^{2}_{X,2m}},$
$\displaystyle{\rm Var}\left(a^{X}_{i2m}\mid
a^{X,M}_{i1}\right)=\frac{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}}{\sigma^{2}_{X,1m}+\sigma^{2}_{X,2m}},$
$\displaystyle{\rm Var}\left(a^{X}_{i2m}\mid
a^{X,M}_{i3}\right)=\frac{\sigma^{2}_{X,3m}\sigma^{2}_{X,2m}}{\sigma^{2}_{X,3m}+\sigma^{2}_{X,2m}}.$
In addition, we also have
$\displaystyle{\rm Cov}(a^{X}_{i1m},a^{X}_{i3m^{\prime}}\mid a^{X,M}_{i2})=0,$
$\displaystyle{\rm Cov}(a^{X}_{i1m},a^{X}_{i2m^{\prime}}\mid
a^{X,M}_{i3})=\mathbbm{1}(m=m^{\prime})\cdot\frac{\sigma^{2}_{X,3m}\sigma^{2}_{X,2m}}{\sigma^{2}_{X,3m}+\sigma^{2}_{X,2m}},$
$\displaystyle{\rm Cov}(a^{X}_{i2m},a^{X}_{i3m^{\prime}}\mid
a^{X,M}_{i1})=\mathbbm{1}(m=m^{\prime})\cdot\frac{\sigma^{2}_{X,1m}\sigma^{2}_{X,2m}}{\sigma^{2}_{X,1m}+\sigma^{2}_{X,2m}}.$
Figure 1: The conditional independence graph for both $X$ and $Y$ in
Example 1. The differential graph between $X$ and $Y$ has the same structure.
Similar results hold for $Y$. Suppose that
$\sigma^{2}_{X,jk},\sigma^{2}_{Y,jk}\asymp
k^{-\alpha}\quad\text{and}\quad|\sigma^{2}_{X,jk}-\sigma^{2}_{Y,jk}|\asymp
k^{-\beta},\quad j=1,2,3,$
where $\alpha,\beta>0$ and $\beta>\alpha$. Then
$\displaystyle\bar{z}^{13,M}_{mm^{\prime}}=0,$
$\displaystyle\bar{z}^{12,M}_{mm^{\prime}}=\mathbbm{1}(m=m^{\prime})\frac{\sigma^{2}_{X,1m}-\sigma^{2}_{Y,1m}}{\sigma^{2}_{X,1m}\cdot\sigma^{2}_{Y,1m}}\asymp\mathbbm{1}(m=m^{\prime})\cdot
m^{-(\beta-2\alpha)},$
$\displaystyle\bar{z}^{23,M}_{mm^{\prime}}=\mathbbm{1}(m=m^{\prime})\frac{\sigma^{2}_{X,3m}-\sigma^{2}_{Y,3m}}{\sigma^{2}_{X,3m}\cdot\sigma^{2}_{Y,3m}}\asymp\mathbbm{1}(m=m^{\prime})\cdot
m^{-(\beta-2\alpha)}.$
This implies that
$\displaystyle\|\Delta^{M}_{13}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{13,M}_{mm^{\prime}}\right)^{2}=0,$
(16)
$\displaystyle\|\Delta^{M}_{12}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{12,M}_{mm^{\prime}}\right)^{2}\asymp\sum^{M}_{m=1}\frac{1}{m^{2(\beta-2\alpha)}},$
$\displaystyle\|\Delta^{M}_{23}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{23,M}_{mm^{\prime}}\right)^{2}\asymp\sum^{M}_{m=1}\frac{1}{m^{2(\beta-2\alpha)}}.$
When $\beta>2\alpha+1/2$, we have
$0<\lim_{M\to\infty}\|\Delta^{M}_{12}\|_{\text{F}}=\lim_{M\to\infty}\|\Delta^{M}_{23}\|_{\text{F}}<\infty$.
When $\beta\leq 2\alpha+1/2$, we have
$\lim_{M\to\infty}\|\Delta^{M}_{12}\|_{\text{F}}=\lim_{M\to\infty}\|\Delta^{M}_{23}\|_{\text{F}}=\infty$.
In both cases the two graphs are comparable.
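The conclusions of Example 1 can be reproduced numerically: building the score covariance implied by the construction for both processes and differencing the precision matrices gives $\|\Delta^{M}_{13}\|_{\text{F}}\approx 0$, while the $(1,2)$ and $(2,3)$ blocks are bounded away from zero. A minimal sketch (ours):

```python
import numpy as np

def score_precision(sig2, M):
    """Precision of the stacked scores (node-major order) implied by
    a_1k = a_2k + eps_1k,  a_2k = eps_2k,  a_3k = a_2k + eps_3k,
    with Var(eps_jk) = sig2[j, k] and independence across k."""
    Sigma = np.zeros((3*M, 3*M))
    for k in range(M):
        s1, s2, s3 = sig2[:, k]
        C = np.array([[s1 + s2, s2,      s2     ],
                      [s2,      s2,      s2     ],
                      [s2,      s2,      s2 + s3]])
        for j in range(3):
            for l in range(3):
                Sigma[j*M + k, l*M + k] = C[j, l]
    return np.linalg.inv(Sigma)

M, alpha, beta = 20, 1.0, 2.5
grid = np.arange(1, M + 1, dtype=float)
sig2_X = np.tile(grid ** -alpha, (3, 1))
sig2_Y = sig2_X + grid ** -beta              # |sig2_X - sig2_Y| = k^{-beta}
Delta = score_precision(sig2_X, M) - score_precision(sig2_Y, M)

def blk(j, l):
    return Delta[j*M:(j+1)*M, l*M:(l+1)*M]

print(np.linalg.norm(blk(0, 2)))             # ~ 0: no edge (1, 3)
print(np.linalg.norm(blk(0, 1)), np.linalg.norm(blk(1, 2)))   # both > 0
```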
The following example describes a sequence of MGPs that are comparable;
however, the differential graph is intrinsically hard to estimate.
###### Example 2
We define $\\{\epsilon^{X}_{ijk}\\}_{k\geq 1}$, $\\{a^{X}_{ijk}\\}_{k\geq 1}$,
$\\{\epsilon^{Y}_{ijk}\\}_{k\geq 1}$, and $\\{a^{Y}_{ijk}\\}_{k\geq 1}$ as in
Example 1. Let $X_{ij}(t)=\sum^{M^{\star}}_{k=1}a^{X}_{ijk}b_{k}(t)$ and
$Y_{ij}(t)=\sum^{M^{\star}}_{k=1}a^{Y}_{ijk}b_{k}(t)$, $j=1,2,3$, where
$M^{\star}$ is a positive integer. Suppose that
$\sigma^{2}_{X,jk},\sigma^{2}_{Y,jk}\asymp
k^{-\alpha}\quad\text{and}\quad|\sigma^{2}_{X,jk}-\sigma^{2}_{Y,jk}|\asymp\mathbbm{1}(k=M^{\star})k^{-\beta},\quad
j=1,2,3,$
where $\alpha,\beta>0$ and $\beta>\alpha$. Following the argument in Example
1, for any $1\leq M\leq M^{\star}$, we have
$\displaystyle\bar{z}^{13,M}_{mm^{\prime}}=0,$
$\displaystyle\bar{z}^{12,M}_{mm^{\prime}}=\mathbbm{1}(m=m^{\prime})\mathbbm{1}(m=M^{\star})\cdot\frac{\sigma^{2}_{X,1m}-\sigma^{2}_{Y,1m}}{\sigma^{2}_{X,1m}\cdot\sigma^{2}_{Y,1m}}\asymp\mathbbm{1}(m=m^{\prime})\mathbbm{1}(m=M^{\star})\cdot
m^{-(\beta-2\alpha)},$
$\displaystyle\bar{z}^{23,M}_{mm^{\prime}}=\mathbbm{1}(m=m^{\prime})\mathbbm{1}(m=M^{\star})\cdot\frac{\sigma^{2}_{X,3m}-\sigma^{2}_{Y,3m}}{\sigma^{2}_{X,3m}\cdot\sigma^{2}_{Y,3m}}\asymp\mathbbm{1}(m=m^{\prime})\mathbbm{1}(m=M^{\star})\cdot
m^{-(\beta-2\alpha)}.$
This implies that
$\displaystyle\|\Delta^{M}_{13}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{13,M}_{mm^{\prime}}\right)^{2}=0,$
(17)
$\displaystyle\|\Delta^{M}_{12}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{12,M}_{mm^{\prime}}\right)^{2}\asymp
M^{-2(\beta-2\alpha)}\mathbbm{1}(M=M^{\star}),$
$\displaystyle\|\Delta^{M}_{23}\|^{2}_{\text{F}}=\sum^{M}_{m^{\prime}=1}\sum^{M}_{m=1}\left(\bar{z}^{23,M}_{mm^{\prime}}\right)^{2}\asymp
M^{-2(\beta-2\alpha)}\mathbbm{1}(M=M^{\star}).$
Based on the calculation above, we observe that estimation of the differential
graph here is intrinsically hard. For any $M<M^{\star}$, we have
$\|\Delta^{M}_{12}\|_{\text{F}}=\|\Delta^{M}_{23}\|_{\text{F}}=0$. Thus, when
$M<M^{\star}$ is used for estimation, the resulting target graph
$E^{\pi}_{\Delta}$ would be empty. However, by Definition 1 and Definition 2,
we have $D_{12}=D_{23}\asymp(M^{\star})^{-(\beta-2\alpha)}>0$ and
$E_{\Delta}=\\{(1,2),(2,3)\\}$.
In practice, if $M^{\star}$ is very large and we do not have enough samples to
accurately estimate $\Delta^{M}$ for a large $M$, then we cannot hope
to estimate the differential graph correctly. Moreover, the situation is worse
if $\beta>2\alpha$, since $D_{12}$ and $D_{23}$—the signal strength—vanish as
$M^{\star}$ increases. Figure 2 shows how the signal strength (defined as
$D_{12}$) changes as $M^{\star}$ increases for three cases: $\beta<2\alpha$,
$\beta=2\alpha$, and $\beta>2\alpha$.
This problem is intrinsically hard because the difference between the two
processes appears only in the components with the smallest positive
eigenvalues. To capture this difference, we have to use a large number of
basis functions $M$ to
approximate the functional data, which is statistically expensive. As we
increase $M$, no useful information is captured until $M=M^{\star}$.
Furthermore, if the difference between the eigenvalues of the two processes
decays quickly relative to the eigenvalues themselves, the signal strength
will be very weak when the
intrinsic dimension is large. This example shows that the estimation of
functional differential graphical models is harder than in the scalar
case.
Figure 2: Signal strength $D_{12}\asymp(M^{\star})^{-(\beta-2\alpha)}$ in
Example 2.
In Example 1, we characterized a pair of infinite-dimensional MGPs which are
comparable, and in Example 2 we discussed a sequence of models which are all
comparable, but increasingly difficult to recover. The following example
demonstrates that there are infinite-dimensional MGPs that may share the same
eigenspace, but are still not comparable.
###### Example 3
We construct two MGPs that are both infinite-dimensional and have the same
eigenspace, but are incomparable. As with the previous two examples, let
$V=\\{1,2,3\\}$. We assume that $X$ and $Y$ share a common set of
eigenfunctions: $\\{\phi_{m}\\}_{m=1}^{\infty}$ for $j=1,2,3$.
We construct the distribution of the scores of $X$ and $Y$ as follows. For
any $m\in\mathbb{N}^{+}$, let $a^{X}_{i\,\cdot\,m}$ denote the vector of
scores $(a^{X}_{i1m},a^{X}_{i2m},a^{X}_{i3m})$ and define
$a^{Y}_{i\,\cdot\,m}$ analogously. For any natural number $z$, we first assume
that
$a^{X}_{i\,\cdot\,(3z-2)},a^{X}_{i\,\cdot\,(3z-1)},a^{X}_{i\,\cdot\,(3z)}\perp\\!\\!\\!\perp\\{a^{X}_{i\,\cdot\,k}\\}_{k\neq
3z,3z-1,3z-2}.$ (18)
Thus, the conditional independence graph for the individual scores is a set of
disconnected subgraphs corresponding to
$\\{a^{X}_{i\,\cdot\,(3z-2)},a^{X}_{i\,\cdot\,(3z-1)},a^{X}_{i\,\cdot\,(3z)}\\}$
for $z\in\mathbb{N}^{+}$. We make the analogous assumption for the scores of
$Y$.
Within the sets
$\\{a^{X}_{i\,\cdot\,(3z-2)},a^{X}_{i\,\cdot\,(3z-1)},a^{X}_{i\,\cdot\,(3z)}\\}$
and
$\\{a^{Y}_{i\,\cdot\,(3z-2)},a^{Y}_{i\,\cdot\,(3z-1)},a^{Y}_{i\,\cdot\,(3z)}\\}$,
we assume that the conditional independence graph has the structure shown in
Figure 3. By construction, when projecting onto the span of the first $M$
functions, the edge set of individual functional graphical models for
$X^{\pi}$ and $Y^{\pi}$ is not stable as $M\rightarrow\infty$. In particular,
for both $X$ and $Y$, the edges $(1,2)$ and $(2,3)$ will persist; however, the
edge $(1,3)$ will either appear or be absent depending on $M$.
Figure 3: Conditional independence graphs for the individual scores of two
incomparable MGPs: (a) the CI graph for the $X$ scores; (b) the CI graph for
the $Y$ scores.
If $M\mod 3=1$, which corresponds to the first row in Figure 3 where $M=3z-2$
for some $z\in\mathbb{N}^{+}$, then
$\\{a^{X}_{i1k}\\}_{k<M}\perp\\!\\!\\!\perp\\{a^{X}_{i3k}\\}_{k<M}\mid\\{a^{X}_{i2k}\\}_{k\leq
M}\quad\text{ and
}\quad\\{a^{Y}_{i1k}\\}_{k<M}\perp\\!\\!\\!\perp\\{a^{Y}_{i3k}\\}_{k<M}\mid\\{a^{Y}_{i2k}\\}_{k\leq
M}.$ (19)
However, $a^{X}_{i1M}\not\perp\\!\\!\\!\perp
a^{X}_{i3M}\mid\\{a^{X}_{i2k}\\}_{k\leq M}$ since we do not condition on
$a^{X}_{i2(M+1)}$. Similarly, $a^{Y}_{i1M}\not\perp\\!\\!\\!\perp
a^{Y}_{i3M}\mid\\{a^{Y}_{i2k}\\}_{k\leq M}$ since we do not condition on
$a^{Y}_{i2(M+2)}$. Thus, the edge $(1,3)$ is in the functional graphical model
for both $X^{\pi}$ and $Y^{\pi}$; however, the specific values of
$\Theta^{X,M}$ and $\Theta^{Y,M}$ may differ.
In contrast to the previous case, when $M\mod 3=2$, which corresponds to the
second row in Figure 3 where $M=3z-1$ for some $z\in\mathbb{N}^{+}$, the
functional graphical models for $X^{\pi}$ and $Y^{\pi}$ now differ. Note that,
$\\{a^{X}_{i1k}\\}_{k\leq M}\perp\\!\\!\\!\perp\\{a^{X}_{i3k}\\}_{k\leq
M}\mid\\{a^{X}_{i2k}\\}_{k\leq M}$. Thus, the edge $(1,3)$ is absent in the
functional graphical model for $X^{\pi}$ and $\Theta^{X,M}_{1,3}=0$.
Considering $Y^{\pi}$, we have that
$\\{a^{Y}_{i1k}\\}_{k<M-1}\perp\\!\\!\\!\perp\\{a^{Y}_{i3k}\\}_{k<M-1}\mid\\{a^{Y}_{i2k}\\}_{k\leq
M}$. However, because we do not condition on $a^{Y}_{i2(M+1)}$ (the node in
the third row of Figure 3), the $(1,3)$ edge exists in the functional
graphical model for $Y^{\pi}$ since $a^{Y}_{i1(M-1)}\not\perp\\!\\!\\!\perp
a^{Y}_{i3(M-1)}\mid\\{a^{Y}_{i2k}\\}_{k\leq M}$.
In this setting where $M\mod 3=2$, for all $z\in\mathbb{N}^{+}$, we set the
covariance of the scores, with rows and columns ordered as
$(a^{Y}_{i1(3z-2)},a^{Y}_{i1(3z-1)},a^{Y}_{i1(3z)},a^{Y}_{i2(3z-2)},a^{Y}_{i2(3z-1)},a^{Y}_{i2(3z)},a^{Y}_{i3(3z-2)},a^{Y}_{i3(3z-1)},a^{Y}_{i3(3z)})$,
to be
$z^{-\beta}\times\begin{bmatrix}3/2&0&0&0&0&-1&1/2&0&0\\0&1&0&0&0&0&0&0&0\\0&0&1&0&0&0&0&0&0\\0&0&0&8&0&0&0&0&0\\0&0&0&0&4&0&0&0&0\\-1&0&0&0&0&2&-1&0&0\\1/2&0&0&0&0&-1&3/2&0&0\\0&0&0&0&0&0&0&1&0\\0&0&0&0&0&0&0&0&1\end{bmatrix},$ (20)
where $\beta>0$ is a parameter which determines the decay rate of the
eigenvalues (see Assumption 3). We then set all other elements of the
covariance to be $0$. The support of the inverse of this matrix corresponds to
the edges of the graph in Figure 3. However, when we consider the marginal
distribution of the first $M$ scores and invert the corresponding covariance,
$\Theta^{Y,M}_{1,3}$ is $0$ everywhere except for the element corresponding to
$a^{Y}_{i,1,M-1}$ and $a^{Y}_{i,3,M-1}$, that is, nodes in the top row of
Figure 3, which is equal to $-1/4\times((M+1)/3)^{\beta}$. Thus,
$\|\Delta^{M}_{1,3}\|_{F}=1/4\times((M+1)/3)^{\beta}$ and
$\lim\sup_{M\rightarrow\infty}\|\Delta^{M}_{1,3}\|_{F}=\infty$.
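This computation is easy to verify numerically. The following minimal numpy sketch (with the illustrative choices $\beta=1$ and $z=2$, so $M=3z-1=5$) builds the covariance block in (20), marginalizes out the $(3z)$-indexed scores, and recovers the precision entry $-1/4\times((M+1)/3)^{\beta}$; it is purely illustrative and not part of the formal argument.

```python
import numpy as np

# Covariance block of Eq. (20); rows/columns ordered as
# (a_{i1(3z-2)}, a_{i1(3z-1)}, a_{i1(3z)},
#  a_{i2(3z-2)}, a_{i2(3z-1)}, a_{i2(3z)},
#  a_{i3(3z-2)}, a_{i3(3z-1)}, a_{i3(3z)}).
C = np.array([
    [3/2, 0, 0, 0, 0, -1, 1/2, 0, 0],
    [0,   1, 0, 0, 0,  0,   0, 0, 0],
    [0,   0, 1, 0, 0,  0,   0, 0, 0],
    [0,   0, 0, 8, 0,  0,   0, 0, 0],
    [0,   0, 0, 0, 4,  0,   0, 0, 0],
    [-1,  0, 0, 0, 0,  2,  -1, 0, 0],
    [1/2, 0, 0, 0, 0, -1, 3/2, 0, 0],
    [0,   0, 0, 0, 0,  0,   0, 1, 0],
    [0,   0, 0, 0, 0,  0,   0, 0, 1],
])
beta, z = 1.0, 2                      # illustrative; M = 3z - 1 = 5
Sigma = z ** (-beta) * C

# Marginal over the first M scores: drop the (3z)-indexed scores
# (positions 2, 5, 8) and invert the remaining covariance.
keep = [0, 1, 3, 4, 6, 7]
Theta = np.linalg.inv(Sigma[np.ix_(keep, keep)])

print(Theta[0, 4])          # entry linking a_{i1(3z-2)} and a_{i3(3z-2)}: -0.5
print(-1/4 * z ** beta)     # -(1/4)((M+1)/3)^beta, since (M+1)/3 = z: -0.5
```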
Finally, when $M\mod 3=0$, that is, $M=3z$ for some $z\in\mathbb{N}^{+}$, the
$(1,3)$ edge is absent in both functional graphical models for $X^{\pi}$ and
$Y^{\pi}$ because
$\\{a^{X}_{i1k}\\}_{k\leq M}\perp\\!\\!\\!\perp\\{a^{X}_{i3k}\\}_{k\leq
M}\mid\\{a^{X}_{i2k}\\}_{k\leq M}\ \text{ and }\ \\{a^{Y}_{i1k}\\}_{k\leq
M}\perp\\!\\!\\!\perp\\{a^{Y}_{i3k}\\}_{k\leq M}\mid\\{a^{Y}_{i2k}\\}_{k\leq
M}.$
Thus, $\Theta^{X,M}_{1,3}=\Theta^{Y,M}_{1,3}=\Delta^{M}_{1,3}=0$. This implies
that $\lim\inf_{M\rightarrow\infty}\|\Delta^{M}_{1,3}\|_{F}=0$.
Because $\lim\inf_{M\rightarrow\infty}\|\Delta^{M}_{1,3}\|_{F}=0$, but
$\lim\sup_{M\rightarrow\infty}\|\Delta^{M}_{1,3}\|_{F}=\infty$, $X$ and $Y$
are incomparable.
The notion of comparability illustrates the intrinsic difficulty of dealing
with functional data. However, it also illustrates when we can still hope to
estimate the differential network consistently. We have formally stated when
two infinite-dimensional functional graphical models will be comparable and
have given conditions and examples of comparability. Unfortunately, these
conditions cannot be checked using observational data. For this reason, we
mainly discuss the methodology and theoretical properties for estimation of
$E^{\pi}_{\Delta}$. Prior knowledge about the problem at hand should be used
to decide whether two infinite-dimensional functional graphs are comparable.
This is similar to other assumptions common in the graphical modeling
literature, such as “faithfulness” (Spirtes et al., 2000), that are critical
to graph recovery, but cannot be verified.
## 3 Functional Differential Graph Estimation: FuDGE
In this section, we detail our methodology for estimating a functional
differential graph. Unfortunately, in most situations, there may not be prior
knowledge on which subspace to use to define the functional differential
graph. In such situations, we suggest using the principle component scores of
$K_{jj}(s,t)=K^{X}_{jj}(s,t)+K^{Y}_{jj}(s,t)$, $j\in V$ as a default choice.
In addition, each observed function is only recorded (potentially with
measurement error) at discrete time points. In Section 3.1 we consider this
practical setting. Of course, if an appropriate basis for dimension reduction
is known in advance or if the functions are fully observed at all time points,
then the estimated objects can always be replaced with their known/observed
counterparts.
### 3.1 Estimating the covariance of the scores
For each $X_{ij}$, suppose we have measurements at time points $t_{ijk}$,
$k=1,\ldots,T$ (for simplicity, we assume that all functions have the same
number of observations; our method and theory extend trivially to a
different number of observations for each function), and
the recorded data, $h_{ijk}$, are the function values with random noise. That
is,
$h_{ijk}=g_{ij}(t_{ijk})+\epsilon_{ijk},$ (21)
where $g_{ij}$ can denote either $X_{ij}$ or $Y_{ij}$ and the unobserved noise
$\epsilon_{ijk}$ is i.i.d. Gaussian with mean $0$ and variance
$\sigma^{2}_{0}$. Without loss of generality, we assume that
$t_{ij1}<\ldots<t_{ijT}$ for any $1\leq i\leq n$ and $1\leq j\leq p$. We do
not assume that $t_{ijk}=t_{i^{\prime}jk}$ for $i\neq i^{\prime}$, so that
each observation may be observed on a different grid.
We first use a basis expansion to estimate a least squares approximation of
the whole curve $X_{ij}(t)$ (see Section 4.2 in Ramsay and Silverman (2005)).
Specifically, given an initial basis function vector
$b(t)=(b_{1}(t),\dots,b_{L}(t))^{\top}$—for example, the B-spline or Fourier
basis—our estimated approximation for $X_{ij}(t)$ is given by:
$\displaystyle\hat{X}_{ij}(t)$ $\displaystyle=\hat{\beta}_{ij}^{\top}b(t),$
(22) $\displaystyle\hat{\beta}_{ij}$
$\displaystyle=\left(B^{\top}_{ij}B_{ij}\right)^{-1}B^{\top}_{ij}h_{ij},$
where $h_{ij}=(h_{ij1},h_{ij2},\dots,h_{ijT})^{\top}$ and $B_{ij}$ is the
design matrix for $g_{ij}$:
$B_{ij}=\left[\begin{matrix}b_{1}(t_{ij1})&\cdots&b_{L}(t_{ij1})\\\
\vdots&\ddots&\vdots\\\
b_{1}(t_{ijT})&\cdots&b_{L}(t_{ijT})\end{matrix}\right]\in\mathbb{R}^{T\times
L}.$ (23)
The computational complexity of the basis expansion procedure is
$O(np(TL^{2}+L^{3}))$, and in practice, there are many efficient package
implementations of this step; for example, fda (Ramsay et al., 2020).
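For concreteness, the least squares step in (22)-(23) can be sketched in a few lines of numpy; this is not the authors' implementation, and the cosine basis below is just one possible choice of the initial basis $b(t)$.

```python
import numpy as np

def cosine_basis(t, L):
    """Evaluate an L-dimensional orthonormal cosine basis on [0, 1]:
    b_1(t) = 1 and b_l(t) = sqrt(2) cos(pi (l-1) t) for l >= 2."""
    t = np.asarray(t)
    B = np.ones((t.size, L))
    for l in range(1, L):
        B[:, l] = np.sqrt(2.0) * np.cos(np.pi * l * t)
    return B                     # the T x L design matrix B_ij of Eq. (23)

def ls_coefficients(t_obs, h_obs, L):
    """beta_hat of Eq. (22): least squares fit of one discretely
    observed, noisy curve at the time points t_obs."""
    B = cosine_basis(t_obs, L)
    beta_hat, *_ = np.linalg.lstsq(B, h_obs, rcond=None)   # (B'B)^{-1}B'h
    return beta_hat

# Toy usage: X_hat_ij(t) = beta_hat' b(t) evaluated on a plotting grid.
rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0, 1, 200))
h_obs = np.sin(2 * np.pi * t_obs) + rng.normal(0, 0.5, 200)  # Eq. (21)
beta_hat = ls_coefficients(t_obs, h_obs, L=7)
x_hat = cosine_basis(np.linspace(0, 1, 101), 7) @ beta_hat
```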
We repeat this process for the observed $Y$ functions. After obtaining
$\\{\hat{X}_{ij}(t)\\}_{j\in V,i=1,2,\dots,n_{X}}$ and
$\\{\hat{Y}_{ij}(t)\\}_{j\in V,i=1,2,\dots,n_{Y}}$, we use them as inputs to
the FPCA procedure. Specifically, we first estimate the sum of the covariance
functions by
$\hat{K}_{jj}(s,t)=\hat{K}^{X}_{jj}(s,t)+\hat{K}^{Y}_{jj}(s,t)=\frac{1}{n_{X}}\sum^{n_{X}}_{i=1}\hat{X}_{ij}(s)\hat{X}_{ij}(t)+\frac{1}{n_{Y}}\sum^{n_{Y}}_{i=1}\hat{Y}_{ij}(s)\hat{Y}_{ij}(t).$
(24)
Using $\hat{K}_{jj}(s,t)$ as the input to FPCA, we can estimate the
corresponding eigenfunctions $\hat{\phi}_{jk}(t)$, $k=1,\ldots,M$,
$j=1,\ldots,p$. Given the estimated eigenfunctions, we compute the estimated
projection scores
$\displaystyle\hat{a}^{X}_{ijk}$
$\displaystyle=\int_{\mathcal{T}}\hat{X}_{ij}(t)\hat{\phi}_{jk}(t)dt\qquad\text{and}\qquad\hat{a}^{Y}_{ijk}=\int_{\mathcal{T}}\hat{Y}_{ij}(t)\hat{\phi}_{jk}(t)dt,$
and collect them into vectors
$\displaystyle\hat{a}^{X,M}_{ij}$
$\displaystyle=(\hat{a}^{X}_{ij1},\hat{a}^{X}_{ij2},\dots,\hat{a}^{X}_{ijM})^{\top}\in{\mathbb{R}^{M}}\qquad\text{and}\qquad\hat{a}^{X,M}_{i}$
$\displaystyle=((\hat{a}^{X,M}_{i1})^{\top},\ldots,(\hat{a}^{X,M}_{ip})^{\top})^{\top}\in{\mathbb{R}^{pM}},$
$\displaystyle\hat{a}^{Y,M}_{ij}$
$\displaystyle=(\hat{a}^{Y}_{ij1},\hat{a}^{Y}_{ij2},\dots,\hat{a}^{Y}_{ijM})^{\top}\in{\mathbb{R}^{M}}\qquad\text{and}\qquad\hat{a}^{Y,M}_{i}$
$\displaystyle=((\hat{a}^{Y,M}_{i1})^{\top},\ldots,(\hat{a}^{Y,M}_{ip})^{\top})^{\top}\in{\mathbb{R}^{pM}}.$
Finally, we estimate the covariance matrices of the score vectors,
$\Sigma^{X,M}$ and $\Sigma^{Y,M}$, as
$\displaystyle
S^{X,M}=\frac{1}{n_{X}}\sum^{n_{X}}_{i=1}\hat{a}^{X,M}_{i}(\hat{a}^{X,M}_{i})^{\top}\qquad\text{and}\qquad
S^{Y,M}=\frac{1}{n_{Y}}\sum^{n_{Y}}_{i=1}\hat{a}^{Y,M}_{i}(\hat{a}^{Y,M}_{i})^{\top}.$
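On a discrete grid, the whole pipeline of this subsection can be sketched as follows: the pooled covariance (24) is evaluated on the grid, FPCA reduces to an eigendecomposition of the quadrature-weighted grid covariance, and the score covariances $S^{X,M}$ and $S^{Y,M}$ are formed from the projected curves. This is a minimal illustration, not the authors' implementation; all names are illustrative.

```python
import numpy as np

def score_covariances(Xc, Yc, M):
    """Xc, Yc: arrays of shape (n, p, G) holding the fitted curves
    X_hat_ij and Y_hat_ij on a common grid of G points in [0, 1].
    Returns S^{X,M} and S^{Y,M}, the pM x pM sample covariances of
    the estimated projection scores."""
    nX, p, G = Xc.shape
    nY = Yc.shape[0]
    w = 1.0 / G                        # quadrature weight on [0, 1]
    aX = np.empty((nX, p, M))
    aY = np.empty((nY, p, M))
    for j in range(p):
        # Pooled covariance K_jj = K^X_jj + K^Y_jj of Eq. (24) on the grid.
        K = Xc[:, j, :].T @ Xc[:, j, :] / nX + Yc[:, j, :].T @ Yc[:, j, :] / nY
        evals, evecs = np.linalg.eigh(K * w)
        phi = evecs[:, ::-1][:, :M] / np.sqrt(w)   # top-M eigenfunctions
        aX[:, j, :] = Xc[:, j, :] @ phi * w        # scores: integral of curve * phi
        aY[:, j, :] = Yc[:, j, :] @ phi * w
    aX, aY = aX.reshape(nX, p * M), aY.reshape(nY, p * M)
    return aX.T @ aX / nX, aY.T @ aY / nY
```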
### 3.2 FuDGE: Functional Differential Graph Estimation
We now describe the FuDGE algorithm for Functional Differential Graph
Estimation. To estimate $\Delta^{M}$, we solve the following optimization
program:
$\hat{\Delta}^{M}\in\operatorname*{arg\,min}_{\Delta\in{\mathbb{R}^{pM\times
pM}}}L(\Delta)+\lambda_{n}\sum_{\\{i,j\\}\in V^{2}}\|\Delta_{ij}\|_{F},$ (25)
where
$L(\Delta)=\mathrm{tr}\left[\frac{1}{2}S^{Y,M}\Delta^{\top}{S^{X,M}}\Delta-\Delta^{\top}\left(S^{Y,M}-S^{X,M}\right)\right]$
and $S^{X,M}$ and $S^{Y,M}$ are obtained as described in Section 3.1.
We construct the loss function, $L(\Delta)$, so that the true parameter value,
$\Delta^{M}=\left(\Sigma^{X,M}\right)^{-1}-\left(\Sigma^{Y,M}\right)^{-1}$,
minimizes the population loss $\mathbb{E}\left[L(\Delta)\right]$, which for a
differentiable and convex loss function, is equivalent to selecting $L$ such
that $\mathbb{E}\left[\nabla L(\Delta^{M})\right]=0$. Since $\Delta^{M}$
satisfies
$\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}-(\Sigma^{Y,M}-\Sigma^{X,M})=0,$
a choice for $\nabla L(\Delta)$ is
$\nabla{L(\Delta)}=S^{X,M}\Delta{S^{Y,M}}-\left(S^{Y,M}-S^{X,M}\right)$
(26)
so that
$\mathbb{E}\left[\nabla
L(\Delta^{M})\right]=\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}-(\Sigma^{Y,M}-\Sigma^{X,M})=0.$
Given this choice of $\nabla L(\Delta)$, $L(\Delta)$ in (25) directly follows
from properties of the differential of the trace function. The chosen loss is
quadratic (see (B.9) in appendix) and leads to an efficient algorithm. Similar
loss functions are used in Xu and Gu (2016), Yuan et al. (2017), Na et al.
(2019), and Zhao et al. (2014).
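The defining identity $\mathbb{E}\left[\nabla L(\Delta^{M})\right]=0$ follows from $\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}=\Sigma^{Y,M}-\Sigma^{X,M}$ and is easy to check numerically; below is a small sketch with arbitrary positive-definite covariances (illustrative only).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6                                        # stands in for pM
A = rng.normal(size=(d, d)); SigX = A @ A.T + d * np.eye(d)
B = rng.normal(size=(d, d)); SigY = B @ B.T + d * np.eye(d)

Delta = np.linalg.inv(SigX) - np.linalg.inv(SigY)   # true Delta^M

# Population gradient: Sigma^X Delta Sigma^Y - (Sigma^Y - Sigma^X).
grad = SigX @ Delta @ SigY - (SigY - SigX)
print(np.abs(grad).max())                    # ~1e-12, i.e. zero
```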
We also include the additional group lasso penalty (Yuan and Lin, 2006) to
promote blockwise sparsity in $\hat{\Delta}^{M}$. The objective in (25) can be
solved by a proximal gradient method detailed in Algorithm 1. Finally, we form
$\hat{E}_{\Delta}$ by thresholding $\hat{\Delta}^{M}$ so that:
${}\hat{E}_{\Delta}=\left\\{\\{j,l\\}\,:\,\|\hat{\Delta}^{M}_{jl}\|_{F}>\epsilon_{n}\;\;\text{or}\;\;\|\hat{\Delta}^{M}_{lj}\|_{F}>\epsilon_{n}\right\\}.$
(27)
The thresholding step in (27) is used for theoretical purposes. Specifically,
it helps correct for the bias induced by the finite-dimensional truncation and
relaxes commonly used assumptions for the graph structure recovery, such as
the irrepresentability or incoherence condition (van de Geer and Bühlmann,
2009). In practice, one can simply set $\epsilon_{n}=0$, as we do in the
simulations.
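In code, the thresholding step (27) amounts to comparing block Frobenius norms against $\epsilon_{n}$; a sketch follows (the helper name threshold_edges is hypothetical).

```python
import numpy as np

def threshold_edges(Delta_hat, p, M, eps=0.0):
    """E_hat_Delta of Eq. (27): keep edge {j, l} if the (j,l) or (l,j)
    M x M block of Delta_hat has Frobenius norm exceeding eps."""
    edges = set()
    for j in range(p):
        for l in range(j + 1, p):
            njl = np.linalg.norm(Delta_hat[j*M:(j+1)*M, l*M:(l+1)*M])
            nlj = np.linalg.norm(Delta_hat[l*M:(l+1)*M, j*M:(j+1)*M])
            if max(njl, nlj) > eps:
                edges.add((j, l))
    return edges
```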
### 3.3 Optimization Algorithm for FuDGE
Algorithm 1 Functional differential graph estimation
Input: $S^{X,M},S^{Y,M},\lambda_{n},\eta$.
Output: $\hat{\Delta}^{M}$.
Initialize $\Delta^{(0)}=0_{pM}$.
repeat
$A=\Delta-\eta\nabla L(\Delta)=\Delta-\eta\left(S^{X,M}\Delta S^{Y,M}-\left(S^{Y,M}-S^{X,M}\right)\right)$
for $1\leq{j,l}\leq{p}$ do
$\Delta_{jl}\leftarrow\left(\frac{\|A_{jl}\|_{F}-\lambda_{n}\eta}{\|A_{jl}\|_{F}}\right)_{+}\cdot A_{jl}$
end for
until convergence
The optimization program (25) can be solved by a proximal gradient method
(Parikh and Boyd, 2014) summarized in Algorithm 1. Specifically, at each
iteration, we update the current value of $\Delta$, denoted as
$\Delta^{\text{old}}$, by solving the following problem:
${}\Delta^{\text{new}}=\operatorname*{arg\,min}_{\Delta}\left(\frac{1}{2}\left\|\Delta-\left(\Delta^{\text{old}}-\eta\nabla
L\left(\Delta^{\text{old}}\right)\right)\right\|_{F}^{2}+\eta\cdot\lambda_{n}\sum^{p}_{j,l=1}\|\Delta_{jl}\|_{F}\right),$
(28)
where $\nabla L(\Delta)$ is defined in (26) and $\eta$ is a user specified
step size. Note that $\nabla L(\Delta)$ is Lipschitz continuous with Lipschitz
constant $\lambda^{S}_{\max}=\|S^{Y,M}\otimes
S^{X,M}\|_{2}=\lambda_{\max}(S^{Y,M})\lambda_{\max}(S^{X,M})$. Thus, for any
step size $\eta$ such that $0<\eta\leq 1/\lambda^{S}_{\max}$, the proximal
gradient method is guaranteed to converge (Beck and Teboulle, 2009).
The update in (28) has a closed-form solution:
${}\Delta^{\text{new}}_{jl}=\left[\left(\|A^{\text{old}}_{jl}\|_{F}-\lambda_{n}\eta\right)/\|A^{\text{old}}_{jl}\|_{F}\right]_{+}\cdot
A^{\text{old}}_{jl},\qquad 1\leq{j,l}\leq{p},$ (29)
where $A^{\text{old}}=\Delta^{\text{old}}-\eta\nabla L(\Delta^{\text{old}})$
and $x_{+}=\max\\{0,x\\},x\in{\mathbb{R}}$, represents the positive part of
$x$. Detailed derivations are given in the appendix. Note that although the
true $\Delta^{M}$ is symmetric, we do not explicitly enforce symmetry in
$\hat{\Delta}^{M}$ in Algorithm 1.
After performing FPCA, the proximal gradient descent method converges in
$O\left(\lambda^{S}_{\max}/\text{tol}\right)$ iterations, where tol is a user
specified optimization error tolerance, and each iteration takes $O((pM)^{3})$
operations; see Tibshirani (2010) for a convergence analysis of the general
proximal gradient descent algorithm.
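Putting the pieces together, Algorithm 1 fits in a few dozen lines; the sketch below (illustrative, not the reference implementation) uses the conservative step size $\eta=1/\lambda^{S}_{\max}$ and the block soft-thresholding update (29).

```python
import numpy as np

def fudge(SX, SY, p, M, lam, max_iter=500, tol=1e-6):
    """Proximal gradient method for Eq. (25); SX, SY are the pM x pM
    score covariances and lam is the group-lasso parameter."""
    d = p * M
    Delta = np.zeros((d, d))
    # Step size 1/lambda^S_max guarantees convergence.
    eta = 1.0 / (np.linalg.eigvalsh(SX)[-1] * np.linalg.eigvalsh(SY)[-1])
    for _ in range(max_iter):
        grad = SX @ Delta @ SY - (SY - SX)          # Eq. (26)
        A = Delta - eta * grad
        Delta_new = np.zeros_like(Delta)
        for j in range(p):
            for l in range(p):
                blk = A[j*M:(j+1)*M, l*M:(l+1)*M]
                nrm = np.linalg.norm(blk)           # Frobenius norm
                if nrm > lam * eta:                 # block soft-threshold, Eq. (29)
                    Delta_new[j*M:(j+1)*M, l*M:(l+1)*M] = (1 - lam*eta/nrm) * blk
        if np.linalg.norm(Delta_new - Delta) < tol:
            return Delta_new
        Delta = Delta_new
    return Delta
```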
### 3.4 Selection of Tuning Parameters
There are four tuning parameters that need to be chosen for implementing
FuDGE: $L$ (dimension of the basis used to estimate the curves from the
discretely observed data), $M$ (dimension of subspace to estimate the
projection scores), $\lambda_{n}$ (regularization parameter to tune the block
sparsity of $\Delta^{M}$), and $\epsilon_{n}$ (thresholding parameter for
$\hat{E}_{\Delta}$). While we need the thresholding parameter $\epsilon_{n}$
in (27) to establish theoretical results, in practice, we simply set
$\epsilon_{n}=0$. To select $M$, we follow the procedure in Qiao et al.
(2019). More specifically, for each discretely-observed curve, we first
estimate the underlying functions by fitting an $L$-dimensional B-spline
basis. Both $M$ and $L$ are then chosen by 5-fold cross-validation as
discussed in Qiao et al. (2019).
Finally, to choose $\lambda_{n}$, we recommend using selective cross-
validation (SCV) (She, 2012). Given a value of $\lambda_{n}$, we use the
entire data set to estimate a sparsity pattern. Then, fixing the sparsity
pattern, we use a typical cross-validation procedure to calculate the CV
error. Ultimately, we choose the value of $\lambda_{n}$ that results in the
sparsity pattern that minimizes the CV error. In addition to SCV, if we have
some prior knowledge about the number of edges in the differential graph, we
can also choose $\lambda_{n}$ that results in a desired level of sparsity of
the differential graph.
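When a target sparsity level is known, this last option reduces to a one-dimensional search over $\lambda_{n}$; a geometric bisection sketch, reusing the illustrative fudge and threshold_edges helpers above (hypothetical names, not part of the formal method), is shown below.

```python
import numpy as np

def lambda_for_sparsity(SX, SY, p, M, n_edges, lo=1e-4, hi=10.0, iters=20):
    """Bisection over lambda_n so the estimated differential graph has
    roughly n_edges edges (edge count is nonincreasing in lambda_n)."""
    for _ in range(iters):
        lam = np.sqrt(lo * hi)                    # geometric midpoint
        k = len(threshold_edges(fudge(SX, SY, p, M, lam), p, M))
        if k > n_edges:
            lo = lam                              # too dense: increase lambda
        else:
            hi = lam
    return np.sqrt(lo * hi)
```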
## 4 Theoretical Properties
In this section, we provide theoretical guarantees for FuDGE. We first give a
deterministic result for $\hat{E}_{\Delta}$ defined in (27) when the max-norm
of the difference between the estimates $S^{X,M},S^{Y,M}$ and their
corresponding parameters, $\Sigma^{X,M},\Sigma^{Y,M}$, is bounded by
$\delta_{n}$. We then show that when projecting the data onto either a fixed
basis or an estimated basis—under some mild conditions—$\delta_{n}$ can be
controlled and the bias of the finite-dimensional projection decreases fast
enough that $E_{\Delta}$ can be consistently recovered.
### 4.1 Deterministic Guarantees for $\hat{E}_{\Delta}$
In this section, we assume that $S^{X,M},S^{Y,M}$ are good estimates of
$\Sigma^{X,M},\Sigma^{Y,M}$ and give a deterministic result in Theorem 1. Let
$n=\min\\{n_{X},n_{Y}\\}$. We assume that the following holds.
###### Assumption 1
The matrices $S^{X,M},S^{Y,M}$ are estimates of $\Sigma^{X,M},\Sigma^{Y,M}$
that satisfy
$\max\left\\{|S^{X,M}-\Sigma^{X,M}|_{\infty},|S^{Y,M}-\Sigma^{Y,M}|_{\infty}\right\\}\leq\delta_{n}.$
(30)
We also require that $E_{\Delta}$ is sparse. This does not preclude the case
where $E_{X}$ and $E_{Y}$ are dense, as long as there are not too many
differences in the precision matrices. This assumption is also required when
estimating a differential graph from vector-valued data; for example, see
Condition 1 in Zhao et al. (2014).
###### Assumption 2
There are $s$ edges in the differential graph; that is, $|E_{\Delta}|=s$ and
$s\ll p$.
We introduce the following three quantities that characterize the problem
instance and will be used in Theorem 1 below:
$\nu_{1}=\nu_{1}(M)=\min_{(j,l)\in
E_{\Delta}}\|\Delta^{M}_{jl}\|_{F},\quad\nu_{2}=\nu_{2}(M)=\max_{(j,l)\in
E^{C}_{\Delta}}\|\Delta^{M}_{jl}\|_{F},$
and
$\tau=\tau(M)=\nu_{1}(M)-\nu_{2}(M).$ (31)
Roughly speaking, $\nu_{1}(M)$ indicates the “signal strength” present when
using the $M$-dimensional representation and $\nu_{2}(M)$ measures the bias.
By Definition 1, when $X$ and $Y$ are comparable, we have $\lim\inf_{M\to
M^{\star}}\nu_{1}(M)>0$ and $\lim_{M\to M^{\star}}\nu_{2}(M)=0$. Therefore,
for a large enough $M$, we have $\tau>0$. However, a smaller $\tau$ implies
that the differential graph is harder to recover.
Before we give the deterministic result in Theorem 1, we first define
additional quantities that will be used in subsequent results. Let
$\displaystyle\sigma_{\max}$
$\displaystyle=\max\\{|\Sigma^{X,M}|_{\infty},|\Sigma^{Y,M}|_{\infty}\\},$
(32) $\displaystyle\lambda^{*}_{\min}$
$\displaystyle=\lambda_{\min}\left(\Sigma^{X,M}\right)\times\lambda_{\min}\left(\Sigma^{Y,M}\right),\text{
and }$ $\displaystyle\Gamma^{2}_{n}$
$\displaystyle=\frac{9\lambda^{2}_{n}s}{\kappa^{2}_{\mathcal{L}}}+\frac{2\lambda_{n}}{\kappa_{\mathcal{L}}}(\omega^{2}_{\mathcal{L}}+2p^{2}\nu_{2}),$
where
$\displaystyle\lambda_{n}$
$\displaystyle=\;2M\left[\left(\delta_{n}^{2}+2\delta_{n}\sigma_{\max}\right)\left|\Delta^{M}\right|_{1}+2\delta_{n}\right],$
(33) $\displaystyle\kappa_{\mathcal{L}}$
$\displaystyle=\;(1/2)\lambda^{*}_{\min}-8M^{2}s\left(\delta_{n}^{2}+2\delta_{n}\sigma_{\max}\right),$
$\displaystyle\omega_{\mathcal{L}}$
$\displaystyle=\;4Mp^{2}\nu_{2}\sqrt{\delta_{n}^{2}+2\delta_{n}\sigma_{\max}},$
and $\delta_{n}$ is defined in Assumption 1. Note that $\Gamma_{n}$—which
measures the estimation error of
$\|\hat{\Delta}^{M}-\Delta^{M}\|_{\text{F}}$—implicitly depends on
$\delta_{n}$ through $\lambda_{n}$, $\kappa_{\mathcal{L}}$, and
$\omega_{\mathcal{L}}$. We observe that $\Gamma_{n}$ decreases to zero as
$\delta_{n}$ goes to zero. The quantity $\kappa_{\mathcal{L}}$ is the maximum
restricted eigenvalue from the analysis framework of Negahban et al. (2012).
Finally, $\omega_{\mathcal{L}}$ is the tolerance parameter that comes from the
fact that $\nu_{2}$ might be larger than zero, and it will decrease to zero as
$\nu_{2}$ goes to zero.
###### Theorem 1
Given Assumptions 1 and 2, when
$\nu_{1}(M),\nu_{2}(M),\delta_{n},\lambda_{n},\sigma_{\max},M$ and $s$ satisfy
$\displaystyle
0<\Gamma_{n}<\tau/2\qquad\text{and}\qquad\delta_{n}<(1/4)\sqrt{\left(\lambda^{*}_{\min}+16M^{2}s(\sigma_{\max})^{2}\right)/\left(M^{2}s\right)}-\sigma_{\max},$
(34)
then setting
$\epsilon_{n}\in\left[\nu_{2}+\Gamma_{n},\nu_{1}-\Gamma_{n}\right)$ ensures
that $\hat{E}_{\Delta}=E_{\Delta}$.
As shown in Section 4.2, under a few additional conditions, Assumption 1 holds
for a sequence of $\delta_{n}$ that decreases to $0$ as $n$ goes to infinity.
Thus, as $M$ and $n$ both increase to infinity, we have
$\nu_{2}+\Gamma_{n}\approx 0$ and $\nu_{1}-\Gamma_{n}\approx\min_{(j,l)\in
E_{\Delta}}D_{jl}$, and we only require $0\leq\epsilon_{n}<\min_{(j,l)\in
E_{\Delta}}D_{jl}$.
### 4.2 Theoretical Guarantees for $S^{X,M}$ and $S^{Y,M}$
In this section, we prove that under some mild conditions, (30) will hold with
high probability for specific values of $\delta_{n}$. We discuss the results
in two cases: the case where the curves are fully observed and the case where
the curves are only observed at discrete time points.
#### 4.2.1 Fully Observed Curves
In this section, we discuss the case where each curve is fully observed. We
first consider the case where the bases defining the differential graph are
known in advance; that is, the exact form of $\\{e_{jk}\\}_{k\geq 1}$ for all
$j\in V$ is known. In this case, the projection score vectors $a^{X,M}_{i}$
and $a^{Y,M}_{i}$ can be exactly recovered for all $i=1,2,\dots,n$. By the
assumption that $X_{i}(t)$ and $Y_{i}(t)$ are $p$-dimensional multivariate
Gaussian processes with mean zero, we then have $a^{X,M}_{i}\sim
N(0,\Sigma^{X,M})$ and $a^{Y,M}_{i}\sim N(0,\Sigma^{Y,M})$. The following
result follows directly from standard results on the sample covariance of
multivariate Gaussian variables.
###### Theorem 2
Assume that $S^{X,M}$ and $S^{Y,M}$ are computed as in Section 3.1, except the
basis functions $\\{e_{jk}\\}_{k\geq 1}$, $j\in V$, are fixed and known in
advance. Recall that
$n=\min\\{n_{X},n_{Y}\\}\quad\text{and}\quad\sigma_{\max}=\max\\{|\Sigma^{X,M}|_{\infty},|\Sigma^{Y,M}|_{\infty}\\}.$
Fix $\iota\in(0,1]$. Suppose that $n$ is large enough so that
$\delta_{n}=\sigma_{\max}\sqrt{\frac{C_{1}}{n}\log\left(\frac{8p^{2}M^{2}}{\iota}\right)}\leq
C_{2},$
for some universal constants $C_{1},C_{2}>0$. Then (30) holds with probability
at least $1-\iota$.
Proof The proof follows directly from Lemma 1 of Ravikumar et al. (2011) and a union bound.
With fully observed curves and known basis functions, it follows from Theorem
2 that $\delta_{n}\asymp\sqrt{\log(p^{2}M^{2})/n}$ with high probability. As
assumed in Section 2.2 (and also in Qiao et al. (2019)), when
$\lambda^{X}_{jm^{\prime}}=\lambda^{Y}_{jm^{\prime}}=0$ for all $j$ and
$m^{\prime}>M$ (where $M$ is allowed to grow with $n$), then $\nu_{2}(M)=0$,
$\tau(M)=\nu_{1}(M)=\min_{(j,l)\in E_{\Delta}}D_{jl}>0$, and
$E_{\Delta}=E^{\pi}_{\Delta}$. We can recover $E_{\Delta}$ with high
probability even in the high-dimensional setting, as long as
$\max\left\\{\frac{sM^{2}\log(p^{2}M^{2})|\Delta^{M}|_{1}^{2}/((\lambda^{\star}_{\min})^{2}\tau^{2})}{n},\frac{sM^{2}\log(p^{2}M^{2})/\lambda^{\star}_{\min}}{n}\right\\}\rightarrow
0.$
Even with an infinite number of positive eigenvalues, high-dimensional
consistency is still possible for quickly increasing $\nu_{1}$ and quickly
decaying $\nu_{2}$.
We then consider the case where the curves are fully observed, but we do not
have any prior knowledge on which orthonormal function basis should be used.
In this case, as discussed in Section 2.3, we recommend using the
eigenfunctions of $K_{jj}(\cdot,*)=K^{X}_{jj}(\cdot,*)+K^{Y}_{jj}(\cdot,*)$ as
basis functions. We use FPCA to estimate the eigenfunctions of
$K_{jj}(\cdot,*)$ and make the following assumption.
###### Assumption 3
Let $\\{\lambda_{jk},\phi_{jk}(\cdot)\\}$ be the eigenpairs of
$K_{jj}(\cdot,*)=K^{X}_{jj}(\cdot,*)+K^{Y}_{jj}(\cdot,*)$, $j\in V$, and
suppose that $\lambda_{jk}$ are non-increasing in $k$.
1. (i)
Suppose $\max_{j\in{V}}\sum_{k=1}^{\infty}\lambda_{jk}<\infty$ and assume that
there exists a constant $\beta>1$ such that, for each $k\in\mathbb{N}$,
$\lambda_{jk}\asymp{k^{-\beta}}$ and $d_{jk}\lambda_{jk}=O(k)$ uniformly in
$j\in{V}$, where
$d_{jk}=2\sqrt{2}\max\\{(\lambda_{j(k-1)}-\lambda_{jk})^{-1},(\lambda_{jk}-\lambda_{j(k+1)})^{-1}\\}$,
$k\geq 2$, and $d_{j1}=2\sqrt{2}(\lambda_{j1}-\lambda_{j2})^{-1}$.
2. (ii)
For all $k$, $\phi_{jk}(\cdot)$’s are continuous on the compact set
$\mathcal{T}$ and satisfy
$\max_{j\in{V}}\sup_{s\in{\mathcal{T}}}\sup_{k\geq{1}}|\phi_{jk}(s)|_{\infty}=O(1).$
This assumption was used in Qiao et al. (2019, Condition 1). We have the
following result.
###### Theorem 3
Suppose Assumption 3 holds and the basis functions are estimated using FPCA of
$K_{jj}(\cdot,*)$ with fully observed curves. Fix $\iota\in(0,1]$. Suppose $n$
is large enough so that
$\delta_{n}=M^{1+\beta}\sqrt{\frac{\log\left(2C_{2}p^{2}M^{2}/\iota\right)}{n}}\leq
C_{1},$
for some universal constants $C_{1},C_{2}>0$. Then (30) holds with probability
at least $1-\iota$.
Proof The proof follows directly from Theorem 1 of Qiao et al. (2019) and the
fact that
$\|\hat{K}_{jj}(\cdot,*)-K_{jj}(\cdot,*)\|_{\text{HS}}\leq\|\hat{K}^{X}_{jj}(\cdot,*)-K^{X}_{jj}(\cdot,*)\|_{\text{HS}}+\|\hat{K}^{Y}_{jj}(\cdot,*)-K^{Y}_{jj}(\cdot,*)\|_{\text{HS}}$.
It follows from Theorem 3 that $\delta_{n}\asymp
M^{1+\beta}\sqrt{\log(p^{2}M^{2})/n}$ with high probability. Compared with
Theorem 2, there is an additional $M^{1+\beta}$ term that arises from FPCA
estimation error. Similarly, when
$\lambda^{X}_{jm^{\prime}}=\lambda^{Y}_{jm^{\prime}}=0$ for all $j$ and
$m^{\prime}>M$, we can recover $E_{\Delta}$ with high probability as long as
$\max\left\\{\frac{sM^{(4+2\beta)}\log(p^{2}M^{2})|\Delta^{M}|_{1}^{2}/((\lambda^{\star}_{\min})^{2}\tau^{2})}{n},\frac{sM^{(4+2\beta)}\log(p^{2}M^{2})/\lambda^{\star}_{\min}}{n}\right\\}\rightarrow
0.$
#### 4.2.2 Discretely-Observed Curves
Finally, we discuss the case when the curves are only observed at discrete
time points—possibly with measurement error. Following Chapter 1 of Kokoszka
and Reimherr (2017), we first estimate each curve from the available
observations by basis expansion; then we use the estimated curves to form
empirical covariance functions from which we estimate the eigenfunctions using
FPCA. The estimated eigenfunctions are then used to calculate the scores.
Recall the model for discretely observed functions given in (21):
$h_{ijk}=g_{ij}(t_{ijk})+\epsilon_{ijk},$
where $g_{ij}$ denotes either $X_{ij}$ or $Y_{ij}$, $\epsilon_{ijk}$ are
i.i.d. Gaussian noise with mean $0$ and variance $\sigma^{2}_{0}$. Assume that
$t_{ij1}<\dots<t_{ijT}$ for any $1\leq i\leq n$ and $1\leq j\leq p$. Note that
we do not need $X$ and $Y$ to be observed at the same time points and we use
$t_{ijk}$ to represent either $t^{X}_{ijk}$ or $t^{Y}_{ijk}$. Furthermore,
recall that we first compute a least squares estimator of $X_{ij}(\cdot)$ and
$Y_{ij}(\cdot)$ by projecting it onto the basis
$b(\cdot)=\left(b_{1}(\cdot),\ldots,b_{L}(\cdot)\right)$.
First, we assume that as we increase the number of basis functions, we can
approximate any function in $\mathbb{H}$ arbitrarily well.
###### Assumption 4
We assume that $\\{b_{l}\\}^{\infty}_{l=1}$ is a complete orthonormal system
(CONS) (See Definition 2.4.11 of Hsing and Eubank, 2015) of $\mathbb{H}$, that
is, $\overline{{\rm Span}\left(\\{b_{l}\\}^{\infty}_{l=1}\right)}=\mathbb{H}$.
Assumption 4 requires that the basis functions are orthonormal. When this
assumption is violated—for example, when using the B-splines basis—we can
always first use an orthonormalization process, such as Gram-Schmidt, to
convert the basis to an orthonormal one. For B-splines, there are many
algorithms that can efficiently provide orthonormalization (Liu et al., 2019).
To establish theoretical guarantees for the least squares estimator, we
require smoothness in both the curves we are trying to estimate as well as the
basis functions we use.
###### Assumption 5
We assume that the basis functions $\\{b_{l}(\cdot)\\}^{\infty}_{l=1}$ satisfy
the following conditions.
$D_{0,b}\coloneqq\sup_{l\geq 1}\sup_{t\in\mathcal{T}}\lvert
b_{l}(t)\rvert<\infty,\qquad D_{1,b}(l)\coloneqq\sup_{t\in\mathcal{T}}\lvert
b^{\prime}_{l}(t)\rvert<\infty,\qquad D_{1,b,L}\coloneqq\max_{1\leq l\leq
L}D_{1,b}(l).$ (35)
We also require that the curves $g_{ij}$ satisfy the following smoothness
condition:
$\max_{1\leq j\leq p}\sum^{\infty}_{m=1}\mathbb{E}\left[\left(\langle
g_{ij},b_{m}\rangle\right)^{2}\right]D^{2}_{1,b}(m)<\infty.$ (36)
To better understand Assumption 5, we use the Fourier basis as an example. Let
$\mathcal{T}=[0,1]$ and $b_{m}(t)=\sqrt{2}\cos(2\pi mt)$, $0\leq t\leq 1$ and
$m\in\mathbb{N}$. Thus, $\\{b_{m}(t)\\}^{\infty}_{m=0}$ then constitutes an
orthonormal basis of $\mathbb{H}=\mathcal{L}^{2}[0,1]$. We then have
$b^{\prime}_{m}(t)=-2\sqrt{2}\pi m\sin(2\pi mt)$, $D_{0,b}=\sqrt{2}$,
$D_{1,b}(m)=2\sqrt{2}\pi m$ and $D_{1,b,L}=2\sqrt{2}\pi L$. In this case, (36)
is equivalent to
$\max_{1\leq j\leq p}\sum^{\infty}_{m=1}\mathbb{E}\left[\left(\langle
g_{ij},b_{m}\rangle\right)^{2}\right]m^{2}<\infty.$
On the other hand, $g_{ij}(t)=\sum^{\infty}_{m=1}\langle g_{ij},b_{m}\rangle
b_{m}(t)$ and $g^{\prime}_{ij}(t)=\sum^{\infty}_{m=1}\langle
g_{ij},b_{m}\rangle b^{\prime}_{m}(t)$. Suppose that
$\mathbb{E}\left[\|g^{\prime}_{ij}\|^{2}\right]<\infty$. Then
$\mathbb{E}\left[\|g^{\prime}_{ij}\|^{2}\right]=\sum^{\infty}_{m=1}\mathbb{E}\left[\left(\langle
g_{ij},b_{m}\rangle\right)^{2}\right]\|b^{\prime}_{m}\|^{2}\asymp\sum^{\infty}_{m=1}\mathbb{E}\left[\left(\langle
g_{ij},b_{m}\rangle\right)^{2}\right]m^{2}.$ (37)
Therefore, $\max_{1\leq j\leq
p}\mathbb{E}\left[\|g^{\prime}_{ij}\|^{2}\right]<\infty$, which is a commonly
used assumption in nonparameteric statistics (e.g., Section 7.2 of Wasserman
(2006)), implies (36).
Finally, we require each function to be observed at time points that are
“evenly spaced.” Formally, we require the following assumption.
###### Assumption 6
The observation time points $\\{t_{ijk}:1\leq i\leq n,1\leq j\leq p,1\leq
k\leq T\\}$ satisfy
$\max_{1\leq i\leq n}\max_{1\leq j\leq p}\max_{1\leq k\leq
T+1}\left|\frac{t_{ijk}-t_{ij(k-1)}}{\lvert\mathcal{T}\rvert}-\frac{1}{T}\right|\leq\frac{\zeta_{0}}{T^{2}},$
(38)
where $t_{ij0}$ and $t_{ij(T+1)}$ are endpoints of $\mathcal{T}$ for any
$1\leq i\leq n$, $1\leq j\leq p$, and $\zeta_{0}$ is a positive constant that
does not depend on $i$ or $j$.
Any $g_{ij}$ can be decomposed into
$g_{ij}=g^{\shortparallel}_{ij}+g^{\bot}_{ij}$, where
$g_{ij}^{\shortparallel}\in{\rm Span}(b)$ and $g_{ij}^{\bot}\in{\rm
Span}(b)^{\bot}$. We denote the eigenvalues of the covariance operator of
$g_{ij}$ as $\\{\lambda_{jk}\\}_{k\geq 1}$ and
$\lambda_{j0}=\sum^{\infty}_{k=1}\lambda_{jk}$; and denote the eigenvalues of
the covariance operator of $g^{\bot}_{ij}$ as
$\\{\lambda^{\bot}_{jk}\\}_{k\geq 1}$ and
$\lambda^{\bot}_{j0}=\sum^{\infty}_{k=1}\lambda^{\bot}_{jk}$. Note that under
Assumption 3, we have $\max_{1\leq j\leq p}\lambda_{j0}<\infty$. Let
$1<\lambda_{0,\max}<\infty$ be a constant such that $\max_{1\leq j\leq
p}\lambda_{j0}\leq\lambda_{0,\max}$. Let $B_{ij}$ be the design matrix of
$g_{ij}$ as defined in (23) and let $\lambda^{B}_{\min}=\min_{1\leq i\leq
n,1\leq j\leq p}\left\\{\lambda_{\min}(B^{\top}_{ij}B_{ij})\right\\}$. We
define
$\displaystyle\tilde{\psi}_{1}(T,L)=\frac{\sigma_{0}L}{\sqrt{\lambda^{B}_{\min}}},\quad\tilde{\psi}_{2}(T,L)=\frac{L^{2}}{(\lambda^{B}_{\min})^{2}}\left(\lambda_{0,\max}\left(\tilde{c}_{1}D^{2}_{1,b,L}+\tilde{c}_{2}\right)\tilde{\psi}_{3}(L)+\tilde{c}_{1}\tilde{\psi}_{4}(L)\right),$
(39) $\displaystyle\tilde{\psi}_{3}(L)\;=\;\max_{1\leq j\leq
p}\left(\lambda^{\bot}_{j0}/\lambda_{j0}\right),\quad\tilde{\psi}_{4}(L)=\max_{1\leq
j\leq p}\sum_{m>L}\mathbb{E}\left[\left(\langle
g_{ij},b_{m}\rangle\right)^{2}\right]D^{2}_{1,b}(m),$ (40)
$\displaystyle\Phi(T,L)=\min\left\\{1/\tilde{\psi}_{1}(T,L),1/\sqrt{\tilde{\psi}_{3}(L)}\right\\},$
(41)
where $\tilde{c}_{1}=18D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}$ and
$\tilde{c_{2}}=36D^{4}_{0,b}(2\zeta_{0}+1)^{2}$.
We now use superscripts or subscripts to indicate the specific quantities for
$X$ and $Y$. In this way, we define $L_{X}$, $L_{Y}$, $T_{X}$, $T_{Y}$,
$\tilde{\psi}^{X}_{1}$-$\tilde{\psi}^{X}_{4}$,
$\tilde{\psi}^{Y}_{1}$-$\tilde{\psi}^{Y}_{4}$, and $\Phi^{X},\Phi^{Y}$. In
addition, let $T=\min\\{T_{X},T_{Y}\\}$, $L=\min\\{L_{X},L_{Y}\\}$,
$\bar{\psi}_{k}=\max\\{\tilde{\psi}^{X}_{k},\tilde{\psi}^{Y}_{k}\\}$,
$k=1,\cdots,4$, $\bar{\Phi}=\min\\{\Phi^{X},\Phi^{Y}\\}$, and let $n$, $\beta$
be defined as in Section 4.1.
###### Theorem 4
Assume the observation model given in (21). Suppose Assumption 3 holds, and
Assumption 4-6 hold for both $X$ and $Y$. Suppose $T$ and $L$ are large enough
so that
$\displaystyle\bar{\psi}_{1}(T,L)\leq\gamma_{1}\frac{\delta_{n}}{M^{1+\beta}},\quad\bar{\psi}_{3}(L)\leq\gamma_{3}\frac{\delta_{n}^{2}}{M^{2+2\beta}}$
(42)
where
$\delta_{n}=\max\left\\{\frac{M^{1+\beta}\log\left(4\bar{C}_{1}np/\iota\right)}{\bar{C}_{2}\bar{\Phi}(T,L)},M^{1+\beta}\sqrt{\frac{1}{C_{6}}\bar{\psi}_{2}(T,L)\log\left(\frac{C_{5}npL}{\iota}\right)},\right.\\\
\left.M^{1+\beta}\sqrt{\frac{\log\left(4\bar{C}_{3}p^{2}M^{2}/\iota\right)}{\bar{C}_{4}n}}\right\\},$
(43)
$\bar{C}_{1}=\max\\{C^{X}_{1},C^{Y}_{1}\\}$,
$\bar{C}_{2}=\min\\{C^{X}_{2},C^{Y}_{2}\\}$,
$\bar{C}_{3}=\max\\{C^{X}_{3},C^{Y}_{3}\\}$,
$\bar{C}_{4}=\min\\{C^{X}_{4},C^{Y}_{4}\\}$,
$\bar{C}_{5}=\max\\{C^{X}_{5},C^{Y}_{5}\\}$,
$\bar{C}_{6}=\min\\{C^{X}_{6},C^{Y}_{6}\\}$. $\gamma^{X}_{k}$,
$\gamma^{Y}_{k}$, $k=1,2,3$, and $C^{X}_{k}$, $C^{Y}_{k}$, $k=1,\cdots,6$ are
constants that do not depend on $n$, $p$, and $M$. Then
$\max\left\\{|S^{X,M}-\Sigma^{X,M}|_{\infty},|S^{Y,M}-\Sigma^{Y,M}|_{\infty}\right\\}\leq\delta_{n}$
(44)
holds with probability at least $1-\iota$.
Proof See Appendix B.5.
The rate $\delta_{n}$ in Theorem 4 is comprised of three terms. The first two
terms correspond to the error incurred by measuring the curves at discrete
locations and are approximation errors. The third term, which also appears in
Theorem 3, is the sampling error.
We provide some intuition on how $\tilde{\psi}_{1}$, $\tilde{\psi}_{2}$,
$\tilde{\psi}_{3}$, and $\tilde{\psi}_{4}$ depend on $T$ and $L$. Note that we
choose an orthonormal basis. Then as $T\to\infty$, we have
$\displaystyle\frac{1}{T}B^{\top}_{ij}B_{ij}$
$\displaystyle=\frac{1}{T}\sum^{T}_{k=1}\left[\begin{matrix}b^{2}_{1}(t_{ijk})&b_{1}(t_{ijk})b_{2}(t_{ijk})&\cdots&b_{1}(t_{ijk})b_{L}(t_{ijk})\\\
\vdots&\vdots&\ddots&\vdots\\\
b_{L}(t_{ijk})b_{1}(t_{ijk})&b_{L}(t_{ijk})b_{2}(t_{ijk})&\cdots&b^{2}_{L}(t_{ijk})\end{matrix}\right]$
$\displaystyle\approx\left[\begin{matrix}\|b_{1}\|^{2}&\langle
b_{1},b_{2}\rangle&\cdots&\langle b_{1},b_{L}\rangle\\\
\vdots&\vdots&&\vdots\\\ \langle b_{L},b_{1}\rangle&\langle
b_{L},b_{2}\rangle&\cdots&\|b_{L}\|^{2}\end{matrix}\right]$
$\displaystyle=\left[\begin{matrix}1&0&\cdots&0\\\ \vdots&\vdots&&\vdots\\\
0&0&\cdots&1\end{matrix}\right].$
Thus, as $T$ grows, we would expect
$\lambda_{\min}(B^{\top}_{ij}B_{ij})\approx T$ for any $1\leq j\leq p$ and
$1\leq i\leq n$. This implies that $\tilde{\psi}_{1}(T,L)\approx L/\sqrt{T}$
and
$\tilde{\psi}_{2}(T,L)\approx\left(D^{2}_{1,b,L}\tilde{\psi}_{3}(L)+\tilde{\psi}_{4}(L)\right)L^{2}/T^{2}$.
Furthermore, $D^{2}_{1,b,L}\asymp L^{2}$ when we use Fourier basis.
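This approximation is easy to confirm numerically for the cosine basis; the following sketch (illustrative values of $T$ and $L$) shows $(1/T)B^{\top}_{ij}B_{ij}\approx I_{L}$, and hence $\lambda_{\min}(B^{\top}_{ij}B_{ij})\approx T$.

```python
import numpy as np

T, L = 1000, 8
t = (np.arange(T) + 0.5) / T                   # evenly spaced time points
B = np.ones((T, L))
for l in range(1, L):
    B[:, l] = np.sqrt(2.0) * np.cos(np.pi * l * t)

G = B.T @ B / T
print(np.abs(G - np.eye(L)).max())             # essentially 0
print(np.linalg.eigvalsh(B.T @ B)[0] / T)      # close to 1, so lambda_min ~ T
```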
To understand $\tilde{\psi}_{3}(L)$ and $\tilde{\psi}_{4}(L)$, note that
$\lambda^{\bot}_{j0}=\mathbb{E}[\|g^{\bot}_{ij}\|^{2}]=\mathbb{E}_{g_{ij}}[\mathbb{E}_{\epsilon}[\|g^{\bot}_{ij}\|^{2}\mid
g_{ij}]]$. Under Assumption 4, $\lambda^{\bot}_{j0}\to 0$ as $L\to\infty$;
however, the speed at which $\lambda^{\bot}_{j0}$ goes to zero will depend on
$\mathbb{H}$ and the choice of the basis functions. For example, for a fixed
$g_{ij}$, by well known approximation results (see, for example, Barron and
Sheu (1991)), if $g_{ij}$ has $r$-th continuous and square integrable
derivatives, $\|g^{\bot}_{ij}\|^{2}\approx 1/L^{r}$ for frequently used bases
such as the Legendre polynomials, B-splines, and Fourier basis. Thus, roughly
speaking, we should have $\tilde{\psi}_{3}(L)\approx 1/L^{r}$ when
$\mathbb{H}$ is a Sobolev space of order $r$. When $g_{ij}$ is an infinitely
differentiable function and all derivatives can be uniformly bounded, then
$\|g^{\bot}_{ij}\|^{2}\approx\exp(-L)$ and thus
$\tilde{\psi}_{3}(L)\approx\exp(-L)$. Similarly, we have
$\tilde{\psi}_{4}(L)\approx 1/L^{r-1}$ if $g_{ij}$ has $r$-th continuous and
square integrable derivatives; and $\tilde{\psi}_{4}(L)\approx\exp(-L)$ if
$g_{ij}$ is an infinitely differentiable function and all derivatives can be
uniformly bounded.
To roughly show how $M$, $T$, $L$ and $n$ may co-vary, we assume that $p$ and
$s$ are fixed, and all elements of $\mathbb{H}$ have $r$-th continuous and
square integrable derivatives. Then FuDGE will recover the differential graph
with high probability, if $M\ll n^{1/(2+2\beta)}$, $\sqrt{T}/L\gg
M^{1+\beta}$, $T\gg L^{2-r/2}$, and $L\gg M^{(1+\beta)/r}$.
As pointed out by a reviewer, the noise term in (21) will create a nugget
effect in the covariance, meaning that
$\text{Var}(h_{ijk})=\text{Var}(g_{ij}(t_{ijk}))+\sigma^{2}_{0}$. This nugget
effect leads to bias in the estimated eigenvalues (variances of the scores).
In our theorem, the nugget effect is reflected by $\sigma_{0}$ in
$\tilde{\psi}_{1}$. When $\sigma_{0}$ is large, adding a regularization term
when estimating the eigenvalues can improve the estimation of FPCA scores and
their covariance matrices (see Chapter 6 of Hsing and Eubank (2015)). However,
adding a regularization term increases the number of tuning parameters that
need to be chosen. An alternative approach to estimating the covariance matrix
is through local polynomial regression (Zhang and Wang, 2016). Since the focus
of the paper is on the estimation of differential functional graphical models,
we do not explore ways to improve the estimation of FPCA scores. However, we
recognize that there are alternative approaches that can perform better in
some cases.
## 5 Joint Functional Graphical Lasso
In this section, we introduce two variants of a Joint Functional Graphical
Lasso (JFGL) estimator which we compare empirically to our proposed FuDGE
procedure in Section 6.1. Danaher et al. (2014) proposed the Joint Graphical
Lasso (JGL) to estimate multiple related Gaussian graphical models from
different classes simultaneously. Given $Q\geq 2$ data sets, where the $q$-th
data set consists of $n_{q}$ independent random vectors drawn from
$N(\mu_{q},\Sigma_{q})$, JGL simultaneously estimates
$\\{\Theta\\}=\\{\Theta^{(1)},\Theta^{(2)},\dots,\Theta^{(Q)}\\}$, where
$\Theta^{(q)}=\Sigma^{-1}_{q}$ is the precision matrix of the $q$-th data set.
Specifically, JGL constructs an estimator
$\\{\hat{\Theta}\\}=\\{\hat{\Theta}^{(1)},\hat{\Theta}^{(2)},\dots,\hat{\Theta}^{(Q)}\\}$
by minimizing the following penalized negative log-likelihood:
$\\{\hat{\Theta}\\}=\operatorname*{arg\,min}_{\\{\Theta\\}}\left\\{-\sum^{Q}_{q=1}n_{q}\left(\log\text{det}\Theta^{(q)}-\text{trace}\left(S^{(q)}\Theta^{(q)}\right)\right)+P(\\{\Theta\\})\right\\},$
(45)
where $S^{(q)}$ is the sample covariance of the $q$-th data set and
$P(\\{\Theta\\})$ is a penalty function. The fused graphical lasso (FGL) is
obtained by setting
$P(\\{\Theta\\})=\lambda_{1}\sum^{Q}_{q=1}\sum_{i\neq
j}|\Theta^{(q)}_{ij}|+\lambda_{2}\sum_{q<q^{\prime}}\sum_{i\neq
j}|\Theta^{(q)}_{ij}-\Theta^{(q^{\prime})}_{ij}|,$ (46)
while the group graphical lasso (GGL) is obtained by setting
$P(\\{\Theta\\})=\lambda_{1}\sum^{Q}_{q=1}\sum_{i\neq
j}|\Theta^{(q)}_{ij}|+\lambda_{2}\sum_{i\neq
j}\sqrt{\sum^{Q}_{q=1}\left(\Theta^{(q)}_{ij}\right)^{2}}.$ (47)
The terms $\lambda_{1}$ and $\lambda_{2}$ are non-negative tuning parameters,
while $\Theta^{(q)}_{ij}$ denotes the $(i,j)$-th entry of $\Theta^{(q)}$. For
both penalties, the first term is the lasso penalty, which encourages sparsity
for the off-diagonal entries of all precision matrices; however, FGL and GGL
differ in the second term. For FGL, the second term encourages the off-
diagonal entries of precision matrices among all classes to be similar, which
means that it encourages not only similar network structure, but also similar
edge values. For GGL, the second term is a group lasso penalty, which
encourages the support of the precision matrices to be similar, but allows the
specific values to differ.
A similar approach can be used for estimating the precision matrix of the
score vectors. In contrast to the direct estimation procedure proposed in
Section 3, we could first estimate $\hat{\Theta}^{X,M}$ and
$\hat{\Theta}^{Y,M}$ using a joint graphical lasso objective, and then take
the difference to estimate $\Delta$.
In the functional graphical model setting, we are interested in the block
sparsity, so we modify the entry-wise penalties to a block-wise penalty.
Specifically, we propose solving the objective function in (45), where
$S^{(q)}$ and $\Theta^{(q)}$ denote the sample covariance and estimated
precision of the projection scores for the $q$-th group. Note that now
$S^{(q)}$, $\Theta^{(q)}$ and $\hat{\Theta}^{(q)}$, $q=1,\ldots,Q$ are all
$pM\times pM$ matrices. Similar to the GGL and FGL procedures, we define the
Grouped Functional Graphical Lasso (GFGL) and Fused Functional Graphical Lasso
(FFGL) penalties for functional graphs. Specifically, let $\Theta^{(q)}_{jl}$
denote the $(j,l)$-th $M\times M$ block matrix; then the GFGL penalty is
$P(\\{\Theta\\})=\lambda_{1}\sum^{Q}_{q=1}\sum_{j\neq
l}\|\Theta^{(q)}_{jl}\|_{\text{F}}+\lambda_{2}\sum_{j\neq
l}\sqrt{\sum^{Q}_{q=1}\|\Theta^{(q)}_{jl}\|^{2}_{\text{F}}},$ (48)
where $\lambda_{1}$ and $\lambda_{2}$ are non-negative tuning parameters. The
FFGL penalty can be defined in two ways. The first way is to use the Frobenius
norm for the second term:
$P(\\{\Theta\\})=\lambda_{1}\sum^{Q}_{q=1}\sum_{j\neq
l}\|\Theta^{(q)}_{jl}\|_{\text{F}}+\lambda_{2}\sum_{q<q^{\prime}}\sum_{j,l}\|\Theta^{(q)}_{jl}-\Theta^{(q^{\prime})}_{jl}\|_{\text{F}}.$
(49)
The second way is to keep the element-wise $L_{1}$ norm as in FGL:
$P(\\{\Theta\\})=\lambda_{1}\sum^{Q}_{q=1}\sum_{j\neq
l}\|\Theta^{(q)}_{jl}\|_{\text{F}}+\lambda_{2}\sum_{q<q^{\prime}}\sum_{j,l}|\Theta^{(q)}_{jl}-\Theta^{(q^{\prime})}_{jl}|_{1},$
(50)
where $\lambda_{1}$ and $\lambda_{2}$ are non-negative tuning parameters.
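For reference, evaluating these block-wise penalties is straightforward; the sketch below computes the GFGL penalty (48) for a list of $pM\times pM$ precision matrices (FFGL and FFGL2 are analogous). Function names are illustrative.

```python
import numpy as np

def block_norms(Theta, p, M):
    """p x p matrix of Frobenius norms of the M x M blocks of Theta."""
    N = np.empty((p, p))
    for j in range(p):
        for l in range(p):
            N[j, l] = np.linalg.norm(Theta[j*M:(j+1)*M, l*M:(l+1)*M])
    return N

def gfgl_penalty(Thetas, p, M, lam1, lam2):
    """GFGL penalty of Eq. (48) for Q precision matrices."""
    norms = np.stack([block_norms(Th, p, M) for Th in Thetas])  # Q x p x p
    off = ~np.eye(p, dtype=bool)               # off-diagonal blocks, j != l
    return (lam1 * norms[:, off].sum()
            + lam2 * np.sqrt((norms ** 2).sum(axis=0))[off].sum())
```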
The Joint Functional Graphical Lasso accommodates an arbitrary $Q$. However,
when estimating the functional differential graph, we set $Q=2$. We will refer
to (49) as FFGL and to (50) as FFGL2. The algorithms for solving GFGL, FFGL,
and FFGL2 are given in Appendix A.
## 6 Experiments
We examine the performance of FuDGE using both simulations and a real data
set. Code to replicate the simulations is available at
https://github.com/boxinz17/FuDGE.
### 6.1 Simulations
Given a graph $G_{X}$, we generate samples of $X$ such that
$X_{ij}(t)=b^{\prime}(t)^{\top}\delta^{X}_{ij}$. The coefficients
$\delta^{X}_{i}=((\delta^{X}_{i1})^{\top},\ldots,(\delta^{X}_{ip})^{\top})^{\top}\in{\mathbb{R}^{mp}}$
are drawn from $N\left(0,(\Omega^{X})^{-1}\right)$ where $\Omega^{X}$ is
described below. In all cases, $b^{\prime}(t)$ is an $m$-dimensional basis
with disjoint support over $[0,1]$ such that for $k=1,\ldots,m$:
$b^{\prime}_{k}(t)=\begin{cases}\cos\left(10\pi\left(t-(2k-1)/10\right)\right)+1&\text{if
}(k-1)/m\leq{t}<k/m;\\\ 0&\text{otherwise}.\end{cases}$ (51)
To generate noisy observations at discrete time points, we sample data
$h^{X}_{ijk}=X_{ij}(t_{k})+e_{ijk},\quad e_{ijk}\sim N(0,0.5^{2}),$
for $200$ evenly spaced time points $0=t_{1}\leq\ldots\leq t_{200}=1$.
$Y_{ij}(t)$ and $h^{Y}_{ijk}$ are sampled in an analogous procedure. We use
$m=5$ for the experiments below, except for the simulation where we explore
the effect of $m$ on empirical performance.
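A compact numpy sketch of this data-generating process is given below (illustrative only; note that the constants in (51) as written correspond to $m=5$, and $\Omega$ is assumed positive definite).

```python
import numpy as np

def bprime(t, m):
    """The m disjoint-support basis functions of Eq. (51); the printed
    constants 10*pi and (2k-1)/10 match m = 5."""
    t = np.asarray(t)
    B = np.zeros((t.size, m))
    for k in range(1, m + 1):
        on = ((k - 1) / m <= t) & (t < k / m)
        B[on, k - 1] = np.cos(10 * np.pi * (t[on] - (2*k - 1) / 10)) + 1
    return B

def simulate_group(Omega, n, p, m, T=200, sd=0.5, rng=None):
    """n curves with coefficients delta_i ~ N(0, Omega^{-1}), observed
    with N(0, sd^2) noise at T evenly spaced time points."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, T)
    Bt = bprime(t, m)                                     # T x m
    delta = rng.multivariate_normal(np.zeros(p * m), np.linalg.inv(Omega), n)
    X = delta.reshape(n, p, m) @ Bt.T                     # n x p x T curves
    return X + rng.normal(0.0, sd, size=X.shape)          # noisy h_ijk
```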
We consider three different simulation settings for constructing $G_{X}$ and
$G_{Y}$. In each setting, we let $n_{X}=n_{Y}=100$ and $p=30,60,90,120$, and
we replicate the procedure 30 times for each $p$ and model setting.
Model 1: This model is similar to the setting considered in Zhao et al.
(2014), but modified to the functional case. We generate the support of
$\Omega^{X}$ according to a graph with $p(p-1)/10$ edges and a power-law
degree distribution with an expected power parameter of 2. Although the graph
is sparse with only 20% of all possible edges present, the power-law structure
mimics certain real-world graphs by creating hub nodes with large degree
(Newman, 2003). For each nonzero block, we set
$\Omega^{X}_{jl}=\delta^{\prime}I_{5}$, where $\delta^{\prime}$ is sampled
uniformly from $\pm[0.2,0.5]$. To ensure positive definiteness, we further
scale each off-diagonal block by $1/2,1/3,1/4,1/5$ for $p=30,60,90,120$
respectively. Each diagonal element of $\Omega^{X}$ is set to $1$ and the
matrix is symmetrized by averaging it with its transpose. To get $\Omega^{Y}$,
we first select the top 2 hub nodes in $G_{X}$ (i.e., the nodes with top 2
largest degree), and for each hub node we select the top (by magnitude) 20% of
edges. For each selected edge, we set $\Omega^{Y}_{jl}=\Omega^{X}_{jl}+W$
where $W_{kk^{\prime}}=0$ for $|k-k^{\prime}|\leq{2}$, and $W_{kk^{\prime}}=c$
otherwise, where $c$ is generated in the same way as $\delta^{\prime}$. For
all other blocks, $\Omega^{Y}_{jl}=\Omega^{X}_{jl}$.
Model 2: We first generate a tridiagonal block matrix $\Omega^{*}_{X}$ with
$\Omega^{*}_{X,jj}=I_{5}$,
$\Omega^{*}_{X,j,j+1}=\Omega^{*}_{X,j+1,j}=0.6I_{5}$, and
$\Omega^{*}_{X,j,j+2}=\Omega^{*}_{X,j+2,j}=0.4I_{5}$ for $j=1,\ldots,p$. All
other blocks are set to 0. We form $G_{Y}$ by adding four edges to $G_{X}$.
Specifically, we first let $\Omega^{*}_{Y,jl}=\Omega^{*}_{X,jl}$ for all
blocks, then for $j=1,2,3,4$, we set
$\Omega^{*}_{Y,j,j+3}=\Omega^{*}_{Y,j+3,j}=W$, where $W_{kk^{\prime}}=0.1$ for
all $1\leq k,k^{\prime}\leq 5$. Finally, we set
$\Omega^{X}=\Omega^{*}_{X}+\delta I$, $\Omega^{Y}=\Omega^{*}_{Y}+\delta I$,
where
$\delta=\max\left\\{|\min(\lambda_{\min}(\Omega^{*}_{X}),0)|,|\min(\lambda_{\min}(\Omega^{*}_{Y}),0)|\right\\}+0.05$.
Model 3: We generate $\Omega^{*}_{X}$ according to an Erdős–Rényi graph. We
first set $\Omega^{*}_{X,jj}=I_{5}$. With probability $.8$, we set
$\Omega^{*}_{X,jl}=\Omega^{*}_{X,lj}=0.1I_{5}$, and set it to $0$ otherwise.
Thus, we expect 80% of all possible edges to be present. Then, we form $G_{Y}$
by randomly adding $s$ new edges to $G_{X}$, where $s=3$ for $p=30$, $s=4$ for
$p=60$, $s=5$ for $p=90$, and $s=6$ for $p=120$. We set each corresponding
block $\Omega^{*}_{Y,jl}=W$, where $W_{kk^{\prime}}=0$ when
$|k-k^{\prime}|\leq{1}$ and $W_{kk^{\prime}}=c$ otherwise. We let $c=2/5$ for
$p=30$, $c=4/15$ for $p=60$, $c=1/5$ for $p=90$, and $c=4/25$ for $p=120$.
Finally, we set $\Omega^{X}=\Omega^{*}_{X}+\delta I$,
$\Omega^{Y}=\Omega^{*}_{Y}+\delta I$, where
$\delta=\max\left\\{|\min(\lambda_{\min}(\Omega^{*}_{X}),0)|,|\min(\lambda_{\min}(\Omega^{*}_{Y}),0)|\right\\}+0.05$.
Figure 4: Average ROC curves across 30 simulations. Different columns
correspond to different models, different rows correspond to different
dimensions.
We compare FuDGE with four competing methods. The first competing method
(denoted by _multiple_ in Figure 4) ignores the functional nature of the data.
We select 15 equally spaced time points, and at each time point, we implement
a direct difference estimation procedure (Zhao et al., 2014) to estimate the
graph at that time point. Specifically, for each $t$, $X_{i}(t)$ and
$Y_{i}(t)$ are simply $p$-dimensional random vectors, and we use their sample
covariances in (25) to obtain a $p\times p$ matrix $\hat{\Delta}$. This
produces 15 differential graphs, and we use a majority vote to form a single
differential graph. The ROC curve is obtained by changing the $L_{1}$ penalty,
$\lambda_{n}$, used for all time points.
The other three competing methods all estimate two functional graphical models
using either the Joint Graphical Lasso or Functional Joint Graphical Lasso
introduced in Section 5. For each method, we first estimate the sample
covariances of the FPCA scores for $X$ and $Y$. The second competing method
(denoted as _FGL_) ignores the block structure in precision matrices and
applies the fused graphical lasso method directly. The third and fourth
competing methods do account for the block structure and apply FFGL and FFGL2
defined in Section 5. To draw an ROC curve, we follow the same approach as in
Zhao et al. (2014). We first fix $\lambda_{1}=0.1$, which controls the overall
sparsity in each graph; we then form an ROC curve by varying across
$\lambda_{2}$, which controls the similarity between two graphs.
For each setting and method, the ROC curve averaged across the $30$
replications is shown in Figure 4. We see that FuDGE clearly has the best
overall performance in recovering the support of the differential graph for
all cases. We also note that the explicit consideration of block structure in
the joint graphical methods does not seem to make a substantial difference as
the performance of FGL is comparable to FFGL and FFGL2.
##### The effect of the number of basis functions:
To examine how the estimation accuracy is associated with the dimension of the
functional data, we repeat the experiment under Model 1 with $p=30$ and vary
the number of basis functions used to generate the data in (51). In each case,
the number of principal components selected by the cross-validation is $M=4$.
In Figure 5, we see that as the gap between the true dimension $m$ and the
number of dimensions used $M$ increases, the performance of FuDGE degrades
slightly, but is still relatively robust. This is because the FPCA procedure
is data adaptive and produces an eigenfunction basis that approximates the
true functions well with a relatively small number of basis functions.
Figure 5: ROC curves for Model 1 with $p=30$ and changing number of basis
functions $m$. Each curve is drawn by averaging across 30 simulations. The
number of eigenfunctions, $M$, selected by the cross-validation is 4 in each
replication.
### 6.2 Neuroscience Application
We apply our method to electroencephalogram (EEG) data obtained from a study
(Zhang et al., 1995; Ingber, 1997), which included 122 total subjects; 77
individuals with alcohol use disorder (AUD) and 45 in the control group.
Specifically, the EEG data was measured by placing $p=64$ electrodes on
various locations on the subject’s scalp and measuring voltage values across
time. We follow the preprocessing procedure in Knyazev (2007) and Zhu et al.
(2016), which filters the EEG signals at $\alpha$ frequency bands between 8
and 12.5 Hz.
Qiao et al. (2019) estimate separate functional graphs for each group, but we
directly estimate the differential graph using FuDGE. We choose $\lambda_{n}$
so that the estimated differential graph has approximately 1% of possible
edges. The estimated edges of the differential graph are shown in Figure 6.
In this setting, an edge in the differential graph suggests that the
communication pattern between two different regions of the brain may be
affected by alcohol use disorder. However, the differential graph does not
indicate exactly how the communication pattern has changed. For instance, the
edge between P4 and P6 suggests that AUD affects the communication pattern
between those two regions; however, it could be that those two regions are
associated (conditionally) in the control group, but not the AUD group or vice
versa. It could also be that the two regions are associated (conditionally) in
both groups, but the conditional covariance is different. Nonetheless, many
interesting observations can be gleaned from the results and may generate
interesting hypotheses that could be investigated more thoroughly in an
experimental setting.
Figure 6: Estimated differential graph for EEG data. The anterior region is
the top of the figure and the posterior region is the bottom of the figure.
We give two specific observations. First, edges are generally between nodes
located in the same region—either the anterior region or the posterior
region—and there is no edge that crosses between regions. This observation is
consistent with the result in Qiao et al. (2019) where there are no
connections between the anterior and posterior regions for both groups. We
also note that electrode X, lying in the middle left region, has a high degree
in the estimated differential graph. While there is no direct connection
between the anterior and posterior regions, this region may play a role in
helping the two parts communicate and may be heavily affected by AUD.
Similarly, P08 in the anterior region also has a high degree and is connected
to other nodes in the anterior region, which may indicate that this region can
be an information exchange center for anterior regions, which, at the same
time, may be heavily affected by AUD.
## 7 Discussion
We proposed a method to directly estimate the differential graph for
functional graphical models. In certain settings, direct estimation allows for
the differential graph to be recovered consistently, even if each underlying
graph cannot be consistently recovered. Experiments on simulated data also
show that preserving the functional nature of the data rather than treating
the data as multivariate scalars can result in better estimation of the
differential graph.
A key step in the procedure is first representing the functions with an
$M$-dimensional basis using FPCA. Definition 1 ensures that there exists some
$M$ large enough so that the signal, $\nu_{1}(M)$, is larger than the bias,
$\nu_{2}(M)$, due to using a finite dimensional representation. Intuitively,
$\tau=\nu_{1}(M)-\nu_{2}(M)$ is tied to the eigenvalue decay rate; however, we
defer derivation of the explicit connection for future work. In addition, we
have provided a method for direct estimation of the differential graph, but
the development of methods that allow for inference and hypothesis testing in
functional differential graphs would be fruitful avenues for future work. For
example, Kim et al. (2019) has developed inferential tools for high-
dimensional Markov networks, and future work may extend their results to the
functional graph setting.
## Acknowledgements
We thank the associate editor and reviewers for their helpful feedback which
has greatly improved the manuscript. This work is partially supported by the
William S. Fishman Faculty Research Fund at the University of Chicago Booth
School of Business. This work was completed in part with resources provided by
the University of Chicago Research Computing Center.
## Appendix A Derivation of Optimization Algorithm
In this section, we derive the key steps for the optimization algorithms.
### A.1 Optimization Algorithm for FuDGE
We derive the closed-form updates for the proximal method stated in (29). In
particular, recall that for all $1\leq{j,l}\leq{p}$, we have
$\Delta^{\text{new}}_{jl}\;=\;\left[\left(\|A^{\text{old}}_{jl}\|_{F}-\lambda_{n}\eta\right)/\|A^{\text{old}}_{jl}\|_{F}\right]_{+}\times
A^{\text{old}}_{jl},$
where $A^{\text{old}}=\Delta^{\text{old}}-\eta\nabla L(\Delta^{\text{old}})$
and $x_{+}=\max\\{0,x\\}$, $x\in{\mathbb{R}}$ represents the positive part of
$x$.
Proof [Proof of (29)] Let $A^{\text{old}}=\Delta^{\text{old}}-\eta\nabla
L(\Delta^{\text{old}})$ and let $f_{jl}$ denote the loss decomposed over each
$j,l$ block so that
${}f_{jl}(\Delta_{jl})\;=\;\frac{1}{2\lambda_{n}\eta}\|\Delta_{jl}-A^{\text{old}}_{jl}\|^{2}_{F}+\|\Delta_{jl}\|_{F}$
(A.1)
and
$\Delta^{\text{new}}_{jl}\;=\;\operatorname*{arg\,min}_{\Delta_{jl}\in{\mathbb{R}^{M\times{M}}}}f_{jl}(\Delta_{jl}).$
(A.2)
The loss $f_{jl}(\Delta_{jl})$ is convex, so the first-order optimality
condition implies that:
${}0\in\partial f_{jl}\left(\Delta^{\text{new}}_{jl}\right),$ (A.3)
where $\partial f_{jl}\left(\Delta_{jl}\right)$ is the subdifferential of
$f_{jl}$ at $\Delta_{jl}$:
${}\partial
f_{jl}(\Delta_{jl})\;=\;\frac{1}{\lambda_{n}\eta}\left(\Delta_{jl}-A^{\text{old}}_{jl}\right)+Z_{jl},$
(A.4)
where
${}Z_{jl}\;=\;\begin{cases}\frac{\Delta_{jl}}{\|\Delta_{jl}\|_{F}}\qquad&\text{
if }\Delta_{jl}\neq{0}\\\\[10.0pt]
\left\\{Z_{jl}\in{\mathbb{R}^{M\times{M}}}\colon\|Z_{jl}\|_{F}\leq{1}\right\\}\qquad&\text{
if }\Delta_{jl}=0.\end{cases}$ (A.5)
Claim 1 If $\|A^{\text{old}}_{jl}\|_{F}>\lambda_{n}\eta>0$, then
$\Delta^{\text{new}}_{jl}\neq{0}$.
We verify this claim by proving the contrapositive. Suppose
$\Delta^{\text{new}}_{jl}={0}$. Then by (A.3) and (A.5), there exists a
$Z_{jl}\in{\mathbb{R}^{M\times{M}}}$ such that $\|Z_{jl}\|_{F}\leq{1}$ and
$0=-\frac{1}{\lambda_{n}\eta}A^{\text{old}}_{jl}+Z_{jl}.$
Thus, $\|A^{\text{old}}_{jl}\|_{F}=\|\lambda_{n}\eta\cdot
Z_{jl}\|_{F}\leq{\lambda_{n}\eta}$, so that Claim 1 holds.
Combining Claim 1 with (A.3) and (A.5), for any $j,l$ such that
$\|A^{\text{old}}_{jl}\|_{F}>\lambda_{n}\eta$, we have
$0=\frac{1}{\lambda_{n}\eta}\left(\Delta^{\text{new}}_{jl}-A^{\text{old}}_{jl}\right)+\frac{\Delta^{\text{new}}_{jl}}{\|\Delta^{\text{new}}_{jl}\|_{F}},$
which is solved by
${}\Delta^{\text{new}}_{jl}=\frac{\|A^{\text{old}}_{jl}\|_{F}-\lambda_{n}\eta}{\|A^{\text{old}}_{jl}\|_{F}}A^{\text{old}}_{jl}.$
(A.6)
Claim 2 If $\|A^{\text{old}}_{jl}\|_{F}\leq\lambda_{n}\eta$, then
$\Delta^{\text{new}}_{jl}=0$.
Again, we verify the claim by proving the contrapositive. Suppose
$\Delta^{\text{new}}_{jl}\neq 0$. Then the first-order optimality condition
implies the update in (A.6). Taking the Frobenius norm on both sides of (A.6)
gives $\|\Delta^{\text{new}}_{jl}\|_{F}=\|A^{\text{old}}_{jl}\|_{F}-\lambda_{n}\eta$,
and since $\|\Delta^{\text{new}}_{jl}\|_{F}>0$, this implies that
$\|A^{\text{old}}_{jl}\|_{F}-\lambda_{n}\eta>{0}$.
The updates in (29) immediately follow by combining Claim 2 and (A.6).
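The update in (29) is a blockwise soft-thresholding operation. A minimal NumPy sketch of one proximal-gradient step is given below; the helper name `prox_group_lasso` and all variable names are ours, for illustration only.

```python
import numpy as np

def prox_group_lasso(A, tau, M):
    # Apply [(||A_jl||_F - tau) / ||A_jl||_F]_+ * A_jl to every
    # M x M block A_jl of the (pM) x (pM) matrix A, as in (29).
    p = A.shape[0] // M
    out = np.zeros_like(A)
    for j in range(p):
        for l in range(p):
            blk = A[j*M:(j+1)*M, l*M:(l+1)*M]
            nrm = np.linalg.norm(blk, 'fro')
            if nrm > tau:
                out[j*M:(j+1)*M, l*M:(l+1)*M] = (1.0 - tau / nrm) * blk
    return out

# One proximal step: Delta_new = prox_group_lasso(Delta_old - eta * grad_L,
# lam * eta, M), where grad_L is the gradient of the smooth loss L.
```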
### A.2 Solving the Joint Functional Graphical Lasso
As in Danaher et al. (2014), we use the alternating directions method of
multipliers (ADMM) algorithm to solve (45); see Boyd et al. (2011) for a
detailed exposition of ADMM.
To solve (45), we first rewrite the problem as:
$\min_{\\{\Theta\\},\\{Z\\}}\left\\{-\sum^{Q}_{q=1}n_{q}\left(\log\text{det}\Theta^{(q)}-\text{trace}\left(S^{(q)}\Theta^{(q)}\right)\right)+P(\\{Z\\})\right\\},$
subject to $\Theta^{(q)}\succ 0$ and $Z^{(q)}=\Theta^{(q)}$, where
$\\{Z\\}=\\{Z^{(1)},Z^{(2)},\dots,Z^{(Q)}\\}$. The scaled augmented Lagrangian
(Boyd et al., 2011) is given by
$L_{\rho}\left(\\{\Theta\\},\\{Z\\},\\{U\\}\right)=-\sum^{Q}_{q=1}n_{q}\left(\log\text{det}\Theta^{(q)}-\text{trace}\left(S^{(q)}\Theta^{(q)}\right)\right)+P(\\{Z\\})\\\
+\frac{\rho}{2}\sum^{Q}_{q=1}\|\Theta^{(q)}-Z^{(q)}+U^{(q)}\|^{2}_{\text{F}},$
(A.7)
where $\rho>0$ is a tuning parameter and
$\\{U\\}=\\{U^{(1)},U^{(2)},\dots,U^{(Q)}\\}$ are dual variables. The ADMM
algorithm will then solve (A.7) by iterating the following three steps. At the
$i$-th iteration, they are as follows:
1. $\\{\Theta_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{\Theta\\}}L_{\rho}\left(\\{\Theta\\},\\{Z_{(i-1)}\\},\\{U_{(i-1)}\\}\right)$.
2. $\\{Z_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{Z\\}}L_{\rho}\left(\\{\Theta_{(i)}\\},\\{Z\\},\\{U_{(i-1)}\\}\right)$.
3. $\\{U_{(i)}\\}\leftarrow\\{U_{(i-1)}\\}+(\\{\Theta_{(i)}\\}-\\{Z_{(i)}\\})$.
We now give more details for the above three steps.
ADMM algorithm for solving the joint functional graphical lasso problem
(a) Initialize the variables: $\Theta^{(q)}_{(0)}=I_{pM}$,
$U^{(q)}_{(0)}=0_{pM}$, and $Z^{(q)}_{(0)}=0_{pM}$ for $q=1,\ldots,Q$.
(b) Select a scalar $\rho>0$.
(c) For $i=1,2,3,\dots$ until convergence
(i) For $q=1,\ldots,Q$, update $\Theta^{(q)}_{(i)}$ as the minimizer (with
respect to $\Theta^{(q)}$) of
$-n_{q}\left(\log\text{det}\Theta^{(q)}-\text{trace}\left(S^{(q)}\Theta^{(q)}\right)\right)+\frac{\rho}{2}\|\Theta^{(q)}-Z^{(q)}_{(i-1)}+U^{(q)}_{(i-1)}\|^{2}_{\text{F}}$
Let $VDV^{\top}$ denote the eigendecomposition of $S^{(q)}-\rho
Z^{(q)}_{(i-1)}/n_{q}+\rho U^{(q)}_{(i-1)}/n_{q}$; the solution is then given
by $V\tilde{D}V^{\top}$ (Witten and Tibshirani, 2009), where $\tilde{D}$ is
the diagonal matrix with $j$-th diagonal element being
$\frac{n_{q}}{2\rho}\left(-D_{jj}+\sqrt{D^{2}_{jj}+4\rho/n_{q}}\right),$
where $D_{jj}$ is the $(j,j)$-th entry of $D$.
(ii) Update $\\{Z_{(i)}\\}$ by solving
$\min_{\\{Z\\}}\frac{\rho}{2}\sum^{Q}_{q=1}\|Z^{(q)}-A^{(q)}\|^{2}_{\text{F}}+P(\\{Z\\}),$
(A.8)
where $A^{(q)}=\Theta^{(q)}_{(i)}+U^{(q)}_{(i-1)}$, $q=1,\ldots,Q$.
(iii) $U^{(q)}_{(i)}\leftarrow
U^{(q)}_{(i-1)}+(\Theta^{(q)}_{(i)}-Z^{(q)}_{(i)})$, $q=1,\ldots,Q$.
Three points are worth noting. First, the key step is to solve (A.8), which
depends on the form of the penalty term $P(\cdot)$. Second, this algorithm is
guaranteed to converge to the global optimum when $P(\cdot)$ is convex (Boyd
et al., 2011). Third, the positive-definiteness constraint on $\\{\hat{\Theta}\\}$
is naturally enforced by step (c)(i).
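To illustrate step (c)(i), the following sketch (ours; the helper name `theta_update` is hypothetical) computes the closed-form $\Theta^{(q)}$-update via the eigendecomposition described above.

```python
import numpy as np

def theta_update(S, Z, U, n_q, rho):
    # Minimizer of -n_q*(logdet(Theta) - trace(S @ Theta))
    #              + (rho/2) * ||Theta - Z + U||_F^2 over positive definite Theta.
    # Eigendecompose S - rho*Z/n_q + rho*U/n_q = V diag(D) V^T.
    D, V = np.linalg.eigh(S - (rho / n_q) * Z + (rho / n_q) * U)
    # Replace each eigenvalue D_jj by (n_q/(2*rho)) * (-D_jj + sqrt(D_jj^2
    # + 4*rho/n_q)); every new eigenvalue is positive, so the iterate stays
    # positive definite (Witten and Tibshirani, 2009).
    D_tilde = (n_q / (2.0 * rho)) * (-D + np.sqrt(D**2 + 4.0 * rho / n_q))
    return (V * D_tilde) @ V.T
```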
### A.3 Solutions to (A.8) for Joint Functional Graphical Lasso
We provide solutions to (A.8) for three problems (GFGL, FFGL, FFGL2) defined
by (48), (49) and (50).
#### A.3.1 Solution to (A.8) for GFGL
Let the solution for
$\min_{\\{Z\\}}\frac{\rho}{2}\sum^{Q}_{q=1}\|Z^{(q)}-A^{(q)}\|^{2}_{\text{F}}+\lambda_{1}\sum^{Q}_{q=1}\sum_{j\neq
l}\|Z^{(q)}_{jl}\|_{\text{F}}+\lambda_{2}\sum_{j\neq
l}\left(\sum^{Q}_{q=1}\|Z^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}$ (A.9)
be denoted as
$\\{\hat{Z}\\}=\\{\hat{Z}^{(1)},\hat{Z}^{(2)},\dots,\hat{Z}^{(Q)}\\}$. Let
$Z^{(q)}_{jl}$, $\hat{Z}^{(q)}_{jl}$ be $(j,l)$-th $M\times M$ block of
$Z^{(q)}$ and $\hat{Z}^{(q)}$, $q=1,\ldots,Q$. Then, for $j=1,\ldots,p$, we
have
$\hat{Z}^{(q)}_{jj}=A^{(q)}_{jj},\qquad q=1,\ldots,Q,$ (A.10)
and, for $j\neq l$, we have
$\hat{Z}^{(q)}_{jl}=\left(\frac{\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho}{\|A^{(q)}_{jl}\|_{\text{F}}}\right)_{+}\left(1-\frac{\lambda_{2}}{\rho\sqrt{\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}}\right)_{+}A^{(q)}_{jl},$
(A.11)
where $q=1,\ldots,Q$. Details of the update are given in Appendix A.4.
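For concreteness, a minimal NumPy sketch (ours; the function name is hypothetical) of the GFGL update (A.10)-(A.11), applied to one $(j,l)$ block across all $Q$ classes:

```python
import numpy as np

def gfgl_block_update(A_blocks, lam1, lam2, rho, diagonal):
    # A_blocks has shape (Q, M, M) and holds A^(q)_jl for q = 1, ..., Q.
    if diagonal:                 # (A.10): diagonal blocks are left unpenalized
        return A_blocks.copy()
    norms = np.linalg.norm(A_blocks, axis=(1, 2))     # ||A^(q)_jl||_F
    shrunk = np.maximum(norms - lam1 / rho, 0.0)      # (||A||_F - lam1/rho)_+
    group = np.sqrt(np.sum(shrunk**2))                # across-class group norm
    factor2 = max(1.0 - lam2 / (rho * group), 0.0) if group > 0 else 0.0
    out = np.zeros_like(A_blocks)
    nz = norms > 0
    out[nz] = (shrunk[nz] / norms[nz])[:, None, None] * factor2 * A_blocks[nz]
    return out                                        # (A.11)
```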
#### A.3.2 Solution to (A.8) for FFGL
For FFGL, there is no simple closed form solution. When $Q=2$, (A.8) becomes
$\min_{\\{Z\\}}\;\frac{\rho}{2}\sum^{2}_{q=1}\|Z^{(q)}-A^{(q)}\|^{2}_{\text{F}}+\lambda_{1}\left(\sum^{2}_{q=1}\sum_{j\neq
l}\|Z^{(q)}_{jl}\|_{\text{F}}\right)+\lambda_{2}\sum_{j,l}\|Z^{(1)}_{jl}-Z^{(2)}_{jl}\|_{\text{F}}.$
For each $1\leq j,l\leq p$, we compute $\hat{Z}^{(1)}_{jl}$,
$\hat{Z}^{(2)}_{jl}$ by solving
$\min_{\\{Z^{(1)}_{jl},Z^{(2)}_{jl}\\}}\;\frac{1}{2}\sum^{2}_{q=1}\|Z^{(q)}_{jl}-A^{(q)}_{jl}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho}\mathbbm{1}_{j\neq
l}\sum^{2}_{q=1}\|Z^{(q)}_{jl}\|_{\text{F}}+\frac{\lambda_{2}}{\rho}\|Z^{(1)}_{jl}-Z^{(2)}_{jl}\|_{\text{F}},$
(A.12)
where $\mathbbm{1}_{j\neq l}=1$ when $j\neq l$ and $0$ otherwise.
When $j=l$, by Lemma 6, we have the following closed form updates for
$\\{\hat{Z}^{(1)}_{jj},\hat{Z}^{(2)}_{jj}\\}$, $j=1,\ldots,p$. If
$\|A^{(1)}_{jj}-A^{(2)}_{jj}\|_{\text{F}}\leq 2\lambda_{2}/\rho$, then
$\hat{Z}^{(1)}_{jj}=\hat{Z}^{(2)}_{jj}=\frac{1}{2}\left(A^{(1)}_{jj}+A^{(2)}_{jj}\right).$
If $\|A^{(1)}_{jj}-A^{(2)}_{jj}\|_{\text{F}}>2\lambda_{2}/\rho$, then
$\displaystyle\hat{Z}^{(1)}_{jj}$
$\displaystyle=A^{(1)}_{jj}-\frac{\lambda_{2}/\rho}{\|A^{(1)}_{jj}-A^{(2)}_{jj}\|_{\text{F}}}\left(A^{(1)}_{jj}-A^{(2)}_{jj}\right),$
$\displaystyle\hat{Z}^{(2)}_{jj}$
$\displaystyle=A^{(2)}_{jj}+\frac{\lambda_{2}/\rho}{\|A^{(1)}_{jj}-A^{(2)}_{jj}\|_{\text{F}}}\left(A^{(1)}_{jj}-A^{(2)}_{jj}\right).$
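These two cases amount to a fused shrinkage of a pair of blocks; a minimal NumPy sketch (ours, with $\kappa=\lambda_{2}/\rho$) reads:

```python
import numpy as np

def fused_pair_update(A1, A2, kappa):
    # Closed-form minimizer of
    # (1/2)(||Z1 - A1||_F^2 + ||Z2 - A2||_F^2) + kappa * ||Z1 - Z2||_F.
    diff = A1 - A2
    d = np.linalg.norm(diff, 'fro')
    if d <= 2.0 * kappa:          # blocks close enough: fuse to the average
        avg = 0.5 * (A1 + A2)
        return avg, avg.copy()
    shift = (kappa / d) * diff    # otherwise move each kappa toward the other
    return A1 - shift, A2 + shift
```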
For $j\neq l$, we get $\\{\hat{Z}^{(1)}_{jl},\hat{Z}^{(2)}_{jl}\\}$ using the
ADMM algorithm again. We construct the scaled augmented Lagrangian as:
${L^{\prime}}_{\rho^{\prime}}\left(\\{W\\},\\{R\\},\\{V\\}\right)=\frac{1}{2}\sum^{2}_{q=1}\|W^{(q)}-B^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho}\sum^{2}_{q=1}\|W^{(q)}\|_{\text{F}}\\\
+\frac{\lambda_{2}}{\rho}\|R^{(1)}-R^{(2)}\|_{\text{F}}+\frac{\rho^{\prime}}{2}\sum^{2}_{q=1}\|W^{(q)}-R^{(q)}+V^{(q)}\|^{2}_{\text{F}},$
where $\rho^{\prime}>0$ is a tuning parameter, $B^{(q)}=A^{(q)}_{jl}$,
$q=1,2$, and $W^{(q)},R^{(q)},V^{(q)}\in\mathbb{R}^{M\times M}$, $q=1,2$.
$\\{W\\}=\\{W^{(1)},W^{(2)}\\}$, $\\{R\\}=\\{R^{(1)},R^{(2)}\\}$, and
$\\{V\\}=\\{V^{(1)},V^{(2)}\\}$. The detailed ADMM algorithm is described as
below:
ADMM algorithm for solving (A.12) for $j\neq l$
(a) Initialize the variables: $W^{(q)}_{(0)}=I_{M}$, $R^{(q)}_{(0)}=0_{M}$,
and $V^{(q)}_{(0)}=0_{M}$ for $q=1,2$. Let $B^{(q)}=A^{(q)}_{jl}$, $q=1,2$.
(b) Select a scalar $\rho^{\prime}>0$.
(c) For $i=1,2,3,\dots$ until convergence
(i)
$\\{W_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{W\\}}{L^{\prime}}_{\rho^{\prime}}\left(\\{W\\},\\{R_{(i-1)}\\},\\{V_{(i-1)}\\}\right)$.
This is equivalent to
$\\{W_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{W\\}}\frac{1}{2}\sum^{2}_{q=1}\|W^{(q)}-C^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho(1+\rho^{\prime})}\sum^{2}_{q=1}\|W^{(q)}\|_{\text{F}},$
where
$C^{(q)}=\frac{1}{1+\rho^{\prime}}\left[B^{(q)}+\rho^{\prime}\left(R^{(q)}_{(i-1)}-V^{(q)}_{(i-1)}\right)\right].$
Similar to (28), we have
$W^{(q)}_{(i)}\leftarrow\left(\frac{\|C^{(q)}\|_{\text{F}}-\lambda_{1}/(\rho(1+\rho^{\prime}))}{\|C^{(q)}\|_{\text{F}}}\right)_{+}\cdot
C^{(q)},\qquad q=1,2.$
(ii)
$\\{R_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{R\\}}{L^{\prime}}_{\rho^{\prime}}\left(\\{W_{(i)}\\},\\{R\\},\\{V_{(i-1)}\\}\right)$.
This is equivalent to
$\\{R_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{R\\}}\frac{1}{2}\sum^{2}_{q=1}\|R^{(q)}-D^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{2}}{\rho\rho^{\prime}}\|R^{(1)}-R^{(2)}\|_{\text{F}},$
where $D^{(q)}=W^{(q)}_{(i)}+V^{(q)}_{(i-1)}$. By Lemma 6, if
$\|D^{(1)}-D^{(2)}\|_{\text{F}}\leq 2\lambda_{2}/(\rho\rho^{\prime})$, then
$R^{(1)}_{(i)}=R^{(2)}_{(i)}\leftarrow\frac{1}{2}\left(D^{(1)}+D^{(2)}\right),$
and if $\|D^{(1)}-D^{(2)}\|_{\text{F}}>2\lambda_{2}/(\rho\rho^{\prime})$, then
$\displaystyle R^{(1)}_{(i)}\leftarrow
D^{(1)}-\frac{\lambda_{2}/(\rho\rho^{\prime})}{\|D^{(1)}-D^{(2)}\|_{\text{F}}}\left(D^{(1)}-D^{(2)}\right),$
$\displaystyle R^{(2)}_{(i)}\leftarrow
D^{(2)}+\frac{\lambda_{2}/(\rho\rho^{\prime})}{\|D^{(1)}-D^{(2)}\|_{\text{F}}}\left(D^{(1)}-D^{(2)}\right).$
(iii) $V^{(q)}_{(i)}\leftarrow V^{(q)}_{(i-1)}+W^{(q)}_{(i)}-R^{(q)}_{(i)}$,
$q=1,2$.
#### A.3.3 Solution to (A.8) for FFGL2
For FFGL2, there is also no closed form solution. Similar to Section A.3.2, we
compute a closed form solution for
$\\{\hat{Z}^{(1)}_{jj},\hat{Z}^{(2)}_{jj}\\}$, $j=1,\ldots,p$, and use an ADMM
algorithm to compute $\\{\hat{Z}^{(1)}_{jl},\hat{Z}^{(2)}_{jl}\\}$, $1\leq
j\neq l\leq p$.
For any $1\leq j,l\leq p$, we solve:
$\min_{\\{Z^{(1)}_{jl},Z^{(2)}_{jl}\\}}\;\frac{1}{2}\sum^{2}_{q=1}\|Z^{(q)}_{jl}-A^{(q)}_{jl}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho}\mathbbm{1}_{j\neq
l}\sum^{2}_{q=1}\|Z^{(q)}_{jl}\|_{\text{F}}+\frac{\lambda_{2}}{\rho}\sum_{1\leq
a,b\leq M}|Z^{(1)}_{jl,ab}-Z^{(2)}_{jl,ab}|,$ (A.13)
where $\mathbbm{1}_{j\neq l}=1$ when $j\neq l$ and $0$ otherwise.
By Lemma 6, when $j=l$ we have
$\left(\hat{Z}^{(1)}_{jj,ab},\hat{Z}^{(2)}_{jj,ab}\right)=\left\\{\begin{aligned}
&\left(A^{(1)}_{jj,ab}-\lambda_{2}/\rho,A^{(2)}_{jj,ab}+\lambda_{2}/\rho\right)\quad\text{if}\;A^{(1)}_{jj,ab}>A^{(2)}_{jj,ab}+2\lambda_{2}/\rho\\\
&\left(A^{(1)}_{jj,ab}+\lambda_{2}/\rho,A^{(2)}_{jj,ab}-\lambda_{2}/\rho\right)\quad\text{if}\;A^{(1)}_{jj,ab}<A^{(2)}_{jj,ab}-2\lambda_{2}/\rho\\\
&\left(\left(A^{(1)}_{jj,ab}+A^{(2)}_{jj,ab}\right)/2,\left(A^{(1)}_{jj,ab}+A^{(2)}_{jj,ab}\right)/2\right)\quad\text{if}\;\left|A^{(1)}_{jj,ab}-A^{(2)}_{jj,ab}\right|\leq
2\lambda_{2}/\rho,\end{aligned}\right.$
where subscripts $(a,b)$ denote the $(a,b)$-th entry, $1\leq a,b\leq M$ and
$j=1,\ldots,p$.
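The same fused shrinkage applied entrywise gives the following sketch (ours, again with $\kappa=\lambda_{2}/\rho$) of the FFGL2 diagonal-block update:

```python
import numpy as np

def fused_pair_update_elementwise(A1, A2, kappa):
    # Entrywise fused update: entries within 2*kappa of each other are
    # averaged; otherwise each entry is moved kappa toward the other.
    avg = 0.5 * (A1 + A2)
    Z1 = np.where(A1 > A2 + 2 * kappa, A1 - kappa,
         np.where(A1 < A2 - 2 * kappa, A1 + kappa, avg))
    Z2 = np.where(A1 > A2 + 2 * kappa, A2 + kappa,
         np.where(A1 < A2 - 2 * kappa, A2 - kappa, avg))
    return Z1, Z2
```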
For $j\neq l$, we get $\\{\hat{Z}^{(1)}_{jl},\hat{Z}^{(2)}_{jl}\\}$, $1\leq
j\neq l\leq p$ by using an ADMM algorithm. Let $B^{(q)}=A^{(q)}_{jl}$,
$q=1,2$. We first construct the scaled augmented Lagrangian:
${L^{\prime}}_{\rho^{\prime}}\left(\\{W\\},\\{R\\},\\{V\\}\right)=\frac{1}{2}\sum^{2}_{q=1}\|W^{(q)}-B^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho}\sum^{2}_{q=1}\|W^{(q)}\|_{\text{F}}\\\
+\frac{\lambda_{2}}{\rho}\sum_{a,b}|R^{(1)}_{a,b}-R^{(2)}_{a,b}|+\frac{\rho^{\prime}}{2}\sum^{2}_{q=1}\|W^{(q)}-R^{(q)}+V^{(q)}\|^{2}_{\text{F}},$
where $\rho^{\prime}>0$ is a tuning parameter,
$W^{(q)},R^{(q)},V^{(q)}\in\mathbb{R}^{M\times M}$, $q=1,2$,
$\\{W\\}=\\{W^{(1)},W^{(2)}\\}$, $\\{R\\}=\\{R^{(1)},R^{(2)}\\}$, and
$\\{V\\}=\\{V^{(1)},V^{(2)}\\}$. The detailed ADMM algorithm is described as
below:
ADMM algorithm for solving (A.13) for $j\neq l$
(a) Initialize the variables: $W^{(q)}_{(0)}=I_{M}$, $R^{(q)}_{(0)}=0_{M}$,
and $V^{(q)}_{(0)}=0_{M}$ for $q=1,2$. Let $B^{(q)}=A^{(q)}_{jl}$, $q=1,2$.
(b) Select a scalar $\rho^{\prime}>0$.
(c) For $i=1,2,3,\dots$ until convergence
(i)
$\\{W_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{W\\}}{L^{\prime}}_{\rho^{\prime}}\left(\\{W\\},\\{R_{(i-1)}\\},\\{V_{(i-1)}\\}\right)$.
This is equivalent to
$\\{W_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{W\\}}\frac{1}{2}\sum^{2}_{q=1}\|W^{(q)}-C^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{1}}{\rho(1+\rho^{\prime})}\sum^{2}_{q=1}\|W^{(q)}\|_{\text{F}},$
where
$C^{(q)}=\frac{1}{1+\rho^{\prime}}\left[B^{(q)}+\rho^{\prime}\left(R^{(q)}_{(i-1)}-V^{(q)}_{(i-1)}\right)\right].$
Similar to (28), we have
$W^{(q)}_{(i)}\leftarrow\left(\frac{\|C^{(q)}\|_{\text{F}}-\lambda_{1}/(\rho(1+\rho^{\prime}))}{\|C^{(q)}\|_{\text{F}}}\right)_{+}\cdot
C^{(q)},\qquad q=1,2.$
(ii)
$\\{R_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{R\\}}{L^{\prime}}_{\rho^{\prime}}\left(\\{W_{(i)}\\},\\{R\\},\\{V_{(i-1)}\\}\right)$
This is equivalent to
$\\{R_{(i)}\\}\leftarrow\operatorname*{arg\,min}_{\\{R\\}}\frac{1}{2}\sum^{2}_{q=1}\|R^{(q)}-D^{(q)}\|^{2}_{\text{F}}+\frac{\lambda_{2}}{\rho\rho^{\prime}}\sum_{a,b}\left|R^{(1)}_{ab}-R^{(2)}_{ab}\right|,$
where $D^{(q)}=W^{(q)}_{(i)}+V^{(q)}_{(i-1)}$. Then by Lemma 6, we have
$\left(R^{(1)}_{(i),ab},R^{(2)}_{(i),ab}\right)=\left\\{\begin{aligned}
&\left(D^{(1)}_{ab}-\lambda_{2}/(\rho\rho^{\prime}),D^{(2)}_{ab}+\lambda_{2}/(\rho\rho^{\prime})\right)\quad\text{if}\;D^{(1)}_{ab}>D^{(2)}_{ab}+2\lambda_{2}/(\rho\rho^{\prime})\\\
&\left(D^{(1)}_{ab}+\lambda_{2}/(\rho\rho^{\prime}),D^{(2)}_{ab}-\lambda_{2}/(\rho\rho^{\prime})\right)\quad\text{if}\;D^{(1)}_{ab}<D^{(2)}_{ab}-2\lambda_{2}/(\rho\rho^{\prime})\\\
&\left(\left(D^{(1)}_{ab}+D^{(2)}_{ab}\right)/2,\left(D^{(1)}_{ab}+D^{(2)}_{ab}\right)/2\right)\quad\text{if}\;\left|D^{(1)}_{ab}-D^{(2)}_{ab}\right|\leq
2\lambda_{2}/(\rho\rho^{\prime}),\end{aligned}\right.$
where subscripts $(a,b)$ denote the $(a,b)$-th entry, $1\leq a,b\leq M$ and
$1\leq j,l\leq p$.
(iii) $V^{(q)}_{(i)}\leftarrow V^{(q)}_{(i-1)}+W^{(q)}_{(i)}-R^{(q)}_{(i)}$,
$q=1,2$.
### A.4 Derivation of (A.10) and (A.11)
We provide proof of (A.10) and (A.11).
Note that for any $1\leq j,l\leq p$, we can obtain
$\hat{Z}^{(1)}_{jl},\hat{Z}^{(2)}_{jl},\dots,\hat{Z}^{(Q)}_{jl}$ by solving
$\operatorname*{arg\,min}_{Z^{(1)}_{jl},Z^{(2)}_{jl},\dots,Z^{(Q)}_{jl}}\frac{\rho}{2}\sum^{Q}_{q=1}\|Z^{(q)}_{jl}-A^{(q)}_{jl}\|^{2}_{\text{F}}+\lambda_{1}\mathbbm{1}_{j\neq
l}\sum^{Q}_{q=1}\|Z^{(q)}_{jl}\|_{\text{F}}+\lambda_{2}\mathbbm{1}_{j\neq
l}\left(\sum^{Q}_{q=1}\|Z^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2},$ (A.14)
where $\mathbbm{1}_{j\neq l}=1$ when $j\neq l$ and $0$ otherwise. By (A.14),
we have that $\hat{Z}^{(q)}_{jj}=A^{(q)}_{jj}$ for any $j=1,\ldots,p$ and
$q=1,\ldots,Q$, which is (A.10). We then prove (A.11). Denote the objective
function in (A.14) as $\tilde{L}_{jl}$. Then, for $j\neq l$, the
subdifferential of $\tilde{L}_{jl}$ with respect to $Z^{(q)}_{jl}$ is
$\partial_{Z^{(q)}_{jl}}\tilde{L}_{jl}=\rho(Z^{(q)}_{jl}-A^{(q)}_{jl})+\lambda_{1}G^{(q)}_{jl}+\lambda_{2}D^{(q)}_{jl},$
where
$G^{(q)}_{jl}=\left\\{\begin{aligned}
&\frac{Z^{(q)}_{jl}}{\|Z^{(q)}_{jl}\|_{\text{F}}}\quad\text{when}\;Z^{(q)}_{jl}\neq
0\\\ &\\{G^{(q)}_{jl}\in\mathbb{R}^{M\times M}:\|G^{(q)}_{jl}\|_{\text{F}}\leq
1\\}\quad\text{otherwise}\end{aligned}\right.,$
and
$D^{(q)}_{jl}=\left\\{\begin{aligned}
&\frac{Z^{(q)}_{jl}}{\left(\sum^{Q}_{q=1}\|Z^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}}\quad\text{when}\;\sum^{Q}_{q=1}\|Z^{(q)}_{jl}\|^{2}_{\text{F}}>0\\\
&\\{D^{(q)}_{jl}\in\mathbb{R}^{M\times
M}:\sum^{Q}_{q=1}\|D^{(q)}_{jl}\|^{2}_{\text{F}}\leq
1\\}\quad\text{otherwise}\end{aligned}\right..$
To obtain the optimum, we need
$0\in\partial_{Z^{(q)}_{jl}}\tilde{L}_{jl}(\hat{Z}^{(q)}_{jl})$
for all $q=1,\ldots,Q$. We now split our discussion into two cases.
(a) When $\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}=0$, or
equivalently, $\hat{Z}^{(q)}_{jl}=0$ for all $q=1,\ldots,Q$.
In this case, there exists $G^{(q)}_{jl}$, where
$\|G^{(q)}_{jl}\|_{\text{F}}\leq 1$, for all $q=1,\ldots,Q$; and also
$D^{(q)}_{jl}$, where $\sum^{Q}_{q=1}\|D^{(q)}_{jl}\|^{2}_{\text{F}}\leq 1$,
such that
$0=-\rho\cdot A^{(q)}_{jl}+\lambda_{1}G^{(q)}_{jl}+\lambda_{2}D^{(q)}_{jl},$
which implies that
$D^{(q)}_{jl}=\frac{\rho}{\lambda_{2}}\left(A^{(q)}_{jl}-\frac{\lambda_{1}}{\rho}G^{(q)}_{jl}\right).$
Thus, we have
$\displaystyle\|D^{(q)}_{jl}\|_{\text{F}}$
$\displaystyle=\frac{\rho}{\lambda_{2}}\left\|A^{(q)}_{jl}-\frac{\lambda_{1}}{\rho}G^{(q)}_{jl}\right\|_{\text{F}}\geq\frac{\rho}{\lambda_{2}}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\frac{\lambda_{1}}{\rho}\|G^{(q)}_{jl}\|_{\text{F}}\right)_{+}$
$\displaystyle\geq\frac{\rho}{\lambda_{2}}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\frac{\lambda_{1}}{\rho}\right)_{+},$
which implies that
$\frac{\rho^{2}}{\lambda^{2}_{2}}\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\frac{\lambda_{1}}{\rho}\right)^{2}_{+}\leq\sum^{Q}_{q=1}\|D^{(q)}_{jl}\|^{2}_{\text{F}}\leq
1,$
and then we have
$\sqrt{\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}\leq\lambda_{2}/\rho.$
(A.15)
(b) When $\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}>0$.
For those $q$’s such that $\hat{Z}^{(q)}_{jl}=0$, there exists $G^{(q)}_{jl}$,
where $\|G^{(q)}_{jl}\|_{\text{F}}=1$, such that
$0=-\rho A^{(q)}_{jl}+\lambda_{1}G^{(q)}_{jl}.$
Thus, we have
$\|A^{(q)}_{jl}\|_{\text{F}}=\frac{\lambda_{1}}{\rho}\|G^{(q)}_{jl}\|_{\text{F}}\leq\frac{\lambda_{1}}{\rho},$
which implies that
$\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)_{+}=0.$ (A.16)
On the other hand, for those $q$’s such that $\hat{Z}^{(q)}_{jl}\neq 0$, we
have
$0=\rho\left(\hat{Z}^{(q)}_{jl}-A^{(q)}_{jl}\right)+\lambda_{1}\frac{\hat{Z}^{(q)}_{jl}}{\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}}+\lambda_{2}\frac{\hat{Z}^{(q)}_{jl}}{\left(\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}},$
which implies that
$A^{(q)}_{jl}=\hat{Z}^{(q)}_{jl}\left(1+\frac{\lambda_{1}}{\rho\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}}+\frac{\lambda_{2}}{\rho\left(\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}}\right),$
(A.17)
and
$\|A^{(q)}_{jl}\|_{\text{F}}=\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}+\lambda_{1}/\rho+(\lambda_{2}/\rho)\cdot\frac{\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}}{\left(\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}}.$
(A.18)
By (A.18), we have
$\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)_{+}>\frac{\lambda_{2}}{\rho}\cdot\frac{\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}}{\sqrt{\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}}}>0.$
(A.19)
By (A.16) and (A.19), we have
$\displaystyle\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}$
$\displaystyle=\sum_{q:\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}\neq
0}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}$ (A.20)
$\displaystyle>\frac{\lambda^{2}_{2}}{\rho^{2}}\sum_{q:\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}\neq
0}\frac{\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}}{\sum^{Q}_{q^{\prime}=1}\|\hat{Z}^{(q^{\prime})}_{jl}\|^{2}_{\text{F}}}$
$\displaystyle=\lambda^{2}_{2}/\rho^{2}.$
We now make the following claims.
Claim 1.
$\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}=0\Leftrightarrow\sqrt{\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}\leq\lambda_{2}/\rho$.
This claim is easily shown by (A.15) and (A.20).
Claim 2. When $\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}>0$, we have
$\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}=0\Leftrightarrow\|A^{(q)}_{jl}\|_{\text{F}}\leq\lambda_{1}/\rho$.
This claim is easily shown by (A.16) and (A.19).
Claim 3. When $\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}\neq 0$, then we have
$\hat{Z}^{(q)}_{jl}=\left(\frac{\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho}{\|A^{(q)}_{jl}\|_{\text{F}}}\right)\left(1-\frac{\lambda_{2}}{\rho\sqrt{\sum^{Q}_{q^{\prime}=1}\left(\|A^{(q^{\prime})}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}}\right)A^{(q)}_{jl}.$
To prove this claim, note that by Claim 2 and (A.18), we have
$\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)_{+}=\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}\left(1+\frac{\lambda_{2}}{\rho\left(\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}\right)^{1/2}}\right)$
for $q=1,\ldots,Q$. Thus,
$\sqrt{\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}=\sqrt{\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}}+\lambda_{2}/\rho,$
which implies that
$\sqrt{\sum^{Q}_{q=1}\|\hat{Z}^{(q)}_{jl}\|^{2}_{\text{F}}}=\sqrt{\sum^{Q}_{q=1}\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}-\lambda_{2}/\rho.$
Thus, by (A.18), we have
$\displaystyle\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}$
$\displaystyle=\frac{\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho}{1+\frac{\lambda_{2}/\rho}{\sqrt{\sum^{Q}_{q^{\prime}=1}\left(\|A^{(q^{\prime})}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}-\lambda_{2}/\rho}}$
$\displaystyle=\left(1-\frac{\lambda_{2}}{\rho\sqrt{\sum^{Q}_{q^{\prime}=1}\left(\|A^{(q^{\prime})}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}}\right)\left(\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right).$
Combining this with (A.17), we then have
$\hat{Z}^{(q)}_{jl}=\frac{\|\hat{Z}^{(q)}_{jl}\|_{\text{F}}}{\|A^{(q)}_{jl}\|_{\text{F}}}A^{(q)}_{jl}=\left(\frac{\|A^{(q)}_{jl}\|_{\text{F}}-\lambda_{1}/\rho}{\|A^{(q)}_{jl}\|_{\text{F}}}\right)\left(1-\frac{\lambda_{2}}{\rho\sqrt{\sum^{Q}_{q^{\prime}=1}\left(\|A^{(q^{\prime})}_{jl}\|_{\text{F}}-\lambda_{1}/\rho\right)^{2}_{+}}}\right)A^{(q)}_{jl}.$
Finally, combining Claims 1-3, we obtain (A.11).
## Appendix B Main Technical Proofs
We give proofs of the results given in the main text.
### B.1 Proof of Lemma 2
We only need to prove that the definition of $E^{\pi}_{\Delta}$ is unchanged
when two different sets of orthonormal function bases
$e^{M}(t)=\\{e^{M}_{j}(t)\\}^{p}_{j=1}$ and
$\tilde{e}^{M}(t)=\\{\tilde{e}^{M}_{j}(t)\\}^{p}_{j=1}$ are used to span the same
subspace $\mathbb{V}^{M}_{[p]}$. Since both
$e^{M}_{j}(t)=(e^{M}_{j1}(t),e^{M}_{j2}(t),\dots,e^{M}_{jM}(t))^{\top}$ and
$\tilde{e}^{M}_{j}(t)=(\tilde{e}^{M}_{j1}(t),\tilde{e}^{M}_{j2}(t),\dots,\tilde{e}^{M}_{jM}(t))^{\top}$
are orthonormal function bases of $\mathbb{V}^{M}_{j}$, there must exist an
orthogonal matrix $U_{j}\in\mathbb{R}^{M\times M}$, satisfying
$U^{\top}_{j}U_{j}=U_{j}U^{\top}_{j}=I_{M}$, such that
$\tilde{e}^{M}_{j}(t)=U_{j}e^{M}_{j}(t)$. Let $a^{X,M}_{ij}$ be the projection
score vectors of $X_{ij}(t)$ onto $e^{M}_{j}(t)$ and $\tilde{a}^{X,M}_{ij}$ be
the projection score vectors of $X_{ij}(t)$ onto $\tilde{e}^{M}_{j}(t)$. Then
$\tilde{a}^{X,M}_{ij}=U_{j}a^{X,M}_{ij}$. Denote
$U={\rm diag}\\{U_{1},U_{2},\dots,U_{p}\\}\in\mathbb{R}^{pM\times pM}.$
We then have
$\displaystyle\tilde{a}^{X,M}_{i}$
$\displaystyle=((\tilde{a}^{X,M}_{i1})^{\top},(\tilde{a}^{X,M}_{i2})^{\top},\dots,(\tilde{a}^{X,M}_{ip})^{\top})^{\top}$
$\displaystyle=((a^{X,M}_{i1})^{\top}U^{\top}_{1},(a^{X,M}_{i2})^{\top}U^{\top}_{2},\dots,(a^{X,M}_{ip})^{\top}U^{\top}_{p})^{\top}=Ua^{X,M}_{i}$
and
$\tilde{\Sigma}^{X,M}={\rm Cov}\left(\tilde{a}^{X,M}\right)=U{\rm
Cov}\left({a}^{X,M}\right)U^{\top}=U\Sigma^{X,M}U^{\top}.$
Thus
$\tilde{\Theta}^{X,M}=\left(\tilde{\Sigma}^{X,M}\right)^{-1}=U\left(\Sigma^{X,M}\right)^{-1}U^{\top}=U\Theta^{X,M}U^{\top}.$
Therefore, $\tilde{\Theta}^{X,M}_{jl}=U_{j}\Theta^{X,M}_{jl}U^{\top}_{l}$ for
all $(j,l)\in V^{2}$ and, thus,
$\|\tilde{\Theta}^{X,M}_{jl}\|_{\text{F}}=\|\Theta^{X,M}_{jl}\|_{\text{F}}$
for all $(j,l)\in V^{2}$. This implies the final result.
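The invariance can also be checked numerically: for a block-diagonal orthogonal $U$, the blockwise Frobenius norms of $U\Theta U^{\top}$ agree with those of $\Theta$. A small NumPy sketch (ours; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, M = 3, 4
# A symmetric positive-definite "precision" matrix Theta.
G = rng.standard_normal((p * M, p * M))
Theta = G @ G.T + p * M * np.eye(p * M)
# Block-diagonal orthogonal U = diag(U_1, ..., U_p) via QR factorizations.
U = np.zeros((p * M, p * M))
for j in range(p):
    Q, _ = np.linalg.qr(rng.standard_normal((M, M)))
    U[j*M:(j+1)*M, j*M:(j+1)*M] = Q
Theta_tilde = U @ Theta @ U.T
for j in range(p):
    for l in range(p):
        a = Theta[j*M:(j+1)*M, l*M:(l+1)*M]
        b = Theta_tilde[j*M:(j+1)*M, l*M:(l+1)*M]
        assert np.isclose(np.linalg.norm(a), np.linalg.norm(b))
```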
### B.2 Proof of Lemma 3
We first show that $X_{ij},Y_{ij}\in{\rm
Span}\left\\{\phi_{j1},\dots,\phi_{jM^{\star}_{j}}\right\\}$ almost surely.
Let
$M^{X}_{j}=\sup\\{M\in\mathbb{N}^{+}:\lambda^{X}_{jM}>0\\}.$
By the Karhunen–Loève theorem, we have $X_{ij}=\sum^{M^{X}_{j}}_{k=1}\langle
X_{ij},\phi^{X}_{jk}\rangle\phi^{X}_{jk}$ almost surely. Thus, we have
$X_{ij}\in{\rm
Span}\left\\{\phi^{X}_{j1},\dots,\phi^{X}_{j,M^{X}_{j}}\right\\}$ almost
surely. For any $1\leq k\leq M^{X}_{j}$, we have that
$\int_{\mathcal{T}}K_{jj}(s,t)\phi^{X}_{jk}(s)\phi^{X}_{jk}(t)dsdt\geq\int_{\mathcal{T}}K^{X}_{jj}(s,t)\phi^{X}_{jk}(s)\phi^{X}_{jk}(t)dsdt=\lambda^{X}_{jk}>0,$
which implies that $\phi^{X}_{jk}\in{\rm
Span}\left\\{\phi_{j1},\dots,\phi_{jM^{\star}_{j}}\right\\}$. Thus, we have
${\rm
Span}\left\\{\phi^{X}_{j1},\dots,\phi^{X}_{j,M^{X}_{j}}\right\\}\subseteq{\rm
Span}\left\\{\phi_{j1},\dots,\phi_{jM^{\star}_{j}}\right\\}$ and
$X_{ij}\in{\rm Span}\left\\{\phi_{j1},\dots,\phi_{jM^{\star}_{j}}\right\\}$
almost surely. Similarly, we have that $Y_{ij}\in{\rm
Span}\left\\{\phi_{j1},\dots,\phi_{jM^{\star}_{j}}\right\\}$ almost surely.
Next, we show that $M^{\prime}_{j}=M^{\star}_{j}$ by contradiction. By the
definition of $M^{\prime}_{j}$, we have that $M^{\prime}_{j}\leq
M^{\star}_{j}$. If $M^{\prime}_{j}\neq M^{\star}_{j}$, then we have
$\mathbb{V}^{M^{\prime}_{j}}_{j}\subseteq\mathbb{H}$ such that
$M^{\prime}_{j}<M^{\star}_{j}$ and
$X_{ij},Y_{ij}\in\mathbb{V}^{M^{\prime}_{j}}_{j}$ almost surely. This implies
that there exists a basis function $\phi_{jk}$, $1\leq k\leq M^{\star}_{j}$,
with $\phi_{jk}\notin\mathbb{V}^{M^{\prime}_{j}}_{j}$, such that
$\displaystyle\mathbb{E}\left[\left(\langle\phi_{jk}(t),X_{ij}(t)\rangle\right)^{2}\right]=0\quad\text{and}\quad\mathbb{E}\left[\left(\langle\phi_{jk}(t),Y_{ij}(t)\rangle\right)^{2}\right]=0$
$\displaystyle\Rightarrow$
$\displaystyle\int_{\mathcal{T}}K^{X}_{jj}(s,t)\phi_{jk}(s)\phi_{jk}(t)dsdt=0\quad\text{and}\quad\int_{\mathcal{T}}K^{Y}_{jj}(s,t)\phi_{jk}(s)\phi_{jk}(t)dsdt=0$
$\displaystyle\Rightarrow$
$\displaystyle\int_{\mathcal{T}}K_{jj}(s,t)\phi_{jk}(s)\phi_{jk}(t)dsdt=0,$
$\displaystyle\Rightarrow$ $\displaystyle\lambda_{jk}=0,$
which contradicts the definition of $M^{\star}_{j}$. Thus, we must have
$M^{\prime}_{j}=M^{\star}_{j}$.
### B.3 Proof of Lemma 4
Let $U=V\backslash\\{j,l\\}$, and $a^{X,M}_{U}=\left((a^{X,M}_{k})^{\top},k\in
U\right)^{\top}$. Without loss of generality, assume that $\Sigma^{X,M}$ and
$\Theta^{X,M}$ take the following block structure:
$\displaystyle\Sigma^{X,M}=\left[\begin{matrix}\Sigma^{X,M}_{jj}&\Sigma^{X,M}_{jl}&\Sigma^{X,M}_{jU}\\\
\Sigma^{X,M}_{lj}&\Sigma^{X,M}_{ll}&\Sigma^{X,M}_{lU}\\\
\Sigma^{X,M}_{Uj}&\Sigma^{X,M}_{Ul}&\Sigma^{X,M}_{UU}\\\
\end{matrix}\right],\quad\Theta^{X,M}=\left[\begin{matrix}\Theta^{X,M}_{jj}&\Theta^{X,M}_{jl}&\Theta^{X,M}_{jU}\\\
\Theta^{X,M}_{lj}&\Theta^{X,M}_{ll}&\Theta^{X,M}_{lU}\\\
\Theta^{X,M}_{Uj}&\Theta^{X,M}_{Ul}&\Theta^{X,M}_{UU}\\\ \end{matrix}\right].$
Let $P$ denote the submatrix:
$P=\left[\begin{matrix}\Theta^{X,M}_{jj}&\Theta^{X,M}_{jl}\\\
\Theta^{X,M}_{lj}&\Theta^{X,M}_{ll}\end{matrix}\right].$
By standard results for the multivariate Gaussian (Heckler, 2005), we have
$\displaystyle\mathrm{Var}\left(a^{X,M}_{j}\mid a^{X,M}_{k},k\neq
j\right)=H^{X,M}_{jj}=(\Theta^{X,M}_{jj})^{-1},$
$\displaystyle\mathrm{Var}\left(\left[\begin{matrix}a^{X,M}_{j}\\\
a^{X,M}_{l}\end{matrix}\right]\mid
a^{X,M}_{U}\right)=P^{-1}=\left[\begin{matrix}(P^{-1})_{11}&(P^{-1})_{12}\\\
(P^{-1})_{21}&(P^{-1})_{22}\end{matrix}\right].$
Thus, the first statement directly follows from the first equation. To prove
the second statement, we only need to note that
$\displaystyle H^{X,M}_{jl}$
$\displaystyle=\mathrm{Cov}\left(a^{X,M}_{j},a^{X,M}_{l}\mid
a^{X,M}_{U}\right)$ $\displaystyle=(P^{-1})_{12}$
$\displaystyle=-(\Theta^{X,M}_{jj})^{-1}\Theta^{X,M}_{jl}(P^{-1})_{22}$
$\displaystyle=-H^{X,M}_{jj}\Theta_{jl}^{X,M}H^{\backslash j,X,M}_{ll},$
where the second-to-last equality follows from the $2\times 2$ block matrix
inverse formula and the last equality follows from properties of the multivariate
Gaussian. This completes the proof.
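The identity $H^{X,M}_{jl}=-H^{X,M}_{jj}\Theta^{X,M}_{jl}H^{\backslash j,X,M}_{ll}$ can also be verified numerically from the $2\times 2$ block inverse; a small sketch (ours) using a random positive-definite $P$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3
# Random 2M x 2M positive-definite P = [[T_jj, T_jl], [T_lj, T_ll]].
G = rng.standard_normal((2 * M, 2 * M))
P = G @ G.T + 2 * M * np.eye(2 * M)
T_jj, T_jl = P[:M, :M], P[:M, M:]
Pinv = np.linalg.inv(P)
lhs = Pinv[:M, M:]                                 # (P^{-1})_{12}
rhs = -np.linalg.inv(T_jj) @ T_jl @ Pinv[M:, M:]   # -T_jj^{-1} T_jl (P^{-1})_{22}
assert np.allclose(lhs, rhs)
```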
### B.4 Proof of Theorem 1
We provide the proof of Theorem 1, following the framework introduced in
Negahban et al. (2012). We start by introducing some notation.
We use $\otimes$ to denote the Kronecker product. For
$\Delta\in\mathbb{R}^{pM\times pM}$, let
$\theta=\operatorname{vec}(\Delta)\in{\mathbb{R}^{p^{2}M^{2}}}$ and
$\theta^{*}=\operatorname{vec}({\Delta^{M}})$, where $\Delta^{M}$ is defined
in Section 2.2. Let $\mathcal{G}=\\{G_{t}\\}_{t=1,\ldots,N_{\mathcal{G}}}$ be
a set of indices, where $N_{\mathcal{G}}=p^{2}$ and
$G_{t}\subset\\{1,2,\cdots,p^{2}M^{2}\\}$ is the set of indices for $\theta$
that correspond to the $t$-th $M\times M$ submatrix of $\Delta^{M}$. Thus, if
$t=(j-1)p+l$, then
$\theta_{G_{t}}=\operatorname{vec}{(\Delta_{jl})}\in{\mathbb{R}^{M^{2}}}$,
where $\Delta_{jl}$ is the $(j,l)$-th $M\times{M}$ submatrix of $\Delta$.
Denote the group indices of $\theta^{*}$ that belong to blocks corresponding
to $E_{\Delta}$ as
$S_{\mathcal{G}}\subseteq{\\{1,2,\cdots,N_{\mathcal{G}}\\}}$. Note that we
define $S_{\mathcal{G}}$ using $E_{\Delta}$ and not $E_{\Delta^{M}}$.
Therefore, as stated in Assumption 2, $|S_{\mathcal{G}}|=s$. We further define
the subspace $\mathcal{M}$ as
${}\mathcal{M}\coloneqq{\\{\theta\in{\mathbb{R}^{p^{2}M^{2}}}\mid\theta_{G_{t}}=0\text{
for all }t\notin{S_{\mathcal{G}}}\\}}.$ (B.1)
Its orthogonal complement with respect to the Euclidean inner product is
$\mathcal{M}^{\bot}\coloneqq{\\{\theta\in{\mathbb{R}^{p^{2}M^{2}}}\mid\theta_{G_{t}}=0\text{
for all }t\in{S_{\mathcal{G}}}\\}}.$ (B.2)
For a vector $\theta$, let $\theta_{\mathcal{M}}$ and
$\theta_{\mathcal{M}^{\bot}}$ be the projection of $\theta$ on the subspaces
$\mathcal{M}$ and $\mathcal{M}^{\bot}$, respectively. Let
$\langle\cdot,\cdot\rangle$ represent the Euclidean inner product. Let
${}\mathcal{R}(\theta)\coloneqq{\sum_{t=1}^{N_{\mathcal{G}}}|\theta_{G_{t}}|_{2}}\triangleq{|\theta|_{1,2}}.$
(B.3)
For any $v\in{\mathbb{R}^{p^{2}M^{2}}}$, the dual norm of $\mathcal{R}$ is
given by
${}\mathcal{R}^{*}(v)\coloneqq\sup_{u\in{\mathbb{R}^{p^{2}M^{2}}\backslash{\\{0\\}}}}\frac{\langle{u},{v}\rangle}{\mathcal{R}(u)}=\sup_{\mathcal{R}(u)\leq{1}}\langle{u},{v}\rangle.$
(B.4)
The subspace compatibility constant of $\mathcal{M}$ with respect to
$\mathcal{R}$ is defined as
${}\Psi(\mathcal{M})\coloneqq{\sup_{u\in{\mathcal{M}\backslash\\{0\\}}}}\frac{\mathcal{R}(u)}{|u|_{2}}.$
(B.5)
Proof By Lemma 5 and Assumption 1, we have
$|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}\leq\delta_{n}^{2}+2\delta_{n}\sigma_{\max}$
(B.6)
and
$|\operatorname{vec}{(S^{Y,M}-S^{X,M})}-\operatorname{vec}{(\Sigma^{Y,M}-\Sigma^{X,M})}|_{\infty}\leq
2\delta_{n}.$ (B.7)
Problem (25) can be written in the following form:
$\hat{\theta}_{\lambda_{n}}\in\operatorname*{arg\,min}_{\theta\in{\mathbb{R}^{p^{2}M^{2}}}}\mathcal{L}(\theta)+\lambda_{n}\mathcal{R}(\theta),$
(B.8)
where
${}\mathcal{L}(\theta)=\frac{1}{2}\theta^{\top}(S^{Y,M}\otimes{S^{X,M}})\theta-\theta^{\top}\operatorname{vec}({S^{Y,M}-S^{X,M}}).$
(B.9)
The loss $\mathcal{L}(\theta)$ is convex and differentiable with respect to
$\theta$, and it can be easily verified that $\mathcal{R}(\cdot)$ defines a
vector norm. For $h\in\mathbb{R}^{p^{2}M^{2}}$, the error of the first-order
Taylor series expansion of $\mathcal{L}$ is:
$\displaystyle\delta{\mathcal{L}}(h,\theta^{*})\coloneqq\mathcal{L}(\theta^{*}+h)-\mathcal{L}(\theta^{*})-\langle\nabla\mathcal{L}(\theta^{*}),h\rangle=\frac{1}{2}h^{\top}(S^{Y,M}\otimes{S^{X,M}})h.$
(B.10)
From (B.9), we see that
$\nabla{\mathcal{L}}(\theta)=(S^{Y,M}\otimes{S^{X,M}})\theta-\operatorname{vec}({S^{Y,M}-S^{X,M}})$.
By Lemma 9, we have
${}\mathcal{R}^{*}(\nabla{\mathcal{L}}(\theta^{*}))=\max_{t=1,2,\cdots,N_{\mathcal{G}}}\left|\left[(S^{Y,M}\otimes{S^{X,M}})\theta^{*}-\operatorname{vec}({S^{Y,M}-S^{X,M}})\right]_{G_{t}}\right|_{2}.$
(B.11)
We now establish an upper bound for
$\mathcal{R}^{*}(\nabla{\mathcal{L}}(\theta^{*}))$. First, note that
$(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta^{*}-\operatorname{vec}({\Sigma^{Y,M}-\Sigma^{X,M}})=\operatorname{vec}({\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}-(\Sigma^{Y,M}-\Sigma^{X,M})})=0.$
Letting $(\cdot)_{jl}$ denote the $(j,l)$-th submatrix, we have
$\displaystyle\left|\left[(S^{Y,M}\otimes{S^{X,M}})\theta^{*}-\operatorname{vec}({S^{Y,M}-S^{X,M}})\right]_{G_{t}}\right|_{2}$
(B.12)
$\displaystyle=\left|\left[(S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta^{*}-\operatorname{vec}{((S^{Y,M}-\Sigma^{Y,M})-(S^{X,M}-\Sigma^{X,M}))}\right]_{G_{t}}\right|_{2}$
$\displaystyle={\|(S^{X,M}\Delta^{M}S^{Y,M}-\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M})_{jl}-(S^{Y,M}-\Sigma^{Y,M})_{jl}+(S^{X,M}-\Sigma^{X,M})_{jl}\|_{F}}$
$\displaystyle\leq{\|(S^{X,M}\Delta^{M}S^{Y,M}-\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M})_{jl}\|_{F}+\|(S^{Y,M}-\Sigma^{Y,M})_{jl}\|_{F}+\|(S^{X,M}-\Sigma^{X,M})_{jl}\|_{F}}.$
For any $M\times{M}$ matrix $A$, $\|A\|_{F}\leq{M|A|_{\infty}}$, so
$\displaystyle\left|\left[(S^{Y,M}\otimes{S^{X,M}})\theta^{*}-\operatorname{vec}({S^{Y,M}-S^{X,M}})\right]_{G_{t}}\right|_{2}$
$\displaystyle\leq
M\left[\left|(S^{X,M}\Delta^{M}S^{Y,M}-\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M})_{jl}\right|_{\infty}+\left|(S^{Y,M}-\Sigma^{Y,M})_{jl}\right|_{\infty}+\left|(S^{X,M}-\Sigma^{X,M})_{jl}\right|_{\infty}\right]$
$\displaystyle\leq
M\left[\left|S^{X,M}\Delta^{M}S^{Y,M}-\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}\right|_{\infty}+|S^{Y,M}-\Sigma^{Y,M}|_{\infty}+|S^{X,M}-\Sigma^{X,M}|_{\infty}\right].$
For any $A\in{\mathbb{R}^{k\times{k}}}$ and $v\in{\mathbb{R}^{k}}$, we have
$|Av|_{\infty}\leq{|A|_{\infty}|v|_{1}}$. Thus, we further have
$\displaystyle|S^{X,M}\Delta^{M}S^{Y,M}-\Sigma^{X,M}\Delta^{M}\Sigma^{Y,M}|_{\infty}$
$\displaystyle=|[(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})]\operatorname{vec}{(\Delta^{M})}|_{\infty}$
$\displaystyle\leq{|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}}|\operatorname{vec}{(\Delta^{M})}|_{1}$
$\displaystyle=|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}|\Delta^{M}|_{1}.$
Combining the inequalities gives an upper bound uniform over $\mathcal{G}$
(i.e., for all $G_{t}$):
$\displaystyle\left|\left[(S^{Y,M}\otimes{S^{X,M}})\theta^{*}-\operatorname{vec}({S^{Y,M}-S^{X,M}})\right]_{G_{t}}\right|_{2}$
$\displaystyle\leq
M\left[|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}|\Delta^{M}|_{1}+|S^{Y,M}-\Sigma^{Y,M}|_{\infty}+|S^{X,M}-\Sigma^{X,M}|_{\infty}\right],$
which implies
$\displaystyle\mathcal{R}^{*}\left(\nabla{\mathcal{L}}(\theta^{*})\right)\leq$
$\displaystyle
M[|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}|\Delta^{M}|_{1}+$
(B.13)
$\displaystyle|S^{Y,M}-\Sigma^{Y,M}|_{\infty}+|S^{X,M}-\Sigma^{X,M}|_{\infty}].$
Assuming $|S^{X,M}-\Sigma^{X,M}|_{\infty}\leq{\delta_{n}}$ and
$|S^{Y,M}-\Sigma^{Y,M}|_{\infty}\leq\delta_{n}$ implies
${}\mathcal{R}^{*}\left(\nabla{\mathcal{L}}(\theta^{*})\right)\leq{M[(\delta_{n}^{2}+2\delta_{n}\sigma_{\max})|\Delta^{M}|_{1}+2\delta_{n}]}.$
(B.14)
Setting
${}\lambda_{n}=2M\left[\left(\delta_{n}^{2}+2\delta_{n}\sigma_{\max}\right)\left|\Delta^{M}\right|_{1}+2\delta_{n}\right],$
(B.15)
then implies that
$\lambda_{n}\geq{2\mathcal{R}^{*}\left(\nabla{\mathcal{L}}(\theta^{*})\right)}$.
Thus, invoking Lemma 1 in Negahban et al. (2012),
$h=\hat{\theta}_{\lambda_{n}}-\theta^{*}$ must satisfy
${}\mathcal{R}(h_{\mathcal{M}^{\bot}})\leq{3\mathcal{R}(h_{\mathcal{M}})}+4\mathcal{R}(\theta^{*}_{\mathcal{M}^{\bot}}),$
(B.16)
where $\mathcal{M}$ is defined in (B.1). Equivalently,
${}|h_{\mathcal{M}^{\bot}}|_{1,2}\leq{3|h_{\mathcal{M}}|_{1,2}}+4|\theta^{*}_{\mathcal{M}^{\bot}}|_{1,2}.$
(B.17)
By the definition of $\nu_{2}$, we have
${}|\theta^{*}_{\mathcal{M}^{\bot}}|_{1,2}=\sum_{t\notin{\mathcal{S}_{\mathcal{G}}}}|\theta^{*}_{G_{t}}|_{2}\leq\left(p(p+1)/2-s\right)\nu_{2}\leq
p^{2}\nu_{2}.$ (B.18)
Next, we show that $\delta\mathcal{L}(h,\theta^{*})$, as defined in (B.10),
satisfies the Restricted Strong Convexity property given in Definition 2 of
Negahban et al. (2012). That is, we show an inequality of the form:
$\delta\mathcal{L}(h,\theta^{*})\geq{\kappa_{\mathcal{L}}|h|^{2}_{2}}-\omega^{2}_{\mathcal{L}}\left(\theta^{*}\right)$
whenever $h$ satisfies (B.17).
By using Lemma 7, we have
$\displaystyle\theta^{\top}(S^{Y,M}\otimes{S^{X,M}})\theta$
$\displaystyle=\theta^{\top}(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta+\theta^{\top}(S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta$
$\displaystyle\geq{\theta^{\top}(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta-|\theta^{\top}(S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}})\theta|}$
$\displaystyle\geq{\lambda^{*}_{\min}}|\theta|^{2}_{2}-M^{2}|S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}}|_{\infty}|\theta|^{2}_{1,2},$
where the last inequality holds by Lemma 7 and the fact that
$\lambda^{*}_{\min}=\lambda_{\min}(\Sigma^{X,M})\times{\lambda_{\min}(\Sigma^{Y,M})}=\lambda_{\min}(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})>0$.
Thus,
$\displaystyle\delta\mathcal{L}(h,\theta^{*})$
$\displaystyle=\frac{1}{2}h^{\top}(S^{Y,M}\otimes{S^{X,M}})h$
$\displaystyle\geq{\frac{1}{2}\lambda^{*}_{\min}}|h|^{2}_{2}-\frac{1}{2}M^{2}|S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}}|_{\infty}|h|^{2}_{1,2}.$
By Lemma 8 and (B.17), we have
$\displaystyle|h|^{2}_{1,2}$
$\displaystyle=(|h_{\mathcal{M}}|_{1,2}+|h_{\mathcal{M}^{\bot}}|_{1,2})^{2}\leq
16({|h_{\mathcal{M}}|_{1,2}}+|\theta^{*}_{\mathcal{M}^{\bot}}|_{1,2})^{2}$
$\displaystyle\leq 16(\sqrt{s}|h|_{2}+p^{2}\nu_{2})^{2}\leq
32s|h|^{2}_{2}+32p^{4}\nu_{2}^{2}.$
Combining with the equation above, we get
$\displaystyle\delta\mathcal{L}(h,\theta^{*})$
$\displaystyle\geq{\left[\frac{1}{2}\lambda^{*}_{\min}-16M^{2}s|S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}}|_{\infty}\right]}|h|^{2}_{2}$
(B.19)
$\displaystyle\qquad\qquad-16M^{2}p^{4}\nu_{2}^{2}|S^{Y,M}\otimes{S^{X,M}}-\Sigma^{Y,M}\otimes{\Sigma^{X,M}}|_{\infty}$
$\displaystyle\geq\left[\frac{1}{2}\lambda^{*}_{\min}-8M^{2}s\left(\delta^{2}_{n}+2\delta_{n}\sigma_{\max}\right)\right]|h|^{2}_{2}$
$\displaystyle\qquad\qquad-16M^{2}p^{4}\nu_{2}^{2}\left(\delta^{2}_{n}+2\delta_{n}\sigma_{\max}\right).$
Thus, appealing to (B.6), the Restricted Strong Convexity property holds with
$\displaystyle\kappa_{\mathcal{L}}$
$\displaystyle\;=\;\frac{1}{2}\lambda^{*}_{\min}-8M^{2}s\left(\delta^{2}_{n}+2\delta_{n}\sigma_{\max}\right),$
(B.20) $\displaystyle\omega_{\mathcal{L}}$
$\displaystyle\;=\;4Mp^{2}\nu_{2}\sqrt{\delta_{n}^{2}+2\delta_{n}\sigma_{\max}}.$
When
$\delta_{n}<\frac{1}{4}\sqrt{\frac{\lambda^{*}_{\min}+16M^{2}s(\sigma_{\max})^{2}}{M^{2}s}}-\sigma_{\max}$
as we assumed in the theorem, then $\kappa_{\mathcal{L}}>0$. By Theorem 1 of
Negahban et al. (2012) and Lemma 8, letting
$\lambda_{n}=2M\left[\left(\delta_{n}^{2}+2\delta_{n}\sigma_{\max}\right)|\Delta^{M}|_{1}+2\delta_{n}\right]$,
as in (B.15), ensures
$\displaystyle\|\hat{\Delta}^{M}-\Delta^{M}\|^{2}_{F}$
$\displaystyle=|\hat{\theta}_{\lambda_{n}}-\theta^{*}|^{2}_{2}$ (B.21)
$\displaystyle\leq{9\frac{\lambda^{2}_{n}}{\kappa^{2}_{\mathcal{L}}}}\Psi^{2}(\mathcal{M})+\frac{\lambda_{n}}{\kappa_{\mathcal{L}}}\left(2\omega^{2}_{\mathcal{L}}+4\mathcal{R}(\theta^{*}_{\mathcal{M}^{\bot}})\right)$
$\displaystyle\leq\frac{9\lambda^{2}_{n}s}{\kappa^{2}_{\mathcal{L}}}+\frac{2\lambda_{n}}{\kappa_{\mathcal{L}}}(\omega^{2}_{\mathcal{L}}+2p^{2}\nu_{2})$
$\displaystyle=\Gamma^{2}_{n}.$
We then prove that $\hat{E}_{\Delta}=E_{\Delta}$. Recall that we have assumed
that $0<\Gamma_{n}<\tau/2=(\nu_{1}-\nu_{2})/2$ and
$\nu_{2}+\Gamma_{n}\leq\epsilon_{n}<\nu_{1}-\Gamma_{n}$. Note that we have
$\|\hat{\Delta}^{M}_{jl}-\Delta^{M}_{jl}\|_{F}\leq{\|\hat{\Delta}^{M}-\Delta^{M}\|_{F}}\leq\Gamma_{n}$
for any $(j,l)\in{V^{2}}$. Recall that
${}E_{\Delta}\;=\;\\{(j,l)\in{V^{2}}:\;j\neq{l},D_{jl}>0\\}.$ (B.22)
We first prove that $E_{\Delta}\subseteq{\hat{E}_{{\Delta}}}$. For any
$(j,l)\in{E_{\Delta}}$, by the definition of $\nu_{1}$ in Section 4.1, we have
$\displaystyle\|\hat{\Delta}^{M}_{jl}\|_{F}$
$\displaystyle\geq{\|\Delta^{M}_{jl}\|_{F}-\|\hat{\Delta}_{jl}^{M}-\Delta^{M}_{jl}\|_{F}}$
$\displaystyle\geq\nu_{1}-\Gamma_{n}$ $\displaystyle>\epsilon_{n}.$
The last inequality holds because we have assumed that
$\epsilon_{n}<\nu_{1}-\Gamma_{n}$. Thus, by definition of $\hat{E}_{{\Delta}}$
in (27), we have $(j,l)\in{\hat{E}_{{\Delta}}}$, which further implies that
$E_{\Delta}\subseteq{\hat{E}_{{\Delta}}}$.
We then show $\hat{E}_{{\Delta}}\subseteq{E_{\Delta}}$. Let
$\hat{E}^{c}_{{\Delta}}$ and $E^{c}_{\Delta}$ denote the complement set of
$\hat{E}_{{\Delta}}$ and $E_{\Delta}$. For any $(j,l)\in{E^{c}_{\Delta}}$,
which also means that $(l,j)\in{E^{c}_{\Delta}}$, by definition of $\nu_{2}$,
we have that
$\displaystyle\|\hat{\Delta}^{M}_{jl}\|_{F}$
$\displaystyle\leq{\|\Delta^{M}_{jl}\|_{F}+\|\hat{\Delta}_{jl}^{M}-\Delta^{M}_{jl}\|_{F}}$
$\displaystyle\leq\nu_{2}+\Gamma_{n}$ $\displaystyle\leq\epsilon_{n}.$
Again, the last inequality holds because we have assumed that
$\epsilon_{n}\geq\nu_{2}+\Gamma_{n}$. Thus, by definition of
$\hat{E}_{{\Delta}}$, we have $(j,l)\notin{\hat{E}_{{\Delta}}}$ or
$(j,l)\in{\hat{E}^{c}_{{\Delta}}}$. This implies that
$E^{c}_{\Delta}\subseteq{\hat{E}^{c}_{{\Delta}}}$, or
$\hat{E}_{{\Delta}}\subseteq{E_{\Delta}}$. Combining this with the previous
conclusion that $E_{\Delta}\subseteq{\hat{E}_{{\Delta}}}$, the proof is complete.
### B.5 Proof of Theorem 4
We only need to prove that
$\displaystyle P\left(\lvert S^{M}-\Sigma^{M}\rvert_{\infty}>\delta\right)$
$\displaystyle\leq C_{1}np\exp\\{-C_{2}\Phi(T,L)M^{-(1+\beta)}\delta\\}$
(B.23) $\displaystyle+C_{3}(pM)^{2}\exp\\{-C_{4}nM^{-2(1+\beta)}\delta^{2}\\}$
$\displaystyle+C_{5}npL\exp\left\\{-\frac{C_{6}M^{-2(1+\beta)}\delta^{2}}{\tilde{\psi}_{2}(T,L)}\right\\},$
where $S^{M}$ can be understood as either $S^{X,M}$ or $S^{Y,M}$ and
$\Sigma^{M}$ can be understood as either $\Sigma^{X,M}$ or $\Sigma^{Y,M}$,
with $C_{k}=C^{X}_{k}$ or $C_{k}=C^{Y}_{k}$ for $k=1,2,3,4$ accordingly. To
see that (B.23) implies (43), we first note that (B.23) implies that
$\displaystyle P\left(\lvert
S^{X,M}-\Sigma^{X,M}\rvert_{\infty}\leq\delta\,\text{and}\,\lvert
S^{Y,M}-\Sigma^{Y,M}\rvert_{\infty}\leq\delta\right)$ $\displaystyle\geq$
$\displaystyle 1-P\left(\lvert
S^{X,M}-\Sigma^{X,M}\rvert_{\infty}>\delta\right)-P\left(\lvert
S^{Y,M}-\Sigma^{Y,M}\rvert_{\infty}>\delta\right)$ $\displaystyle\geq$
$\displaystyle
1-C^{X}_{1}pM\exp\\{-C^{X}_{2}\Phi(T,L)M^{-(1+\beta)}\delta\\}-C^{X}_{3}(pM)^{2}\exp\\{-C^{X}_{4}nM^{-2(1+\beta)}\delta^{2}\\}-$
$\displaystyle
C^{Y}_{1}pM\exp\\{-C^{Y}_{2}\Phi(T,L)M^{-(1+\beta)}\delta\\}-C^{Y}_{3}(pM)^{2}\exp\\{-C^{Y}_{4}nM^{-2(1+\beta)}\delta^{2}\\}$
$\displaystyle\geq$ $\displaystyle
1-2\bar{C}_{1}pM\exp\\{-\bar{C}_{2}\Phi(T,L)M^{-(1+\beta)}\delta\\}-2\bar{C}_{3}(pM)^{2}\exp\\{-\bar{C}_{4}nM^{-2(1+\beta)}\delta^{2}\\},$
where $\bar{C}_{k}$ for $k=1,2,3,4$ are defined in Theorem 4. Thus, by setting
each of the last two terms in the last line of the display above equal to
$\iota/2$, we obtain (43). The rest of the proof therefore focuses on proving
(B.23).
Denote the $(j,l)$-th submatrix of $S^{M}$ as $S^{M}_{jl}$, and the $(k,m)$-th
entry of $S^{M}_{jl}$ as $\hat{\sigma}_{jl,km}$, so that we have
$S^{M}=(\hat{\sigma}_{jl,km})_{1\leq j,l\leq p,1\leq k,m\leq M}$; similarly,
let $\Sigma^{M}=(\sigma_{jl,km})_{1\leq j,l\leq p,1\leq k,m\leq M}$. Then, by
the definition of $S^{M}$ and $\Sigma^{M}$, we have
$\displaystyle\hat{\sigma}_{jl,km}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\hat{a}_{ijk}\hat{a}_{ilm}$
$\displaystyle\sigma_{jl,km}$
$\displaystyle=\mathbb{E}\left[a_{ijk}a_{ilm}\right].$
Note that
$\displaystyle\hat{a}_{ijk}$
$\displaystyle=\langle\hat{g}_{ij},\hat{\phi}_{jk}\rangle$
$\displaystyle=\langle
g_{ij}+\hat{g}_{ij}-g_{ij},\phi_{jk}+\hat{\phi}_{jk}-\phi_{jk}\rangle$
$\displaystyle=\langle g_{ij},\phi_{jk}\rangle+\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle$
$\displaystyle=a_{ijk}+\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle.$
Thus, we have
$\displaystyle\hat{\sigma}_{jl,km}-\sigma_{jl,km}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\left(\hat{a}_{ijk}\hat{a}_{ilm}-\sigma_{jl,km}\right)$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\left[a_{ijk}+\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle+\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\right]\times$
$\displaystyle\left[a_{ilm}+\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle+\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle+\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\right]-\sigma_{jl,km}$
$\displaystyle=\sum^{16}_{u=1}I_{u},$
where
$\displaystyle I_{1}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\left(a_{ijk}a_{ilm}-\mathbb{E}(a_{ijk}a_{ilm})\right),$
$\displaystyle I_{2}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}a_{ijk}\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle,$
$\displaystyle I_{3}$ $\displaystyle=\frac{1}{n}\sum^{n}_{i=1}a_{ijk}\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$ $\displaystyle I_{4}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}a_{ijk}\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$
$\displaystyle I_{5}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}a_{ilm}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle,$
$\displaystyle I_{6}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle,$
$\displaystyle I_{7}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$ $\displaystyle I_{8}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$
$\displaystyle I_{9}$ $\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle a_{ilm},$ $\displaystyle I_{10}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle,$
$\displaystyle I_{11}$ $\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$ $\displaystyle I_{12}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$
$\displaystyle I_{13}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle
a_{ilm},$ $\displaystyle I_{14}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle,$
$\displaystyle I_{15}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle,$ $\displaystyle I_{16}$
$\displaystyle=\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle.$
Note that $I_{u}$, $u=1,\ldots,16$, depend on $j,l,k,m$; to simplify the
notation, we do not denote this fact explicitly. Thus, for any $0<\delta\leq
1$, if $\lvert I_{u}\rvert\leq\delta/16$ for all $u=1,\ldots,16$, $1\leq
j,l\leq p$, and $1\leq k,m\leq M$, then $\lvert
S^{M}-\Sigma^{M}\rvert_{\infty}\leq\delta$. This way, for the rest of the
proof, we only need to bound the probability that $\lvert
I_{u}\rvert\leq\delta/16$ for all $u=1,\ldots,16$, $1\leq j,l\leq p$, and
$1\leq k,m\leq M$.
Before we proceed to calculate the probability, we need a bit more notation.
By Assumption 3 (i), we have constants $d_{1},d_{2}>0$, such that
$\lambda_{jk}\leq d_{1}k^{-\beta}$, $d_{jk}\leq d_{2}k^{1+\beta}$ for any
$j=1,\ldots,p$ and $k\geq 1$. Let $d_{0}=\max\\{1,\sqrt{d_{1}},d_{2}\\}$, let
$\xi_{ijk}=\lambda^{-1/2}_{jk}a_{ijk}$ so that $\xi_{ijk}\sim N(0,1)$ i.i.d.
for $i=1,\ldots,n$, and denote
$\displaystyle\delta_{1}$
$\displaystyle=\frac{\delta}{144d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}},$
(B.24) $\displaystyle\delta_{2}$
$\displaystyle=9\lambda_{0,\max}\delta_{1}=\frac{\delta}{16d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}},$
where $\lambda_{0,\max}=\max_{j\in V}\sum^{\infty}_{k=1}\lambda_{jk}$. Recall
that $\hat{K}_{jj}$, $j=1,\ldots,p$ are defined as in (24). We define five
events $A_{1}$-$A_{5}$ as below:
$\displaystyle A_{1}$
$\displaystyle:\;\lVert\hat{g}_{ij}-g_{ij}\rVert\leq\delta_{1},\quad\forall
i=1,\ldots,n\ \forall j=1,\ldots,p,$ (B.25) $\displaystyle A_{2}$
$\displaystyle:\;\lVert\hat{K}_{jj}-K_{jj}\rVert_{\text{HS}}\leq\delta_{2}\quad\forall
j=1,\ldots,p,$ $\displaystyle A_{3}$
$\displaystyle:\;\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}\leq\frac{3}{2}\quad\forall
j=1,\ldots,p\ \forall k=1,\ldots,M,$ $\displaystyle A_{4}$
$\displaystyle:\;\frac{1}{n}\sum^{n}_{i=1}\lVert g_{ij}\rVert^{2}\leq
2\lambda_{0,\max}\quad\forall j=1,\ldots,p,$ $\displaystyle A_{5}$
$\displaystyle:\;\lvert\frac{1}{n}\sum^{n}_{i=1}a_{ijk}a_{ilm}-\sigma_{jl,km}\rvert\leq\frac{\delta}{16}\quad\forall
1\leq j,l\leq p,\ 1\leq k,m\leq M.$
Without loss of generality, we assume that
$\langle\hat{\phi}_{jk},\phi_{jk}\rangle\geq 0$ for any $1\leq j\leq p$ and
$1\leq k\leq M$ (if this is not true, we only need to substitute $-\phi_{jk}$
for $\phi_{jk}$). Then, by Lemmas 10-25, when $A_{1}$-$A_{5}$ hold
simultaneously, we have $\lvert I_{u}\rvert\leq\delta/16$ for all
$u=1,\ldots,16$, $1\leq j,l\leq p$ and $\ 1\leq k,m\leq M$. This way, we have
$\displaystyle P\left(\lvert S^{M}-\Sigma^{M}\rvert_{\infty}\leq\delta\right)$
$\displaystyle\geq P\left(\lvert I_{u}\rvert\leq\delta/16,\;\text{for
all}\;1\leq u\leq 16,\ 1\leq j,l\leq p,\ 1\leq k,m\leq M\right)$ $\displaystyle\geq
P\left(\bigcap^{5}_{w=1}A_{w}\right).$
Or equivalently,
$P\left(\lvert S^{M}-\Sigma^{M}\rvert_{\infty}>\delta\right)\leq
P\left(\bigcup^{5}_{w=1}\bar{A}_{w}\right)\leq\sum^{5}_{w=1}P\left(\bar{A}_{w}\right),$
(B.26)
where the last inequality follows from Boole’s inequality, and $\bar{A}$
denotes the complement of $A$. It then remains to give an upper bound for
$P(\bar{A}_{w})$, $w=1,\ldots,5$.
The bound on $P(\bar{A}_{1})$ follows directly from Theorem 5. Note that by
Theorem 5 and the definition of $\tilde{\psi}_{1}$-$\tilde{\psi}_{4}$, we have
$\displaystyle P(\bar{A}_{1})=$ $\displaystyle
P\left(\lVert\hat{g}_{ij}-g_{ij}\rVert>\delta_{1}\;\exists 1\leq i\leq n,1\leq
j\leq p\right)$ $\displaystyle\leq$ $\displaystyle
2(np)\left\\{\exp\left(-\frac{\delta_{1}^{2}}{72\tilde{\psi}^{2}_{1}(T,L)+6\sqrt{2}\tilde{\psi}_{1}(T,L)\delta_{1}}\right)\right.$
$\displaystyle+$ $\displaystyle
L\exp\left(-\frac{\delta_{1}^{2}}{\tilde{\psi}_{2}(T,L)}\right)$
$\displaystyle+$
$\displaystyle\left.\exp\left(-\frac{\delta_{1}^{2}}{72\lambda_{0,\max}\tilde{\psi}_{3}(L)+6\sqrt{2\lambda_{0,\max}\tilde{\psi}_{3}(L)}\delta_{1}}\right)\right\\}.$
Let $\gamma_{1}=\sqrt{2}/(12\times 144d^{2}_{0}\sqrt{3\lambda_{0,\max}})$,
and
$\gamma_{3}=1/(72\lambda_{0,\max}\times(144d^{2}_{0}\sqrt{3\lambda_{0,\max}})^{2})$,
then when $\tilde{\psi}_{1}<\gamma_{1}\cdot\delta/M^{1+\beta}$, and
$\tilde{\psi}_{3}<\gamma_{3}\cdot\delta^{2}/M^{2+2\beta}$, we have
$72\tilde{\psi}^{2}_{1}<6\sqrt{2}\tilde{\psi}_{1}\delta_{1}$ and
$72\lambda_{0,\max}\tilde{\psi}_{3}<6\sqrt{2\lambda_{0,\max}\tilde{\psi}_{3}}\delta_{1}$,
which implies that
$\displaystyle P(\bar{A}_{1})$ (B.27) $\displaystyle\leq
2np\left\\{\exp\left(-\frac{\delta_{1}}{12\sqrt{2}\tilde{\psi}_{1}(T,L)}\right)+\exp\left(-\frac{\delta_{1}}{12\sqrt{2\lambda_{0,\max}}\sqrt{\tilde{\psi}_{3}(L)}}\right)+L\exp\left(-\frac{\delta_{1}^{2}}{\tilde{\psi}_{2}(T,L)}\right)\right\\}$
$\displaystyle\overset{(i)}{\leq}2np\left\\{\exp\left(-\frac{\delta_{1}}{12\sqrt{2}}\Phi(T,L)\right)+\exp\left(-\frac{\delta_{1}}{12\sqrt{2\lambda_{0,\max}}}\Phi(T,L)\right)+L\exp\left(-\frac{\delta_{1}^{2}}{\tilde{\psi}_{2}(T,L)}\right)\right\\}$
$\displaystyle\overset{(ii)}{\leq}4np\exp\left(-\frac{\delta_{1}}{12\sqrt{2\lambda_{0,\max}}}\Phi(T,L)\right)+2npL\exp\left(-\frac{\delta_{1}^{2}}{\tilde{\psi}_{2}(T,L)}\right)$
$\displaystyle=4np\exp\left(-\frac{1}{1728\sqrt{6}\lambda_{0,\max}d^{2}_{0}}\cdot\frac{\delta}{M^{1+\beta}}\cdot\Phi(T,L)\right)$
$\displaystyle+2npL\exp\left(-\frac{\delta^{2}}{62208d^{4}_{0}\lambda_{0,\max}M^{2+2\beta}\tilde{\psi}_{2}(T,L)}\right),$
where $(i)$ follows from the definition of $\Phi(T,L)$ and $(ii)$ follows from
the fact that $\lambda_{0,\max}>1$.
Before we calculate $P(\bar{A}_{2})$, we first compute $P(\bar{A}_{4})$. Note
that, for any two real values $z_{1},z_{2}$ and any positive integer $k$, we
have
$(z_{1}+z_{2})^{k}\leq\left(|z_{1}|+|z_{2}|\right)^{k}=2^{k}\left(\frac{1}{2}|z_{1}|+\frac{1}{2}|z_{2}|\right)^{k}\leq
2^{k-1}\left(|z_{1}|^{k}+|z_{2}|^{k}\right),$
where the last inequality follows from Jensen’s inequality applied to the
convex function $\varphi(t)=t^{k}$. For any $i=1,\ldots,n$ and
$j=1,2,\dots,p$, we have $\mathbb{E}[\|g_{ij}\|^{2}]=\lambda_{j0}$. Then,
by Jensen’s inequality and Lemma 31, for any $k\geq 2$, we have
$\displaystyle\mathbb{E}\left[\left(\|g_{ij}\|^{2}-\lambda_{j0}\right)^{k}\right]$
$\displaystyle\leq
2^{k-1}\left(\mathbb{E}\left[\|g_{ij}\|^{2k}+\lambda^{k}_{j0}\right]\right)$
$\displaystyle\leq 2^{k-1}\left((2\lambda_{j0})^{k}k!+\lambda^{k}_{j0}\right)$
$\displaystyle\leq(4\lambda_{j0})^{k}k!,$
where the second inequality follows from Lemma 31. Thus,
$\sum^{n}_{i=1}\mathbb{E}\left[\left(\|g_{ij}\|^{2}-\lambda_{j0}\right)^{k}\right]\leq\frac{k!}{2}n\times(32\lambda^{2}_{j0})\times(4\lambda_{j0})^{k-2}.$
Then by Lemma 29, for any $\epsilon>0$, we have
$P\left(\left|\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}-\lambda_{j0}\right|>\epsilon\right)\leq
2\exp\left(-\frac{n\epsilon^{2}}{64\lambda^{2}_{j0}+8\lambda_{j0}\epsilon}\right).$
This way, we further get
$\displaystyle
P\left(\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}>2\lambda_{0,\max}\right)$
$\displaystyle\leq
P\left(\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}>2\lambda_{j0}\right)$
$\displaystyle\leq
P\left(\left|\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}-\lambda_{j0}\right|>\lambda_{j0}\right)$
$\displaystyle\leq 2\exp\left(-\frac{n}{72}\right).$
Since the above inequality holds for any $j=1,\ldots,p$, we then have
$P(\bar{A}_{4})=P\left(\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}>2\lambda_{0,\max},\;\exists
j=1,\ldots,p\right)\leq 2p\exp\left(-\frac{n}{72}\right).$ (B.28)
For $P(\bar{A}_{2})$, we first let
$\hat{K}^{g}_{jj}(s,t)=\frac{1}{n}\sum^{n}_{i=1}\hat{g}_{ij}(s)\hat{g}_{ij}(t)$
for all $j\in V$, let $K^{g}_{jj}(s,t)=\mathbb{E}[g_{ij}(s)g_{ij}(t)]$, and
also let
$A^{\prime}_{2}:\;\lVert\hat{K}^{g}_{jj}-K^{g}_{jj}\rVert_{\text{HS}}\leq\delta_{2}\quad\forall
j=1,\ldots,p.$
Note that
$\displaystyle\|\hat{K}^{g}_{jj}(s,t)-K^{g}_{jj}(s,t)\|_{\text{HS}}$
$\displaystyle=\left\|\frac{1}{n}\sum^{n}_{i=1}\left[\hat{g}_{ij}(s)-g_{ij}(s)+g_{ij}(s)\right]\left[\hat{g}_{ij}(t)-g_{ij}(t)+g_{ij}(t)\right]-K^{g}_{jj}(s,t)\right\|_{\text{HS}}$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\|\hat{g}_{ij}-g_{ij}\|^{2}+\frac{2}{n}\sum^{n}_{i=1}\|\hat{g}_{ij}-g_{ij}\|\cdot\|g_{ij}\|+\left\|\frac{1}{n}\sum^{n}_{i=1}\left[g_{ij}(s)g_{ij}(t)-K^{g}_{jj}(s,t)\right]\right\|_{\text{HS}}.$
Let
$\displaystyle A_{6}$
$\displaystyle:\;\left\|\frac{1}{n}\sum^{n}_{i=1}\left[g_{ij}(s)g_{ij}(t)-K^{g}_{jj}(s,t)\right]\right\|_{\text{HS}}\leq
4\lambda_{0,\max}\delta_{1},\;\forall j=1,\ldots,p.$
We claim that $A_{1}\cap A_{4}\cap A_{6}\Rightarrow A^{\prime}_{2}$. To
prove it, note that by Jensen’s inequality, we have
$\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\left\|g_{ij}\right\|^{2}},$
thus, when $A_{4}$ holds, we have
$(1/n)\sum^{n}_{i=1}\left\|g_{ij}\right\|\leq\sqrt{2\lambda_{0,\max}}$ for any
$j=1,\ldots,p$. This way, when $A_{1}$, $A_{4}$ and $A_{6}$ hold
simultaneously, we have
$\|\hat{K}^{g}_{jj}(s,t)-K^{g}_{jj}(s,t)\|_{\text{HS}}\leq\delta^{2}_{1}+2\sqrt{2\lambda_{0,\max}}\delta_{1}+4\lambda_{0,\max}\delta_{1}\leq
9\lambda_{0,\max}\delta_{1}=\delta_{2},$
which is $A^{\prime}_{2}$. This way, we have proved $A_{1}\cap A_{4}\cap$
A_{6}\Rightarrow A^{\prime}_{2}$, which implies that
$\bar{A^{\prime}}_{2}\Rightarrow\bar{A}_{1}\cup\bar{A}_{4}\cup\bar{A}_{6}$,
and thus $P(\bar{A^{\prime}}_{2})\leq
P(\bar{A}_{1})+P(\bar{A}_{4})+P(\bar{A}_{6})$. $P(\bar{A}_{1})$ has been given
by (B.27) and $P(\bar{A}_{4})$ has been given by (B.28), thus we only need to
compute $P(\bar{A}_{6})$.
By Lemma 32, for any $j=1,\ldots,p$, we have
$P\left(\left\|\frac{1}{n}\sum^{n}_{i=1}\left[g_{ij}(s)g_{ij}(t)-K^{g}_{jj}(s,t)\right]\right\|_{\text{HS}}>4\lambda_{0,\max}\delta_{1}\right)\leq
2\exp\left(-\frac{n\delta^{2}_{1}}{6}\right),$
thus
$P(\bar{A}_{6})\leq
2p\exp\left(-\frac{n\delta^{2}_{1}}{6}\right)=2p\exp\left(-\frac{1}{373248d^{4}_{0}\lambda^{2}_{0,\max}}\times
n\frac{\delta^{2}}{M^{2+2\beta}}\right).$ (B.29)
This way, by combining (B.27), (B.28) and (B.29), we have
$\displaystyle P(\bar{A^{\prime}}_{2})\leq$ $\displaystyle
4pM\exp\left(-\frac{1}{1728\sqrt{6}\lambda_{0,\max}d^{2}_{0}}\cdot\frac{\delta}{M^{1+\beta}}\cdot\Phi(T,L)\right)+2p\exp\left(-\frac{n}{72}\right)$
$\displaystyle+$ $\displaystyle
2p\exp\left(-\frac{1}{373248d^{4}_{0}\lambda^{2}_{0,\max}}\times
n\frac{\delta^{2}}{M^{2+2\beta}}\right).$
Finally, since
$\|\hat{K}_{jj}(s,t)-K_{jj}(s,t)\|_{\text{HS}}\leq\|\hat{K}^{X}_{jj}(s,t)-K^{X}_{jj}(s,t)\|_{\text{HS}}+\|\hat{K}^{Y}_{jj}(s,t)-K^{Y}_{jj}(s,t)\|_{\text{HS}}$,
we have $P(\bar{A}_{2})\leq
P(\bar{A^{\prime}}_{X,2})+P(\bar{A^{\prime}}_{Y,2})$, where $A^{\prime}_{X,2}$
and $A^{\prime}_{Y,2}$ are defined analogously to $A^{\prime}_{2}$ with $g$
taken to be $X$ and $Y$, respectively. Thus, we have
$\displaystyle P(\bar{A}_{2})\leq$ $\displaystyle
8pM\exp\left(-\frac{1}{1728\sqrt{6}\lambda_{0,\max}d^{2}_{0}}\cdot\frac{\delta}{M^{1+\beta}}\cdot\Phi(T,L)\right)+4p\exp\left(-\frac{n}{72}\right)$
$\displaystyle+$ $\displaystyle
4p\exp\left(-\frac{1}{373248d^{4}_{0}\lambda^{2}_{0,\max}}\times
n\frac{\delta^{2}}{M^{2+2\beta}}\right).$
For $P(\bar{A}_{3})$, by Pages 28-29 of Boucheron et al. (2013), and noting that $\sum^{n}_{i=1}\xi^{2}_{ijk}\sim\chi^{2}_{n}$ for any $j=1,\ldots,p$ and $k=1,\ldots,M$, we have that for any $\epsilon>0$,
$P\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}-1>\epsilon\right)\leq\exp\left(-\frac{n\epsilon^{2}}{4+4\epsilon}\right).$
Thus, by letting $\epsilon=1/2$, we have
$P\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}>\frac{3}{2}\right)\leq\exp\left(-\frac{n}{24}\right),$
which implies that
$P(\bar{A}_{3})\leq pM\exp\left(-\frac{n}{24}\right).$ (B.30)
Finally, for $P(\bar{A}_{5})$, we first claim that for any $\epsilon>0$ and
$1\leq j,l\leq p$, $1\leq k,m\leq M$, we have
$P\left(\left|\frac{1}{n}\sum^{n}_{i=1}a_{ijk}a_{ilm}-\sigma_{jl,km}\right|>\epsilon\right)\leq
2\exp\left(-\frac{n\epsilon^{2}}{64d^{2}_{0}+8d_{0}\epsilon}\right).$
We now prove this claim. For any integer $r\geq 2$ (we write the exponent as $r$ to avoid confusion with the fixed index $k$), note that
$\displaystyle\mathbb{E}\left[\left(a_{ijk}a_{ilm}-\mathbb{E}(a_{ijk}a_{ilm})\right)^{r}\right]$ $\displaystyle=\lambda^{r/2}_{jk}\lambda^{r/2}_{lm}\mathbb{E}\left[\left(\xi_{ijk}\xi_{ilm}-\mathbb{E}(\xi_{ijk}\xi_{ilm})\right)^{r}\right]$ $\displaystyle\leq d^{r}_{0}\mathbb{E}\left[\left(\xi_{ijk}\xi_{ilm}-\mathbb{E}(\xi_{ijk}\xi_{ilm})\right)^{r}\right],$
and
$\displaystyle\mathbb{E}\left[\left(\xi_{ijk}\xi_{ilm}-\mathbb{E}(\xi_{ijk}\xi_{ilm})\right)^{r}\right]$ $\displaystyle\leq 2^{r-1}\left(\mathbb{E}\left[|\xi_{ijk}\xi_{ilm}|^{r}\right]+|\mathbb{E}(\xi_{ijk}\xi_{ilm})|^{r}\right)$ $\displaystyle\leq 2^{r-1}\left(\mathbb{E}[\xi^{2r}_{ij1}]+1\right)$ $\displaystyle\leq 2^{r-1}(2^{r}r!+1)$ $\displaystyle\leq 4^{r}r!,$
thus
$\mathbb{E}\left[\left(a_{ijk}a_{ilm}-\mathbb{E}(a_{ijk}a_{ilm})\right)^{r}\right]\leq(4d_{0})^{r}r!.$
The claim then follows directly from Lemma 29. By letting
$\epsilon=\delta/16$,
$P\left(\left|\frac{1}{n}\sum^{n}_{i=1}a_{ijk}a_{ilm}-\sigma_{jl,km}\right|>\frac{\delta}{16}\right)\leq
2\exp\left(-\frac{n\delta^{2}}{16^{2}\times 64\times
d^{2}_{0}+128d_{0}\delta}\right)\leq
2\exp\left(-\frac{n\delta^{2}}{16512d^{2}_{0}}\right)$
holds for any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, which further implies
that
$P\left(\bar{A}_{5}\right)\leq
2(pM)^{2}\exp\left(-\frac{n\delta^{2}}{16512d^{2}_{0}}\right).$ (B.31)
Let $C_{1}=12$, $C_{2}=1/(1728\sqrt{6}\lambda_{0,\max})$, $C_{3}=9$,
$C_{4}=1/(373248d^{4}_{0}\lambda^{2}_{0,\max})$, $C_{5}=2$, and
$C_{6}=1/(6228d^{4}_{0}\lambda_{0,\max})$, then the final result follows by
combining (B.27)-(B.31).
## Appendix C More Theorems
In this section, we introduce more theorems along with their proofs.
### C.1 Theorem 5 and Its Proof
In this section, we give a non-asymptotic error bound for the function estimated by basis expansion. This theorem is used in proving Theorem 4.
For a random function $g(t)$ lying in a separable Hilbert space $\mathbb{H}$, where $t\in\mathcal{T}$, a closed interval of the real line, we have noisy discrete observations at time points $t_{1},t_{2},\dots,t_{T}$ generated from the model below:
$h_{k}=g(t_{k})+\epsilon_{k},$ (C.1)
where $\epsilon_{k}\overset{\text{i.i.d.}}{\sim}N(0,\sigma^{2}_{0})$ for
$k=1,\ldots,T$. Let $b(t)=(b_{1}(t),b_{2}(t),\dots,b_{L}(t))^{\top}$ be the vector of basis functions. We use a basis expansion to get
$\hat{g}(t)=\hat{\beta}^{\top}b(t)$, the estimator of $g(t)$, where
$\hat{\beta}\in\mathbb{R}^{L}$ is obtained by minimizing the least square
loss:
$\hat{\beta}=\operatorname*{arg\,min}_{\beta\in\mathbb{R}^{L}}\sum^{T}_{k=1}\left(\beta^{\top}b(t_{k})-h_{k}\right)^{2}.$
(C.2)
We define the design matrix $B$ as
$B=\left[\begin{matrix}b_{1}(t_{1})&\cdots&b_{L}(t_{1})\\\
\vdots&\ddots&\vdots\\\
b_{1}(t_{T})&\cdots&b_{L}(t_{T})\end{matrix}\right]\in\mathbb{R}^{T\times L},$
(C.3)
so that
$\hat{\beta}=\left(B^{\top}B\right)^{-1}B^{\top}h,$ (C.4)
where $h=(h_{1},h_{2},\dots,h_{T})^{\top}\in\mathbb{R}^{T}$.
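The estimator (C.2)-(C.4) is an ordinary least squares fit of the basis coefficients. The following Python sketch is a minimal illustration of this computation (not part of the theory: the Fourier basis, the test function, and all numeric values below are our own arbitrary choices):

```python
import numpy as np

# Noisy observations h_k = g(t_k) + eps_k on T = [0, 1], as in model (C.1).
T, L, sigma0 = 200, 7, 0.1
t = np.linspace(0.0, 1.0, T)
g = lambda s: np.sin(2 * np.pi * s) + 0.5 * np.cos(6 * np.pi * s)  # example g
h = g(t) + sigma0 * np.random.default_rng(0).normal(size=T)

# Design matrix B in (C.3), built from the first L Fourier basis functions.
def basis(s, l):
    if l == 0:
        return np.ones_like(s)
    k = (l + 1) // 2
    trig = np.sin if l % 2 else np.cos
    return np.sqrt(2.0) * trig(2 * np.pi * k * s)

B = np.column_stack([basis(t, l) for l in range(L)])

# Least squares coefficients beta_hat = (B^T B)^{-1} B^T h, as in (C.4).
beta_hat, *_ = np.linalg.lstsq(B, h, rcond=None)
g_hat = B @ beta_hat  # estimated g evaluated at the observation points

print("RMSE of g_hat on the grid:", np.sqrt(np.mean((g_hat - g(t)) ** 2)))
```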
We assume that $g(t)=\sum^{\infty}_{m=1}\beta^{*}_{m}b_{m}(t)$, and we can
decompose $g(t)$ as $g=g^{\shortparallel}+g^{\bot}$, where
$g^{\shortparallel}\in{\rm Span}(b)$ and $g^{\bot}\in{\rm Span}(b)^{\bot}$.
Let $\lambda_{0}\coloneqq\mathbb{E}[\|g\|^{2}]$ and
$\lambda^{\bot}_{0}\coloneqq\mathbb{E}[\|g^{\bot}\|^{2}]$. Then it is easy to
check that $\lambda_{0}=\sum^{\infty}_{m=1}\mathbb{E}[(\beta^{*}_{m})^{2}]$
and $\lambda^{\bot}_{0}=\sum^{\infty}_{m>L}\mathbb{E}[(\beta^{*}_{m})^{2}]$.
We assume that the basis functions $\\{b_{l}(t)\\}^{\infty}_{l=1}$ compose a
complete orthonormal system (CONS) of $\mathbb{H}$, that is, $\overline{{\rm
Span}\left(\\{b_{l}\\}^{\infty}_{l=1}\right)}=\mathbb{H}$ (see Definition
2.4.11 of Hsing and Eubank (2015)), and have continuous derivative functions
with
$D_{0,b}\coloneqq\sup_{l\geq 1}\sup_{t\in\mathcal{T}}\lvert
b_{l}(t)\rvert<\infty,\qquad D_{1,b}(l)\coloneqq\sup_{t\in\mathcal{T}}\lvert
b^{\prime}_{l}(t)\rvert<\infty,\qquad D_{1,b,L}\coloneqq\max_{1\leq l\leq
L}D_{1,b}(l).$ (C.5)
We further assume that the observation time points $\\{t_{k}:1\leq k\leq T\\}$
satisfy
$\max_{1\leq k\leq T+1}\left|\frac{t_{k}-t_{k-1}}{\lvert\mathcal{T}\rvert}-\frac{1}{T}\right|\leq\frac{\zeta_{0}}{T^{2}},$
(C.6)
where $t_{0}$ and $t_{T+1}$ are the endpoints of $\mathcal{T}$ and $\zeta_{0}$
is a positive constant. Besides, we assume that
$\sum^{\infty}_{m=1}\mathbb{E}\left[(\beta^{*}_{m})^{2}\right]D^{2}_{1,b}(m)<\infty$,
we then define
$\psi_{4}(L)=\sum_{m>L}\mathbb{E}\left[(\beta^{*}_{m})^{2}\right]D^{2}_{1,b}(m).$
Let
$\displaystyle\psi_{1}(T,L)$
$\displaystyle=\frac{\sigma_{0}L}{\sqrt{\lambda_{\min}\left(B^{\top}B\right)}},\qquad\psi_{3}(L)=\lambda^{\bot}_{0}/\lambda_{0},$
and
$\displaystyle\psi_{2}(T,L)$
$\displaystyle=\frac{1}{(\lambda^{B}_{\min})^{2}}\left(18\lambda_{0}\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b,L}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]L^{2}\psi_{3}(L)\right.$
$\displaystyle\left.\qquad\qquad+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}L^{2}\psi_{4}(L)\right).$
We then have the following theorem.
###### Theorem 5
For any $\delta>0$, we have
$P\left(\lVert g-\hat{g}\rVert>\delta\right)\leq
2\exp\left(-\frac{\delta^{2}}{72\psi^{2}_{1}(T,L)+6\sqrt{2}\psi_{1}(T,L)\delta}\right)+L\exp\left(-\frac{\delta^{2}}{\psi_{2}(T,L)}\right)\\\
+2\exp\left(-\frac{\delta^{2}}{72\lambda_{0}\psi_{3}(L)+6\sqrt{2\lambda_{0}}\sqrt{\psi_{3}(L)}\delta}\right).$
(C.7)
Proof Throughout the proof, we often use the technique of first treating $g$ as a fixed function; that is, we consider probabilities conditioned on $g$, so the only randomness comes from $\epsilon_{k}$, $k=1,\ldots,T$. We will then include the randomness from $g$. Note that since $\epsilon_{k}$ is independent of $g$, the conditional distribution of $\epsilon_{k}$ is the same as its unconditional distribution.
For a fixed $g$, since $\overline{{\rm
Span}\left(\\{b_{l}\\}^{\infty}_{l=1}\right)}=\mathbb{H}$, we can assume that
$g(t)=\sum^{\infty}_{l=1}\beta^{*}_{l}b_{l}(t)$ where $\beta^{*}_{l}=\langle
g,b_{l}\rangle=\int_{\mathcal{T}}g(t)b_{l}(t)dt$. Let
$\beta^{*}=(\beta^{*}_{1},\cdots,\beta^{*}_{L})^{\top}\in\mathbb{R}^{L}$, we
then have
$g^{\shortparallel}(t)=(\beta^{*})^{\top}b(t)=\sum^{L}_{l=1}\beta^{*}_{l}b_{l}(t)$
and $g^{\bot}(t)=\sum_{l>L}\beta^{*}_{l}b_{l}(t)$. Thus, we have
$h_{k}=g(t_{k})+\epsilon_{k}=(\beta^{*})^{\top}b(t_{k})+g^{\bot}(t_{k})+\epsilon_{k}.$
Let
$h^{\bot}=\left(g^{\bot}(t_{1}),g^{\bot}(t_{2}),\dots,g^{\bot}(t_{T})\right)^{\top}$,
$\epsilon=\left(\epsilon_{1},\epsilon_{2},\dots,\epsilon_{T}\right)^{\top}$,
we then have
$h=B\beta^{*}+h^{\bot}+\epsilon.$
Thus,
$\mathbb{E}(\hat{\beta})=\beta^{*}+\left(B^{\top}B\right)^{-1}B^{\top}h^{\bot},$
and
$\displaystyle\hat{g}(t)-g(t)$
$\displaystyle=\hat{g}(t)-g^{\shortparallel}(t)-g^{\bot}(t)$
$\displaystyle=\hat{g}(t)-(\beta^{*})^{\top}b(t)-g^{\bot}(t)$
$\displaystyle=\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)^{\top}b(t)+\left(\left(B^{\top}B\right)^{-1}B^{\top}h^{\bot}\right)^{\top}b(t)-g^{\bot}(t).$
By Lemma 26, we then have
$\displaystyle\lVert\hat{g}-g\rVert$
$\displaystyle\leq\lVert\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)^{\top}b(t)\rVert+\lVert\left(\left(B^{\top}B\right)^{-1}B^{\top}h^{\bot}\right)^{\top}b(t)\rVert+\lVert
g^{\bot}\rVert$
$\displaystyle\leq\lvert\hat{\beta}-\mathbb{E}(\hat{\beta})\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}+\lvert\left(B^{\top}B\right)^{-1}B^{\top}h^{\bot}\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}+\lVert g^{\bot}\rVert$
$\displaystyle\leq\lvert\hat{\beta}-\mathbb{E}(\hat{\beta})\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}+\frac{1}{\lambda_{\min}(B^{\top}B)}\times\left\lvert
B^{\top}h^{\bot}\right\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}+\lVert g^{\bot}\rVert.$
Let
$\displaystyle J_{1}$ $\displaystyle=\lvert\hat{\beta}-\mathbb{E}(\hat{\beta})\rvert_{2}\times\lVert b\rVert_{\mathcal{L}^{2},2}$
(C.8) $\displaystyle J_{2}$
$\displaystyle=\frac{1}{\lambda_{\min}(B^{\top}B)}\times\lvert B^{\top}h^{\bot}\rvert_{2}\times\lVert b\rVert_{\mathcal{L}^{2},2}$
$\displaystyle J_{3}$ $\displaystyle=\lVert g^{\bot}\rVert,$
where $\lvert\mathcal{T}\rvert$ denotes the length of the interval $\mathcal{T}$; then
$\lVert\hat{g}-g\rVert\leq J_{1}+J_{2}+J_{3}.$ (C.9)
Since this bound holds for any $g\in\mathbb{H}$, when we include the randomness from $g$, the above inequality holds with probability one. We then bound $J_{1}$, $J_{2}$ and $J_{3}$ individually.
First, for $J_{1}$, recall that $\lVert b\rVert_{\mathcal{L}^{2},2}=\sqrt{L}$
and $\psi_{1}(T,L)=\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}\sqrt{L}/\sqrt{\lambda_{\min}\left(B^{\top}B\right)}$,
then for any $\delta>0$, we claim that
$P\left(J_{1}>\delta\right)\leq
2\exp\left(-\frac{\delta^{2}}{8\psi^{2}_{1}(T,L)+2\sqrt{2}\psi_{1}(T,L)\delta}\right).$
(C.10)
To prove this result, we first treat $g$ as fixed, then note that by standard
linear regression theory, we have
$\hat{\beta}\sim
N_{L}\left(\mathbb{E}(\hat{\beta}),\sigma^{2}_{0}\left(B^{\top}B\right)^{-1}\right).$
Thus,
$\frac{1}{\sigma_{0}}\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\sim
N_{L}\left(0,I_{L}\right).$
Since
$\displaystyle J_{1}$
$\displaystyle=\lvert\hat{\beta}-\mathbb{E}(\hat{\beta})\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}$
$\displaystyle=\lvert\left(B^{\top}B\right)^{-1/2}\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}$
$\displaystyle\leq\frac{1}{\sqrt{\lambda_{\min}\left(B^{\top}B\right)}}\lvert\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\rvert_{2}\times\lVert
b\rVert_{\mathcal{L}^{2},2}$ $\displaystyle=\frac{\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}}{\sqrt{\lambda_{\min}\left(B^{\top}B\right)}}\lvert\frac{1}{\sigma_{0}}\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\rvert_{2},$
we have
$\displaystyle P(J_{1}>\delta)$ $\displaystyle\leq
P\left(\frac{\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}}{\sqrt{\lambda_{\min}\left(B^{\top}B\right)}}\lvert\frac{1}{\sigma_{0}}\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\rvert_{2}>\delta\right)$
$\displaystyle=P\left(\lvert\frac{1}{\sigma_{0}}\left(B^{\top}B\right)^{1/2}\left(\hat{\beta}-\mathbb{E}(\hat{\beta})\right)\rvert_{2}>\frac{\delta}{\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}/\sqrt{\lambda_{\min}\left(B^{\top}B\right)}}\right)$
$\displaystyle\overset{(i)}{\leq}2\exp\left(-\frac{\left(\delta/\left(\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}/\sqrt{\lambda_{\min}\left(B^{\top}B\right)}\right)\right)^{2}}{8L+2\sqrt{2}\left(\left(\delta/\left(\sigma_{0}\lVert
b\rVert_{\mathcal{L}^{2},2}/\sqrt{\lambda_{\min}\left(B^{\top}B\right)}\right)\right)\right)}\right)$
$\displaystyle=2\exp\left(-\frac{\delta^{2}}{8\psi^{2}_{1}(T,L)+2\sqrt{2}\psi_{1}(T,L)\delta}\right),$
where $(i)$ follows Lemma 28. Now if we treat $g$ as random, we only need to
note that
$\displaystyle P\left(J_{1}>\delta\right)$
$\displaystyle=\mathbb{E}_{g}\left[P\left(J_{1}>\delta|g\right)\right]$
$\displaystyle=\mathbb{E}_{g}\left[2\exp\left(-\frac{\delta^{2}}{8\psi^{2}_{1}(T,L)+2\sqrt{2}\psi_{1}(T,L)\delta}\right)\right]$
$\displaystyle=2\exp\left(-\frac{\delta^{2}}{8\psi^{2}_{1}(T,L)+2\sqrt{2}\psi_{1}(T,L)\delta}\right).$
Next, for $J_{2}$, we claim that for any $\delta>0$, we have
$\mathbb{P}\left(J_{2}>\delta\right)\leq
L\exp\left(-\frac{9\delta^{2}}{\psi_{2}(T,L)}\right).$ (C.11)
We use $(B^{\top}h^{\bot})_{l}$ to denote the $l$-th element of vector
$B^{\top}h^{\bot}$, then we have
$(B^{\top}h^{\bot})_{l}=\sum^{T}_{k=1}b_{l}(t_{k})g^{\bot}(t_{k})=\sum_{m>L}\beta^{*}_{m}\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k}).$
Since $g$ is a Gaussian random function with mean zero, $(B^{\top}h^{\bot})_{l}$ is a Gaussian random variable. Besides, we have
$\mathbb{E}\left[(B^{\top}h^{\bot})_{l}\right]=0$ and
$\mathbb{E}\left[(B^{\top}h^{\bot})^{2}_{l}\right]=\sum_{m>L}\mathbb{E}\left[\beta^{*2}_{m}\right]\left(\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k})\right)^{2}$
(C.12)
By definition of $D_{0,b}$, $D_{1,b}(\cdot)$, for any $l<m$, we have that
$\sup_{t\in\mathcal{T}}(b_{l}(t)b_{m}(t))\leq D^{2}_{0,b}$, and
$\sup_{t\in\mathcal{T}}(b_{l}(t)b_{m}(t))^{\prime}=\sup_{t\in\mathcal{T}}\\{b^{\prime}_{l}(t)b_{m}(t)+b_{l}(t)b^{\prime}_{m}(t)\\}\leq
D_{0,b}(D_{1,b}(l)+D_{1,b}(m))$. Note that
$\int_{\mathcal{T}}b_{l}(t)b_{m}(t)dt=0$ for any $l<m$, then by Lemma 30, we
have
$\displaystyle\left|\frac{1}{T}\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k})\right|$
$\displaystyle=$
$\displaystyle\left|\frac{1}{T}\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k})-\frac{1}{|\mathcal{T}|}\int_{\mathcal{T}}b_{l}(t)b_{m}(t)dt\right|$
$\displaystyle\leq$
$\displaystyle\frac{D_{0,b}(D_{1,b}(l)+D_{1,b}(m))(\zeta_{0}+1)^{2}|\mathcal{T}|/2+D^{2}_{0,b}(2\zeta_{0}+1)}{T}$
for all $1\leq l<m<\infty$, which implies that
$\left|\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k})\right|\leq\frac{1}{2}D_{0,b}(\zeta_{0}+1)^{2}|\mathcal{T}|(D_{1,b}(l)+D_{1,b}(m))+D^{2}_{0,b}(2\zeta_{0}+1).$
Then we have
$\displaystyle\left(\sum^{T}_{k=1}b_{l}(t_{k})b_{m}(t_{k})\right)^{2}$
$\displaystyle\leq\left(\frac{1}{2}D_{0,b}(\zeta_{0}+1)^{2}|\mathcal{T}|(D_{1,b}(l)+D_{1,b}(m))+D^{2}_{0,b}(2\zeta_{0}+1)\right)^{2}$
$\displaystyle\leq\frac{1}{2}D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}(D_{1,b}(l)+D_{1,b}(m))^{2}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}$
$\displaystyle\leq
D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}(D^{2}_{1,b}(l)+D^{2}_{1,b}(m))+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}.$
By (C.12), we then have
$\displaystyle\mathbb{E}\left[(B^{\top}h^{\bot})^{2}_{l}\right]$
$\displaystyle\leq\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b}(l)+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]\sum_{m>L}\mathbb{E}\left[\beta^{*2}_{m}\right]$
$\displaystyle+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}\sum_{m>L}\mathbb{E}\left[\beta^{*2}_{m}\right]D^{2}_{1,b}(m)$
$\displaystyle\leq\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b}(l)+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]\lambda^{\bot}_{0}+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}\psi_{4}(L)$
$\displaystyle\leq\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b,L}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]\lambda^{\bot}_{0}+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}\psi_{4}(L)$
$\displaystyle=\lambda_{0}\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b,L}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]\psi_{3}(L)+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}\psi_{4}(L)$
Thus, by tail bound of Gaussian random variable (Section 2.1.2 of Wainwright
(2019)), we have
$\displaystyle\mathbb{P}\left((B^{\top}h^{\bot})_{l}>\delta\right)\leq$
$\displaystyle\exp\left(-\frac{\delta^{2}}{2\lambda_{0}\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b,L}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]\psi_{3}(L)+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}\psi_{4}(L)}\right).$
Recall that
$\displaystyle\psi_{2}(T,L)$
$\displaystyle=\frac{1}{(\lambda^{B}_{\min})^{2}}\left(18\lambda_{0}\left[D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}D^{2}_{1,b,L}+2D^{4}_{0,b}(2\zeta_{0}+1)^{2}\right]L^{2}\psi_{3}(L)\right.$
$\displaystyle\left.\qquad\qquad+D^{2}_{0,b}(\zeta_{0}+1)^{4}|\mathcal{T}|^{2}L^{2}\psi_{4}(L)\right),$
and note that $\|b\|_{\mathcal{L}^{2},2}=\sqrt{L}$, then we have
$\displaystyle\mathbb{P}\left(J_{2}>\delta\right)$
$\displaystyle\leq\mathbb{P}\left(\lvert
B^{\top}h^{\bot}\rvert_{2}>\frac{\lambda^{B}_{\min}\delta}{\sqrt{L}}\right)\leq\mathbb{P}\left(\max_{1\leq
l\leq L}(B^{\top}h^{\bot})_{l}>\frac{\lambda^{B}_{\min}\delta}{L}\right)$
(C.13) $\displaystyle\leq
L\exp\left(-\frac{9\delta^{2}}{\psi_{2}(T,L)}\right).$
Finally, for $J_{3}$, by Lemma 31 and definition of $\psi_{3}(L)$, we have
$\mathbb{E}\left[\|g^{\bot}\|^{2k}\right]\leq(2\lambda_{0}\psi_{3}(L))^{k}k!.$
This way, by Jensen’s inequality, we have
$\mathbb{E}\left[\|g^{\bot}\|^{k}\right]=\mathbb{E}\left[\sqrt{\|g^{\bot}\|^{2k}}\right]\leq\sqrt{\mathbb{E}\left[\|g^{\bot}\|^{2k}\right]}\leq\left(\sqrt{2\lambda_{0}\psi_{3}(L)}\right)^{k}k!.$
Thus, by Lemma 29, we have
$P\left(J_{3}>\delta\right)=P\left(\|g^{\bot}\|>\delta\right)\leq
2\exp\left(-\frac{\delta^{2}}{8\lambda_{0}\psi_{3}(L)+2\sqrt{2\lambda_{0}}\sqrt{\psi_{3}(L)}\delta}\right).$
(C.14)
The final result then follows (C.10), (C.13) and (C.14), and the fact that
$\mathbb{P}\left(J_{1}+J_{2}+J_{3}>\delta\right)\leq\mathbb{P}\left(J_{1}>\delta/3\right)+\mathbb{P}\left(J_{2}>\delta/3\right)+\mathbb{P}\left(J_{3}>\delta/3\right).$
∎
## Appendix D Lemmas and their proofs
In this section, we introduce some useful lemmas along with their proofs.
###### Lemma 5
Let $\sigma_{\max}=\max\\{|\Sigma^{X,M}|_{\infty},\
|\Sigma^{Y,M}|_{\infty}\\}$. Suppose that
$|S^{X,M}-\Sigma^{X,M}|_{\infty}\leq\delta,\qquad|S^{Y,M}-\Sigma^{Y,M}|_{\infty}\leq\delta,$
(D.1)
for some $\delta\geq 0$. Then
$\displaystyle|(S^{Y,M}\otimes{S^{X,M}})-(\Sigma^{Y,M}\otimes{\Sigma^{X,M}})|_{\infty}\leq\delta^{2}+2\delta\sigma_{\max},$
(D.2) and
$\displaystyle|\operatorname{vec}{(S^{Y,M}-S^{X,M})}-\operatorname{vec}{(\Sigma^{Y,M}-\Sigma^{X,M})}|_{\infty}\leq
2\delta.$ (D.3)
Proof Note that for any $(j,l),(j^{\prime},l^{\prime})\in V^{2}$ and $1\leq
k,k^{\prime},m,m^{\prime}\leq M$, by (D.1), we have
$\displaystyle\left|S^{X,M}_{jl,km}S^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}-\Sigma^{X,M}_{jl,km}\Sigma^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}\right|$
$\displaystyle\leq\left|S^{X,M}_{jl,km}-\Sigma^{X,M}_{jl,km}\right|\cdot\left|S^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}-\Sigma^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}\right|+\left|\Sigma^{X,M}_{jl,km}\right|\cdot\left|S^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}-\Sigma^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}\right|$
$\displaystyle\quad+\left|\Sigma^{Y,M}_{j^{\prime}l^{\prime},k^{\prime}m^{\prime}}\right|\cdot\left|S^{X,M}_{jl,km}-\Sigma^{X,M}_{jl,km}\right|$
$\displaystyle\leq\left|S^{X,M}-\Sigma^{X,M}\right|_{\infty}\left|S^{Y,M}-\Sigma^{Y,M}\right|_{\infty}+\sigma_{\max}\left|S^{Y,M}-\Sigma^{Y,M}\right|_{\infty}+\sigma_{\max}\left|S^{X,M}-\Sigma^{X,M}\right|_{\infty}$
$\displaystyle\leq\delta^{2}+2\delta\sigma_{\max}.$
For (D.3), note that
$\displaystyle\left|\operatorname{vec}{(S^{Y,M}-S^{X,M})}-\operatorname{vec}{(\Sigma^{Y,M}-\Sigma^{X,M})}\right|_{\infty}$
$\displaystyle=\left|(S^{X,M}-\Sigma^{X,M})-(S^{Y,M}-\Sigma^{Y,M})\right|_{\infty}$
$\displaystyle\leq|S^{X,M}-\Sigma^{X,M}|_{\infty}+|S^{Y,M}-\Sigma^{Y,M}|_{\infty}$
$\displaystyle\leq 2\delta.$
∎
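As a quick numerical sanity check of (D.2) (a sketch of our own; the matrix size and perturbation level are arbitrary), one can draw matrices within entrywise distance $\delta$ of each other, so that (D.1) holds by construction, and compare the two sides:

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.05
SigX, SigY = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
# Perturb each entry by at most delta, so (D.1) holds by construction.
SX = SigX + rng.uniform(-delta, delta, size=(3, 3))
SY = SigY + rng.uniform(-delta, delta, size=(3, 3))

sigma_max = max(np.abs(SigX).max(), np.abs(SigY).max())
lhs = np.abs(np.kron(SY, SX) - np.kron(SigY, SigX)).max()  # left side of (D.2)
rhs = delta**2 + 2 * delta * sigma_max                     # right side of (D.2)
print(lhs, rhs, lhs <= rhs)
```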
###### Lemma 6
Let $Z^{(1)},Z^{(2)},A^{(1)},A^{(2)}\in\mathbb{R}^{M\times M}$, and denote the solution of
$\operatorname*{arg\,min}_{\\{Z^{(1)},Z^{(2)}\\}}\;\frac{1}{2}\sum^{2}_{q=1}\|Z^{(q)}-A^{(q)}\|^{2}_{\text{F}}+\lambda\|Z^{(1)}-Z^{(2)}\|_{\text{F}}$
(D.4)
by $\\{\hat{Z}^{(1)},\hat{Z}^{(2)}\\}$, where $\lambda>0$ is a constant. Then when $\|A^{(1)}-A^{(2)}\|_{\text{F}}\leq 2\lambda$, we have
$\hat{Z}^{(1)}=\hat{Z}^{(2)}=\frac{1}{2}\left(A^{(1)}+A^{(2)}\right),$ (D.5)
and when $\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$, we have
$\displaystyle\hat{Z}^{(1)}=A^{(1)}-\frac{\lambda}{\|A^{(1)}-A^{(2)}\|_{\text{F}}}\left(A^{(1)}-A^{(2)}\right)$
(D.6)
$\displaystyle\hat{Z}^{(2)}=A^{(2)}+\frac{\lambda}{\|A^{(1)}-A^{(2)}\|_{\text{F}}}\left(A^{(1)}-A^{(2)}\right).$
Proof The subdifferential of the objective function in (D.4) is
$G^{(1)}(Z^{(1)},Z^{(2)})\coloneqq\partial_{Z^{(1)}}=Z^{(1)}-A^{(1)}+\lambda T(Z^{(1)},Z^{(2)}),$ (D.7)
$G^{(2)}(Z^{(1)},Z^{(2)})\coloneqq\partial_{Z^{(2)}}=Z^{(2)}-A^{(2)}-\lambda T(Z^{(1)},Z^{(2)}),$ (D.8)
where
$T(Z^{(1)},Z^{(2)})=\left\\{\begin{aligned}
&\frac{Z^{(1)}-Z^{(2)}}{\|Z^{(1)}-Z^{(2)}\|_{\text{F}}}\quad\text{if}\;Z^{(1)}\neq
Z^{(2)}\\\ &\left\\{T\in\mathbb{R}^{M\times M}:\|T\|_{\text{F}}\leq
1\right\\}\quad\text{if}\;Z^{(1)}=Z^{(2)}\end{aligned}\right..$ (D.9)
The optimality condition is
$0\in G^{(q)}(Z^{(1)},Z^{(2)})\quad q=1,2.$ (D.10)
We claim that $\hat{Z}^{(1)}\neq\hat{Z}^{(2)}$ if and only if $\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$.
We first prove the necessity, that is, when $\hat{Z}^{(1)}\neq\hat{Z}^{(2)}$, we prove that $\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$. By (D.7)-(D.10), we have
$\hat{Z}^{(1)}-\hat{Z}^{(2)}-\left(A^{(1)}-A^{(2)}\right)+2\lambda\frac{\hat{Z}^{(1)}-\hat{Z}^{(2)}}{\|\hat{Z}^{(1)}-\hat{Z}^{(2)}\|_{\text{F}}}=0,$
which implies that
$\|A^{(1)}-A^{(2)}\|_{\text{F}}=2\lambda+\|\hat{Z}^{(1)}-\hat{Z}^{(2)}\|_{\text{F}}>2\lambda.$
We then prove the sufficiency, that is, when
$\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$, we prove
$\hat{Z}^{(1)}\neq\hat{Z}^{(2)}$. Note that by (D.7)-(D.10), we have
$\hat{Z}^{(1)}+\hat{Z}^{(2)}=A^{(1)}+A^{(2)}.$
If $\hat{Z}^{(1)}=\hat{Z}^{(2)}$, we then have
$\hat{Z}^{(1)}=\hat{Z}^{(2)}=\frac{A^{(1)}+A^{(2)}}{2}.$
By (D.7) and (D.10), we have
$\|\hat{Z}^{(1)}-A^{(1)}\|_{\text{F}}=\frac{1}{2}\|A^{(1)}-A^{(2)}\|_{\text{F}}=\lambda\|T(\hat{Z}^{(1)},\hat{Z}^{(2)})\|_{\text{F}}\leq\lambda,$
which implies that
$\|A^{(1)}-A^{(2)}\|_{\text{F}}\leq 2\lambda,$
and this contradicts the assumption that
$\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$. Thus, we must have
$\hat{Z}^{(1)}\neq\hat{Z}^{(2)}$.
Note that by this claim and the argument proving this claim, we have already
proved (D.5). We then prove (D.6). When
$\|A^{(1)}-A^{(2)}\|_{\text{F}}>2\lambda$, by the claim above, we must have
$\hat{Z}^{(1)}\neq\hat{Z}^{(2)}$. Then by (D.7)-(D.10), we have
$\hat{Z}^{(1)}-A^{(1)}+\frac{\lambda}{\|\hat{Z}^{(1)}-\hat{Z}^{(2)}\|_{\text{F}}}\left(\hat{Z}^{(1)}-\hat{Z}^{(2)}\right)=0,$
(D.11)
$\hat{Z}^{(2)}-A^{(2)}-\frac{\lambda}{\|\hat{Z}^{(1)}-\hat{Z}^{(2)}\|_{\text{F}}}\left(\hat{Z}^{(1)}-\hat{Z}^{(2)}\right)=0.$
(D.12)
(D.11) and (D.12) implies that
$\hat{Z}^{(1)}-\hat{Z}^{(2)}-\left(A^{(1)}-A^{(2)}\right)+\frac{2\lambda}{\|\hat{Z}^{(1)}-\hat{Z}^{(2)}\|_{\text{F}}}\left(\hat{Z}^{(1)}-\hat{Z}^{(2)}\right)=0,$
which implies that
$\hat{Z}^{(1)}-\hat{Z}^{(2)}=\alpha\cdot\left(A^{(1)}-A^{(2)}\right),$ (D.13)
where $\alpha$ is a positive constant. We then substitute (D.13) back into (D.11) and (D.12), and we then have (D.6). ∎
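Lemma 6 gives a closed form for a small proximal problem, a group analogue of soft-thresholding applied to the difference $Z^{(1)}-Z^{(2)}$. The sketch below (our own illustration; the matrix size and the values of $\lambda$ are arbitrary) evaluates (D.5)-(D.6) and checks optimality against random perturbations:

```python
import numpy as np

def prox_pair(A1, A2, lam):
    """Closed-form minimizer of (D.4), following (D.5) and (D.6)."""
    diff = A1 - A2
    nrm = np.linalg.norm(diff, "fro")
    if nrm <= 2 * lam:            # fused case (D.5): average the two blocks
        Z = 0.5 * (A1 + A2)
        return Z, Z.copy()
    shrink = lam / nrm            # unfused case (D.6): shrink the difference
    return A1 - shrink * diff, A2 + shrink * diff

rng = np.random.default_rng(1)
A1, A2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def objective(Z1, Z2, lam):
    return (0.5 * np.linalg.norm(Z1 - A1, "fro") ** 2
            + 0.5 * np.linalg.norm(Z2 - A2, "fro") ** 2
            + lam * np.linalg.norm(Z1 - Z2, "fro"))

for lam in (0.1, 10.0):           # small lam hits (D.6); large lam hits (D.5)
    Z1, Z2 = prox_pair(A1, A2, lam)
    val = objective(Z1, Z2, lam)
    # The objective is convex, so no perturbed point should do better.
    perturbed = min(objective(Z1 + 1e-3 * rng.normal(size=(4, 4)),
                              Z2 + 1e-3 * rng.normal(size=(4, 4)), lam)
                    for _ in range(200))
    assert val <= perturbed + 1e-12
```

In practice, an update of this form would typically sit inside a proximal-gradient or ADMM loop (cf. Boyd et al., 2011).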
###### Lemma 7
For a set of indices $\mathcal{G}=\\{G_{t}\\}_{t=1,\ldots,N_{\mathcal{G}}}$,
suppose $|\cdot|_{1,2}$ is defined in (B.3). Then for any matrix
$A\in{\mathbb{R}^{p^{2}M^{2}\times{p^{2}M^{2}}}}$ and
$\theta\in{\mathbb{R}^{p^{2}M^{2}}}$
$|\theta^{\top}A\theta|\leq{M^{2}|A|_{\infty}|\theta|^{2}_{1,2}}.$ (D.14)
Proof By direct calculation, we have
$\displaystyle|\theta^{\top}A\theta|$
$\displaystyle=\left|\sum_{i}\sum_{j}A_{ij}\theta_{i}\theta_{j}\right|$
$\displaystyle\leq{\sum_{i}\sum_{j}|A_{ij}\theta_{i}\theta_{j}|}$
$\displaystyle\leq{|A|_{\infty}\left(\sum_{i}|\theta_{i}|\right)^{2}}$
$\displaystyle=|A|_{\infty}\left(\sum_{t=1}^{N_{\mathcal{G}}}\sum_{k\in{G_{t}}}|\theta_{k}|\right)^{2}$
$\displaystyle=|A|_{\infty}\left(\sum_{t=1}^{N_{\mathcal{G}}}|\theta_{G_{t}}|_{1}\right)^{2}$
$\displaystyle\leq{|A|_{\infty}\left(\sum_{t=1}^{N_{\mathcal{G}}}M|\theta_{G_{t}}|_{2}\right)^{2}}$
$\displaystyle=M^{2}|A|_{\infty}|\theta|^{2}_{1,2},$
where in the penultimate line, we use that for any vector
$v\in{\mathbb{R}^{n}}$, $|v|_{1}\leq{\sqrt{n}|v|_{2}}$. ∎
###### Lemma 8
Suppose $\mathcal{M}$ is defined as in (B.1). For any
$\theta\in{\mathcal{M}}$, we have $|\theta|_{1,2}\leq{\sqrt{s}}|\theta|_{2}$.
Furthermore, for $\Psi(\mathcal{M})$ as defined in (B.5), we have
$\Psi(\mathcal{M})=\sqrt{s}$.
Proof By definition of $\mathcal{M}$ and $|\cdot|_{1,2}$, we have
$\displaystyle|\theta|_{1,2}$
$\displaystyle=\sum_{t\in{S_{\mathcal{G}}}}|\theta_{G_{t}}|_{2}+\sum_{t\notin{S_{\mathcal{G}}}}|\theta_{G_{t}}|_{2}$
$\displaystyle=\sum_{t\in{S_{\mathcal{G}}}}|\theta_{G_{t}}|_{2}$
$\displaystyle\leq{\sqrt{s}}\left(\sum_{t\in{S_{\mathcal{G}}}}|\theta_{G_{t}}|^{2}_{2}\right)^{\frac{1}{2}}$
$\displaystyle=\sqrt{s}|\theta|_{2}.$
In the penultimate line, we appeal to the Cauchy–Schwarz inequality. To show
$\Psi(\mathcal{M})=\sqrt{s}$, it suffices to show that the upper bound above
can be achieved. Select $\theta\in{\mathbb{R}^{p^{2}M^{2}}}$ such that
$|\theta_{G_{t}}|_{2}=c$, $\forall{t\in{S_{\mathcal{G}}}}$, where $c$ is some
positive constant. This implies that $|\theta|_{1,2}=sc$ and
$|\theta|_{2}=\sqrt{s}c$ so that $|\theta|_{1,2}=\sqrt{s}|\theta|_{2}$. Thus,
$\Psi(\mathcal{M})=\sqrt{s}$. ∎
###### Lemma 9
For $\mathcal{R}(\cdot)$ norm defined in (B.3), its dual norm
$\mathcal{R}^{*}(\cdot)$, defined in (B.4), is
$\mathcal{R}^{*}(v)\;=\;\max_{t=1,\ldots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}.$
(D.15)
Proof For any $u:|u|_{1,2}\leq{1}$ and $v\in{\mathbb{R}^{p^{2}M^{2}}}$, we
have
$\displaystyle\langle{v,u}\rangle$
$\displaystyle=\sum_{t=1}^{N_{\mathcal{G}}}\langle{v_{G_{t}},u_{G_{t}}}\rangle$
$\displaystyle\leq{\sum_{t=1}^{N_{\mathcal{G}}}|v_{G_{t}}|_{2}|u_{G_{t}}|_{2}}$
$\displaystyle\leq\left(\max_{t=1,2,\cdots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}\right)\sum_{t=1}^{N_{\mathcal{G}}}|u_{G_{t}}|_{2}$
$\displaystyle=\left(\max_{t=1,2,\cdots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}\right)|u|_{1,2}$
$\displaystyle\leq{\max_{t=1,2,\cdots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}}.$
To complete the proof, we need to show that this upper bound can be attained. Let
$t^{*}=\operatorname*{arg\,max}_{t=1,2,\cdots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}$,
and select $u$ such that
$\displaystyle u_{G_{t}}$ $\displaystyle=0$ $\displaystyle\qquad\forall t\neq t^{*},$ $\displaystyle u_{G_{t^{*}}}$ $\displaystyle=\frac{v_{G_{t^{*}}}}{|v_{G_{t^{*}}}|_{2}}.$
It follows that $|u|_{1,2}=1$ and
$\langle{v,u}\rangle=|v_{G_{t^{*}}}|_{2}=\max_{t=1,\ldots,N_{\mathcal{G}}}|v_{G_{t}}|_{2}$.
∎
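To illustrate Lemma 9 (a sketch of our own, with a toy partition of ten coordinates into three groups), the closed form (D.15) can be compared against the maximizer constructed in the proof and against random feasible candidates:

```python
import numpy as np

rng = np.random.default_rng(2)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 10)]  # toy partition
v = rng.normal(size=10)

# Closed form (D.15): the largest groupwise Euclidean norm.
dual = max(np.linalg.norm(v[G]) for G in groups)

# The maximizer from the proof: u supported on the best group, |u|_{1,2} = 1.
tstar = int(np.argmax([np.linalg.norm(v[G]) for G in groups]))
u = np.zeros(10)
u[groups[tstar]] = v[groups[tstar]] / np.linalg.norm(v[groups[tstar]])
assert abs(float(v @ u) - dual) < 1e-12  # <v, u> attains the dual norm

# Random candidates with |u|_{1,2} = 1 never exceed it.
for _ in range(1000):
    w = rng.normal(size=10)
    w /= sum(np.linalg.norm(w[G]) for G in groups)
    assert float(v @ w) <= dual + 1e-9
```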
###### Lemma 10
Given that $A1$-$A5$ hold, we have $\lvert I_{1}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This follows directly from the assumption that $A_{5}$ holds. ∎
###### Lemma 11
Given that $A1$-$A5$ hold, we have $\lvert I_{2}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{2}\rvert$
$\displaystyle=\lvert\langle\frac{1}{n}\sum^{n}_{i=1}a_{ijk}(\hat{g}_{il}-g_{il}),\phi_{lm}\rangle\rvert$
$\displaystyle\leq\lVert\frac{1}{n}\sum^{n}_{i=1}a_{ijk}(\hat{g}_{il}-g_{il})\rVert$
$\displaystyle\overset{(i)}{\leq}\sqrt{\frac{1}{n}\sum^{n}_{i=1}a^{2}_{ijk}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}}$
$\displaystyle\overset{(ii)}{\leq}\delta_{1}\sqrt{\frac{1}{n}\sum^{n}_{i=1}a^{2}_{ijk}}$
$\displaystyle=\delta_{1}\lambda^{1/2}_{jk}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}}$
$\displaystyle\overset{(iii)}{\leq}\sqrt{\frac{3}{2}}\delta_{1}\lambda^{1/2}_{jk}$
$\displaystyle\leq\sqrt{\frac{3}{2}}\sqrt{d_{1}}\delta_{1}k^{-\beta/2}$
$\displaystyle\leq\sqrt{\frac{3}{2}}\sqrt{d_{1}}\delta_{1},$
where $(i)$ follows Lemma 26, $(ii)$ follows $A_{1}$, $(iii)$ follows $A_{3}$. Recalling the definition of $d_{0}$, we thus have
$\lvert I_{2}\rvert\leq\sqrt{\frac{3}{2}}d_{0}\delta_{1}.$ (D.16)
Since
$\delta_{1}=\delta/\left(144d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\right)\leq\delta/(8\sqrt{6}d_{0}),$
(D.17)
we have
$\sqrt{\frac{3}{2}}d_{0}\delta_{1}\leq\sqrt{\frac{3}{2}}d_{0}\cdot\frac{\delta}{8\sqrt{6}d_{0}}=\frac{\delta}{16}.$
(D.18)
Thus,
$\lvert I_{2}\rvert\leq\frac{\delta}{16}.$
∎
###### Lemma 12
Given that $A1$-$A5$ hold, we have $\lvert I_{3}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{3}\rvert$
$\displaystyle=\lvert\langle\frac{1}{n}\sum^{n}_{i=1}a_{ijk}g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\lVert\frac{1}{n}\sum^{n}_{i=1}a_{ijk}g_{il}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle=\lambda^{1/2}_{jk}\lVert\frac{1}{n}\sum^{n}_{i=1}\xi_{ijk}g_{il}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(i)}{\leq}\lambda^{1/2}_{jk}\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}\right)^{1/2}\left(\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{il}\rVert^{2}\right)^{1/2}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(ii)}{\leq}\lambda^{1/2}_{jk}\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}\right)^{1/2}\left(\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{il}\rVert^{2}\right)^{1/2}d_{lm}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}},$
where $(i)$ follows Lemma 26, and $(ii)$ follows Lemma 27. Note that
$\lambda^{1/2}_{jk}\leq\sqrt{d_{1}}k^{-\beta/2}$, $d_{lm}\leq
d_{2}m^{1+\beta}$ and $A_{2}$-$A_{4}$ hold, thus we have
$\displaystyle\lvert I_{3}\rvert$
$\displaystyle\leq\sqrt{d_{1}}d_{2}k^{-\beta/2}m^{1+\beta}\sqrt{\frac{3}{2}}\sqrt{2\lambda_{0,\max}}\delta_{2}$
(D.19) $\displaystyle\leq
d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\delta_{2}.$
By definition of $\delta_{2}$, we have
$d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\delta_{2}\leq
d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\times\frac{\delta}{16d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}}=\frac{\delta}{16}.$
(D.20)
Thus,
$\lvert I_{3}\rvert\leq\frac{\delta}{16}.$
∎
###### Lemma 13
Given that $A1$-$A5$ hold, we have $\lvert I_{4}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{4}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}a_{ijk}\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\lVert\sum^{n}_{i=1}a_{ijk}\left(\hat{g}_{il}-g_{il}\right)\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle=\lambda^{1/2}_{jk}\frac{1}{n}\lVert\sum^{n}_{i=1}\xi_{ijk}\left(\hat{g}_{il}-g_{il}\right)\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(i)}{\leq}\lambda^{1/2}_{jk}\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}\right)^{1/2}\left(\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}\right)^{1/2}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(ii)}{\leq}\lambda^{1/2}_{jk}d_{lm}\left(\frac{1}{n}\sum^{n}_{i=1}\xi^{2}_{ijk}\right)^{1/2}\left(\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}\right)^{1/2}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}},$
where $(i)$ follows Lemma 26, and $(ii)$ follows Lemma 27. Note that
$\lambda^{1/2}_{jk}\leq\sqrt{d_{1}}k^{-\beta/2}$, $d_{lm}\leq
d_{2}m^{1+\beta}$ and $A_{1}$-$A_{3}$ hold, thus we have
$\displaystyle\lvert I_{4}\rvert$
$\displaystyle\leq\sqrt{\frac{3}{2}}\sqrt{d_{1}}d_{2}k^{-\beta/2}m^{1+\beta}\delta_{1}\delta_{2}$
$\displaystyle\leq\sqrt{\frac{3}{2}}d^{2}_{0}M^{1+\beta}\delta_{1}\delta_{2}$
$\displaystyle\overset{(iii)}{\leq}\frac{\delta}{16}\times\frac{\sqrt{\frac{3}{2}}d^{2}_{0}M^{1+\beta}\delta_{1}\delta_{2}}{\sqrt{\frac{3}{2}}d_{0}\delta_{1}}$
$\displaystyle\leq\frac{\delta}{16}\times d_{0}M^{1+\beta}\delta_{2}$
$\displaystyle\leq\frac{\delta}{16}\times
d_{0}M^{1+\beta}\times\frac{\delta}{16d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}}$
$\displaystyle=\frac{\delta}{16}\times\frac{\delta}{16d_{0}\sqrt{3\lambda_{0,\max}}}$
$\displaystyle\leq\frac{\delta}{16},$
where $(iii)$ follows (D.18). ∎
###### Lemma 14
Given that $A1$-$A5$ hold, we have $\lvert I_{5}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 11 and is thus omitted. ∎
###### Lemma 15
Given that $A1$-$A5$ hold, we have $\lvert I_{6}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{6}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\rvert\lvert\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\rvert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{il}-g_{il},\phi_{lm}\rangle\rvert^{2}}$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{ij}-g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}}.$
By the assumption that $A_{1}$ holds, we thus have
$\lvert I_{6}\rvert\leq\delta^{2}_{1}.$ (D.21)
By (D.17),(D.18) and Lemma 11, we have
$\displaystyle\delta^{2}_{1}$
$\displaystyle\leq\frac{\delta}{16}\times\frac{\delta^{2}_{1}}{\sqrt{\frac{3}{2}}d_{0}\delta_{1}}$
(D.22)
$\displaystyle=\frac{\delta}{16}\times\frac{\delta_{1}}{\sqrt{\frac{3}{2}}d_{0}}$
(D.23)
$\displaystyle\leq\frac{\delta}{16}\times\frac{1}{\sqrt{\frac{3}{2}}d_{0}}\times\frac{\delta}{8\sqrt{6}d_{0}}$
(D.24) $\displaystyle=\frac{\delta}{16}\times\frac{\delta}{24d^{2}_{0}}$
(D.25) $\displaystyle\leq\frac{\delta}{16},$ (D.26)
and thus
$\lvert I_{6}\rvert\leq\frac{\delta}{16}.$
∎
###### Lemma 16
Given that $A1$-$A5$ hold, we have $\lvert I_{7}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{7}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\rvert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert^{2}}$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{ij}-g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{il}\rVert^{2}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert^{2}}$
$\displaystyle\overset{(i)}{\leq}\delta_{1}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{il}\rVert^{2}}$
$\displaystyle\overset{(ii)}{\leq}\delta_{1}\sqrt{2\lambda_{0,\max}}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(iii)}{\leq}\delta_{1}\sqrt{2\lambda_{0,\max}}d_{lm}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}}$
$\displaystyle\overset{(iv)}{\leq}\delta_{1}\delta_{2}\sqrt{2\lambda_{0,\max}}d_{lm}$
$\displaystyle\leq\delta_{1}\delta_{2}\sqrt{2\lambda_{0,\max}}d_{2}m^{1+\beta}$
$\displaystyle\leq
d_{0}\sqrt{2\lambda_{0,\max}}M^{1+\beta}\delta_{1}\delta_{2},$
where $(i)$ follows the assumption that $A_{1}$ holds, $(ii)$ follows the
assumption that $A_{4}$ holds, $(iii)$ follows Lemma 27, and $(iv)$ follows
the assumption that $A_{2}$ holds. By (D.17) and (D.20), we have
$\displaystyle\lvert I_{7}\rvert$
$\displaystyle\leq\frac{\delta}{16}\times\frac{d_{0}\sqrt{2\lambda_{0,\max}}M^{1+\beta}\delta_{1}\delta_{2}}{d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\delta_{2}}$
$\displaystyle=\frac{\delta}{16}\times\sqrt{\frac{2}{3}}\times\frac{\delta_{1}}{d_{0}}$
$\displaystyle\leq\frac{\delta}{16}\times\sqrt{\frac{2}{3}}\times\frac{\delta}{8\sqrt{6}d^{2}_{0}}$
$\displaystyle=\frac{\delta}{16}\times\frac{\delta}{24d^{2}_{0}}$
$\displaystyle\leq\frac{\delta}{16}.$
∎
###### Lemma 17
Given that $A1$-$A5$ hold, we have $\lvert I_{8}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{8}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\rvert\lvert\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\phi_{jk}\rangle\rvert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert^{2}}$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{ij}-g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert^{2}}$
$\displaystyle\overset{(i)}{\leq}\delta^{2}_{1}\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(ii)}{\leq}\delta^{2}_{1}d_{lm}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}}$
$\displaystyle\leq\delta^{2}_{1}d_{2}m^{1+\beta}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}}$
$\displaystyle\leq\delta^{2}_{1}d_{0}M^{1+\beta}\lVert\hat{K}_{ll}-K_{ll}\rVert_{\text{HS}}$
$\displaystyle\overset{(iii)}{\leq}d_{0}M^{1+\beta}\delta^{2}_{1}\delta_{2},$
where $(i)$ follows the assumption that $A_{1}$ holds, $(ii)$ follows Lemma 27, and $(iii)$ follows the assumption that $A_{2}$ holds. By (D.22)-(D.26), we have
$\displaystyle\lvert I_{8}\rvert$
$\displaystyle\leq\frac{\delta}{16}\times\frac{d_{0}M^{1+\beta}\delta^{2}_{1}\delta_{2}}{\delta^{2}_{1}}$
$\displaystyle=\frac{\delta}{16}\times d_{0}M^{1+\beta}\delta_{2}$
$\displaystyle=\frac{\delta}{16}\times
d_{0}M^{1+\beta}\times\frac{\delta}{16d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}}$
$\displaystyle=\frac{\delta}{16}\times\frac{\delta}{16d_{0}\sqrt{3\lambda_{0,\max}}}$
$\displaystyle\leq\frac{\delta}{16}.$
∎
###### Lemma 18
Given that $A1$-$A5$ hold, we have $\lvert I_{9}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 12 and is thus omitted. ∎
###### Lemma 19
Given that $A1$-$A5$ hold, we have $\lvert I_{10}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 16 and is thus omitted. ∎
###### Lemma 20
Given that $A1$-$A5$ hold, we have $\lvert I_{11}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{11}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\rvert\lvert\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\rvert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert^{2}}$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{il}\rVert^{2}}\lVert\hat{\phi}_{jk}-\phi_{jk}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(i)}{\leq}2\lambda_{0,\max}\lVert\hat{\phi}_{jk}-\phi_{jk}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(ii)}{\leq}2\lambda_{0,\max}\delta^{2}_{2}d_{jk}d_{lm}$
$\displaystyle\leq
2\lambda_{0,\max}\delta^{2}_{2}d^{2}_{2}k^{1+\beta}m^{1+\beta},$
where $(i)$ follows because assumption $A_{4}$ holds, and $(ii)$ follows Lemma 27 together with $A_{2}$. Then, we have
$\lvert I_{11}\rvert\leq
2d^{2}_{0}\lambda_{0,\max}M^{2+2\beta}\delta^{2}_{2}.$ (D.27)
Thus, by (D.20), we have
$\displaystyle 2d^{2}_{0}\lambda_{0,\max}M^{2+2\beta}\delta^{2}_{2}$
$\displaystyle\leq\frac{\delta}{16}\times\frac{2d^{2}_{0}\lambda_{0,\max}M^{2+2\beta}\delta^{2}_{2}}{d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}\delta_{2}}$
(D.28)
$\displaystyle=\frac{\delta}{16}\times\frac{2}{\sqrt{3}}M^{1+\beta}\sqrt{\lambda_{0,\max}}\delta_{2}$
(D.29)
$\displaystyle=\frac{\delta}{16}\times\frac{2}{\sqrt{3}}M^{1+\beta}\sqrt{\lambda_{0,\max}}\times\frac{\delta}{16d^{2}_{0}M^{1+\beta}\sqrt{3\lambda_{0,\max}}}$
(D.30) $\displaystyle=\frac{\delta}{16}\times\frac{\delta}{24d^{2}_{0}}$
(D.31) $\displaystyle\leq\frac{\delta}{16},$ (D.32)
which implies that
$\lvert I_{11}\rvert\leq\frac{\delta}{16}.$
∎
###### Lemma 21
Given that $A1$-$A5$ hold, we have $\lvert I_{12}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{12}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\rvert\lvert\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle
g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\rvert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert^{2}}$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert
g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}}\lVert\hat{\phi}_{jk}-\phi_{jk}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(i)}{\leq}\sqrt{2\lambda_{0,\max}}\delta_{1}\delta^{2}_{2}d_{jk}d_{lm}$
$\displaystyle\leq
d^{2}_{2}\sqrt{2\lambda_{0,\max}}k^{1+\beta}m^{1+\beta}\delta_{1}\delta^{2}_{2},$
where $(i)$ follows the assumption that $A_{1}$, $A_{2}$ and $A_{4}$ hold, along with Lemma 27. Then, we have
$\lvert I_{12}\rvert\leq
d^{2}_{0}\sqrt{2\lambda_{0,\max}}M^{2+2\beta}\delta_{1}\delta^{2}_{2}.$ (D.33)
By (D.17) and (D.28), we have
$\displaystyle
d^{2}_{0}\sqrt{2\lambda_{0,\max}}M^{2+2\beta}\delta_{1}\delta^{2}_{2}$
$\displaystyle\leq\frac{\delta}{16}\times\frac{d^{2}_{0}\sqrt{2\lambda_{0,\max}}M^{2+2\beta}\delta_{1}\delta^{2}_{2}}{2d^{2}_{0}\lambda_{0,\max}M^{2+2\beta}\delta^{2}_{2}}$
(D.34)
$\displaystyle=\frac{\delta}{16}\times\frac{\delta_{1}}{\sqrt{2\lambda_{0,\max}}}$
(D.35)
$\displaystyle\leq\frac{\delta}{16}\times\frac{1}{\sqrt{2\lambda_{0,\max}}}\times\frac{\delta}{8\sqrt{6}d_{0}}$
(D.36)
$\displaystyle=\frac{\delta}{16}\times\frac{\delta}{16d_{0}\sqrt{3\lambda_{0,\max}}}$
(D.37) $\displaystyle\leq\frac{\delta}{16},$ (D.38)
which implies that
$\lvert I_{12}\rvert\leq\frac{\delta}{16}.$
∎
###### Lemma 22
Given that $A1$-$A5$ hold, we have $\lvert I_{13}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 13 and is thus omitted. ∎
###### Lemma 23
Given that $A1$-$A5$ hold, we have $\lvert I_{14}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 17 and is thus omitted. ∎
###### Lemma 24
Given that $A1$-$A5$ hold, we have $\lvert I_{15}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof This proof is similar to that of Lemma 11 and is thus omitted. ∎
###### Lemma 25
Given that $A1$-$A5$ hold, we have $\lvert I_{16}\rvert\leq\delta/16$ for all
$1\leq j,l\leq p$, $\ 1\leq k,m\leq M$.
Proof Fix any $1\leq j,l\leq p$ and $1\leq k,m\leq M$, and assume that $A1$-$A5$ hold. We then have
$\displaystyle\lvert I_{16}\rvert$
$\displaystyle=\lvert\frac{1}{n}\sum^{n}_{i=1}\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\frac{1}{n}\sum^{n}_{i=1}\lvert\langle\hat{g}_{ij}-g_{ij},\hat{\phi}_{jk}-\phi_{jk}\rangle\rvert\lvert\langle\hat{g}_{il}-g_{il},\hat{\phi}_{lm}-\phi_{lm}\rangle\rvert$
$\displaystyle\leq\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{ij}-g_{ij}\rVert^{2}}\sqrt{\frac{1}{n}\sum^{n}_{i=1}\lVert\hat{g}_{il}-g_{il}\rVert^{2}}\lVert\hat{\phi}_{jk}-\phi_{jk}\rVert\lVert\hat{\phi}_{lm}-\phi_{lm}\rVert$
$\displaystyle\overset{(i)}{\leq}\delta^{2}_{1}d_{jk}d_{lm}\delta^{2}_{2}$
$\displaystyle\leq
d^{2}_{2}k^{1+\beta}m^{1+\beta}\delta^{2}_{1}\delta^{2}_{2}$
$\displaystyle\leq d^{2}_{0}M^{2+2\beta}\delta^{2}_{1}\delta^{2}_{2},$
where $(i)$ follows the assumption that $A_{1}$ and $A_{2}$ hold, along with Lemma 27. Thus, by (D.18) and (D.34)-(D.38), we have
$\displaystyle\lvert I_{16}\rvert$
$\displaystyle\leq\frac{\delta}{16}\times\frac{d^{2}_{0}M^{2+2\beta}\delta^{2}_{1}\delta^{2}_{2}}{d^{2}_{0}\sqrt{2\lambda_{0,\max}}M^{2+2\beta}\delta_{1}\delta^{2}_{2}}$
$\displaystyle=\frac{\delta}{16}\times\frac{\delta_{1}}{\sqrt{2\lambda_{0,\max}}}$
$\displaystyle\leq\frac{\delta}{16}\times\frac{1}{\sqrt{2\lambda_{0,\max}}}\times\frac{\delta}{8\sqrt{6}d_{0}}$
$\displaystyle=\frac{\delta}{16}\times\frac{\delta}{16d_{0}\sqrt{3\lambda_{0,\max}}}$
$\displaystyle\leq\frac{\delta}{16}.$
∎
###### Lemma 26
Suppose $f_{1},f_{2},\dots,f_{n}\in\mathbb{H}$ and
$v_{1},v_{2},\dots,v_{n}\in\mathbb{R}$, we have
$\lVert\sum^{n}_{i=1}v_{i}f_{i}\rVert\leq\sqrt{\sum^{n}_{i=1}v^{2}_{i}}\sqrt{\sum^{n}_{i=1}\lVert f_{i}\rVert^{2}}.$
Proof Note that
$\displaystyle\lVert\sum^{n}_{i=1}v_{i}f_{i}\rVert^{2}$
$\displaystyle=\int\left(\sum^{n}_{i=1}v_{i}f_{i}(t)\right)^{2}dt$
$\displaystyle\overset{(i)}{\leq}\int\left(\sum^{n}_{i=1}v^{2}_{i}\right)\left(\sum^{n}_{i=1}f^{2}_{i}(t)\right)dt$
$\displaystyle=\left(\sum^{n}_{i=1}v^{2}_{i}\right)\left(\sum^{n}_{i=1}\lVert
f_{i}\rVert^{2}\right),$
where $(i)$ follows the Cauchy–Schwarz inequality, which directly implies the result. ∎
###### Lemma 27
Suppose that Assumption 3 holds. Denote
$\tilde{\phi}_{jk}=\text{sgn}\left(\langle\hat{\phi}_{jk},\phi_{jk}\rangle\right)\phi_{jk}$,
where $\text{sgn}(t)=1$ if $t\geq 0$ and $\text{sgn}(t)=-1$ if $t<0$. Then we
have
$\lVert\hat{\phi}_{jk}-\tilde{\phi}_{jk}\rVert\leq
d_{jk}\lVert\hat{K}_{jj}-K_{jj}\rVert_{\text{HS}},$
where
$d_{jk}=2\sqrt{2}\max\\{(\lambda_{j(k-1)}-\lambda_{jk})^{-1},(\lambda_{jk}-\lambda_{j(k+1)})^{-1}\\}$
if $k\geq 2$ and $d_{j1}=2\sqrt{2}(\lambda_{j1}-\lambda_{j2})^{-1}$.
Proof This lemma can be found in Lemma 4.3 of Bosq (2000) and hence the proof
is omitted. ∎
###### Lemma 28
Let $z\sim N_{L}\left(0,I_{L}\right)$. Then for any $\delta>0$, we have
$P\left(\lVert z\rVert_{2}>\delta\right)\leq
2\exp\left(-\frac{\delta^{2}}{8L+2\sqrt{2L}\delta}\right).$
Proof Since
$\mathbb{E}\left[\lVert
z\rVert^{2k}_{2}\right]=\frac{\Gamma(\frac{L}{2}+k)}{\Gamma(\frac{L}{2})}\times
2^{k}\leq k!(2L)^{k},$
we have
$\mathbb{E}\left[\lVert z\rVert^{k}_{2}\right]\leq\sqrt{\mathbb{E}\left[\lVert
z\rVert^{2k}_{2}\right]}\leq\sqrt{k!}\left(\sqrt{2L}\right)^{k}\leq\frac{k!}{2}\cdot
4L\cdot(\sqrt{2L})^{k-2}$
for $k\geq 2$. Thus, by Lemma 29, we have proved the result. ∎
###### Lemma 29
Let $Z_{1},Z_{2},\dots,Z_{n}$ be independent random variables in a separable
Hilbert space with norm $\lVert\cdot\rVert$. If $\mathbb{E}[Z_{i}]=0$
($i=1,\ldots,n$) and
$\sum^{n}_{i=1}\mathbb{E}\left[\lVert
Z_{i}\rVert^{k}\right]\leq\frac{k!}{2}nL_{1}L^{k-2}_{2},k=2,3,\dots,$
for two positive constants $L_{1}$ and $L_{2}$, then for all $\delta>0$,
$P\left(\lVert\sum^{n}_{i=1}Z_{i}\rVert\geq n\delta\right)\leq
2\exp\left(-\frac{n\delta^{2}}{2L_{1}+2L_{2}\delta}\right).$
Proof This lemma can be derived directly from Theorem 2.5 (2) of Bosq (2000)
and hence its proof is omitted. ∎
###### Lemma 30
For a function $f(t)$ defined on $\mathcal{T}$, assume that $f$ has a continuous derivative, and let $D_{0,f}\coloneqq\sup_{t\in{\mathcal{T}}}\lvert f(t)\rvert$ and $D_{1,f}\coloneqq\sup_{t\in{\mathcal{T}}}\lvert f^{\prime}(t)\rvert$, with $D_{0,f},D_{1,f}<\infty$. Let $\lvert\mathcal{T}\rvert$ denote the length of the interval $\mathcal{T}$, let $u_{1}<u_{2}<\dots<u_{T}\in\mathcal{T}$, and denote the endpoints of $\mathcal{T}$ by $u_{0}$ and $u_{T+1}$. Assume that there is a positive constant $\zeta_{0}$ such that
$\max_{1\leq k\leq T+1}\left|\frac{u_{k}-u_{k-1}}{\lvert\mathcal{T}\rvert}-\frac{1}{T}\right|\leq\frac{\zeta_{0}}{T^{2}}$
(D.39)
holds. Let $\zeta_{1}=\zeta_{0}+1$; then we have
$\left|\frac{1}{T}\sum^{T}_{k=1}f(u_{k})-\frac{1}{\lvert\mathcal{T}\rvert}\int_{\mathcal{T}}f(t)dt\right|\leq\frac{D_{1,f}\zeta^{2}_{1}\lvert\mathcal{T}\rvert/2+D_{0,f}(\zeta_{1}+\zeta_{0})}{T}.$
Proof Since
$\displaystyle\left|\frac{1}{T}\sum^{T}_{k=1}f(u_{k})-\frac{1}{\lvert\mathcal{T}\rvert}\int_{\mathcal{T}}f(t)dt\right|$
$\displaystyle\leq\left|\frac{1}{T}\sum^{T}_{k=1}f(u_{k})-\frac{1}{\lvert\mathcal{T}\rvert}\sum^{T}_{k=1}f(u_{k})(u_{k}-u_{k-1})\right|$
$\displaystyle+\left|\frac{1}{\lvert\mathcal{T}\rvert}\sum^{T}_{k=1}f(u_{k})(u_{k}-u_{k-1})-\frac{1}{\lvert\mathcal{T}\rvert}\int_{\mathcal{T}}f(t)dt\right|,$
we will first show that the first term is at most $D_{0,f}\zeta_{0}/T$, and then show that the second term is at most $(D_{1,f}\zeta^{2}_{1}\lvert\mathcal{T}\rvert/2+D_{0,f}\zeta_{1})/T$. For the first term, we have
$\displaystyle\left|\frac{1}{T}\sum^{T}_{k=1}f(u_{k})-\frac{1}{\lvert\mathcal{T}\rvert}\sum^{T}_{k=1}f(u_{k})(u_{k}-u_{k-1})\right|$
$\displaystyle=\left|\sum^{T}_{k=1}f(u_{k})\left(\frac{1}{T}-\frac{u_{k}-u_{k-1}}{\lvert\mathcal{T}\rvert}\right)\right|$
$\displaystyle\leq\sum^{T}_{k=1}\left|f(u_{k})\right|\left|\frac{1}{T}-\frac{u_{k}-u_{k-1}}{\lvert\mathcal{T}\rvert}\right|$
$\displaystyle\leq\max_{1\leq k\leq
T}\left|\frac{u_{k}-u_{k-1}}{\lvert\mathcal{T}\rvert}-\frac{1}{T}\right|\sum^{T}_{k=1}\left|f(u_{k})\right|$
$\displaystyle\leq\frac{\zeta_{0}}{T^{2}}\times T\times D_{0,f}$
$\displaystyle=\frac{\zeta_{0}D_{0,f}}{T}.$
To bound the second term, we first note that based on (D.39), we have
$\max_{1\leq k\leq T+1}\lvert u_{k}-u_{k-1}\rvert\leq\frac{\zeta_{1}\lvert\mathcal{T}\rvert}{T}.$
Then, for any $t\in(u_{k-1},u_{k})$, by Taylor’s expansion, we have
$f(t)=f(u_{k})+f^{\prime}(\bar{t})(t-u_{k}),$
where $\bar{t}$ lies between $t$ and $u_{k}$. Thus,
$\lvert f(t)-f(u_{k})\rvert=\lvert f^{\prime}(\bar{t})\rvert(u_{k}-t)\leq D_{1,f}(u_{k}-t).$
This way, we have
$\displaystyle\left|\frac{1}{\lvert\mathcal{T}\rvert}\sum^{T}_{k=1}f(u_{k})(u_{k}-u_{k-1})-\frac{1}{\lvert\mathcal{T}\rvert}\int_{\mathcal{T}}f(t)dt\right|$
$\displaystyle\leq\frac{1}{\lvert\mathcal{T}\rvert}\sum^{T}_{k=1}\int^{u_{k}}_{u_{k-1}}\lvert f(u_{k})-f(t)\rvert dt+\frac{1}{\lvert\mathcal{T}\rvert}\int^{u_{T+1}}_{u_{T}}\lvert f(t)\rvert dt$
$\displaystyle\leq\frac{1}{\lvert\mathcal{T}\rvert}\times T\times D_{1,f}\times\max_{1\leq k\leq T}\int^{u_{k}}_{u_{k-1}}(u_{k}-t)dt+\frac{1}{\lvert\mathcal{T}\rvert}\times D_{0,f}\times\frac{\zeta_{1}\lvert\mathcal{T}\rvert}{T}$
$\displaystyle=\frac{1}{\lvert\mathcal{T}\rvert}\times T\times\frac{D_{1,f}}{2}\times\max_{1\leq k\leq T}(u_{k}-u_{k-1})^{2}+\frac{1}{\lvert\mathcal{T}\rvert}\times D_{0,f}\times\frac{\zeta_{1}\lvert\mathcal{T}\rvert}{T}$
$\displaystyle\leq\frac{1}{\lvert\mathcal{T}\rvert}\times T\times\frac{D_{1,f}}{2}\times\left(\frac{\zeta_{1}\lvert\mathcal{T}\rvert}{T}\right)^{2}+\frac{1}{\lvert\mathcal{T}\rvert}\times D_{0,f}\times\frac{\zeta_{1}\lvert\mathcal{T}\rvert}{T}$
$\displaystyle=\frac{D_{1,f}\zeta^{2}_{1}\lvert\mathcal{T}\rvert/2+D_{0,f}\zeta_{1}}{T}.$
Thus, combining the bounds on the two terms, we have
$\left|\frac{1}{T}\sum^{T}_{k=1}f(u_{k})-\frac{1}{\lvert\mathcal{T}\rvert}\int_{\mathcal{T}}f(t)dt\right|\leq\frac{D_{1,f}\zeta^{2}_{1}\lvert\mathcal{T}\rvert/2+D_{0,f}(\zeta_{1}+\zeta_{0})}{T}.$
∎
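To see the rate in Lemma 30 concretely, here is a small numeric check (our own illustration: $f(t)=e^{t}$ on $\mathcal{T}=[0,1]$ with the grid $u_{k}=k/(T+1)$, which satisfies (D.39) with $\zeta_{0}=1$ and hence $\zeta_{1}=2$):

```python
import numpy as np

f = lambda t: np.exp(t)
D0 = D1 = np.e                  # sup|f| = sup|f'| = e on [0, 1]
integral = np.e - 1.0           # exact value of the integral of f over [0, 1]
zeta0, zeta1 = 1.0, 2.0

for T in (10, 100, 1000):
    u = np.arange(1, T + 1) / (T + 1)           # grid points u_1 < ... < u_T
    gap = abs(np.mean(f(u)) - integral)         # left side of Lemma 30
    bound = (D1 * zeta1**2 / 2 + D0 * (zeta1 + zeta0)) / T  # right side
    print(T, gap, bound, gap <= bound)
```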
###### Lemma 31
For Gaussian random function $g$ in Hilbert Space $\mathbb{H}$ with mean zero,
that is, $\mathbb{E}[g]=0$, we have
$\mathbb{E}\left[\|g\|^{2k}\right]\leq(2\lambda_{0})^{k}\cdot k!,$
where $\lambda_{0}=\mathbb{E}\left[\|g\|^{2}\right]$.
Proof Let $\\{\phi_{m}\\}_{m\geq 1}$ be the orthonormal eigenfunctions of the covariance operator of $g$, and $a_{m}=\langle g,\phi_{m}\rangle$; then $a_{m}\sim N(0,\lambda_{m})$ and $\lambda_{0}=\sum_{m\geq 1}\lambda_{m}$. Let $\xi_{m}=\lambda^{-1/2}_{m}a_{m}$; then the $\xi_{m}\sim N(0,1)$ are i.i.d. By the Karhunen–Loève theorem, we have
$g=\sum^{\infty}_{m=1}\lambda_{m}^{1/2}\xi_{m}\phi_{m}.$
Thus, $\|g\|=\left(\sum_{m\geq 1}\lambda_{m}\xi^{2}_{m}\right)^{1/2}$, and
$\|g\|^{2k}=\left(\sum_{m\geq 1}\lambda_{m}\xi^{2}_{m}\right)^{k}$.
Recall Jensen’s inequality: for a convex function $\psi(\cdot)$, real numbers $x_{1},x_{2},\dots,x_{n}$ in its domain, and positive real numbers $a_{1},a_{2},\dots,a_{n}$, we have
$\psi\left(\frac{\sum^{n}_{i=1}a_{i}x_{i}}{\sum^{n}_{i=1}a_{i}}\right)\leq\frac{\sum^{n}_{i=1}a_{i}\psi(x_{i})}{\sum^{n}_{i=1}a_{i}}.$
Here, let $\psi(t)=t^{k}$, and we then have
$\displaystyle\|g\|^{2k}$ $\displaystyle=\left(\sum_{m\geq
1}\lambda_{m}\right)^{k}\cdot\left(\frac{\sum_{m\geq
1}\lambda_{m}\xi^{2}_{m}}{\sum_{m\geq 1}\lambda_{m}}\right)^{k}$
$\displaystyle\leq\left(\sum_{m\geq
1}\lambda_{m}\right)^{k}\cdot\frac{\sum_{m\geq
1}\lambda_{m}\xi^{2k}_{m}}{\sum_{m\geq 1}\lambda_{m}}$
$\displaystyle=\left(\sum_{m\geq
1}\lambda_{m}\right)^{k-1}\cdot\left(\sum_{m\geq
1}\lambda_{m}\xi^{2k}_{m}\right).$
Thus,
$\displaystyle\mathbb{E}\left[\|g\|^{2k}\right]$
$\displaystyle\leq\left(\sum_{m\geq
1}\lambda_{m}\right)^{k-1}\cdot\left(\sum_{m\geq
1}\lambda_{m}\mathbb{E}\left[\xi^{2k}_{m}\right]\right)$
$\displaystyle=\left(\sum_{m\geq
1}\lambda_{m}\right)^{k}\mathbb{E}\left[\xi^{2k}_{1}\right]$
$\displaystyle=\left(\sum_{m\geq 1}\lambda_{m}\right)^{k}\cdot\pi^{-1/2}\cdot
2^{k}\cdot\Gamma(k+1/2)$ $\displaystyle\leq\left(\sum_{m\geq
1}\lambda_{m}\right)^{k}\cdot 2^{k}\cdot k!$
$\displaystyle=(2\lambda_{0})^{k}k!.$
∎
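A Monte Carlo check of Lemma 31 (our own sketch, using a truncated Karhunen–Loève expansion with an arbitrarily chosen eigenvalue sequence):

```python
import numpy as np
from math import factorial

# g = sum_m sqrt(lambda_m) xi_m phi_m, so ||g||^2 = sum_m lambda_m xi_m^2.
rng = np.random.default_rng(3)
lam = 1.0 / np.arange(1, 21) ** 2   # truncated eigenvalue sequence lambda_m
lam0 = lam.sum()                    # lambda_0 = E[||g||^2]

xi = rng.normal(size=(200_000, lam.size))
norm2 = (lam * xi**2).sum(axis=1)   # samples of ||g||^2

for k in (1, 2, 3, 4):
    moment = np.mean(norm2**k)      # estimate of E[||g||^{2k}]
    bound = (2 * lam0) ** k * factorial(k)
    print(k, moment, bound, moment <= bound)
```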
###### Lemma 32
For any $\delta>0$, we have
$P\left(\left\|\frac{1}{n}\sum^{n}_{i=1}\left[g_{ij}(t)g_{ij}(s)-K_{jj}(s,t)\right]\right\|_{\text{HS}}>\delta\right)\leq
2\exp\left(-\frac{n\delta^{2}}{64\lambda^{2}_{0,\max}+8\lambda_{0,\max}\delta}\right)$
holding for any $j=1,\ldots,p$.
Proof Since $g_{ij}(t)=\sum_{m\geq
1}\lambda^{1/2}_{jm}\xi_{ijm}\phi_{jm}(t)$, and $\xi_{ijm}\sim N(0,1)$ i.i.d.
for $m\geq 1$, we have $g_{ij}(s)g_{ij}(t)=\sum_{m,m^{\prime}\geq
1}\lambda^{1/2}_{jm}\lambda^{1/2}_{jm^{\prime}}\xi_{ijm}\xi_{ijm^{\prime}}\phi_{jm}(s)\phi_{jm^{\prime}}(t)$,
and $K_{jj}(s,t)=\mathbb{E}[g_{ij}(s)g_{ij}(t)]=\sum_{m,m^{\prime}\geq
1}\lambda^{1/2}_{jm}\lambda^{1/2}_{jm^{\prime}}\phi_{jm}(s)\phi_{jm^{\prime}}(t)\mathbbm{1}_{mm^{\prime}}$,
where $\mathbbm{1}_{mm^{\prime}}=\mathbbm{1}(m=m^{\prime})=1$ if
$m=m^{\prime}$ and $0$ if $m\neq m^{\prime}$. Thus,
$\left\|g_{ij}(s)g_{ij}(t)-K_{jj}(s,t)\right\|^{2}_{\text{HS}}=\sum_{m,m^{\prime}\geq
1}\lambda_{jm}\lambda_{jm^{\prime}}(\xi_{ijm}\xi_{ijm^{\prime}}-\mathbbm{1}_{mm^{\prime}})^{2},$
and for any $k\geq 2$, we have
$\displaystyle\mathbb{E}\left[\left\|g_{ij}(s)g_{ij}(t)-K_{jj}(s,t)\right\|^{k}_{\text{HS}}\right]$
$\displaystyle=\mathbb{E}\left[\left\\{\sum_{m,m^{\prime}\geq
1}\lambda_{jm}\lambda_{jm^{\prime}}(\xi_{ijm}\xi_{ijm^{\prime}}-\mathbbm{1}_{mm^{\prime}})^{2}\right\\}^{k/2}\right]$
$\displaystyle\overset{(i)}{\leq}\left(\sum_{m,m^{\prime}\geq
1}\lambda_{jm}\lambda_{jm^{\prime}}\right)^{k/2-1}\sum_{m,m^{\prime}\geq
1}\lambda_{jm}\lambda_{jm^{\prime}}\mathbb{E}\left[\left(\xi_{ijm}\xi_{ijm^{\prime}}-\mathbbm{1}_{mm^{\prime}}\right)^{k}\right],$
where $(i)$ follows Jensen’s inequality with convex function
$\psi(x)=x^{k/2}$. Since
$\displaystyle\mathbb{E}\left[\left(\xi_{ijm}\xi_{ijm^{\prime}}-\mathbbm{1}_{mm^{\prime}}\right)^{k}\right]$
$\displaystyle\leq
2^{k-1}\left(\mathbb{E}\left[(\xi_{ijm}\xi_{ijm^{\prime}})^{k}\right]+1\right)$
$\displaystyle\leq 2^{k-1}\left(\mathbb{E}[\xi^{2k}_{ij1}]+1\right)$
$\displaystyle\leq 2^{k-1}(2^{k}k!+1)$ $\displaystyle\leq 4^{k}k!,$
we then have
$\mathbb{E}\left[\left\|g_{ij}(s)g_{ij}(t)-K_{jj}(s,t)\right\|^{k}_{\text{HS}}\right]\leq(4\lambda_{j0})^{k}k!\leq(4\lambda_{0,\max})^{k}k!.$
The final result then follows directly from Lemma 29. ∎
|
2024-09-04T02:54:59.296596 | 2020-03-11T17:21:15 | 2003.05425 | {
"authors": "Pim de Haan, Maurice Weiler, Taco Cohen and Max Welling",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26172",
"submitter": "Pim de Haan",
"url": "https://arxiv.org/abs/2003.05425"
} | arxiv-papers | # Gauge Equivariant Mesh CNNs
Anisotropic convolutions on geometric graphs
Pim de Haan (Qualcomm AI Research, University of Amsterdam)
Maurice Weiler∗ (QUVA Lab, University of Amsterdam)
Taco Cohen (Qualcomm AI Research)
Max Welling (Qualcomm AI Research, University of Amsterdam)
∗Equal contribution. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
###### Abstract
A common approach to define convolutions on meshes is to interpret them as a
graph and apply graph convolutional networks (GCNs). Such GCNs utilize
_isotropic_ kernels and are therefore insensitive to the relative orientation
of vertices and thus to the geometry of the mesh as a whole. We propose Gauge
Equivariant Mesh CNNs which generalize GCNs to apply _anisotropic_ gauge
equivariant kernels. Since the resulting features carry orientation
information, we introduce a geometric message passing scheme defined by
parallel transporting features over mesh edges. Our experiments validate the
significantly improved expressivity of the proposed model over conventional
GCNs and other methods.
## 1 Introduction
Convolutional neural networks (CNNs) have been established as the default
method for many machine learning tasks like speech recognition or planar and
volumetric image classification and segmentation. Most CNNs are restricted to
flat or spherical geometries, where convolutions are easily defined and
optimized implementations are available. The empirical success of CNNs on such
spaces has generated interest to generalize convolutions to more general
spaces like graphs or Riemannian manifolds, creating a field now known as
geometric deep learning (Bronstein et al., 2017).
A case of specific interest is convolution on _meshes_ , the discrete analog
of 2-dimensional embedded Riemannian manifolds. Mesh CNNs can be applied to
tasks such as shape detection, registration of different poses of the same
shape, and shape segmentation. If we forget the positions of vertices, and which
vertices form faces, a mesh $M$ can be represented by a graph ${\mathcal{G}}$.
This allows for the application of _graph convolutional networks_ (GCNs) to
processing signals on meshes.
Figure 1: Two local neighbourhoods around vertices $p$ and their
representations in the tangent planes $T_{p}M$. The distinct geometry of the
neighbourhoods is reflected in the different angles $\theta_{pq_{i}}$ of
incident edges from neighbours $q_{i}$. Graph convolutional networks apply
isotropic kernels and can therefore not distinguish both neighbourhoods. Gauge
Equivariant Mesh CNNs apply anisotropic kernels and are therefore sensitive to
orientations. The arbitrariness of reference orientations, determined by a
choice of neighbour $q_{0}$, is accounted for by the gauge equivariance of the
model.
However, when representing a mesh by a graph, we lose important geometrical
information. In particular, in a graph there is no notion of angle between or
ordering of two of a node's incident edges (see figure 1). Hence, a GCN's
output at a node $p$ is designed to be independent of relative angles and
_invariant_ to any permutation of its neighbours $q_{i}\in\mathcal{N}(p)$. A
graph convolution on a mesh graph therefore corresponds to applying an
_isotropic_ convolution kernel. Isotropic filters are insensitive to the
orientation of input patterns, so their features are strictly less expressive
than those of orientation aware anisotropic filters.
To address this limitation of graph networks we propose Gauge Equivariant Mesh
CNNs (GEM-CNNs; implementation at https://github.com/Qualcomm-AI-research/gauge-equivariant-mesh-cnn), which minimally modify GCNs such that
they are able to use anisotropic filters while sharing weights across
different positions and respecting the local geometry. One obstacle in sharing
anisotropic kernels, which are functions of the angle $\theta_{pq}$ of
neighbour $q$ with respect to vertex $p$, over multiple vertices of a mesh is
that there is no unique way of selecting a reference neighbour $q_{0}$, which
has the direction $\theta_{pq_{0}}=0$. The reference neighbour, and hence the
orientation of the neighbours, needs to be chosen arbitrarily. In order to
guarantee the equivalence of the features resulting from different choices of
orientations, we adapt Gauge Equivariant (or coordinate independent) CNNs
(Cohen et al., 2019b; Weiler et al., 2021) to general meshes. The kernels of
our model are thus designed to be _equivariant under gauge transformations_ ,
that is, to guarantee that the responses for different kernel orientations are
related by a prespecified transformation law. Such features are identified as
geometric objects like scalars, vectors, tensors, etc., depending on the
specific choice of transformation law. In order to compare such geometric
features at neighbouring vertices, they need to be _parallel transported_
along the connecting edge.
In our implementation we first specify the transformation laws of the feature
spaces and compute a space of gauge equivariant kernels. Then we pick
arbitrary reference orientations at each node, relative to which we compute
neighbour orientations and compute the corresponding edge transporters. Given
these quantities, we define the forward pass as a message passing step via
edge transporters followed by a contraction with the equivariant kernels
evaluated at the neighbour orientations. Algorithmically, Gauge Equivariant
Mesh CNNs are therefore just GCNs with anisotropic, gauge equivariant kernels
and message passing via parallel transporters. Conventional GCNs are covered
in this framework for the specific choice of isotropic kernels and trivial
edge transporters, given by identity maps.
In Sec. 2, we will give an outline of our method, deferring details to Secs. 3
and 4. In Sec. 3.2, we describe how to compute general geometric quantities,
not specific to our method, used for the computation of the convolution. In
our experiments in Sec. 6, we find that the enhanced expressiveness of Gauge
Equivariant Mesh CNNs enables them to outperform conventional GCNs and other
prior work in a shape correspondence task.
## 2 Convolutions on Graphs with Geometry
We consider the problem of processing signals on discrete 2-dimensional
manifolds, or meshes $M$. Such meshes are described by a set ${\mathcal{V}}$
of vertices in $\mathbb{R}^{3}$ together with a set ${\mathcal{F}}$ of tuples,
each consisting of the vertices at the corners of a face. For a mesh to
describe a proper manifold, each edge needs to be connected to two faces, and
the neighbourhood of each vertex needs to be homeomorphic to a disk. Mesh $M$
induces a graph ${\mathcal{G}}$ by forgetting the coordinates of the vertices
while preserving the edges.
A conventional graph convolution between kernel $K$ and signal $f$, evaluated
at a vertex $p$, can be defined by
$\displaystyle(K\star f)_{p}\ =\
K_{\text{self}}f_{p}\,+\sum\nolimits_{q\in{\mathcal{N}}_{p}}\\!\\!K_{\textup{neigh}}f_{q},$
(1)
where ${\mathcal{N}}_{p}$ is the set of neighbours of $p$ in ${\mathcal{G}}$,
and $K_{\text{self}}\in\mathbb{R}^{C_{\textup{in}}\times C_{\textup{out}}}$
and $K_{\textup{neigh}}\in\mathbb{R}^{C_{\textup{in}}\times C_{\textup{out}}}$
are two linear maps which model a self interaction and the neighbour
contribution, respectively. Importantly, graph convolution does not
distinguish different neighbours, because each feature vector $f_{q}$ is
multiplied by the same matrix $K_{\textup{neigh}}$ and then summed. For this
reason we say the kernel is _isotropic_.
Algorithm 1 Gauge Equivariant Mesh CNN layer
Input: mesh $M$, input/output feature types
$\rho_{\textup{in}},\rho_{\textup{out}}$, reference neighbours
$(q_{0}^{p}\in{\mathcal{N}}_{p})_{p\in M}$.
Compute basis kernels $K^{i}_{\textup{self}},K^{i}_{\textup{neigh}}(\theta)$
$\rhd$ Sec. 3
Initialise weights $w_{\textup{self}}^{i}$ and $w^{i}_{\textup{neigh}}$.
For each neighbour pair, $p\in M,q\in{\mathcal{N}}_{p}$: $\rhd$ App. A.
compute neighbour angle $\theta_{pq}$ relative to the reference neighbour
compute parallel transporter $g_{q\to p}$
Forward$\big{(}$input features $(f_{p})_{p\in M}$, weights
$w^{i}_{\textup{self}},w^{i}_{\textup{neigh}}$$\big{)}$:
$f^{\prime}_{p}\leftarrow\sum_{i}w_{\textup{self}}^{i}K^{i}_{\textup{self}}f_{p}+\\!\\!\\!\sum_{i,q\in{\mathcal{N}}_{p}}\\!\\!w^{i}_{\textup{neigh}}K^{i}_{\textup{neigh}}(\theta_{pq})\rho_{\textup{in}}(g_{q\to
p})f_{q}$
Consider the example in figure 1, where on the left and right, the
neighbourhood of one vertex $p$, containing neighbours
$q\in{\mathcal{N}}_{p}$, is visualized. An isotropic kernel would propagate
the signal from the neighbours to $p$ in exactly the same way in both
neighbourhoods, even though the neighbourhoods are geometrically distinct. For
this reason, our method uses direction sensitive (_anisotropic_) kernels
instead of isotropic kernels. Anisotropic kernels are inherently more
expressive than isotropic ones which is why they are used universally in
conventional planar CNNs.
We propose the Gauge Equivariant Mesh Convolution, a minimal modification of
graph convolution that allows for anisotropic kernels $K(\theta)$ whose value
depends on an orientation $\theta\in[0,2\pi)$. (In principle, the kernel could also be made dependent on the radial distance of neighbouring nodes via $K_{\textup{neigh}}(r,\theta)=F(r)K_{\textup{neigh}}(\theta)$, where $F(r)$ is unconstrained and $K_{\textup{neigh}}(\theta)$ is as presented in this paper; as this dependency did not improve performance in our empirical evaluation, we omit it.) To define the orientations $\theta_{pq}$ of neighbouring vertices $q\in{\mathcal{N}}_{p}$ of $p$, we first map them to the tangent plane $T_{p}M$ at $p$, as visualized in figure 1. We then pick an _arbitrary_ reference neighbour $q^{p}_{0}$ to determine a reference orientation $\theta_{pq^{p}_{0}}:=0$, marked orange in figure 1; mathematically, this corresponds to a choice of _local reference frame_ or _gauge_. This induces a basis on the tangent plane, which, when expressed in polar coordinates, defines the angles $\theta_{pq}$ of the other neighbours.
As we will motivate in the next section, features in a Gauge Equivariant CNN
are coefficients of geometric quantities. For example, a tangent vector at
vertex $p$ can be described either geometrically by a 3 dimensional vector
orthogonal to the normal at $p$ or by two coefficients in the basis on the
tangent plane. In order to perform convolution, geometric features at
different vertices need to be linearly combined, for which it is required to
first “parallel transport” the features to the same vertex. This is done by
applying a matrix $\rho(g_{q\to p})\in\mathbb{R}^{C_{\textup{in}}\times
C_{\textup{in}}}$ to the coefficients of the feature at $q$, in order to
obtain the coefficients of the feature vector transported to $p$, which can be
used for the convolution at $p$. The transporter depends on the geometric
_type_ (group representation) of the feature, denoted by $\rho$ and described
in more detail below. Details of how the tangent space is defined, how to
compute the map to the tangent space, angles $\theta_{pq}$, and the parallel
transporter are given in Appendix A.
In combination, this leads to the GEM-CNN convolution
$(K\star f)_{p}\ =\
K_{\textup{self}}f_{p}+\sum\nolimits_{q\in{\mathcal{N}}_{p}}K_{\textup{neigh}}(\theta_{pq})\rho(g_{q\to
p})f_{q}$ (2)
which differs from the conventional graph convolution, defined in Eq. 1 only
by the use of an anisotropic kernel and the parallel transport message
passing.
We require the outcome of the convolution to be _equivalent_ for any choice of
reference orientation. This is not the case for any anisotropic kernel but
only for those which are _equivariant under changes of reference orientations_
(gauge transformations). Equivariance imposes a linear constraint on the
kernels. We therefore solve for complete sets of “basis-kernels”
$K_{\text{self}}^{i}$ and $K_{\text{neigh}}^{i}$ satisfying this constraint
and linearly combine them with parameters $w_{\text{self}}^{i}$ and
$w_{\text{neigh}}^{i}$ such that
$K_{\text{self}}=\sum_{i}w_{\text{self}}^{i}K_{\text{self}}^{i}$ and
$K_{\text{neigh}}=\sum_{i}w_{\text{neigh}}^{i}K_{\text{neigh}}^{i}$. Details
on the computation of basis kernels are given in section 3. The full algorithm
for the initialisation and forward pass of a GEM-CNN layer, both of time and
space complexity linear in the number of vertices, is listed in algorithm
1. Gradients can be computed by automatic differentiation.
The GEM-CNN is gauge equivariant, but furthermore satisfies two important
properties. Firstly, it depends only on the intrinsic shape of the 2D mesh,
not on the embedding of the mesh in $\mathbb{R}^{3}$. Secondly, whenever a map
from the mesh to itself exists that preserves distances and orientation, the
convolution is equivariant to moving the signal along such transformations.
These properties are proven in Appendix D and empirically shown in Appendix
F.2.
[Figure 2 diagrams: each panel is a commutative diagram in which the neighbourhood of $p$ (neighbours $q_{A},q_{B},q_{C}$) is expressed in gauge A (reference neighbour $q_{0}=q_{A}$) or in gauge B (reference neighbour $q_{0}=q_{B}$), convolved in that gauge, and mapped back to the mesh; gauge transformations $A\to B$ relate the two paths of the geometric convolution.]
(a) Convolution from scalar to scalar features.
(b) Convolution from scalar to vector features.
Figure 2: Visualization of the Gauge Equivariant Mesh Convolution in two
configurations, scalar to scalar and scalar to vector. The convolution
operates in a gauge, so that vectors are expressed in coefficients in a basis
and neighbours have polar coordinates, but can also be seen as a _geometric
convolution_, a gauge-independent map from an input signal on the mesh to an
output signal on the mesh. The convolution is equivariant if this geometric
convolution does not depend on the intermediately chosen gauge, i.e., if the
diagram commutes.
## 3 Gauge Equivariance & Geometric Features
On a general mesh, the choice of the reference neighbour, or gauge, which
defines the orientation of the kernel, can only be made arbitrarily. However,
this choice should not arbitrarily affect the outcome of the convolution, as
this would impede the generalization between different locations and different
meshes. Instead, Gauge Equivariant Mesh CNNs have the property that their
output transforms according to a known rule as the gauge changes.
Consider the left hand side of figure 2(a). Given a neighbourhood of vertex
$p$, we want to express each neighbour $q$ in terms of its polar coordinates
$(r_{q},\theta_{q})$ on the tangent plane, so that the kernel value at that
neighbour $K_{\textup{neigh}}(\theta_{q})$ is well defined. This requires
choosing a basis on the tangent plane, determined by picking a neighbour as
reference neighbour (denoted $q_{0}$), which has the zero angle
$\theta_{q_{0}}=0$. In the top path, we pick $q_{A}$ as reference neighbour.
Let us call this gauge A, in which neighbours have angles $\theta^{A}_{q}$. In
the bottom path, we instead pick neighbour $q_{B}$ as reference point and are
in gauge B. We get a different basis for the tangent plane and different
angles $\theta^{B}_{q}$ for each neighbour. Comparing the two gauges, we see
that they are related by a rotation, so that
$\theta^{B}_{q}=\theta^{A}_{q}-\theta^{A}_{q_{B}}$. This change of gauge is
called a gauge transformation of angle $g:=\theta^{A}_{q_{B}}$.
In figure 2(a), we illustrate a gauge equivariant convolution that takes input
and output features such as gray scale image values on the mesh, which are
called scalar features. The top path represents the convolution in gauge A,
the bottom path in gauge B. In either case, the convolution can be interpreted
as consisting of three steps. First, for each vertex $p$, the value of the
scalar features on the mesh at each neighbouring vertex $q$, represented by
colors, is mapped to the tangent plane at $p$ at angle $\theta_{q}$ defined by
the gauge. Subsequently, the convolutional kernel sums for each neighbour $q$,
the product of the feature at $q$ and kernel $K(\theta_{q})$. Finally the
output is mapped back to the mesh. These three steps can be composed into a
single step, which we could call a _geometric convolution_ , mapping from
input features on the mesh to output features on the mesh. The convolution is
_gauge equivariant_ if this geometric convolution does not depend on the gauge
we pick in the interim: in figure 2(a), the convolution in the top path in
gauge A must have the same result as the convolution in the bottom path in gauge B,
making the diagram commute. In this case, however, we see that the convolution
output needs to be the same in both gauges, for the convolution to be
equivariant. Hence, we must have that $K(\theta_{q})=K(\theta_{q}-g)$, as the
orientations of the neighbours differ by some angle $g$, and the kernel must
be isotropic.
As we aim to design an anisotropic convolution, the output feature of the
convolution at $p$ can, instead of a scalar, be two numbers
$v\in\mathbb{R}^{2}$, which can be interpreted as coefficients of a tangent
feature vector in the tangent space at $p$, visualized in figure 2(b). As
shown on the right hand side, different gauges induce a different basis of the
tangent plane, so that the _same tangent vector_ (shown on the middle right on
the mesh), is represented by _different coefficients_ in the gauge (shown on
the top and bottom on the right). This gauge equivariant convolution must be
anisotropic: going from the top row to the bottom row, if we change
orientations of the neighbours by $-g$, the coefficients of the kernel's output
vector $v\in\mathbb{R}^{2}$ must also be rotated by $-g$. This is
written as $R(-g)v$, where $R(-g)\in\mathbb{R}^{2\times 2}$ is the matrix that
rotates by angle $-g$.
Vectors and scalars are not the only type of geometric features that can be
inputs and outputs of a GEM-CNN layer. In general, the coefficients of a
geometric feature of $C$ dimensions changes by an invertible linear
transformation $\rho(-g)\in\mathbb{R}^{C\times C}$ if the gauge is rotated by
angle $g$. The map $\rho:[0,2\pi)\to\mathbb{R}^{C\times C}$ is called the
_type_ of the geometric quantity and is formally known as a group
representation of the planar rotation group $\operatorname{SO}(2)$. Group
representations have the property that $\rho(g+h)=\rho(g)\rho(h)$ (they are
group homomorphisms), which implies in particular that $\rho(0)=\mathbbm{1}$
and $\rho(-g)=\rho(g)^{-1}$. For more background on group representation
theory, we refer the reader to (Serre, 1977) and, specifically in the context
of equivariant deep learning, to (Lang & Weiler, 2020). From the theory of
group representations, we know that any feature type can be composed from
“irreducible representations” (irreps). For $\operatorname{SO}(2)$, these are
the one dimensional invariant scalar representation $\rho_{0}$ and for all
$n\in{\mathbb{N}}_{>0}$, a two dimensional representation $\rho_{n}$,
$\rho_{0}(g)=1,\quad\rho_{n}(g)=\begin{pmatrix}\cos ng&\shortminus\sin ng\\\
\sin ng&\phantom{\shortminus}\cos ng\end{pmatrix}.$
where we write, for example, $\rho=\rho_{0}\oplus\rho_{1}\oplus\rho_{1}$ to
denote that representation $\rho(g)$ is the direct sum (i.e. block-diagonal
stacking) of the matrices $\rho_{0}(g),\rho_{1}(g),\rho_{1}(g)$. Scalars and
tangent vector features correspond to $\rho_{0}$ and $\rho_{1}$ respectively
and we have $R(g)=\rho_{1}(g)$.
The type of the feature at each layer in the network can thus be fully
specified (up to a change of basis) by the number of copies of each irrep.
Similar to the dimensionality in a conventional CNN, the choice of type is a
hyperparameter that can be freely chosen to optimize performance.
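For concreteness, a type given by a list of irrep frequencies can be realized as a block-diagonal matrix, as in the following minimal helper (illustrative, not part of the reference implementation):

```python
import numpy as np

def irrep(n, g):
    """Irreducible representation rho_n of SO(2) at angle g."""
    if n == 0:
        return np.array([[1.0]])
    c, s = np.cos(n * g), np.sin(n * g)
    return np.array([[c, -s], [s, c]])

def rep(freqs, g):
    """Direct sum of irreps, e.g. freqs=[0, 1, 1] for rho_0+rho_1+rho_1 (5x5)."""
    blocks = [irrep(n, g) for n in freqs]
    dim = sum(b.shape[0] for b in blocks)
    R = np.zeros((dim, dim))
    i = 0
    for b in blocks:
        d = b.shape[0]
        R[i:i + d, i:i + d] = b
        i += d
    return R

# Homomorphism property: rep(f, g + h) == rep(f, g) @ rep(f, h) up to rounding.
```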
### 3.1 Kernel Constraint
Given an input type $\rho_{\textup{in}}$ and output type $\rho_{\textup{out}}$
of dimensions $C_{\textup{in}}$ and $C_{\textup{out}}$, the kernels are
$K_{\textup{self}}\in\mathbb{R}^{C_{\textup{out}}\times C_{\textup{in}}}$ and
$K_{\textup{neigh}}:[0,2\pi)\to\mathbb{R}^{C_{\textup{out}}\times
C_{\textup{in}}}$. However, not all such kernels are equivariant. Consider
again examples figure 2(a) and figure 2(b). If we map from a scalar to a
scalar, we get that $K_{\textup{neigh}}(\theta-g)=K_{\textup{neigh}}(\theta)$
for all angles $\theta,g$ and the convolution is isotropic. If we map from a
scalar to a vector, we get that rotating the angles $\theta_{q}$ results in
the same tangent vector as rotating the output vector coefficients, so that
$K_{\textup{neigh}}(\theta-g)=R(-g)K_{\textup{neigh}}(\theta)$.
$\rho_{\textup{in}}\to\rho_{\textup{out}}$ | linearly independent solutions for $K_{\textup{neigh}}(\theta)$
---|---
$\rho_{0}\to\rho_{0}$ | $(1)$
$\rho_{n}\to\rho_{0}$ | $\begin{pmatrix}\cos n\theta&\sin n\theta\end{pmatrix},\begin{pmatrix}\sin n\theta&\shortminus\cos n\theta\end{pmatrix}$
$\rho_{0}\to\rho_{m}$ | $\begin{pmatrix}\cos m\theta\\\ \sin m\theta\end{pmatrix},\begin{pmatrix}\phantom{\shortminus}\sin m\theta\\\ \shortminus\cos m\theta\end{pmatrix}$
$\rho_{n}\to\rho_{m}$ | $\begin{pmatrix}c&\shortminus s\\\ s&\phantom{\shortminus}c\end{pmatrix}$, $\begin{pmatrix}\phantom{\shortminus}s&c\\\ \shortminus c&s\end{pmatrix}$, $\begin{pmatrix}c_{+}&\phantom{\shortminus}s_{+}\\\ s_{+}&\shortminus c_{+}\end{pmatrix}$, $\begin{pmatrix}\shortminus s_{+}&c_{+}\\\ \phantom{-}c_{+}&s_{+}\end{pmatrix}$
$\rho_{\textup{in}}\to\rho_{\textup{out}}$ | linearly independent solutions for $K_{\textup{self}}$
---|---
$\rho_{0}\to\rho_{0}$ | $(1)$
$\rho_{n}\to\rho_{n}$ | $\begin{pmatrix}1&0\\\ 0&1\end{pmatrix}$, $\begin{pmatrix}\phantom{\shortminus}0&1\\\ \shortminus 1&0\end{pmatrix}$
Table 1: Solutions to the angular kernel constraint for kernels that map from
$\rho_{n}$ to $\rho_{m}$. We denote ${c_{\pm}=\cos((m\pm n)\theta)}$ and
${s_{\pm}=\sin((m\pm n)\theta)}$.
In general, as derived by Cohen et al. (2019b); Weiler et al. (2021) and in
appendix B, the kernels must satisfy for any gauge transformation
$g\in[0,2\pi)$ and angle $\theta\in[0,2\pi)$, that
$\displaystyle\\!\\!K_{\textup{neigh}}(\theta-g)$
$\displaystyle=\rho_{\textup{out}}(-g)K_{\textup{neigh}}(\theta)\rho_{\textup{in}}(g),$
(3) $\displaystyle K_{\textup{self}}$
$\displaystyle=\rho_{\textup{out}}(-g)\;K_{\textup{self}}\;\rho_{\textup{in}}(g).$
(4)
The kernel can be seen as consisting of multiple blocks, where each block
takes as input one irrep and outputs one irrep. For example if
$\rho_{\textup{in}}$ would be of type $\rho_{0}\oplus\rho_{1}\oplus\rho_{1}$
and $\rho_{\textup{out}}$ of type $\rho_{1}\oplus\rho_{3}$, we have the $4\times 5$ block matrix
$\displaystyle
K_{\textup{neigh}}(\theta)=\begin{pmatrix}K_{10}(\theta)&K_{11}(\theta)&K_{11}(\theta)\\\
K_{30}(\theta)&K_{31}(\theta)&K_{31}(\theta)\\\ \end{pmatrix}$
where e.g. $K_{31}(\theta)\in\mathbb{R}^{2\times 2}$ is a kernel that takes as
input irrep $\rho_{1}$ and as output irrep $\rho_{3}$ and needs to satisfy Eq.
3. As derived by Weiler & Cesa (2019) and in Appendix C, the kernels
$K_{\textup{neigh}}(\theta)$ and $K_{\textup{self}}$ mapping from irrep
$\rho_{n}$ to irrep $\rho_{m}$ can be written as a linear combination of the
basis kernels listed in Table 1. The table shows that equivariance requires
the self-interaction to only map from one irrep to the same irrep. Hence, we
have $K_{\textup{self}}=\begin{pmatrix}0&K_{11}&K_{11}\\\ 0&0&0\\\
\end{pmatrix}\in\mathbb{R}^{4\times 3}.$
All basis-kernels of all pairs of input irreps and output irreps can be
linearly combined to form an arbitrary equivariant kernel from features of type
$\rho_{\textup{in}}$ to $\rho_{\textup{out}}$. In the above example, we have
$2\times 2+4\times 4=20$ basis kernels for $K_{\textup{neigh}}$ and 4 basis
kernels for $K_{\textup{self}}$. The layer thus has 24 parameters. As proven
in (Weiler & Cesa, 2019) and (Lang & Weiler, 2020), this parameterization of
the equivariant kernel space is _complete_ , that is, more general equivariant
kernels do not exist.
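The solutions of Table 1 translate directly into code; the following minimal sketch (illustrative, not the reference implementation) returns the angular basis kernels for a map from irrep $\rho_{n}$ to irrep $\rho_{m}$, with the first two solutions at frequency $m-n$ and the last two at $m+n$:

```python
import numpy as np

def basis_kernels(n, m, theta):
    """Linearly independent solutions K(theta) for irrep rho_n -> rho_m."""
    if n == 0 and m == 0:
        return [np.array([[1.0]])]
    if m == 0:                               # rho_n -> rho_0 (row vectors)
        c, s = np.cos(n * theta), np.sin(n * theta)
        return [np.array([[c, s]]), np.array([[s, -c]])]
    if n == 0:                               # rho_0 -> rho_m (column vectors)
        c, s = np.cos(m * theta), np.sin(m * theta)
        return [np.array([[c], [s]]), np.array([[s], [-c]])]
    cm, sm = np.cos((m - n) * theta), np.sin((m - n) * theta)
    cp, sp = np.cos((m + n) * theta), np.sin((m + n) * theta)
    return [np.array([[cm, -sm], [sm, cm]]),
            np.array([[sm, cm], [-cm, sm]]),
            np.array([[cp, sp], [sp, -cp]]),
            np.array([[-sp, cp], [cp, sp]])]

# A trainable kernel is the weighted sum K(theta) = sum_i w_i * B_i(theta).
```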
### 3.2 Geometry and Parallel Transport
In order to implement gauge equivariant mesh CNNs, we need to make the
abstract notion of tangent spaces, gauges and transporters concrete.
As the mesh is embedded in $\mathbb{R}^{3}$, a natural definition of the
tangent spaces $T_{p}M$ is as two dimensional subspaces that are orthogonal to
the normal vector at $p$. We follow the common definition of normal vectors at
mesh vertices as the area weighted average of the adjacent faces’ normals. The
Riemannian logarithm map $\log_{p}:{\mathcal{N}}_{p}\to T_{p}M$ represents the
one-ring neighborhood of each point $p$ on their tangent spaces as visualized
in figure 1. Specifically, neighbors $q\in{\mathcal{N}}_{p}$ are mapped to
$\log_{p}(q)\in T_{p}M$ by first projecting them to $T_{p}M$ and then
rescaling the projection such that the norm is preserved, i.e.
$|\log_{p}(q)|=|q-p|$; see Eq. 9. A choice of reference neighbor
$q_{0}\in{\mathcal{N}}_{p}$ uniquely determines a right handed, orthonormal
reference frame $(e_{p,1},\,e_{p,2})$ of $T_{p}M$ by setting
$e_{p,1}:=\log_{p}(q_{0})/|\log_{p}(q_{0})|$ and $e_{p,2}:=n\times e_{p,1}$.
The polar angle $\theta_{pq}$ of any neighbor $q\in{\mathcal{N}}_{p}$ relative to
the first frame axis is then given by $\theta_{pq}\ :=\
\operatorname{atan2}\big{(}e_{p,2}^{\top}\log_{p}(q),\
e_{p,1}^{\top}\log_{p}(q)\big{)}.$
Given the reference frame $(e_{p,1},e_{p,2})$, a 2-tuple of coefficients
$(v_{1},v_{2})\in\mathbb{R}^{2}$ specifies an (embedded) tangent vector
${v_{1}e_{p,1}+v_{2}e_{p,2}}\in T_{p}M\subset\mathbb{R}^{3}$. This assignment
is formally given by the _gauge map_ $E_{p}:\mathbb{R}^{2}\to
T_{p}M\subset\mathbb{R}^{3}$ which is a vector space isomorphism. In our case,
it can be identified with the matrix
$\displaystyle E_{p}=\begin{bmatrix}e_{p,1}&e_{p,2}\end{bmatrix}\in\mathbb{R}^{3\times 2},$ (8)
whose columns are the embedded frame vectors.
Feature vectors $f_{p}$ and $f_{q}$ at neighboring (or any other) vertices
$p\in M$ and $q\in{\mathcal{N}}_{p}\subseteq M$ live in different vector
spaces and are expressed relative to independent gauges, which makes it
invalid to sum them directly. Instead, they have to be parallel transported
along the mesh edge that connects the two vertices. As explained above, this
transport is given by group elements $g_{q\to p}\in[0,2\pi)$, which determine
the transformation of tangent vector _coefficients_ as $v_{q}\mapsto R(g_{q\to
p})v_{q}\in\mathbb{R}^{2}$ and, analogously, for feature vector coefficients
as $f_{q}\mapsto\rho(g_{q\to p})f_{q}$. Figure 4 in the appendix visualizes
the definition of edge transporters for flat spaces and meshes. On a flat
space, tangent vectors are transported by keeping them parallel in the usual
sense on Euclidean spaces. However, if the source and target frame
orientations disagree, the vector coefficients relative to the source frame
need to be transformed to the target frame. This coordinate transformation
from polar angles $\varphi_{q}$ of $v$ to $\varphi_{p}$ of $R(g_{q\to p})v$
defines the transporter $g_{q\to p}=\varphi_{p}-\varphi_{q}$. On meshes, the
source and target tangent spaces $T_{q}M$ and $T_{p}M$ are no longer
parallel. It is therefore additionally necessary to rotate the source tangent
space and its vectors parallel to the target space, before transforming
between the frames. Since transporters effectively make up for differences in
the source and target frames, the parallel transporters transform under gauge
transformations $g_{p}$ and $g_{q}$ according to $g_{q\to p}\mapsto
g_{p}+g_{q\to p}-g_{q}$. Note that this transformation law cancels with the
transformation law of the coefficients at $q$ and lets the transported
coefficients transform according to gauge transformations at $p$. It is
therefore valid to sum vectors and features that are parallel transported into
the same gauge at $p$.
A more detailed discussion of the concepts presented in this section can be
found in Appendix A.
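The following minimal sketch implements these computations, assuming unit vertex normals as inputs; the sign and orientation conventions match Appendix A only up to the choices made here:

```python
# Sketch of the geometric precomputation of this section (illustrative names).
import numpy as np

def log_map(p, q, n_p):
    """Riemannian logarithm: project q - p onto T_pM and rescale to |q - p|.
    Assumes q - p is not parallel to the normal n_p."""
    v = q - p
    w = v - (v @ n_p) * n_p
    return w * (np.linalg.norm(v) / np.linalg.norm(w))

def frame(p, q0, n_p):
    """Right-handed orthonormal frame (e1, e2) of T_pM from reference q0."""
    e1 = log_map(p, q0, n_p)
    e1 = e1 / np.linalg.norm(e1)
    return e1, np.cross(n_p, e1)

def angle(p, q, e1, e2, n_p):
    """Polar angle theta_pq of neighbour q relative to the frame (e1, e2)."""
    w = log_map(p, q, n_p)
    return np.arctan2(e2 @ w, e1 @ w)

def transporter(e1_p, e2_p, n_p, e1_q, n_q):
    """Transport angle g_{q->p}: rotate T_qM parallel to T_pM about the axis
    n_q x n_p (Rodrigues' formula), then read off the frame mismatch."""
    axis = np.cross(n_q, n_p)
    s, c = np.linalg.norm(axis), n_q @ n_p
    if s < 1e-12:
        r = e1_q                        # tangent planes already parallel
    else:
        k = axis / s
        r = e1_q * c + np.cross(k, e1_q) * s + k * (k @ e1_q) * (1 - c)
    return np.arctan2(e2_p @ r, e1_p @ r)
```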
## 4 Non-linearity
Besides convolutional layers, the GEM-CNN contains non-linear layers, which
also need to be gauge equivariant, for the entire network to be gauge
equivariant. The coefficients of features built out of irreducible
representations, as described in section 3, do not commute with point-wise non-
linearities (Worrall et al., 2017; Thomas et al., 2018; Weiler et al., 2018a;
Kondor et al., 2018). Norm non-linearities and gated non-linearities (Weiler &
Cesa, 2019) can be used with such features, but generally perform worse in
practice compared to point-wise non-linearities (Weiler & Cesa, 2019). Hence,
we propose the _RegularNonlinearity_ , which uses point-wise non-linearities
and is approximately gauge equivariant.
This non-linearity is built on Fourier transformations. Consider a continuous
periodic signal, on which we perform a band-limited Fourier transform with
band limit $b$, obtaining $2b+1$ Fourier coefficients. If this continuous
signal is shifted by an arbitrary angle $g$, then the corresponding Fourier
components transform with linear transformation $\rho_{0:b}(-g)$, for $2b+1$
dimensional representation
$\rho_{0:b}:=\rho_{0}\oplus\rho_{1}\oplus...\oplus\rho_{b}$.
It would be exactly equivariant to take a feature of type $\rho_{0:b}$, take a
continuous inverse Fourier transform to a continuous periodic signal, then
apply a point-wise non-linearity to that signal, and take the continuous
Fourier transform, to recover a feature of type $\rho_{0:b}$. However, for
implementation, we use $N$ intermediate samples and the discrete Fourier
transform. This is exactly gauge equivariant for gauge transformation of
angles that are multiples of $2\pi/N$, but only approximately equivariant for other
angles. In App. G we prove that as $N\to\infty$, the non-linearity is exactly
gauge equivariant.
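A minimal sketch of this non-linearity, assuming $N\geq 2b+1$ and replacing the exact discrete Fourier transform by a least-squares fit via the pseudo-inverse (our own simplification, not the reference code):

```python
# Sketch of the RegularNonlinearity: inverse DFT to N sample orientations,
# pointwise nonlinearity, then transform back to the 2b+1 Fourier coefficients
# of a rho_0 (+) rho_1 (+) ... (+) rho_b feature. Requires N >= 2b+1.
import numpy as np

def sampling_matrix(b, N):
    """Rows evaluate the band-limited signal at angles 2*pi*k/N."""
    ang = 2 * np.pi * np.arange(N) / N
    cols = [np.ones(N)]
    for n in range(1, b + 1):
        cols += [np.cos(n * ang), np.sin(n * ang)]
    return np.stack(cols, axis=1)                # shape (N, 2b+1)

def regular_nonlinearity(f, b, N, act=lambda x: np.maximum(x, 0.0)):
    """f: (..., 2b+1) coefficients; returns coefficients of act(signal)."""
    A = sampling_matrix(b, N)
    samples = f @ A.T                            # signal at N orientations
    return act(samples) @ np.linalg.pinv(A).T    # least-squares Fourier fit
```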
The run-time cost per vertex of the (inverse) Fourier transform implemented as
a simple linear transformation is $\mathcal{O}(bN)$, which is what we use in
our experiments. The pointwise non-linearity scales linearly with $N$, so the
complexity of the RegularNonlinearity is also $\mathcal{O}(bN)$. However, one
can also use a fast Fourier transform, achieving a complexity of
$\mathcal{O}(N\log N)$. Concrete memory and run-time cost of varying $N$ are
shown in appendix F.1.
## 5 Related Work
Our method can be seen as a practical implementation of coordinate independent
convolutions on triangulated surfaces, which generally rely on $G$-steerable
kernels (Weiler et al., 2021).
The irregular structure of meshes leads to a variety of approaches to define
convolutions. Closely related to our method are graph based methods which are
often based on variations of graph convolutional networks (Kipf & Welling,
2017; Defferrard et al., 2016). GCNs have been applied on spherical meshes
(Perraudin et al., 2019) and cortical surfaces (Cucurull et al., 2018; Zhao et
al., 2019a). Verma et al. (2018) augment GCNs with anisotropic kernels which
are dynamically computed via an attention mechanism over graph neighbours.
Instead of operating on the graph underlying a mesh, several approaches
leverage its geometry by treating it as a discrete manifold. Convolution
kernels can then be defined in geodesic polar coordinates which corresponds to
a projection of kernels from the tangent space to the mesh via the exponential
map. This allows for kernels that are larger than the immediate graph
neighbourhood and message passing over faces but does not resolve the issue of
ambiguous kernel orientation. Masci et al. (2015); Monti et al. (2016) and Sun
et al. (2018) address this issue by restricting the network to orientation
invariant features which are computed by applying anisotropic kernels in
several orientations and pooling over the resulting responses. The models
proposed in (Boscaini et al., 2016) and (Schonsheck et al., 2018) are
explicitly gauge dependent with preferred orientations chosen via the
principal curvature direction and the parallel transport of kernels,
respectively. Poulenard & Ovsjanikov (2018) proposed a non-trivially gauge
equivariant network based on geodesic convolutions, however, the model
parallel transports only partial information of the feature vectors,
corresponding to certain kernel orientations. In concurrent work, Wiersma et
al. (2020) also define convolutions on surfaces equivariantly to the
orientation of the kernel, but differ in that they use norm non-linearities
instead of regular ones and that they apply the convolution along longer
geodesics, which adds complexity to the geometric pre-computation - as partial
differential equations need to be solved, but may result in less
susceptibility to the particular discretisation of the manifold. A
comprehensive review on such methods can be found in Section 12 of (Weiler et
al., 2021).
Another class of approaches defines spectral convolutions on meshes. However,
as argued in (Bronstein et al., 2017), the Fourier spectrum of a mesh depends
heavily on its geometry, which makes such methods instable under deformations
and impedes the generalization between different meshes. Spectral convolutions
further correspond to isotropic kernels. Kostrikov et al. (2018) overcomes
isotropy of the Laplacian by decomposing it into two applications of the
first-order Dirac operator.
A construction based on toric covering maps of topologically spherical meshes
was presented in (Maron et al., 2017). An entirely different approach to mesh
convolutions is to apply a linear map to a spiral of neighbours (Bouritsas et
al., 2019; Gong et al., 2019), which works well only for meshes with a similar
graph structure.
The above-mentioned methods operate on the intrinsic, $2$-dimensional geometry
of the mesh. A popular alternative for embedded meshes is to define
convolutions in the embedding space $\mathbb{R}^{3}$. This can for instance be
done by voxelizing space and representing the mesh in terms of an occupancy
grid (Wu et al., 2015; Tchapmi et al., 2017; Hanocka et al., 2018). A downside
of this approach are the high memory and compute requirements of voxel
representations. If the grid occupancy is low, this can partly be addressed by
resorting to an inhomogeneous grid density (Riegler et al., 2017). Instead of
voxelizing space, one may interpret the set of mesh vertices as a point cloud
and run a convolution on those (Qi et al., 2017a; b). Point cloud based
methods can be made equivariant w.r.t. the isometries of $\mathbb{R}^{3}$
(Zhao et al., 2019b; Thomas et al., 2018), which implies in particular the
isometry equivariance on the embedded mesh. In general, geodesic distances
within the manifold differ usually substantially from the distances in the
embedding space. Which approach is more suitable depends on the particular
application.
On flat Euclidean spaces our method corresponds to Steerable CNNs (Cohen &
Welling, 2017; Weiler et al., 2018a; Weiler & Cesa, 2019; Cohen et al., 2019a;
Lang & Weiler, 2020). As our model, these networks process geometric feature
fields of types $\rho$ and are equivariant under gauge transformations,
however, due to the flat geometry, the parallel transporters become trivial.
Jenner & Weiler (2021) extended the theory of steerable CNNs to include
equivariant partial differential operators. Regular nonlinearities are on flat
spaces used in group convolutional networks (Cohen & Welling, 2016; Weiler et
al., 2018b; Hoogeboom et al., 2018; Bekkers et al., 2018; Winkels & Cohen,
2018; Worrall & Brostow, 2018; Worrall & Welling, 2019; Sosnovik et al.,
2020).
## 6 Experiments
### 6.1 Embedded MNIST
We first investigate how Gauge Equivariant Mesh CNNs perform on, and
generalize between, different mesh geometries. For this purpose we conduct
simple MNIST digit classification experiments on embedded rectangular meshes
of $28\\!\times\\!28$ vertices. As a baseline geometry we consider a flat mesh
as visualized in figure 5(a). A second type of geometry is defined as
different _isometric_ embeddings of the flat mesh, see figure 5(b). Note that
this implies that the _intrinsic_ geometry of these isometrically embedded
meshes is indistinguishable from that of the flat mesh. To generate geometries
which are intrinsically curved, we add random normal displacements to the flat
mesh. We control the amount of curvature by smoothing the resulting
displacement fields with Gaussian kernels of different widths $\sigma$ and
define the roughness of the resulting mesh as $3-\sigma$. Figures 5(c)-5(h)
show the resulting geometries for roughnesses of 0.5, 1, 1.5, 2, 2.25 and 2.5. For each of
the considered settings we generate $32$ different train and $32$ test
geometries.
To test the performance on, and generalization between, different geometries,
we train equivalent GEM-CNN models on a flat mesh and meshes with a roughness
of 1, 1.5, 2, 2.25 and 2.5. Each model is tested individually on each of the
considered test geometries, which are the flat mesh, isometric embeddings and
curved embeddings with a roughness of 0.5, 1, 1.25, 1.5, 1.75, 2, 2.25 and
2.5. Figure 3 shows the test errors of the GEM-CNNs on the different train
geometries (different curves) for all test geometries (shown on the x-axis).
Since our model is purely defined in terms of the intrinsic geometry of a
mesh, it is expected to be insensitive to isometric changes in the embeddings.
This is empirically confirmed by the fact that the test performances on flat
and isometric embeddings are exactly equal. As expected, the test error
increases for most models with the surface roughness. Models trained on more
rough surfaces are hereby more robust to deformations. The models generalize
well from a rough training to smooth test geometry up to a training roughness
of 1.5. Beyond that point, the test performances on smooth meshes degrades up
to the point of random guessing at a training roughness of 2.5.
As a baseline, we build an _isotropic_ graph CNN with the same network
topology and number of parameters ($\approx 163k$). This model is insensitive
to the mesh geometry and therefore performs exactly equally on all surfaces.
While this enhances its robustness on very rough meshes, its test error of
$19.80\pm 3.43\%$ is an extremely bad result on MNIST. In contrast, the use of
anisotropic filters of GEM-CNN allows it to reach a test error of only
$0.60\pm 0.05\%$ on the flat geometry. It is therefore competitive with
conventional CNNs on pixel grids, which apply anisotropic kernels as well.
More details on the datasets, models and further experimental setup are given
in appendix E.1.
### 6.2 Shape Correspondence
As a second experiment, we perform non-rigid shape correspondence on the FAUST
dataset (Bogo et al., 2014), following Masci et al. (2015). (These
experiments were executed on QUVA machines.) The data consists of 100 meshes
of human bodies in various positions, split into 80 train and 20 test meshes.
The vertices are registered, such that vertices on the same position on the
body, such as the tip of the left thumb, have the same identifier on all
meshes. All meshes have $6890$ vertices, making this a $6890$-class
segmentation problem.
The architecture transforms the vertices’ ${XYZ}$ coordinates (of type
$3\rho_{0}$), via 6 convolutional layers to features $64\rho_{0}$, with
intermediate features $16(\rho_{0}\oplus\rho_{1}\oplus\rho_{2})$, with
residual connections and the RegularNonlinearity with $N=5$ samples.
Afterwards, we use two $1\\!\times\\!1$ convolutions with ReLU to map first to
256 and then 6890 channels, after which a softmax predicts the registration
probabilities. The $1\\!\times\\!1$ convolutions use a dropout of 50% and 1E-4
weight decay. The network is trained with a cross entropy loss with an initial
learning rate of 0.01, which is halved when training loss reaches a plateau.
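Spelled out as a configuration, the stack reads roughly as follows (a hypothetical sketch; the layer and type names are illustrative and do not refer to an actual library API):

```python
# Hypothetical configuration sketch of the FAUST network described above;
# the exact placement of residual blocks and nonlinearities is our assumption.
hidden = "16*(rho_0 + rho_1 + rho_2)"
layers = [("gem_conv", "3*rho_0", hidden), ("regular_nonlin", 5)]
for _ in range(4):                               # residual middle blocks
    layers += [("gem_conv_residual", hidden, hidden), ("regular_nonlin", 5)]
layers += [
    ("gem_conv", hidden, "64*rho_0"),            # sixth and last mesh conv
    ("conv1x1", 64, 256), ("relu",), ("dropout", 0.5),
    ("conv1x1", 256, 6890), ("softmax",),
]
```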
As all meshes in the FAUST dataset share the same topology, breaking the gauge
equivariance in higher layers can actually be beneficial. As shown in (Weiler
& Cesa, 2019), symmetry can be broken by treating non-invariant features as
invariant features as input to the final $1\\!\times\\!1$ convolution.
As baselines, we compare to various models, some of which use more complicated
pipelines, such as (1) the computation of geodesics over the mesh, which
requires solving partial differential equations, (2) pooling, which requires
finding a uniform sub-selection of vertices, (3) the pre-computation of SHOT
features which locally describe the geometry (Tombari et al., 2010), or (4)
post-processing refinement of the predictions. The GEM-CNN requires none of
these additional steps. In addition, we compare to SpiralNet++ (Gong et al.,
2019), which requires all inputs to be similarly meshed. Finally, we compare
to an isotropic version of the GEM-CNN, which reduces to a conventional graph
CNN, as well as a non-gauge-equivariant CNN based on SHOT frames. The results
in table 2 show that the GEM-CNN outperforms prior works and a non-gauge-
equivariant CNN, that isotropic graph CNNs are unable to solve the task and
that for this data set breaking gauge symmetry in the final layers of the
network is beneficial. More experimental details are given in appendix E.2.
Figure 3: Test errors for MNIST digit classification on embedded meshes.
Different lines denote train geometries, x-axis shows test geometries. Regions
are standard errors of the means over 6 runs.
Model | Features | Accuracy (%)
---|---|---
ACNN (Boscaini et al., 2016) | SHOT | 62.4
Geodesic CNN (Masci et al., 2015) | SHOT | 65.4
MoNet (Monti et al., 2016) | SHOT | 73.8
FeaStNet (Verma et al., 2018) | XYZ | 98.7
ZerNet (Sun et al., 2018) | XYZ | 96.9
SpiralNet++ (Gong et al., 2019) | XYZ | 99.8
Graph CNN | XYZ | 1.40$\pm$0.5
Graph CNN | SHOT | 23.80$\pm$8
Non-equiv. CNN (SHOT frames) | XYZ | 73.00$\pm$4.0
Non-equiv. CNN (SHOT frames) | SHOT | 75.11$\pm$2.4
GEM-CNN | XYZ | 99.73$\pm$0.04
GEM-CNN (broken symmetry) | XYZ | 99.89$\pm$0.02
Table 2: Results of FAUST shape correspondence. Statistics are means and
standard errors of the mean over three runs. All cited results are from
their respective papers.
## 7 Conclusions
Convolutions on meshes are commonly performed as a convolution on their
underlying graph, forgetting geometric structure such as the orientations of
neighbouring vertices. In this paper we propose Gauge Equivariant Mesh CNNs, which endow
Graph Convolutional Networks on meshes with anisotropic kernels and parallel
transport. Hence, they are sensitive to the mesh geometry, and result in
equivalent outputs regardless of the arbitrary choice of kernel orientation.
We demonstrate that the inference of GEM-CNNs is invariant under isometric
deformations of meshes and generalizes well over a range of non-isometric
deformations. On the FAUST shape correspondence task, we show that gauge
equivariance, combined with symmetry breaking in the final layer, leads to
state-of-the-art performance.
## References
* Bekkers et al. (2018) Bekkers, E. J., Lafarge, M. W., Veta, M., Eppenhof, K. A., Pluim, J. P., and Duits, R. Roto-translation covariant convolutional networks for medical image analysis. In _International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)_ , 2018.
* Bogo et al. (2014) Bogo, F., Romero, J., Loper, M., and Black, M. J. Faust: Dataset and evaluation for 3d mesh registration. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3794–3801, 2014.
* Boscaini et al. (2016) Boscaini, D., Masci, J., Rodolà, E., and Bronstein, M. M. Learning shape correspondence with anisotropic convolutional neural networks. In _NIPS_ , 2016.
* Bouritsas et al. (2019) Bouritsas, G., Bokhnyak, S., Ploumpis, S., Bronstein, M., and Zafeiriou, S. Neural 3d morphable models: Spiral convolutional networks for 3d shape representation learning and generation. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 7213–7222, 2019.
* Bronstein et al. (2017) Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. Geometric deep learning: Going beyond Euclidean data. _IEEE Signal Processing Magazine_ , 2017.
* Cohen & Welling (2016) Cohen, T. and Welling, M. Group equivariant convolutional networks. In _ICML_ , 2016.
* Cohen & Welling (2017) Cohen, T. S. and Welling, M. Steerable CNNs. In _ICLR_ , 2017.
* Cohen et al. (2019a) Cohen, T. S., Geiger, M., and Weiler, M. A general theory of equivariant CNNs on homogeneous spaces. In _Conference on Neural Information Processing Systems (NeurIPS)_ , 2019a.
  * Cohen et al. (2019b) Cohen, T. S., Weiler, M., Kicanaoglu, B., and Welling, M. Gauge equivariant convolutional networks and the Icosahedral CNN. In _ICML_ , 2019b.
* Crane et al. (2010) Crane, K., Desbrun, M., and Schröder, P. Trivial connections on discrete surfaces. _Computer Graphics Forum (SGP)_ , 29(5):1525–1533, 2010.
  * Crane et al. (2013) Crane, K., de Goes, F., Desbrun, M., and Schröder, P. Digital geometry processing with discrete exterior calculus. In _ACM SIGGRAPH 2013 courses_ , SIGGRAPH ’13, New York, NY, USA, 2013. ACM.
  * Cucurull et al. (2018) Cucurull, G., Wagstyl, K., Casanova, A., Veličković, P., Jakobsen, E., Drozdzal, M., Romero, A., Evans, A., and Bengio, Y. Convolutional neural networks for mesh-based parcellation of the cerebral cortex. 2018.
* Defferrard et al. (2016) Defferrard, M., Bresson, X., and Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In _Advances in neural information processing systems_ , pp. 3844–3852, 2016.
* Gallier & Quaintance (2020) Gallier, J. and Quaintance, J. _Differential Geometry and Lie Groups: A Computational Perspective_ , volume 12. Springer Nature, 2020.
* Gong et al. (2019) Gong, S., Chen, L., Bronstein, M., and Zafeiriou, S. Spiralnet++: A fast and highly efficient mesh convolution operator. In _Proceedings of the IEEE International Conference on Computer Vision Workshops_ , pp. 0–0, 2019.
* Hanocka et al. (2018) Hanocka, R., Fish, N., Wang, Z., Giryes, R., Fleishman, S., and Cohen-Or, D. Alignet: Partial-shape agnostic alignment via unsupervised learning. _ACM Transactions on Graphics (TOG)_ , 38(1):1–14, 2018.
* Hoogeboom et al. (2018) Hoogeboom, E., Peters, J. W. T., Cohen, T. S., and Welling, M. HexaConv. In _International Conference on Learning Representations (ICLR)_ , 2018.
* Jenner & Weiler (2021) Jenner, E. and Weiler, M. Steerable partial differential operators for equivariant neural networks. _arXiv preprint arXiv:2106.10163_ , 2021.
* Kipf & Welling (2017) Kipf, T. N. and Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In _ICLR_ , 2017.
* Kondor et al. (2018) Kondor, R., Lin, Z., and Trivedi, S. Clebsch-gordan nets: a fully fourier space spherical convolutional neural network. In _NIPS_ , 2018.
* Kostrikov et al. (2018) Kostrikov, I., Jiang, Z., Panozzo, D., Zorin, D., and Bruna, J. Surface networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 2540–2548, 2018.
* Lai et al. (2009) Lai, Y.-K., Jin, M., Xie, X., He, Y., Palacios, J., Zhang, E., Hu, S.-M., and Gu, X. Metric-driven rosy field design and remeshing. _IEEE Transactions on Visualization and Computer Graphics_ , 16(1):95–108, 2009.
* Lang & Weiler (2020) Lang, L. and Weiler, M. A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels. _arXiv preprint arXiv:2010.10952_ , 2020.
* Maron et al. (2017) Maron, H., Galun, M., Aigerman, N., Trope, M., Dym, N., Yumer, E., Kim, V. G., and Lipman, Y. Convolutional neural networks on surfaces via seamless toric covers. _ACM Trans. Graph._ , 36(4):71–1, 2017.
* Masci et al. (2015) Masci, J., Boscaini, D., Bronstein, M. M., and Vandergheynst, P. Geodesic convolutional neural networks on riemannian manifolds. _ICCVW_ , 2015.
* Monti et al. (2016) Monti, F., Boscaini, D., Masci, J., Rodolà, E., Svoboda, J., and Bronstein, M. M. Geometric deep learning on graphs and manifolds using mixture model cnns. _CoRR_ , abs/1611.08402, 2016. URL http://arxiv.org/abs/1611.08402.
* Perraudin et al. (2019) Perraudin, N., Defferrard, M., Kacprzak, T., and Sgier, R. Deepsphere: Efficient spherical convolutional neural network with healpix sampling for cosmological applications. _Astronomy and Computing_ , 27:130–146, 2019.
* Poulenard & Ovsjanikov (2018) Poulenard, A. and Ovsjanikov, M. Multi-directional geodesic neural networks via equivariant convolution. _ACM Transactions on Graphics_ , 2018.
* Qi et al. (2017a) Qi, C. R., Su, H., Mo, K., and Guibas, L. J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 652–660, 2017a.
* Qi et al. (2017b) Qi, C. R., Yi, L., Su, H., and Guibas, L. J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In _Advances in neural information processing systems_ , pp. 5099–5108, 2017b.
* Riegler et al. (2017) Riegler, G., Osman Ulusoy, A., and Geiger, A. Octnet: Learning deep 3d representations at high resolutions. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3577–3586, 2017.
* Schonsheck et al. (2018) Schonsheck, S. C., Dong, B., and Lai, R. Parallel Transport Convolution: A New Tool for Convolutional Neural Networks on Manifolds. _arXiv:1805.07857 [cs, math, stat]_ , May 2018.
  * Serre (1977) Serre, J.-P. Linear representations of finite groups. 1977.
* Sosnovik et al. (2020) Sosnovik, I., Szmaja, M., and Smeulders, A. Scale-equivariant steerable networks. In _International Conference on Learning Representations (ICLR)_ , 2020.
* Sun et al. (2018) Sun, Z., Rooke, E., Charton, J., He, Y., Lu, J., and Baek, S. Zernet: Convolutional neural networks on arbitrary surfaces via zernike local tangent space estimation. _arXiv preprint arXiv:1812.01082_ , 2018.
* Tchapmi et al. (2017) Tchapmi, L., Choy, C., Armeni, I., Gwak, J., and Savarese, S. Segcloud: Semantic segmentation of 3d point clouds. In _2017 international conference on 3D vision (3DV)_ , pp. 537–547. IEEE, 2017.
  * Thomas et al. (2018) Thomas, N., Smidt, T., Kearnes, S., Yang, L., Li, L., Kohlhoff, K., and Riley, P. Tensor Field Networks: Rotation- and Translation-Equivariant Neural Networks for 3D Point Clouds. 2018.
* Tombari et al. (2010) Tombari, F., Salti, S., and Di Stefano, L. Unique signatures of histograms for local surface description. In _European conference on computer vision_ , pp. 356–369. Springer, 2010.
* Tu (2017) Tu, L. W. _Differential geometry: connections, curvature, and characteristic classes_ , volume 275. Springer, 2017.
* Verma et al. (2018) Verma, N., Boyer, E., and Verbeek, J. Feastnet: Feature-steered graph convolutions for 3d shape analysis. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 2598–2606, 2018.
* Weiler & Cesa (2019) Weiler, M. and Cesa, G. General E(2)-equivariant steerable CNNs. In _Conference on Neural Information Processing Systems (NeurIPS)_ , 2019. URL https://arxiv.org/abs/1911.08251.
* Weiler et al. (2018a) Weiler, M., Geiger, M., Welling, M., Boomsma, W., and Cohen, T. 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data. In _NeurIPS_ , 2018a.
* Weiler et al. (2018b) Weiler, M., Hamprecht, F. A., and Storath, M. Learning steerable filters for rotation equivariant CNNs. In _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2018b.
* Weiler et al. (2021) Weiler, M., Forré, P., Verlinde, E., and Welling, M. Coordinate Independent Convolutional Networks - Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds. _arXiv preprint arXiv:2106.06020_ , 2021.
* Wiersma et al. (2020) Wiersma, R., Eisemann, E., and Hildebrandt, K. CNNs on Surfaces using Rotation-Equivariant Features. _Transactions on Graphics_ , 39(4), July 2020. doi: 10.1145/3386569.3392437.
* Winkels & Cohen (2018) Winkels, M. and Cohen, T. S. 3D G-CNNs for pulmonary nodule detection. In _Conference on Medical Imaging with Deep Learning (MIDL)_ , 2018\.
* Worrall & Welling (2019) Worrall, D. and Welling, M. Deep scale-spaces: Equivariance over scale. In _Conference on Neural Information Processing Systems (NeurIPS)_ , 2019.
* Worrall & Brostow (2018) Worrall, D. E. and Brostow, G. J. Cubenet: Equivariance to 3D rotation and translation. In _European Conference on Computer Vision (ECCV)_ , 2018.
* Worrall et al. (2017) Worrall, D. E., Garbin, S. J., Turmukhambetov, D., and Brostow, G. J. Harmonic Networks: Deep Translation and Rotation Equivariance. In _CVPR_ , 2017.
* Wu et al. (2015) Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., and Xiao, J. 3d shapenets: A deep representation for volumetric shapes. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 1912–1920, 2015.
* Zhao et al. (2019a) Zhao, F., Xia, S., Wu, Z., Duan, D., Wang, L., Lin, W., Gilmore, J. H., Shen, D., and Li, G. Spherical u-net on cortical surfaces: Methods and applications. _CoRR_ , abs/1904.00906, 2019a. URL http://arxiv.org/abs/1904.00906.
* Zhao et al. (2019b) Zhao, Y., Birdal, T., Lenssen, J. E., Menegatti, E., Guibas, L., and Tombari, F. Quaternion equivariant capsule networks for 3d point clouds. _arXiv preprint arXiv:1912.12098_ , 2019b.
## Appendix A Geometry & Parallel Transport
A gauge, or choice of reference neighbor at each vertex, fully determines the
neighbor orientations $\theta_{pq}$ and the parallel transporters $g_{q\to p}$
along edges. The following two subsections give details on how to compute
these quantities.
### A.1 Local neighborhood geometry
Neighbours $q$ of vertex $p$ can be mapped uniquely to the tangent plane at
$p$ using a map called the Riemannian logarithmic map, visualized in figure
1. A choice of reference neighbor then determines a reference frame in the
tangent space which assigns polar coordinates to all other neighbors. The
neighbour orientations $\theta_{pq}$ are the angular components of each
neighbor in this polar coordinate system.
We define the tangent space $T_{p}M$ at vertex $p$ as the two dimensional
subspace of $\mathbb{R}^{3}$ determined by a normal vector $n$, given by the
area-weighted average of the normal vectors of the adjacent mesh faces.
While the tangent spaces are two dimensional, we implement them as being
embedded in the ambient space $\mathbb{R}^{3}$ and therefore represent their
elements as three dimensional vectors. The reference frame corresponding to
the chosen gauge, defined below, allows us to identify these 3-vectors by
their coefficient 2-vectors.
Each neighbor $q$ is represented in the tangent space by the vector
$\log_{p}(q)\in T_{p}M$ which is computed via the discrete analog of the
Riemannian logarithm map. We define this map $\log_{p}:{\mathcal{N}}_{p}\to
T_{p}M$ for neighbouring nodes as the projection of the edge vector $q-p$ on
the tangent plane, followed by a rescaling such that the norm
$|\log_{p}(q)|=|q-p|$ is preserved. Writing the projection operator on the
tangent plane as $(\mathbbm{1}-nn^{\top})$, the logarithmic map is thus given
by:
$\displaystyle\log_{p}(q)\ :=\
|q-p|\frac{(\mathbbm{1}-nn^{\top})(q-p)}{|(\mathbbm{1}-nn^{\top})(q-p)|}$ (9)
Geometrically, this map can be seen as “folding” each edge up to the tangent
plane, and therefore encodes the orientation of edges and preserves their
lengths.
The normalized reference edge vector $\log_{p}(q_{0})$ uniquely determines a
right handed, orthonormal reference frame $(e_{p,1},\,e_{p,2})$ of $T_{p}M$ by
setting $e_{p,1}:=\log_{p}(q_{0})/|\log_{p}(q_{0})|$ and $e_{p,2}:=n\times
e_{p,1}$. The angle $\theta_{pq}$ is then defined as the angle of
$\log_{p}(q)$ in polar coordinates corresponding to this reference frame.
Numerically, it can be computed by
$\theta_{pq}\ :=\ \operatorname{atan2}\big{(}e_{p,2}^{\top}\log_{p}(q),\
e_{p,1}^{\top}\log_{p}(q)\big{)}.$
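As an illustration, the following NumPy sketch computes the discrete
logarithmic map of Eq. 9 and the neighbor angle $\theta_{pq}$; the function
names and the convention that vertices, normals and frame axes are 3-vectors
are our own choices, not part of a reference implementation.

```python
import numpy as np

def log_map(p, q, n):
    """Discrete logarithmic map of Eq. 9: project the edge q - p onto the
    tangent plane at p (unit normal n), rescaled to preserve edge length."""
    e = q - p
    proj = e - n * (n @ e)                 # (1 - n n^T)(q - p)
    return np.linalg.norm(e) * proj / np.linalg.norm(proj)

def neighbor_angle(p, q, q0, n):
    """Polar angle theta_pq of neighbor q in the frame defined by the
    reference neighbor q0."""
    e1 = log_map(p, q0, n)
    e1 = e1 / np.linalg.norm(e1)           # e_{p,1}
    e2 = np.cross(n, e1)                   # e_{p,2} = n x e_{p,1}
    v = log_map(p, q, n)
    return np.arctan2(e2 @ v, e1 @ v)
```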
Given the reference frame $(e_{p,1},e_{p,2})$, a 2-tuple of coefficients
$(v_{1},v_{2})\in\mathbb{R}^{2}$ specifies an (embedded) tangent vector
${v_{1}e_{p,1}+v_{2}e_{p,2}}\in T_{p}M\subset\mathbb{R}^{3}$. This assignment
is formally given by the _gauge map_ $E_{p}:\mathbb{R}^{2}\to
T_{p}M\subset\mathbb{R}^{3}$ which is a vector space isomorphism. In our case,
it can be identified with the matrix
$\displaystyle E_{p}=\begin{bmatrix}\vert&\vert\\ e_{p,1}&e_{p,2}\\ \vert&\vert\end{bmatrix}\in\mathbb{R}^{3\times 2}.$ (13)
### A.2 Parallel edge transporters
(a) Parallel transport on a flat mesh.
(b) Parallel transport along an edge of a general mesh.
Figure 4: Parallel transport of tangent vectors $v\in T_{q}M$ at $q$ to
$R(g_{q\to p})v\in T_{p}M$ at $p$ on meshes. On a flat mesh, visualized in
figure 4(a), parallel transport moves a vector such that it stays parallel in
the usual sense on flat spaces. The parallel transporter $g_{q\to
p}=\varphi_{p}-\varphi_{q}$ corrects the transported vector _coefficients_ for
differing gauges at $q$ and $p$. When transporting along the edge of a general
mesh, the tangent spaces at $q$ and $p$ might not be aligned, see figure 4(b).
Before correcting for the relative frame orientation via $g_{q\to p}$, the
tangent space $T_{q}M$, and thus $v\in T_{q}M$, is rotated by an angle
$\alpha$ around $n_{q}\times n_{p}$ such that its normal $n_{q}$
coincides with that of $n_{p}$.
On curved meshes, feature vectors $f_{q}$ and $f_{p}$ at different locations
$q$ and $p$ are expressed in different gauges, which makes it geometrically
invalid to accumulate their information directly. Instead, when computing a
new feature at $p$, the neighboring feature vectors at $q\in{\mathcal{N}}_{p}$
first have to be parallel transported into the feature space at $p$ before
they can be processed. The parallel transport along the edges of a mesh is
determined by the (discrete) Levi-Civita connection corresponding to the
metric induced by the ambient space $\mathbb{R}^{3}$. This connection is given
by parallel transporters $g_{q\to p}\in[0,2\pi)$ on the mesh edges which map
tangent vectors $v_{q}\in T_{q}M$ at $q$ to tangent vectors $R(g_{q\to
p})v_{q}\in T_{p}M$ at $p$. Feature vectors $f_{q}$ of type $\rho$ are
similarly transported to $\rho(g_{q\to p})f_{q}$ by applying the corresponding
feature vector transporter $\rho(g_{q\to p})$.
In order to build some intuition, it is illustrative to first consider
transporters on a planar mesh. In this case the parallel transport can be
thought of as moving a vector along an edge without rotating it. The resulting
abstract vector is then parallel to the original vector in the usual sense on
flat spaces, see figure 4(a). However, if the (transported) source frame at
$q$ disagrees with the target frame at $p$, the _coefficients_ of the
transported vector have to be transformed to the target coordinates. This
coordinate transformation from polar angles $\varphi_{q}$ of $v$ to
$\varphi_{p}$ of $R(g_{q\to p})v$ defines the transporter $g_{q\to
p}=\varphi_{p}-\varphi_{q}$.
On general meshes one additionally has to account for the fact that the
tangent spaces $T_{q}M\subset\mathbb{R}^{3}$ and $T_{p}M\subset\mathbb{R}^{3}$
are usually not parallel in the ambient space $\mathbb{R}^{3}$. The parallel
transport therefore includes the additional step of first aligning the tangent
space at $q$ to be parallel to that at $p$, before translating a vector
between them, see figure 4(b). In particular, given the normals $n_{q}$ and
$n_{p}$ of the source and target tangent spaces $T_{q}M$ and $T_{p}M$, the
source space is being aligned by rotating it via
$R_{\alpha}\in\operatorname{SO}(3)$ by an angle
$\alpha=\arccos(n_{q}^{\top}n_{p})$ around the axis $n_{q}\times n_{p}$
in the ambient space. Denote the rotated source frame by
$(R_{\alpha}e_{q,1},R_{\alpha}e_{q,2})$ and the target frame by
$(e_{p,1},e_{p,2})$. The angle to account for the parallel transport between
the two frames, defining the discrete Levi-Civita connection on mesh edges, is
then found by computing
$g_{q\to p}\ =\ \operatorname{atan2}\big{(}(R_{\alpha}e_{q,2})^{\top}e_{p,1},\;(R_{\alpha}e_{q,1})^{\top}e_{p,1}\big{)}.$
(14)
In practice we precompute these connections before training a model.
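A minimal sketch of this precomputation, assuming unit normals and
orthonormal frame axes stored as 3-vectors (the function name and interface
are ours):

```python
import numpy as np

def edge_transporter(e_q1, e_q2, n_q, n_p, e_p1):
    """Discrete Levi-Civita transporter g_{q->p} of Eq. 14: rotate the
    source frame so that n_q aligns with n_p, then read off the angle
    between the rotated source frame and the target frame."""
    axis = np.cross(n_q, n_p)
    s = np.linalg.norm(axis)
    if s < 1e-12:                          # tangent planes already parallel
        R = np.eye(3)
    else:
        axis = axis / s
        alpha = np.arccos(np.clip(n_q @ n_p, -1.0, 1.0))
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        # Rodrigues' rotation formula for the alignment rotation R_alpha
        R = np.eye(3) + np.sin(alpha) * K + (1 - np.cos(alpha)) * (K @ K)
    r1, r2 = R @ e_q1, R @ e_q2            # rotated source frame
    return np.arctan2(r2 @ e_p1, r1 @ e_p1)
```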
Under gauge transformations by angles $g_{p}$ at $p$ and $g_{q}$ at $q$ the
parallel transporters transform according to
$g_{q\to p}\ \mapsto\ g_{p}+g_{q\to p}-g_{q}\,.$ (15)
Intuitively, this transformation states that a transporter in a transformed
gauge is given by a gauge transformation back to the original gauge via
$-g_{q}$ followed by the original transport by $g_{q\to p}$ and a
transformation back to the new gauge via $g_{p}$.
For more details on discrete connections and transporters, extending to
arbitrary paths e.g. over faces, we refer to (Lai et al., 2009; Crane et al.,
2010; 2013).
## Appendix B Deriving the Kernel Constraint
Given an input type $\rho_{\textup{in}}$, corresponding to vector space
$V_{\textup{in}}$ of dimension $C_{\textup{in}}$ and output type
$\rho_{\textup{out}}$, corresponding to vector space $V_{\textup{out}}$ of
dimension $C_{\textup{out}}$, we have kernels
$K_{\textup{self}}\in\mathbb{R}^{C_{\textup{out}}\times C_{\textup{in}}}$ and
$K_{\textup{neigh}}:[0,2\pi)\to\mathbb{R}^{C_{\textup{out}}\times
C_{\textup{in}}}$. Following Cohen et al. (2019b); Weiler et al. (2021), we
can derive a constraint on these kernels such that the convolution is
invariant.
First, note that for vertex $p\in M$ and neighbour $q\in{\mathcal{N}}_{p}$,
the coefficients of a feature vector $f_{p}$ at $p$ of type $\rho$ transform
under a gauge transformation as $f_{p}\mapsto\rho(-g)f_{p}$. The angle
$\theta_{pq}$ gauge transforms to $\theta_{pq}-g$.
Next, note that $\hat{f}_{q}:=\rho_{\textup{in}}(g_{q\to p})f_{q}$ is the
input feature at $q$ parallel transported to $p$. Hence, it transforms as a
vector at $p$. The output of the convolution $f^{\prime}_{p}$ is also a
feature at $p$, transforming as $\rho_{\textup{out}}(-g)f^{\prime}_{p}$.
The convolution then simply becomes:
$f^{\prime}_{p}=K_{\textup{self}}f_{p}+\sum_{q}K_{\textup{neigh}}(\theta_{pq})\hat{f_{q}}$
Gauge transforming the left and right hand side, and substituting the equation
in the left hand side, we obtain:
$\displaystyle\rho_{\textup{out}}(-g)f_{p}^{\prime}=$
$\displaystyle\rho_{\textup{out}}(-g)\left(K_{\textup{self}}f_{p}+\sum_{q}K_{\textup{neigh}}(\theta_{pq})\hat{f_{q}}\right)=$
$\displaystyle
K_{\textup{self}}\rho_{\textup{in}}(-g)f_{p}+\sum_{q}K_{\textup{neigh}}(\theta_{pq}-g)\rho_{\textup{in}}(-g)\hat{f_{q}}$
This holds for any features if, for all $g,\theta\in[0,2\pi)$:
$\displaystyle K_{\textup{neigh}}(\theta-g)$
$\displaystyle=\rho_{\textup{out}}(-g)\;K_{\textup{neigh}}(\theta)\;\rho_{\textup{in}}(g),$
(16) $\displaystyle K_{\textup{self}}$
$\displaystyle=\rho_{\textup{out}}(-g)\;K_{\textup{self}}\;\rho_{\textup{in}}(g).$
(17)
where we used the orthogonality of the representations
$\rho(-g)=\rho(g)^{-1}$.
## Appendix C Solving the Kernel Constraint
As also derived in (Weiler & Cesa, 2019; Lang & Weiler, 2020), we find all
angle-parametrized linear maps between $C_{\textup{in}}$ dimensional feature
vector of type $\rho_{\textup{in}}$ to a $C_{\textup{out}}$ dimensional
feature vector of type $\rho_{\textup{out}}$, that is,
$K:S^{1}\to\mathbb{R}^{C_{\textup{out}}\times C_{\textup{in}}}$, such that the
above equivariance constraint holds. We will solve for
$K_{\textup{neigh}}(\theta)$ and discuss $K_{\textup{self}}$ afterwards.
The irreducible real representations (irreps) of $\operatorname{SO}(2)$ are
the one dimensional trivial representation $\rho_{0}(g)=1$ of order zero and
$\forall n\in{\mathbb{N}}$ the two dimensional representations of order n:
$\rho_{n}:\operatorname{SO}(2)\to\operatorname{GL}(2,\mathbb{R}):g\mapsto\begin{pmatrix}\cos
ng&-\sin ng\\\ \sin ng&\phantom{-}\cos ng\end{pmatrix}.$
Any representation $\rho$ of $\operatorname{SO}(2)$ of $D$ dimensions can be
written as a direct sum of irreducible representations
$\displaystyle\rho$
$\displaystyle\cong\rho_{l_{1}}\oplus\rho_{l_{2}}\oplus...$
$\displaystyle\rho(g)$
$\displaystyle=A(\rho_{l_{1}}\oplus\rho_{l_{2}}\oplus...)(g)A^{-1}.$
where $l_{i}$ denotes the order of the irrep, $A\in\mathbb{R}^{D\times D}$ is
some invertible matrix and the direct sum $\oplus$ is the block diagonal
concatenations of the one or two dimensional irreps. Hence, if we solve the
kernel constraint for all irrep pairs for the in and out representations, the
solution for arbitrary representations can be constructed. We let the input
representation be irrep $\rho_{n}$ and the output representation be irrep
$\rho_{m}$. Note that $K(g^{-1}\theta)=(\rho_{\textup{reg}}(g)[K])(\theta)$
for the infinite dimensional regular representation of $\operatorname{SO}(2)$,
which by the Peter-Weyl theorem is equal to the infinite direct sum
$\rho_{0}\oplus\rho_{1}\oplus...$.
Using the fact that all $\operatorname{SO}(2)$ irreps are orthogonal, and
that the kernel constraint determines $K(\theta)$ for all $\theta$ once it is
solved at $\theta=0$, we see that Eq. 16 is equivalent to
$\hat{\rho}(g)K:=(\rho_{\textup{reg}}\otimes\rho_{n}\otimes\rho_{m})(g)K=K$
where $\otimes$ denotes the tensor product, we write $K:=K(\theta)$, and fill
in $\rho_{\textup{out}}=\rho_{m},\;\rho_{\textup{in}}=\rho_{n}$. This
constraint implies that the space of equivariant kernels is exactly the
trivial subrepresentation of $\hat{\rho}$. The representation $\hat{\rho}$ is
infinite dimensional, though, and the subspace cannot be immediately
computed.
For $\operatorname{SO}(2)$, we have that for $n\geq 0$,
$\rho_{n}\otimes\rho_{0}=\rho_{n}$, and for $n,m>0$,
$\rho_{n}\otimes\rho_{m}\cong\rho_{n+m}\oplus\rho_{|n-m|}$. Hence, the trivial
subrepresentation of $\hat{\rho}$ is a subrepresentation of the finite
representation
$\tilde{\rho}:=(\rho_{n+m}\oplus\rho_{|n-m|})\otimes\rho_{n}\otimes\rho_{m}$,
itself a subrepresentation of $\hat{\rho}$.
As $\operatorname{SO}(2)$ is a connected Lie group, any
$g\in\operatorname{SO}(2)$ can be written as $g=\exp tX$ for $t\in\mathbb{R}$,
$X\in\mathfrak{so}(2)$, the Lie algebra of $\operatorname{SO}(2)$, and
$\exp:\mathfrak{so}(2)\to\operatorname{SO}(2)$ the Lie exponential map. We can
now find the trivial subrepresentation of $\tilde{\rho}$ by looking
infinitesimally, finding
$\displaystyle\tilde{\rho}(\exp tX)K$ $\displaystyle=K$
$\displaystyle\Longleftrightarrow d\tilde{\rho}(X)K:=\frac{\partial}{\partial
t}\tilde{\rho}(\exp tX)|_{t=0}K$ $\displaystyle=0$
where we denote $d\tilde{\rho}$ the Lie algebra representation corresponding
to Lie group representation $\tilde{\rho}$. $\operatorname{SO}(2)$ is one
dimensional, so for any single $X\in\mathfrak{so}(2)$, $K$ is an equivariant
map from $\rho_{n}$ to $\rho_{m}$ if it is in the null space of the matrix
$d\tilde{\rho}(X)$. The null space can be easily found using a computer
algebra system or numerically, leading to the results in table 1.
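As a sketch, this computation can be carried out numerically with Kronecker
products and an SVD-based null space (the helper names are ours):

```python
import numpy as np
from scipy.linalg import block_diag, null_space

J = np.array([[0.0, -1.0], [1.0, 0.0]])

def gen(k):
    """Lie algebra generator d(rho_k)(X) of the order-k SO(2) irrep."""
    return np.zeros((1, 1)) if k == 0 else k * J

def kernel_basis(n, m):
    """Null space of d(rho_tilde)(X) for
    rho_tilde = (rho_{n+m} (+) rho_{|n-m|}) (x) rho_n (x) rho_m."""
    A = block_diag(gen(n + m), gen(abs(n - m)))    # angular factor
    B, C = gen(n), gen(m)
    Ia, Ib, Ic = np.eye(len(A)), np.eye(len(B)), np.eye(len(C))
    # generator of a tensor product rep: sum of generators on each factor
    D = (np.kron(np.kron(A, Ib), Ic)
         + np.kron(np.kron(Ia, B), Ic)
         + np.kron(np.kron(Ia, Ib), C))
    return null_space(D)   # columns span the equivariant kernel space

print(kernel_basis(1, 1).shape)  # (12, 4): four solutions for n = m = 1
```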
## Appendix D Equivariance
The GEM-CNN is by construction equivariant to gauge transformations, but
additionally satisfies two important properties. Firstly, it only depends on
the intrinsic shape of the 2D mesh, not how the mesh vertices are embedded in
$\mathbb{R}^{3}$, since the geometric quantities like angles $\theta_{pq}$ and
parallel transporters depend solely on the intrinsic properties of the mesh.
This means that a simultaneous rotation or translation of all vertex
coordinates, with the input signal _moving along_ with the vertices, will
leave the convolution output at the vertices unaffected.
The second property is that if a mesh has an orientation-preserving mesh
isometry, meaning that we can map between the vertices preserving the mesh
structure, orientations and all distances between vertices, the GEM-CNN is
equivariant with respect to moving the signal along such a transformation. An
(infinite) 2D grid graph is an example of a mesh with orientation-preserving
isometries, which are the translations and rotations by 90 degrees. Thus a
GEM-CNN applied to such a grid has the same equivariance properties as a G-CNN
(Cohen & Welling, 2016) applied to the grid.
### D.1 Proof of Mesh Isometry Equivariance
Throughout this section, we denote $p^{\prime}=\phi(p),q^{\prime}=\phi(q)$. An
orientation-preserving mesh isometry is a bijection of mesh vertices
$\phi:{\mathcal{V}}\to{\mathcal{V}}$, such that:
* •
Mesh faces are one-to-one mapped to mesh faces. As an implication, edges are
one-to-one mapped to edges and neighbourhoods to neighbourhoods.
* •
For each point $p$, the differential $d\phi_{p}:T_{p}M\to T_{p^{\prime}}M$ is
orthogonal and orientation preserving, meaning that if for two vectors
$v_{1},v_{2}\in T_{p}M$ the tuple $(v_{1},v_{2})$ forms a right-handed basis
of $T_{p}M$, then $(d\phi_{p}(v_{1}),d\phi_{p}(v_{2}))$ forms a right-handed
basis of $T_{p^{\prime}}M$.
###### Lemma D.1.
Let $\phi$ be an orientation-preserving isometry on mesh $M$, and let each
vertex have a chosen reference neighbour $q^{p}_{0}$ defining a frame on the
tangent plane, so that the log-map $\log_{p}q$ has polar angle
$\theta^{p}_{q}$ in that frame. For each vertex $p$, let
$g_{p}=\theta^{p^{\prime}}_{\phi(q_{0}^{p})}$. Then for each neighbour
$q\in{\mathcal{N}}_{p}$, we have
$\theta^{p^{\prime}}_{q^{\prime}}=\theta^{p}_{q}+g_{p}$. Furthermore, we have
for parallel transporters that $g_{q^{\prime}\to p^{\prime}}=g_{q\to
p}-g_{p}+g_{q}$.
###### Proof.
For any $v\in T_{p}M$, we have that
$\phi(\exp_{p}(v))=\exp_{p^{\prime}}(d\phi_{p}(v))$ (Tu, 2017, Theorem 15.2).
Thus
$\phi(\exp_{p}(\log_{p}q))=q^{\prime}=\exp_{p^{\prime}}(d\phi_{p}(\log_{p}q))$.
Taking the log-map at $p^{\prime}$ on the second and third expression and
expressing in polar coordinates in the gauges, we get
$(r^{p^{\prime}}_{q^{\prime}},\theta^{p^{\prime}}_{q^{\prime}})=d\phi_{p}(r^{p}_{q},\theta^{p}_{q})$.
As $\phi$ is an orientation-preserving isometry, $d\phi_{p}$ is a special
orthogonal linear map $\mathbb{R}^{2}\to\mathbb{R}^{2}$ when expressed in the
gauges. Hence $d\phi_{p}(r,\theta)=(r,\theta+z_{p})$ for some angle $z_{p}$.
Filling in $\theta^{p}_{q^{p}_{0}}=0$, we find $z_{p}=g_{p}$, proving the
first statement. The second statement follows directly from the fact that
parallel transporting $q\to p$ and then pushing forward along $\phi$ to
$p^{\prime}$ yields the same result as first pushing forward from $q$ to
$q^{\prime}$ along $\phi$ and then parallel transporting $q^{\prime}\to
p^{\prime}$ (Gallier & Quaintance, 2020, Theorem 18.3 (2)). ∎
For any feature $f$ of type $\rho$, we can define a push-forward along $\phi$
as $\phi_{*}(f)_{p^{\prime}}=\rho(-g_{p})f_{p}$.
###### Theorem D.1.
Given GEM-CNN convolution $K\star\cdot$ from a feature of type
$\rho_{\textup{in}}$ to a feature of type $\rho_{\textup{out}}$, we have that
$K\star\phi_{*}(f)=\phi_{*}(K\star f)$.
###### Proof.
$\displaystyle\phi_{*}(K\star f)_{p^{\prime}}$
$\displaystyle=\rho_{\textup{out}}(-g_{p})\left(K_{\textup{self}}f_{p}+\sum\nolimits_{q\in{\mathcal{N}}_{p}}K_{\textup{neigh}}(\theta_{pq})\rho_{\textup{in}}(g_{q\to
p})f_{q}\right)$
$\displaystyle=\rho_{\textup{out}}(-g_{p})\left(K_{\textup{self}}f_{p}+\sum\nolimits_{q^{\prime}\in{\mathcal{N}}_{p^{\prime}}}K_{\textup{neigh}}(\theta_{p^{\prime}q^{\prime}}-g_{p})\rho_{\textup{in}}(g_{q^{\prime}\to
p^{\prime}}+g_{p}-g_{q})f_{q}\right)$
$\displaystyle=\rho_{\textup{out}}(-g_{p})\left(K_{\textup{self}}f_{p}+\sum\nolimits_{q^{\prime}\in{\mathcal{N}}_{p^{\prime}}}K_{\textup{neigh}}(\theta_{p^{\prime}q^{\prime}}-g_{p})\rho_{\textup{in}}(g_{p})\rho_{\textup{in}}(g_{q^{\prime}\to
p^{\prime}})\rho_{\textup{in}}(-g_{q})f_{q}\right)$
$\displaystyle=K_{\textup{self}}\rho_{\textup{in}}(-g_{p})f_{p}+\sum\nolimits_{q^{\prime}\in{\mathcal{N}}_{p^{\prime}}}K_{\textup{neigh}}(\theta_{p^{\prime}q^{\prime}})\rho_{\textup{in}}(g_{q^{\prime}\to
p^{\prime}})\rho_{\textup{in}}(-g_{q})f_{q}$
$\displaystyle=(K\star\phi_{*}(f))_{p^{\prime}}$
where in the second line we apply lemma D.1 and the fact that $\phi$ gives a
bijection of neighbourhoods of $p$, in the third line we use the functoriality
of $\rho$ and in the fourth line we apply the kernel constraints on
$K_{\textup{self}}$ and $K_{\textup{neigh}}$. ∎
## Appendix E Additional details on the experiments
### E.1 Embedded MNIST
(a) Flat embedding
(b) Isometric embedding
(c) Curved, roughness = 0.5
(d) Curved, roughness = 1
(e) Curved, roughness = 1.5
(f) Curved, roughness = 2
(g) Curved, roughness = 2.25
(h) Curved, roughness = 2.5
Figure 5: Examples of different grid geometries on which the MNIST dataset is
evaluated. All grids have $28\times 28$ vertices but are embedded
differently in the ambient space. Figure 5(a) shows a flat embedding,
corresponding to the usual pixel grid. The grid in Figure 5(b) is isometric to
the flat embedding, its internal geometry is indistinguishable from that of
the flat embedding. Figures 5(c)-5(h) show curved geometries which are not
isometric to the flat grid. They are produced by a random displacement of each
vertex in its normal direction, followed by a smoothing of displacements.
To create the intrinsically curved grids we start off with the flat,
rectangular grid, shown in figure 5(a), which is embedded in the $XY$-plane.
An independent displacement for each vertex in the $Z$-direction is drawn from a
uniform distribution. A subsequent smoothing step of the normal displacements
with a Gaussian kernel of width $\sigma$ yields geometries with different
levels of curvature. Figures 5(c)-5(h) show the results for standard
deviations of 2.5, 2, 1.5, 1, 0.75 and 0.5 pixels, which are denoted by their
roughness $3-\sigma$ as 0.5, 1, 1.5, 2, 2.25 and 2.5. In order to facilitate
the generalization between different geometries we normalize the resulting
average edge lengths.
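A sketch of this grid construction (the function name, displacement
amplitude, and random seed handling are our assumptions; the paper does not
state the amplitude):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def curved_grid(n=28, sigma=1.0, amp=1.0, seed=0):
    """n x n grid embedded in R^3: uniform random displacement in Z,
    Gaussian smoothing of width sigma, then edge-length normalization."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(n, dtype=float),
                       np.arange(n, dtype=float), indexing="ij")
    z = gaussian_filter(rng.uniform(-amp, amp, (n, n)), sigma)
    verts = np.stack([x, y, z], axis=-1)                 # (n, n, 3)
    h = np.linalg.norm(np.diff(verts, axis=0), axis=-1)  # horizontal edges
    v = np.linalg.norm(np.diff(verts, axis=1), axis=-1)  # vertical edges
    mean_edge = np.concatenate([h.ravel(), v.ravel()]).mean()
    return (verts / mean_edge).reshape(-1, 3)

grid = curved_grid(sigma=0.5)  # roughness 3 - sigma = 2.5
```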
The same GEM-CNN is used on all geometries. It consists of seven convolution
blocks, each of which applies a convolution, followed by a RegularNonlinearity
with $N=7$ orientations, batch normalization and dropout of 0.1. This depth is
chosen since GEM-CNNs propagate information only between direct neighbors in
each layer, such that the field of view after 7 layers is $2\times 7+1=15$
pixel. The input and output types of the network are scalar fields of
multiplicity 1 and 64, respectively, which transform under the trivial
representation and ensure a gauge invariant prediction. All intermediate
layers use feature spaces of types $M\rho_{0}\oplus M\rho_{1}\oplus
M\rho_{2}\oplus M\rho_{3}$ with $M=4,\ 8,\ 12,\ 16,\ 24,\ 32$. After a
spatial max pooling, a final linear layer maps the 64 resulting features to 10
neurons, on which a softmax function is applied. The model has 163k
parameters. A baseline GCN, applying isotropic kernels, is defined by
replacing the irreps $\rho_{i}$ of orders $i\geq 1$ with trivial irreps
$\rho_{0}$ and rescaling the width of the model such that the number of
parameters is preserved. All models are trained for 20 epochs with a weight
decay of 1E-5 and an initial learning rate of 1E-2. The learning rate is
automatically decayed by a factor of 2 when the validation loss did not
improve for 3 epochs.
The experiments were run on a single TitanX GPU.
### E.2 Shape Correspondence experiment
All experiments were run on single RTX 2080Ti GPUs, requiring 3 seconds per
epoch.
The non-gauge-equivariant CNN uses as gauges the SHOT local reference frames
(Tombari et al., 2010). For one input and output channel, it has features
$f_{p}\in\mathbb{R}$ and weights $w\in\mathbb{R}^{2B+2}$, for
$B\in{\mathbb{N}}$. The convolution is:
$(K\star
f)_{p}=w_{0}f_{p}+\sum_{q\in{\mathcal{N}}_{p}}\left(w_{1}+\sum_{n=1}^{B}\big{(}w_{2n}\cos(n\theta_{pq})+w_{2n+1}\sin(n\theta_{pq})\big{)}\right)f_{q}.$
(18)
This convolution kernel is an unconstrained band-limited circular function.
This is then done for $C_{\textup{in}}$ input channels and $C_{\textup{out}}$
output channels, giving $(2B+2)C_{\textup{in}}C_{\textup{out}}$ parameters per
layer. In our experiments, we use $B=2$ and 7 layers, with ReLU non-
linearities and batch-norm, just as for the gauge equivariant convolution.
After hyperparameter search in $\{16,32,64,128,256\}$, we found 128 channels
to perform best.
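For reference, a sketch of Eq. 18 for one vertex and one input/output channel
pair (the function name and argument layout are ours):

```python
import numpy as np

def shot_frame_conv(f_p, f_neighbors, thetas, w, B=2):
    """Non-equivariant convolution of Eq. 18 at a single vertex.
    w has length 2B + 2; thetas are the SHOT-frame neighbor angles."""
    out = w[0] * f_p
    for theta, f_q in zip(thetas, f_neighbors):
        # band-limited kernel evaluated at the neighbor angle
        k = w[1] + sum(w[2 * n] * np.cos(n * theta)
                       + w[2 * n + 1] * np.sin(n * theta)
                       for n in range(1, B + 1))
        out += k * f_q
    return out
```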
## Appendix F Additional experiments
### F.1 RegularNonLinearity computational cost
Number of samples | Time / epoch (s) | Memory (GB)
---|---|---
none | 21.2 | 1.22
1 | 21.9 | 1.22
5 | 21.6 | 1.23
10 | 21.5 | 1.24
20 | 22.0 | 1.27
50 | 21.7 | 1.35
Table 3: Run-time of one epoch of training and validation, and maximum memory
usage, of the FAUST model without the RegularNonLinearity or with a varying
number of samples used in the non-linearity. The hyperparameters are modified
to have batch size 1.
In table 3, we show the computational cost of the RegularNonLinearity,
measured by training and computing validation errors for 10 epochs. The run-
time is not significantly affected, but the memory usage is.
### F.2 Equivariance Errors
In this experiment, we empirically evaluate equivariance to three kinds of
transformations: gauge transformations, transformations of the vertex
coordinates, and transformations under isometries of the mesh, as introduced
above in appendix D. We do this on two data sets: the icosahedron, a platonic
solid of 12 vertices, referred to in the plots as ’Ico’; and the deformed
icosahedron, in which the vertices have been moved away from the origin by a
factor sampled from ${\mathcal{N}}(1,0.01)$, referred to in the plots as
’Def. Ico’. We evaluate this on the GEM-CNN (7 layers, 101 regular samples,
unless otherwise noted in the plots) and the non-equivariant CNN based on
SHOT frames introduced above in Eq. 18 (7 layers unless otherwise noted in the
plots). Both models have 16 input channels and 16 output channels. The
equivariant model has scalar features as input and output and intermediate
activations with band limit 2 and multiplicity 16. The non-equivariant model
has hidden activations of 16 dimensions. If not for the finite samples of the
RegularNonLinearity, the equivariant model should be exactly gauge invariant
and invariant to isometries. Both models use batchnorm, in order to evaluate
deeper models.
#### F.2.1 Gauge Equivariance
Figure 6: Equivariance error to gauge transformation.
We evaluate gauge equivariance by randomly initialising a model, randomly
sampling input features. We also sample 16 random gauge transformations at
each point. We compare the outputs of the model based on the different gauges.
As the input and output features of the equivariant model are scalars, the
outputs should coincide. This process is repeated 10 times. For the non-
equivariant model, we compute frames based on SHOT and then randomly rotate
these.
The equivariance error is quantified as:
$\sqrt{\frac{\mathbb{E}_{\Phi,f}\mathbb{E}_{p,c}\mathrm{Var}_{g}(\Phi_{g}(f)_{p,c})}{\mathrm{Var}_{\Phi,f,p,c}(\Phi_{g_{0}}(f)_{p,c})}}$
(19)
where $\Phi_{g}(f)_{c}$ denotes the model $\Phi$ with gauge transformed by $g$
applied to input $f$ then taken the $c$-th channel, $\mathbb{E}_{\Phi,f}$
denotes the expectation over model initialisations and random inputs, for
which we take 10 samples, $\mathbb{E}_{p,c}$ denotes averaging over the 12
vertices and 16 output channels, $\mathrm{Var}_{g}$ denotes the variance over
the different gauge transformations, $\mathrm{Var}_{\Phi,f,p,c}$ takes the
variance over the models, inputs, vertices and channels, and $g_{0}$ denotes one of the
sampled gauge transformations. This quantity indicates how much the gauge
transformation affects the output, normalized by how much the model
initialisations and initial parameters affect the output.
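A sketch of Eq. 19, assuming the model outputs have been stacked into an
array with axes (model/input sample, gauge transformation, vertex, channel):

```python
import numpy as np

def equivariance_error(outputs):
    """Eq. 19 for an array of shape (runs, gauges, vertices, channels)."""
    num = outputs.var(axis=1).mean()  # E_{Phi,f} E_{p,c} Var_g
    den = outputs[:, 0].var()         # Var_{Phi,f,p,c} in a fixed gauge g_0
    return np.sqrt(num / den)
```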
Results are shown in Figure 6. As expected, the non-equivariant model is not
equivariant to gauge transformations. The equivariant model approaches gauge
equivariance as the number of samples of the Regular NonLinearity increases.
As expected, the errors to gauge equivariance accumulate as the number of
layers increases. The icosahedron and deformed icosahedron behave the same.
#### F.2.2 Ambient Equivariance
Figure 7: Equivariance error to ambient transformations of the vertex
coordinates.
In this experiment, we measure whether the output is invariant when all
vertex coordinates are jointly transformed under rotations and translations.
We perform the experiment as above, but sample as transformations $g$ 300
translations and rotations of the ambient space $\mathbb{R}^{3}$. We evaluate
again using Eq. 19, where $g$ now denotes an ambient transformation.
Results are shown in Figure 7. We see that the equivariant GEM-CNN is
invariant to these ambient transformations. Somewhat unexpectedly, we see that
the non-equivariant model based on SHOT frames is not invariant. This is
because of a significant failure mode of SHOT frames in particular, and of
heuristically chosen gauges in non-gauge-equivariant methods in general.
On some meshes, the heuristic is unable to select a canonical frame, because
the mesh is locally symmetric under (discrete subgroups of) planar rotations.
This is the case for the icosahedron. Hence, SHOT cannot disambiguate the X
from the Y axis. The reason this happens in the SHOT local reference frame
selection (Tombari et al., 2010) is that the first two singular values of the $M$
matrix are equal, making the choice between the first and second singular
vectors ambiguous. This ambiguity breaks ambient invariance. For the non-
symmetric deformed icosahedron, this problem for the non-equivariant method
disappears.
#### F.2.3 Isometry equivariance
Figure 8: Equivariance error to isometry transformation.
The icosahedron has 60 orientation-preserving isometries. We evaluate
equivariance using:
$\sqrt{\frac{\mathbb{E}_{\Phi,f}\mathbb{E}_{p,c}\big{(}\Phi(g(f))_{p,c}-\Phi(f)_{g(p),c}\big{)}^{2}}{\mathrm{Var}_{\Phi,f,p,c}(\Phi(f)_{p,c})}}$
where $g:M\to M$ is an orientation-preserving isometry, sampled uniformly from
all 60 and $g(f)$ is the transformation of a scalar input feature
$f:M\to\mathbb{R}^{C_{\textup{in}}}$ by pre-composing with $g^{-1}$.
As expected, the non-equivariant model is not equivariant to isometries. The
GEM-CNN is not equivariant to the icosahedral isometries on the deformed
icosahedron, as the deformation removes the symmetry. As the number of Regular
NonLinearity samples increases, the GEM-CNN becomes more equivariant.
Interestingly, the GEM-CNN is equivariant whenever the number of samples is a
multiple of 5. This is because the stabilizer subgroup of the icosahedron at
the vertices is the cyclic group of order 5. Whenever the RegularNonLinearity
has a multiple of 5 samples, it is exactly equivariant to these
transformations.
## Appendix G Equivariance Error Bounds on Regular Non-Linearity
The regular non-linearity acts at each vertex of the mesh in the following
way. For simplicity, we assume that the representation is $U$ copies of
$\rho_{0}\oplus\rho_{1}\oplus...\oplus\rho_{B}$. One such copy can be treated
as the discrete Fourier modes of a circular signal with band limit $B$. We map
these Fourier modes to $N$ spatial samples with an inverse Discrete Fourier
Transform (DFT) matrix, apply a point-wise non-linearity, like ReLU, to those
samples, and map back to the Fourier modes with a DFT matrix.
This procedure is exactly equivariant for gauge transformation with angles
multiple of $2\pi/N$, but approximately equivariant for small rotations in
between.
In equations, we start with Fourier modes
$x_{0},(x_{\alpha}(m),x_{\beta}(m))_{m=1}^{B}$ at some point on the sphere and
result in Fourier modes $z_{0},(z_{\alpha}(m),z_{\beta}(m))_{m=1}^{B}$. We let
$t=0,...,N-1$ index the spatial samples.
$\displaystyle x(t)$ $\displaystyle=x_{0}+\sum_{m}x_{\alpha}(m)\cos\left(\dfrac{2\pi}{N}mt\right)+\sum_{m}x_{\beta}(m)\sin\left(\dfrac{2\pi}{N}mt\right)$ (20)
$\displaystyle y(t)$ $\displaystyle=f(x(t))$
$\displaystyle z_{0}$ $\displaystyle=\dfrac{1}{N}\sum_{t}y(t)$
$\displaystyle z_{\alpha}(m)$ $\displaystyle=\dfrac{2}{N}\sum_{t}\cos\left(\dfrac{2\pi}{N}mt\right)y(t)$
$\displaystyle z_{\beta}(m)$ $\displaystyle=\dfrac{2}{N}\sum_{t}\sin\left(\dfrac{2\pi}{N}mt\right)y(t)$
Note that Nyquist’s sampling theorem requires us to pick $N\geq 2B+1$, as
otherwise information is always lost. The normalization is chosen so that
$z_{\alpha}(m)=x_{\alpha}(m)$ if $f$ is the identity.
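A sketch of one copy of this map (names ours); with $f$ the identity and
$N\geq 2B+1$, the output modes reproduce the input modes:

```python
import numpy as np

def regular_nonlinearity(x0, xa, xb, N, f=lambda s: np.maximum(s, 0.0)):
    """Fourier modes -> N samples -> pointwise f -> Fourier modes (Eq. 20).
    xa, xb hold the coefficients x_alpha(m), x_beta(m) for m = 1..B."""
    B = len(xa)
    t = np.arange(N)
    m = np.arange(1, B + 1)[:, None]                 # shape (B, 1)
    cos = np.cos(2 * np.pi * m * t / N)              # (B, N) inverse-DFT rows
    sin = np.sin(2 * np.pi * m * t / N)
    y = f(x0 + xa @ cos + xb @ sin)                  # sample, apply f
    return y.mean(), 2 / N * cos @ y, 2 / N * sin @ y
```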
Now we are interested in the equivariance error between the following two
terms, for small rotation $\delta\in[0,1)$. Any larger rotation can be
expressed as a rotation by a multiple of $2\pi/N$, which is exactly
equivariant, followed by a smaller rotation. We let $z_{\alpha}^{FT}(m)$ be
the resulting Fourier mode if first the input is gauge-transformed and then
the regular non-linearity is applied, and let $z_{\alpha}^{TF}(m)$ be the
result of first applying the regular non-linearity, followed by the gauge
transformation.
$\displaystyle z_{\alpha}^{FT}(m)$
$\displaystyle=\dfrac{2}{N}\sum_{t}\cos\left(\dfrac{2\pi}{N}mt\right)y(t+\delta)$
$\displaystyle=\dfrac{2}{N}\sum_{t}c_{m}(t)y(t+\delta)$ $\displaystyle
z_{\alpha}^{TF}(m)$
$\displaystyle=\dfrac{2}{N}\sum_{t}\cos\left(\dfrac{2\pi}{N}m(t-\delta)\right)y(t)$
$\displaystyle=\dfrac{2}{N}\sum_{t}c_{m}(t-\delta)y(t)$
where we defined for convenience $c_{m}(t)=\cos(2\pi mt/N)$. We define norms
$||x||_{1}=|x_{0}|+\sum_{m}(|x_{\alpha}(m)|+|x_{\beta}(m)|)$ and $||\partial
x||_{1}=\sum_{m}m(|x_{\alpha}(m)|+|x_{\beta}(m)|)$.
###### Theorem G.1.
If the input $x$ is band limited by $B$, the output $z$ is band limited by
$B^{\prime}$, $N$ samples are used and the non-linearity has Lipschitz
constant $L_{f}$, then the error to the gauge equivariance of the regular non-
linearity is bounded by:
$\displaystyle||z^{FT}-z^{TF}||_{1}\leq\dfrac{4\pi
L_{f}}{N}\left((2B^{\prime}+\dfrac{1}{2})||\partial
x||_{1}+B^{\prime}(B^{\prime}+1)||x||_{1}\right)$
which goes to zero as $N\to\infty$.
###### Proof.
First, we note, since the Lipschitz constant of the cosine and sine is 1:
$\displaystyle|c_{m}(t-\delta)-c_{m}(t)|$ $\displaystyle\leq\dfrac{2\pi
m\delta}{N}\leq\dfrac{2\pi m}{N}$ $\displaystyle|x(t+\delta)-x(t)|$
$\displaystyle\leq\dfrac{2\pi}{N}\sum_{m}m(|x_{\alpha}(m)|+|x_{\beta}(m)|)$
$\displaystyle\leq\dfrac{2\pi}{N}||\partial x||_{1}$
$\displaystyle|y(t+\delta)-y(t)|$ $\displaystyle\leq
L_{f}\dfrac{2\pi}{N}||\partial x||_{1}$ $\displaystyle|c_{m}(t)|$
$\displaystyle\leq 1$ $\displaystyle|x(t)|$
$\displaystyle\leq|x_{0}|+\sum_{m}(|x_{\alpha}(m)|+|x_{\beta}(m)|)$
$\displaystyle\leq||x||_{1}$ $\displaystyle|y(t)|$ $\displaystyle\leq
L_{f}||x||_{1}$
Then:
$\displaystyle|c_{m}(t)y(t+\delta)-c_{m}(t-\delta)y(t)|$ $\displaystyle=$
$\displaystyle|c_{m}(t)\left[y(t+\delta)-y(t)\right]-y(t)\left[c_{m}(t-\delta)-c_{m}(t)\right]|$
$\displaystyle\leq$
$\displaystyle|c_{m}(t)||y(t+\delta)-y(t)|+|y(t)||c_{m}(t-\delta)-c_{m}(t)|$
$\displaystyle\leq$ $\displaystyle L_{f}\dfrac{2\pi}{N}||\partial
x||_{1}+L_{f}||x||_{1}\dfrac{2\pi m}{N}$ $\displaystyle=$
$\displaystyle\dfrac{2\pi L_{f}}{N}\left(||\partial x||_{1}+m||x||_{1}\right)$
So that finally:
$\displaystyle|z_{\alpha}^{FT}(m)-z_{\alpha}^{TF}(m)|$ $\displaystyle\leq$
$\displaystyle\dfrac{2}{N}\sum_{t}|c_{m}(t)y(t+\delta)-c_{m}(t-\delta)y(t)|$
$\displaystyle\leq$ $\displaystyle\dfrac{4\pi L_{f}}{N}\left(||\partial
x||_{1}+m||x||_{1}\right)$
The sine component $|z_{\beta}^{FT}(m)-z_{\beta}^{TF}(m)|$ has the same
bound, while $|z_{0}^{FT}-z_{0}^{TF}|\leq\max_{t}|y(t+\delta)-y(t)|$, which is
bounded above. So if $z$ is band-limited by $B^{\prime}$:
$\displaystyle||z^{FT}-z^{TF}||_{1}$ $\displaystyle=|z_{0}^{FT}-z_{0}^{TF}|+$
$\displaystyle\sum_{m=1}^{B^{\prime}}|z_{\alpha}^{FT}(m)-z_{\alpha}^{TF}(m)|+|z_{\beta}^{FT}(m)-z_{\beta}^{TF}(m)|$
$\displaystyle\leq\dfrac{4\pi
L_{f}}{N}\left((2B^{\prime}+\dfrac{1}{2})||\partial
x||_{1}+\sum_{m=1}^{B^{\prime}}2m||x||_{1}\right)$ $\displaystyle=\dfrac{4\pi
L_{f}}{N}\left((2B^{\prime}+\dfrac{1}{2})||\partial
x||_{1}+B^{\prime}(B^{\prime}+1)||x||_{1}\right)$
Since $||\partial x||_{1}=\mathcal{O}(B||x||_{1})$, we get
$||z^{FT}-z^{TF}||_{1}=\mathcal{O}(\frac{BB^{\prime}+{B^{\prime}}^{2}}{N}||x||_{1})$,
which obviously vanishes as $N\to\infty$. ∎
# Template Matching Route Classification
Mitchell Kinney
University of Minnesota - Twin Cities
###### Abstract
This paper details a route classification method for American football using a
template matching scheme that is quick and does not require manual labeling.
Pre-defined routes from a standard receiver route tree are aligned closely
with game routes in order to determine the closest match. Based on a test game
with manually labeled routes, the method achieves moderate success, with an
overall accuracy of 72% on the 232 labeled routes.
Keywords— Route Classification, Template Matching, Unsupervised Setting
## 1 Introduction
The world of sports is fully embracing the power of data driven decision
making and analytics. Recently, NFL players have started to wear RFID chips in
their shoulder pads to be able to track their movements on the field during
play. While there are many avenues of exploration with this new wave of data,
in this article automatic labeling of receiver routes is the focus. When
executing a passing play it is important for quarterbacks and receivers to be
coordinated so the ball can arrive on time and safely for a completed catch.
Many pass plays happen every game, and it is of interest when studying game
film to know what routes work well. Automatically labeling the types of routes
these receivers are running would be a major help in understanding what
determines success in a passing play. This paper uses a template matching
scheme to try and minimize the distance between game routes run by the players
and pre-defined routes from a typical route tree. The important aspects of
this method are scaling and translating the pre-defined routes to match
closely with the game routes. Closely matched routes will imply they are of
the same type and the label of the pre-defined route can be assigned to the
game route.
The data used in this paper comes from the NFL’s Big Data Bowl which provided
player tracking data from part of the 2017 NFL season.
## 2 Related work
Previous work in route classification has two main approaches. The first is to
manually label a portion of routes and then use those labels to train a model
to identify the remaining routes. This was done previously in [2] and [4]. In
[2] route characteristics such as the depth of the route before turning, the
direction of the turn and the length of the route after turning were recorded.
Then a training set was built from manually labeled routes and these
characteristics as well as their labels were fed to various models to train
classifiers. In [4] the author made use of a Convolutional Neural Network to
learn the hidden features of the routes after labeling approximately 1,000
routes by hand. The other approach is to use hierarchical clustering to
guarantee similar looking routes share the same label. This has been used in
[1] and [6]. In [1] the authors use an expectation-maximization (EM) algorithm
in a likelihood based approach. The authors assume each receiver trajectory
comes from a distribution with distinct parameters based on known distinct
route features. They attempt to tune the number of these distinct route types
to best separate the routes. In [6] features of the route were extracted and
hierarchical clustering was used. There were two methods shown useful in the
paper. The first used the beginning and ending of the routes as features and
the second used the length of the route before turning, the angle of the turn
and the length of the route after turning as features. The main weakness of
these methods is the amount of time needed to tune and train the models and
the need in each to manually label routes at the beginning or manually label
clusters at the end; the method proposed in this paper requires no such
manual labeling.
A method which also uses pre-defined curves to compare routes was introduced
in [3]. The authors used a belief network, which is similar to a naive Bayes
classifier, to classify offensive plays. The pre-defined curves were used as
priors in the network. This method is similar to the one proposed in this
paper because it requires no manual labeling of routes. The goal of the method
in [3] is oriented more towards classifying a whole play rather than
individual routes though.
Figure 1: Empirical cumulative distribution of route cutoff times in seconds
## 3 Wrangling the routes
The data is from the Big Data Bowl competition which released tracking data
from games during the 2017 season. Each player was equipped with a tracking
device and their movements were recorded every 100 milliseconds. For this
method the positions were filtered to only include wide receivers, tight ends,
and running backs that were split out. It was found that running backs in the
backfield do not adhere to the same route tree when running routes because
they have to navigate the offensive and defensive lines some of the time. The
routes begin at the snap so no pre-snap motion was included. Once the ball was
caught by the receiver, incomplete, or intercepted the trajectory collection
was stopped. Also to avoid any creative deviations by players to possibly get
open on a broken play each route was cut off if the route was run for at least
5 seconds (as was done in [2]). Therefore, the recorded routes ended at the
minimum time between when the pass outcome happened and 5 seconds after the
ball was snapped. A visualization of times in seconds that routes were cut off
in an example game is shown in Figure 1. There are relatively few routes that
reached the full 5 seconds, around 15%. The vast majority fell between 3 and 4
seconds. Finally, the routes were rotated and mirrored as if they were run
from the left side of the ball. There was no distinction between running from
the left or right side of the ball or the direction up or down the field.
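A sketch of this preprocessing step for a single route (the argument names
and flag conventions are our assumptions about the tracking data, not the
Big Data Bowl schema):

```python
import numpy as np

def normalize_route(xy, play_direction, side_of_ball):
    """Translate a route to start at the origin, orient the offense in +x,
    and mirror it as if run from the left side of the ball.
    xy: (T, 2) positions from the snap to min(pass outcome, 5 s)."""
    r = xy - xy[0]                   # start at the snap position
    if play_direction == "left":     # always attack in the +x direction
        r = -r
    if side_of_ball == "right":      # reflect onto the left side
        r[:, 1] = -r[:, 1]
    return r
```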
Figure 2: Basic receiver route tree used to
classify routes
Figure 3: Game route classified as a ‘post’ run by Bennie Fowler while a
Denver Bronco
## 4 Route Tree
Routes in the NFL are differentiated based on direction changes made when
running up or down the field. Routes are an integral part of a passing play in
the NFL. Only the offense knows where a receiver will run, and these routes
help the quarterback know where a receiver will be, which allows a pass
completion to be made. In the route tree in Figure 3, the difference between
an out route and a dig route is the direction the receiver runs after running
forwards a few yards. Turning towards the center of the field will be a dig
route and turning towards the closest side line will be an out route. Another
difference is the length of the field the receiver runs before changing
direction. In a post route the receiver runs up the field before turning
slightly towards the center of the field and running towards the goal post.
While in a slant route, the receiver runs towards the center of the field much
sooner, sometimes without running up the field first. As seen in Bennie
Fowler’s 20 yard post route run in the Denver Broncos versus Los Angeles
Chargers game in 2017 in Figure 3, routes run in a real NFL game have turns
that are not as crisp as seen in the route tree in Figure 3 and the length the
receivers run down the field before turning is not uniform. Therefore, a
method to classify routes must be adjustable to the angle of direction change
and the distance run down the field. The method proposed in this paper
captures this flexibility by scaling pre-defined routes to match closely to
game routes.
Figure 4: Bounding box of the post route from Figure 3
Figure 5: Aligned bounding boxes of an in-game post route (blue) and a
manually created post route (red)
Figure 6: Scaled bounding box of a manually created post route (red) from
Figure 6
## 5 Scaling Pre-Defined Routes
Scaling to match routes is a crucial first step in the method proposed,
because a distance metric is used to classify routes. Each proposed pre-
defined route should be overlaid in such a way that if a pre-defined route
label should be assigned to a game route the distance between the pre-defined
route and the game route is minimal. The only routes that are changed when
scaling/transforming are the pre-defined routes, to align closely with the
game routes. All pre-defined routes have been manually given coordinates that
match the shapes given in Figure 3. The calculations to scale any pre-defined
route only requires the bounding box of the pre-defined route and game route.
The bounding box of a set of two dimensional coordinates is the smallest
rectangle that captures all of the points. A route $R$ is defined as a set of
$(x,y)$ coordinate pairs with cardinality $T$ such that:
$R=\{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{T},y_{T})\},$ (1)
where $T$ is the number of timesteps the receiver’s position was recorded. The
bounding box of the set $R$ is a four dimensional tuple defined as
$\big{(}\min\limits_{x}R,\min\limits_{y}R,\max\limits_{x}R,\max\limits_{y}R\;\big{)}$.
An example of a bounding box for a route is shown in Figure 6. The scaling
approach used was to find the largest difference between the width and height
ratios of the bounding boxes of the pre-defined route and the game route and
then scale each coordinate in the pre-defined route so the largest difference
would match instead. Let the set of game route coordinates be
$R_{\text{game}}$ and the pre-defined route coordinates be $R_{\text{pre-
defined}}$. The first step in scaling is to translate all the coordinates in
both routes so the minimum $x$ and $y$ coordinates are $(0,0)$ and
$\displaystyle\min\limits_{x}R_{\text{game}}$
$\displaystyle=\min\limits_{x}R_{\text{pre-defined}}=0$ (2)
$\displaystyle\min\limits_{y}R_{\text{game}}$
$\displaystyle=\min\limits_{y}R_{\text{pre-defined}}=0$ (3)
This will align the bottom and left sides of the bounding boxes of the routes
and allow for the correct calculation of the ratio between the horizontal and
vertical direction of the bounding boxes. An example can be seen in Figure 6.
After aligning the bounding boxes the horizontal ratio $r_{h}$ and vertical
ratio $r_{v}$ can be calculated by
$\displaystyle r_{h}$ $\displaystyle=\dfrac{\max\limits_{x}R_{\text{pre-
defined}}}{\max\limits_{x}R_{\text{game}}}$ (4) $\displaystyle r_{v}$
$\displaystyle=\dfrac{\max\limits_{y}R_{\text{pre-
defined}}}{\max\limits_{y}R_{\text{game}}}.$ (5)
The larger ratio is used to scale each coordinate in the pre-defined route.
For instance, if $r_{h}>r_{v}$, then each $(x,y)$ coordinate in the pre-
defined route is multiplied by $r_{h}^{-1}$. Let $R_{\text{scaled}}$ be the
pre-defined route coordinates multiplied by the proper ratio.
$\displaystyle R_{\text{scaled}}=\min(r_{h}^{-1},r_{v}^{-1})\cdot
R_{\text{pre-defined}}.$ (6)
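A sketch of Eqs. 2-6 with routes stored as $(T,2)$ NumPy arrays (the function
name is ours; a small epsilon guards degenerate bounding boxes):

```python
import numpy as np

def scale_template(template, game, eps=1e-6):
    """Align both bounding boxes at the origin (Eqs. 2-3), compute the
    width and height ratios (Eqs. 4-5), and shrink the template by the
    larger ratio (Eq. 6), preserving its aspect ratio."""
    t = template - template.min(axis=0)
    g = game - game.min(axis=0)
    r_h = t[:, 0].max() / max(g[:, 0].max(), eps)  # Eq. 4
    r_v = t[:, 1].max() / max(g[:, 1].max(), eps)  # Eq. 5
    return t * min(1 / r_h, 1 / r_v), g            # Eq. 6
```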
A Scaled dig route
B Scaled post route
Figure 7: Routes scaled using an exact match bounding box (red) over an in
game route run by Emmanuel Sanders while a Denver Bronco (blue)
Figure 8: Out route scaled using a smaller discrepancy bounding box (red)
over an in game route run by Emmanuel Sanders while a Denver Bronco (blue)
An example is shown in Figure 6. This approach was chosen because it maintains
the aspect ratio of the bounding box of the pre-defined route and reduces the
possibility of a pre-defined route not scaling “reasonably.” The aspect ratio
is the ratio between the height and width of the bounding box. Maintaining the
aspect ratio is critical to differentiating routes since direction changes are
what separates routes, for example, separating outs from corners and corners
from streaks. Allowing the aspect ratio to change could change the angles at
the direction changes in the routes so much that two types of routes can become
almost indistinguishable. In Figure 7 an example of changing the aspect ratio
when scaling is shown, where a dig game route is closer to a post route than
the correct dig route because the direction change of the scaled pre-defined
post route conforms to the game route. Changing the aspect ratio occurs when
the bounding box of the pre-defined route is scaled to exactly match the game
route as in Figure 7. This angle manipulation is most problematic in routes
such as corners or posts and flats or slants. Matching bounding boxes of the
pre-defined routes and the game route would allow for “perfect” matches in the
ideal scenario but will also change the aspect ratio, sometimes drastically.
The other scaling approach is to match the smaller discrepancy, which simply
means using the smaller ratio to scale the pre-defined route instead of the
larger one. The issue that arises with this approach mainly comes from how
the pre-defined routes are initially plotted. The coordinates of the pre-
defined routes were made much larger than necessary compared to game routes to
guarantee that the pre-defined routes would always scale down (both $x$ and
$y$ coordinates get smaller). This saves an additional logic step that would
be needed to possibly scale pre-defined routes up or down. The large size of
the pre-defined route in comparison to the game route is true of all pre-
defined routes. Matching the larger discrepancy implies that the pre-defined
route bounding box will be shrunk to always be contained within the game route
bounding box, as in Figure 6. This way, after scaling, all pre-defined
routes will be approximately similar sizes, whereas scaling to match the
smaller discrepancy might cause erratic behavior. This can be seen in Figure 8
where the scaled pre-defined route is unrealistically large compared to the
game route.
Figure 9: Example of a grid search which shifts the pre-defined route’s
bounding box
incrementally upward over an in game route run by Bennie Fowler while a Denver
Bronco
## 6 Route Classification
To classify the game routes, a simple Euclidean distance is used between the
game route and the scaled pre-defined routes after shifting the scaled pre-
defined route to align as closely as possible. Then the label of the scaled
pre-defined route that is the minimum distance from the game route is used to
also label the game route. To match up the game route and scaled route as
closely as possible, a shift is used on the scaled route. Recall from
Section 5 that the method of scaling chosen was to minimize the largest
discrepancy. This implies that the bounding box of the scaled route will be
completely contained within the bounding box of the game route each time.
Therefore, a grid search for the optimal position of the scaled pre-defined
route can be done within the bounding box of the game route. An example of a
grid search with a scaled pre-defined out route is shown in Figure 9. Note the
scaled and game route bounding boxes will still be aligned on their bottom and
left sides. If the scaled route used $r_{h}^{-1}$ then the right side of the
bounding boxes will be aligned, and similarly if $r_{v}^{-1}$ was used the top
side of the bounding boxes will be aligned. Therefore the shift on the grid
will either be exclusively vertical or horizontal to keep the scaled route
bounding box within the game route bounding box. The chosen shift step size
was half a yard for this method. The distance that the scaled route can be
shifted is equal to $w$ defined as
$\displaystyle
w=\max\big{(}|\max\limits_{x}R_{\text{scaled}}-\max\limits_{x}R_{\text{game}}|,|\max\limits_{y}R_{\text{scaled}}-\max\limits_{y}R_{\text{game}}|\big{)}.$
(7)
Figure 10: Representation of $w$, the remaining discrepancy between the
bounding boxes of the in game route (blue) and scaled pre-defined route (red)
Figure 11: Visual of how the original pre-defined route (left) gets points
added (right) to equal the number of points of the game route
One of the two elements in the max will be zero, so $w$ is equal to whichever
is positive. The number of steps taken is equal to the ceiling of
$\;\dfrac{w}{0.5}$. If the tops of the bounding boxes are aligned, then the
distances between the game and scaled routes will be measured after shifting
the $x$-coordinate of the scaled route $\\{0,0.5,\dots,w_{0.5}\\}$, where
$w_{0.5}$ is $w$ rounded down to the nearest $0.5$ increment. If the right
sides of the bounding boxes are aligned, the $y$-coordinate of the scaled
route will be shifted. A visual of $w$ is shown in Figure 10.
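A sketch of this shift enumeration, again our own and reusing
`scale_predefined` from above, might read:

```python
import numpy as np

def shift_candidates(game, scaled, step=0.5):
    """Yield shifted copies of the scaled route for the grid search; the
    shift axis is the one whose bounding-box sides are not yet aligned."""
    w_x = abs(scaled[:, 0].max() - game[:, 0].max())
    w_y = abs(scaled[:, 1].max() - game[:, 1].max())
    w = max(w_x, w_y)                 # Eq. (7); the other term is zero
    axis = 0 if w_x > w_y else 1
    for s in np.arange(0.0, w + 1e-9, step):  # {0, 0.5, ..., w_0.5}
        shifted = scaled.copy()
        shifted[:, axis] += s
        yield shifted
```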
The distance at each step is calculated by measuring the distance between
every coordinate in the game route to the closest point on the line of the
scaled route and adding the distance between every coordinate in the scaled
route to the closest point on the line of the game route. Let $\ell_{i,i+1}$
be the line segment between coordinates $(x_{i},y_{i})$ and
$(x_{i+1},y_{i+1})$ for $i\in 1,\dots,T-1$, where $T$ is still the number of
coordinates recorded in the game route. Points are artificially added to the
scaled route until $R_{\text{scaled}}$ has the same cardinality, $T$, as
$R_{\text{game}}$. These points are placed evenly on the route as shown
in Figure 11. These added points do not affect the bounding box of the scaled
route. Then the collection of line segments that make up a route is defined as
$\displaystyle L=\\{\ell_{1,2},\ell_{2,3},\dots,\ell_{T-1,T}\\}.$ (8)
Let $\delta\big{(}(x,y),L\big{)}$ be defined as the minimum distance between
the point $(x,y)$ and $L$. Then the distance measurement $D_{\text{game}}$ is
found by summing the minimum distance to the line $L_{\text{scaled}}$ over the
points in $R_{\text{game}}$.
$\displaystyle D_{\text{game}}=\sum_{(x,y)\in
R_{\text{game}}}\delta\big{(}(x,y),L_{\text{scaled}}\big{)}.$ (9)
To avoid misclassification when the game route is close to only part of the
scaled route, as in Figure 12, the same measurement is taken between
coordinates of the scaled route and the line of the game route.
$\displaystyle D_{\text{scaled}}=\sum_{(x,y)\in
R_{\text{scaled}}}\delta\big{(}(x,y),L_{\text{game}}\big{)}.$ (10)
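The quantities in Eqs. (9) and (10) reduce to point-to-segment distances
along each polyline; a minimal sketch with names of our own choosing (the
point-padding of Figure 11 is omitted for brevity):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / max(ab @ ab, 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def delta(p, route):
    """Minimum distance from p to the polyline through route, Eq. (8)."""
    return min(point_to_segment(p, route[i], route[i + 1])
               for i in range(len(route) - 1))

def D(points, line_route):
    """Summed closest distances, as in Eqs. (9) and (10)."""
    return sum(delta(p, line_route) for p in points)
```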
Figure 12: Partial overlap of routes showing the necessity to calculate the
closest distances between both routes
Figure 13: Distance measured in $D_{\text{scaled}}$ with route from Figure 9
An example of the distance being measured for each coordinate in
$D_{\text{scaled}}$ can be found in Figure 13 using the routes from the
earlier grid search example in Figure 9. The total distance between the scaled
and game routes is summed with a weight $\gamma$ on $D_{\text{scaled}}$.
$\displaystyle D_{\text{route}}=D_{\text{game}}+\gamma D_{\text{scaled}}.$
(11)
Here $D_{\text{route}}$ is the measurement of distance between the game route
and one of the named routes from the route tree in Figure 3. The weight
$\gamma\leq 1$ is designed to balance the two distance measurements, since the
scaled route will necessarily be smaller than the game route because of the
scaling strategy. It also encodes that $D_{\text{game}}$ is the more important
term: the game route can extend further than the scaled route even when the
scaled route lines up extremely closely with only part of the game route.
Weighting $D_{\text{scaled}}$ down mitigates this problem by giving more
importance to the distances calculated from game route coordinates overlapping
the scaled route line. The route name with the minimum distance over the
entire route tree and all shifts is assigned to the game route.
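Putting the pieces together, a classification loop might look like the sketch
below, which reuses the helpers above; the `templates` dictionary and the
default value of $\gamma$ are illustrative assumptions, as the text does not
report a specific weight.

```python
def classify(game, templates, gamma=0.5):
    """Label a game route with the template minimizing D_route, Eq. (11).
    templates: dict mapping route names to pre-defined coordinate arrays."""
    best_label, best_dist = None, float("inf")
    for name, pre in templates.items():
        g, scaled = scale_predefined(game, pre)
        for shifted in shift_candidates(g, scaled):
            d = D(g, shifted) + gamma * D(shifted, g)  # Eq. (11)
            if d < best_dist:
                best_label, best_dist = name, d
    return best_label
```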
For each classification the same pre-defined route tree is used initially.
There is no attempt made to incorporate labeled routes into future predictions
through a process such as active learning where labeling is done while
learning the coefficient space. This is done to prevent the routes used for
labeling from drifting too far from the known truth as described in [5]. This
is a phenomenon seen when the input distribution changes, especially in semi-
supervised problems. An example is in correlation matching in images. The
template being used to match within the image can start to drift away from the
truth if updated regularly. Especially in cases like route labeling when there
is little supervision, it is preferred to guarantee the templates reflect the
truth at all times rather than attempt to leverage game routes that have
already been labeled. This avoids treating a wrong label as truth.
Route | Precision | Recall | Count
---|---|---|---
Corner | 0.36 | 0.76 | 21
Dig | 0.75 | 0.27 | 45
Flat | 0.64 | 0.78 | 23
Out | 0.75 | 0.20 | 30
Post | 0.33 | 0.50 | 22
Slant | 0.67 | 0.76 | 38
Sluggo | 0.33 | 1.0 | 1
Streak | 0.54 | 0.36 | 36
Wheel | 0.33 | 0.29 | 7
Overall | 0.48 | 0.48 | 234
Route | Precision | Recall | Count
---|---|---|---
Corner | 0.54 | 0.95 | 21
Dig | 0.77 | 0.60 | 45
Flat | 0.70 | 0.91 | 23
Out | 0.95 | 0.60 | 30
Post | 0.53 | 0.86 | 22
Slant | 0.76 | 0.82 | 38
Sluggo | 0.25 | 1.0 | 1
Streak | 0.87 | 0.75 | 36
Wheel | 0.67 | 0.29 | 7
Overall | 0.72 | 0.71 | 234
Table 1: Precision and recall of labeled routes for cutoff times of 3 (left)
and 5 (right) seconds from the 2017 season game between the Denver Broncos and
Los Angeles Chargers
## 7 Performance
Overall this method performed well and was able to distinguish between routes
based on their fundamental characteristics. The method was able to handle the
direction differences (e.g. out route versus dig route) and was able to
differentiate routes based on the angle at which the receivers initially
turned. Examples of routes labeled post, corner, out, dig, slant, flat,
streak, sluggo, and wheel can be seen in the Appendix. What stands out is the
difference in when receivers make their breaks, which differentiates the
short routes such as slants and flats, the intermediate routes such as digs
and outs, and the deep routes such as posts and corners. In the flats and
slants many of these captured routes have little to no angle as a break, the
digs and outs show a very sharp turn to the center or sideline, while the
posts and corners show a more gradual turn. This method tends to
over-categorize corner and post routes. It can be seen in the Appendix that
some dig and out routes are mislabeled as post and corner routes respectively
because of their more gradual turns. Instead, the method should take into
account the final turn, which is much sharper than would be expected of a
post or corner route.
Figure 14: Normalized confusion matrix of accuracy of routes
A more quantitative way to assess the performance of this method, similar to
the analysis performed in [2], is to measure precision and recall. Precision
and recall are calculated for each route label using the number of true
positives ($t_{p}$), correctly identified routes of the current label; false
positives ($f_{p}$), other routes mislabeled as the current label; and false
negatives ($f_{n}$), routes of the current label that were misclassified.
They are defined as
Precision $\displaystyle=\dfrac{t_{p}}{t_{p}+f_{p}}\;,$ (12)
Recall $\displaystyle=\dfrac{t_{p}}{t_{p}+f_{n}}\;.$ (13)
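For reference, the per-label tallies behind Eqs. (12) and (13) can be
computed as in the following sketch (the guard against empty denominators is
our own choice):

```python
from collections import Counter

def precision_recall(true_labels, pred_labels):
    """Per-route precision and recall, Eqs. (12) and (13)."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(true_labels, pred_labels):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    return {r: (tp[r] / max(tp[r] + fp[r], 1),   # precision
                tp[r] / max(tp[r] + fn[r], 1))   # recall
            for r in set(true_labels) | set(pred_labels)}
```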
Table 1 shows individual precision and recall scores for each route based on a
manually labeled game. The true labels were gathered by systematically
watching and recording a best guess for each route run during the Denver
Broncos versus the Los Angeles Chargers game in the 2017 season. The overall
score shows moderate success at labeling. Of note is that even though there
were curl and comeback routes in the game ($\leq 5$ of both), no curl or
comeback routes were predicted. This method will struggle with these routes
because the receiver often doubles back along the route, which this
classification method cannot distinguish. Better techniques for classifying
these specific routes are left for future work. Also, receivers that were
labeled as either blocking or waiting for a bubble screen were classified as
such and included in the accuracy measurements but not shown; blocking and
waiting for a bubble screen are indistinguishable under this method.
Receivers were classified as blocking or waiting for a bubble screen if they
did not move more than 4 yards during the play. The overall accuracy for this
game was 72%. The confusion matrix in
Figure 14 shows the normalized categorization probabilities for classified
routes. The greatest mislabeling was outs erroneously labeled as flats. In
Figure 15 the distribution of routes can be seen which shows that tight end
routes were dominated by slants and flats while wide receivers seemed to run
an approximately equal amount of streaks, slants, posts, digs and corners.
These observations align closely with work done in [1].
Figure 15: Distribution of routes by position
Cutting off routes at 3 seconds was also considered, since this is closer to
the natural amount of time in which a receiver fully develops a route. This
resulted in a sharp degradation of performance, as can be seen in Table 1.
Recall from Figure 1 that this cuts off many routes before the play was
complete. Another possibility would be to cut off the routes at 4 seconds,
but this too showed poor performance compared to cutting off routes at 5
seconds.
## 8 Conclusion
This paper presented an unsupervised template matching method that allows for
routes to be classified using a simple distance metric. This method's main
benefits are its overall speed and the fact that no manual labeling of routes
is required. After pre-processing the game routes, the three main parts of
this method are scaling, translating, and measuring distances. Each of these
operations has $\mathcal{O}(T)$ complexity, where $T$ is the number of points
in the game route. This $T$ is capped by the maximum time allowed for each
route, which for this method is 5 seconds, or 50 points. When labeling a full
game with 252 routes this method took 303 seconds, or approximately 5
minutes. The other benefit is that labeling is done for each route without
having to manually label clusters afterwards or label routes beforehand to
use as a training set. Other methods require labeling by humans at some
point, but template matching assigns a label without any human intervention.
Converting raw coordinates of players to route labels is a step toward
gleaning more information from NFL games. The next step is to use these
labels with more standard statistical methods to understand which routes work
well in different situations: namely, how certain route combinations work
against certain defensive coverage schemes, or how a specific player's routes
work against various coverage types. This involves labeling individual
defensive players as zone or man, then using that information to infer an
overall coverage scheme. Producing summary statistics about these matchups
will follow. This project and others are stored at github.com/kinne174.
In this paper a template-based search criterion was used to automatically
classify routes run by receivers. It was shown that moderate success is
achieved through an appropriate scaling method and translations that align
the pre-defined routes as closely as possible with the game route.
## References
* [1] Chu, Dani, et al. “Route Identification in the National Football League.” arXiv preprint arXiv:1908.02423 (2019).
* [2] Hochstedler and Gagnon. “American Football Route Identification Using Supervised Machine Learning.” MIT Sloan Sports Analytics Conference 2017.
* [3] Intille, Stephen S., and Aaron F. Bobick. “A framework for recognizing multi-agent action from visual evidence.” AAAI/IAAI 99.518-525 (1999): 2.
* [4] Sterken, Nathan. “RouteNet: a Convolutional Neural Network for Classifying Routes.” NFL Big Data Bowl (2019).
* [5] Quionero-Candela, Joaquin, et al. Dataset shift in machine learning. The MIT Press, 2009.
* [6] Vonder Haar, Adam. “Exploratory Data Analysis of Passing Plays using NFL Tracking Data.” NFL Big Data Bowl (2019).
## Appendix
Examples of game routes that were labeled the same in the Denver Broncos
versus Los Angeles Chargers game during the 2017 season. The magenta line
represents the median route of the group showing this method is able to
partition essential qualities of common routes.
Panels (route images not shown): (A) Corner, (B) Post; (A) Out, (B) Dig;
(A) Flat, (B) Slant; (A) Wheel, (B) Sluggo.
[1]Centre for Engineered Quantum Systems, School of Physics, The University of
Sydney, Sydney, Australia
# Characterization of solvable spin models via graph invariants
Adrian Chapman<EMAIL_ADDRESS>Steven T. Flammia
(May 27, 2020)
###### Abstract
Exactly solvable models are essential in physics. For many-body
spin-$\mathbf{\nicefrac{{1}}{{2}}}$ systems, an important class of such models
consists of those that can be mapped to free fermions hopping on a graph. We
provide a complete characterization of models which can be solved this way.
Specifically, we reduce the problem of recognizing such spin models to the
graph-theoretic problem of recognizing line graphs, which has been solved
optimally. A corollary of our result is a complete set of constant-sized
commutation structures that constitute the obstructions to a free-fermion
solution. We find that symmetries are tightly constrained in these models.
Pauli symmetries correspond to either: (i) cycles on the fermion hopping
graph, (ii) the fermion parity operator, or (iii) logically encoded qubits.
Clifford symmetries within one of these symmetry sectors, with three
exceptions, must be symmetries of the free-fermion model itself. We
demonstrate how several exact free-fermion solutions from the literature fit
into our formalism and give an explicit example of a new model previously
unknown to be solvable by free fermions.
## 1 Introduction
Exactly solvable models provide fundamental insight into physics without the
need for difficult numerical methods or perturbation theory. In the particular
setting of many-body spin-$\nicefrac{{1}}{{2}}$ systems, a remarkable method
for producing exact solutions involves finding an effective description of the
system by noninteracting fermions. This reduces the problem of solving the
$n$-spin system over its full $2^{n}$-dimensional Hilbert space to one of
solving a single-particle system hopping on a lattice of $O(n)$ sites. The
paradigmatic example of this method is the exact solution for the XY model
[1], where the Jordan-Wigner transformation [2] is employed to describe the
model in terms of free fermions propagating in one spatial dimension. These
fermions are resolved as nonlocal Pauli operators in the spin picture, and the
nonlocal nature of this mapping may suggest that finding generalizations to
this mapping for more complicated spin systems is a daunting task. Of the many
generalizations that have since been proposed [3, 4, 5, 6, 7, 8, 9, 10, 11], a
particularly interesting solution to this problem is demonstrated in the exact
solution of a 2-d spin model on a honeycomb lattice introduced by Kitaev [12].
For this model, the transformation to free-fermions can be made locality-
preserving over a fixed subspace through the use of local symmetries.
The dynamics of free-fermion systems are generated by Gaussian-fermionic
Hamiltonians and correspond to the class of so-called matchgate circuits. This
circuit class coincides with the group of free-fermion propagators generated
by arbitrarily time-dependent single-particle Hamiltonians [13, 14] and has an
extensive complexity-theoretic characterization. In general, matchgate
circuits can be efficiently simulated classically with arbitrary single-qubit
product-state inputs and measurement [15, 16]. However, they become universal
for quantum computation with the introduction of non-matchgates such as the
$\mathrm{SWAP}$ gate [17, 18], certain measurements and resource inputs [19,
20], and when acting on nontrivial circuit geometries [21]. Furthermore, these
circuits share an interesting connection to the problem of counting the number
of perfect matchings in a graph, which is the context in which they were first
developed [22, 23, 24, 25]. This problem is known to be very hard
computationally (it is #P-complete [26]), but is efficiently solved for planar
graphs using the so-called Fisher-Kasteleyn-Temperley algorithm [27, 28].
In this work, we develop a distinct connection between free-fermion systems
and graph theory by using tools from quantum information science. The central
object of our formalism is the _frustration graph_. This is a network
quantifying the anticommutation structure of terms in the spin Hamiltonian
when it is expanded in the basis of Pauli operators [29]. This graph has been
invoked previously in the setting of variational quantum eigensolvers [30, 31,
32, 33, 34, 35, 36, 37], commonly under the name “anti-compatibility graph".
We show that the problem of recognizing whether a given spin model admits a
free-fermion solution is equivalent to that of recognizing whether its
frustration graph is a _line graph_ , which can be performed optimally in
linear time [38, 39, 40]. From the definition of a line graph, it will be
clear that such a condition is necessary, but we will show that it is also
sufficient. When the condition is met, we provide an explicit solution to the
model.
Line graphs have recently emerged as the natural structures describing the
effective tight-binding models for superconducting waveguide networks [41, 42,
43]. In this setting, the line graph corresponds to the physical hopping graph
of photons in the network. We will see how this scenario is a kind of “inverse
problem" to the one we consider, wherein fermions are hopping on the _root_ of
the line graph. It is clear from both scenarios that the topological
connectivity structure of many-body systems plays a central role in their
behavior, and it is remarkable that this is already being observed in
experiments. We expect that further investigation of the graph structure of
many-body Hamiltonians will continue to yield important insights into their
physics.
### 1.1 Summary of Main Results
Here we give a brief summary of the main results. We first define the
frustration graph of a Hamiltonian, given in the Pauli basis, as the graph
with nonzero Pauli terms as vertices and an edge between two vertices if their
corresponding terms anticommute. A line graph $G$ of a graph $R$ is the
intersection graph of the edges of $R$ as two-element subsets of the vertices
of $R$. With these simple definitions, we can informally state our first main
result, which we call our “fundamental theorem:"
###### Result 1 (Existence of free-fermion solution; Informal version of Thm.
1).
Given an $n$-qubit Hamiltonian in the Pauli basis for which the frustration
graph $G$ is the line graph of another graph $R$, then there exists a free-
fermion description of $H$.
From this description, an exact solution for the spectrum and eigenstates of
$H$ can be constructed. This theorem illustrates a novel connection between
the physics of quantum many-body systems and graph theory with some surprising
implications. First, it gives the exact correspondence between the spatial
structure of a spin Hamiltonian and that of its effective free-fermion
description. As we will see through several examples, this relationship is not
guaranteed to be straightforward. Second, the theorem gives an exact condition
by which a spin model can _fail_ to have a free-fermion solution, the culprit
being the presence of forbidden anticommutation structures in the frustration
graph of $H$.
Some caveats to Result 1 (that are given precisely in the formal statement,
Theorem 1) involve cases in which this mapping between Pauli terms in $H$ and
fermion hopping terms is not one-to-one. In particular, if we are given a
Hamiltonian whose frustration graph is not a line graph, then a free-fermion
solution may still be possible via a non-injective mapping over a subspace
defined by fixing stabilizer degrees of freedom. Additionally, it is possible
for a given spin Hamiltonian to describe multiple free-fermion models
simultaneously, each generating dynamics over an independent stabilizer
subspace of the full Hilbert space as for the Kitaev honeycomb model [12].
These symmetries are sometimes referred to as gauge degrees of freedom, though
we will reserve this term for freedoms which cannot affect the physics of the
free-fermion model. Finally, it may be the case that the free-fermion model
contains states which are nonphysical in the spin-Hamiltonian picture, and so
these must be removed by fixing a symmetry as well. Luckily, all of these
cases manifest as structures in the frustration graph of $H$. The first,
regarding when a non-injective free-fermion solution is required, is signified
by the presence of so-called twin vertices, or vertices with the same
neighborhood. We deal with this case in our first lemma. The next two cases
are covered by our second theorem:
###### Result 2 (Graphical symmetries; Informal version of Thm. 2).
Given an $n$-qubit Hamiltonian in the Pauli basis for which the frustration
graph $G$ is the line graph of another graph $R$, then Pauli symmetries of $H$
correspond to either:
1. (i)
Cycles of $R$;
2. (ii)
A T-join of $R$, associated to the fermion-parity operator;
3. (iii)
Logically encoded qubits;
and these symmetries generate an abelian group.
We then prove that we can always fix all of the cycle symmetries by choosing
an orientation of the root graph $R$. Our results also relate the more general
class of Clifford symmetries to the symmetries of the single-particle free-
fermion Hamiltonian. We show that with exactly three exceptions, Clifford
symmetries of the spin model, in a subspace defined by fixing the symmetries
listed above, must also be symmetries of the single-particle Hamiltonian (see
Corollary 1.2 for a precise statement).
Finally, we illustrate these ideas with several examples: small systems of up
to 3 qubits, the 1-dimensional anisotropic $XY$ model in a transverse field
and its nearest-neighbor solvable generalization, the Kitaev honeycomb model,
the 3-dimensional frustrated hexagonal gauge color code [44], and the
Sierpinski-Hanoi model. To the best of our knowledge, this last model was
previously not known to be solvable.
The remainder of the paper is organized as follows. In Section 2, we will
introduce notation and give some background on the formalism of free-fermions
and frustration graphs. In Section 3, we will formally state Theorem 1 and
some general implications thereof. In Section 4, we elaborate on the structure
of symmetries which can be present in our class of solvable models. In Section
4.1, we will use the theorems of the previous two sections to outline an
explicit solution method. We close by demonstrating how the examples of free-
fermion solutions listed above fit into this formalism in Section 5.
## 2 Background
### 2.1 Frustration Graphs
The models we consider are spin-$\nicefrac{{1}}{{2}}$ (qubit) Hamiltonians
written in the Pauli basis
$\displaystyle H=\sum_{\boldsymbol{j}\in
V}h_{\boldsymbol{j}}\sigma^{\boldsymbol{j}}\mathrm{,}$ (1)
where $\boldsymbol{j}\equiv(\boldsymbol{a},\boldsymbol{b})$, with
$\boldsymbol{a}$, $\boldsymbol{b}\in\\{0,1\\}^{\times n}$ labeling an
$n$-qubit Pauli operator as
$\displaystyle\sigma^{\boldsymbol{j}}=i^{\boldsymbol{a}\cdot\boldsymbol{b}}\left(\bigotimes_{k=1}^{n}X_{k}^{a_{k}}\right)\left(\bigotimes_{k=1}^{n}Z_{k}^{b_{k}}\right)\mathrm{.}$
(2)
The exponent of the phase factor, $\boldsymbol{a}\cdot\boldsymbol{b}$, is the
_Euclidean inner product_ between $\boldsymbol{a}$ and $\boldsymbol{b}$. This
phase is chosen such that the overall operator is Hermitian, and such that
$a_{k}=b_{k}=1$ means that $\sigma^{\boldsymbol{j}}$ acts on qubit $k$ by a
Pauli-$Y$ operator. We denote the full $n$-qubit Pauli group by $\mathcal{P}$,
and $V\subseteq\mathcal{P}$ is the set of Pauli terms in $H$ (i.e.
$h_{\boldsymbol{j}}=0$ for all $\boldsymbol{j}\notin V$). Let the Pauli
subgroup generated by this set be denoted $\mathcal{P}_{H}$.
For our purposes, what is important is not the explicit Pauli description of
the Hamiltonian, but rather the commutation relations between its terms. As
Pauli operators only either commute or anticommute, a useful quantity is their
_scalar commutator_ $[\\![\cdot,\cdot]\\!]$, which we define implicitly as
$\displaystyle\sigma^{\boldsymbol{j}}\sigma^{\boldsymbol{k}}=[\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{k}}]\\!]\sigma^{\boldsymbol{k}}\sigma^{\boldsymbol{j}}\mathrm{.}$
(3)
The scalar commutator thus only takes the values $\pm 1$. Additionally, the
scalar commutator distributes over multiplication in each argument, e.g.
$\displaystyle[\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{k}}\sigma^{\boldsymbol{l}}]\\!]=[\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{k}}]\\!][\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{l}}]\\!].$
(4)
For $n$-qubit Paulis, the scalar commutator can thus be read off from the
Pauli labels as
$\displaystyle[\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{k}}]\\!]=(-1)^{\langle\boldsymbol{j},\boldsymbol{k}\rangle}$
(5)
Here, $\langle\boldsymbol{j},\boldsymbol{k}\rangle$ is the _symplectic inner
product_
$\displaystyle\langle\boldsymbol{j},\boldsymbol{k}\rangle\equiv\begin{pmatrix}\boldsymbol{a}_{j}&\boldsymbol{b}_{j}\end{pmatrix}\begin{pmatrix}\mathbf{0}_{n}&\mathbf{I}_{n}\\\
-\mathbf{I}_{n}&\mathbf{0}_{n}\end{pmatrix}\begin{pmatrix}\boldsymbol{a}_{k}\\\
\boldsymbol{b}_{k}\end{pmatrix}\mathrm{,}$ (6)
where naturally $\boldsymbol{j}\equiv(\boldsymbol{a}_{j},\boldsymbol{b}_{j})$
and $\boldsymbol{k}\equiv(\boldsymbol{a}_{k},\boldsymbol{b}_{k})$.
$\mathbf{0}_{n}$ is the $n\times n$ all-zeros matrix, and $\mathbf{I}_{n}$ is
the $n\times n$ identity matrix. Eq. (5) captures the fact that a factor of
$-1$ is included in the scalar commutator for each qubit where the operators
$\sigma^{\boldsymbol{j}}$ and $\sigma^{\boldsymbol{k}}$ differ and neither
acts trivially. Since the inner product appears as the exponent of a sign
factor, without loss of generality, we can replace it with the _binary
symplectic inner product_
$\displaystyle\langle\boldsymbol{j},\boldsymbol{k}\rangle_{2}\equiv\langle\boldsymbol{j},\boldsymbol{k}\rangle\bmod
2.$ (7)
$H$ | $\sum\limits_{j\in\\{x,y,z\\}}h_{j}\sigma^{j}$ | $\sum\limits_{\begin{subarray}{c}\boldsymbol{j}\in\\{0,x,y,z\\}^{\times 2}\\\ \boldsymbol{j}\neq(0,0)\end{subarray}}h_{\boldsymbol{j}}\sigma^{\boldsymbol{j}}$
---|---|---
$G(H)\simeq L(R)$ | $K_{3}$ (graph not shown) | $L(K_{6})$ (graph not shown)
$R$ | $K_{3}$ or $K_{1,3}$ (graphs not shown) | $K_{6}$ (graph not shown)
Table 1: Example frustration graphs for general Hamiltonians on small (1- and
2-qubit) systems. (Left column) For general single-qubit Hamiltonians, the
frustration graph is the complete graph on three vertices, $K_{3}$. By the
Whitney isomorphism theorem [45], $K_{3}$ is the only graph which is not the
line graph of a unique graph, but rather is the line graph of both $K_{3}$ and
the ‘claw’ graph, $K_{1,3}$. This implies the existence of two distinct free-
fermion solutions of single-qubit Hamiltonians. (Right column) For general
two-qubit Hamiltonians, the frustration graph is the line graph of the
complete graph on six vertices $K_{6}$ [29, 46]. Colored are the size-five
cliques corresponding to the degree-five vertices of the root graph. This
mapping implies the existence of a free-fermion solution for general two-qubit
Hamiltonians by six fermions, reflecting the accidental Lie-algebra
isomorphism $\mathfrak{su}(4)\simeq\mathfrak{spin}(6)$ (see Section 5.1).
Through the binary symplectic inner product, the scalar commutator defines a
symmetric binary relation between terms in the Hamiltonian, to which we
associate the adjacency matrix of a graph. Denote the _frustration graph_ for
a Hamiltonian of the form in Eq. (1) by $G(H)\equiv(V,E)$ with vertex set
given by the Pauli terms appearing in $H$, and edge set
$\displaystyle
E\equiv\\{(\boldsymbol{j},\boldsymbol{k})|\langle\boldsymbol{j},\boldsymbol{k}\rangle_{2}=1\\}$
(8)
That is, two Pauli terms correspond to neighboring vertices in $G(H)$ if and
only if they anticommute. Without loss of generality, we can assume that
$G(H)$ is connected, as disconnected components of this graph correspond to
commuting collections of terms in the Hamiltonian and can thus be
independently treated. As such, we will further assume that $H$ has no
identity component in the expansion (1)—rendering it traceless—since this will
only contribute an overall energy shift to the system with no effect on
dynamics.
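For concreteness, the frustration graph can be assembled directly from the
binary symplectic data of Eq. (2); the following is a minimal Python sketch
of our own, not the solution method itself:

```python
import numpy as np
import networkx as nx

def frustration_graph(paulis):
    """paulis: list of (a, b) pairs of length-n binary NumPy arrays.
    Vertices are Hamiltonian terms; edges join anticommuting pairs, Eq. (8)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(paulis)))
    for j, (aj, bj) in enumerate(paulis):
        for k, (ak, bk) in enumerate(paulis[:j]):
            if (aj @ bk + ak @ bj) % 2 == 1:  # symplectic product, Eq. (7)
                G.add_edge(j, k)
    return G
```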
### 2.2 Majorana Fermions
A related set of Hermitian operators which only either commute or anticommute
is that of the Majorana fermion modes $\\{\gamma_{\mu}\\}_{\mu}$, which
satisfy the canonical anticommutation relations
$\displaystyle\gamma_{\mu}\gamma_{\nu}+\gamma_{\nu}\gamma_{\mu}=2\delta_{\mu\nu}I\mathrm{,}$
(9)
and for which $\gamma_{\mu}^{\dagger}=\gamma_{\mu}$. A familiar way of
realizing these operators in terms of $n$-qubit Pauli observables is through
the Jordan-Wigner transformation
$\displaystyle\gamma_{2j-1}=\bigotimes_{k=1}^{j-1}Z_{k}\otimes X_{j}\qquad\gamma_{2j}=\bigotimes_{k=1}^{j-1}Z_{k}\otimes Y_{j}\mathrm{.}$ (10)
The Pauli operators on the right can easily be verified to constitute $2n$
operators satisfying Eq. (9). Of course, we will explore the full set of
generalizations to this transformation in this work. We seek to identify those
qubit Hamiltonians which can be expressed as quadratic in the Majorana modes.
Such _free-fermion_ Hamiltonians are written as
$\displaystyle\widetilde{H}=i\boldsymbol{\gamma}\cdot\mathbf{h}\cdot\boldsymbol{\gamma}^{\mathrm{T}}\equiv
2i\sum_{(j,k)\in\widetilde{E}}h_{jk}\gamma_{j}\gamma_{k}$ (11)
where $\boldsymbol{\gamma}$ is a row-vector of the Majorana operators, and
$\mathbf{h}$ is the _single-particle Hamiltonian_. Without loss of generality,
$\mathbf{h}$ can be taken as a real antisymmetric matrix, as we can similarly
assume $\widetilde{H}$ is traceless, and the canonical anticommutation
relations Eq. (9) guarantee that any symmetric component of $\mathbf{h}$ will
not contribute to $\widetilde{H}$. $\widetilde{E}$ is the edge-set of the
_fermion-hopping graph_ $R\equiv(\widetilde{V},\widetilde{E})$ on the fermion
modes $\widetilde{V}$. That is, $h_{jk}=0$ for those pairs
$(j,k)\notin\widetilde{E}$, and the factor of two in the rightmost expression
accounts for the fact that each edge in $\widetilde{E}$ is included only once
in the sum.
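The relations (9) for the operators of Eq. (10) are easy to verify
programmatically; a small self-contained sketch using Pauli label strings
(helper names are ours):

```python
def jordan_wigner_majoranas(n):
    """The 2n Majorana operators of Eq. (10), as Pauli label strings."""
    gammas = []
    for j in range(n):
        head, tail = "Z" * j, "I" * (n - j - 1)
        gammas.append(head + "X" + tail)  # gamma_{2j-1}
        gammas.append(head + "Y" + tail)  # gamma_{2j}
    return gammas

def anticommute(p, q):
    """Pauli strings anticommute iff they clash on an odd number of sites."""
    clashes = sum(a != b and a != "I" and b != "I" for a, b in zip(p, q))
    return clashes % 2 == 1

gammas = jordan_wigner_majoranas(3)
assert all(anticommute(g, h) for i, g in enumerate(gammas)
           for h in gammas[i + 1:])  # Eq. (9) for distinct modes
```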
As a result of the canonical anticommutation relations (9), the individual
Majorana modes transform covariantly under the time evolution generated by
$\widetilde{H}$
$\displaystyle\mathrm{e}^{i\widetilde{H}t}\gamma_{\mu}\mathrm{e}^{-i\widetilde{H}t}=\sum_{\nu\in\widetilde{V}}\left(\mathrm{e}^{4\mathbf{h}t}\right)_{\mu\nu}\gamma_{\nu}$
(12)
since
$\displaystyle[\bm{\gamma}\cdot\mathbf{h}\cdot\boldsymbol{\gamma}^{\mathrm{T}},\gamma_{\mu}]=-4(\mathbf{h}\cdot\boldsymbol{\gamma}^{\mathrm{T}})_{\mu}\,.$
(13)
Since $\mathbf{h}$ is antisymmetric and real,
$\mathrm{e}^{4\mathbf{h}t}\in\mathrm{SO}(2n,\mathds{R})$. Thus, $\mathbf{h}$
can be block-diagonalized via a real orthogonal matrix,
$\mathbf{W}\in\mathrm{SO}(2n,\mathds{R})$, as
$\displaystyle\mathbf{W}^{\mathrm{T}}\cdot\mathbf{h}\cdot\mathbf{W}=\bigoplus_{j=1}^{n}\begin{pmatrix}0&-\lambda_{j}\\\
\lambda_{j}&0\\\ \end{pmatrix}$ (14)
We can represent $\mathbf{W}$ as the exponential of a quadratic Majorana
fermion operator as well, by defining
$\displaystyle\mathbf{W}\equiv\mathrm{e}^{4\mathbf{w}}\mathrm{,}$ (15)
$\widetilde{H}$ is therefore diagonalized as
$\displaystyle\mathrm{e}^{-\boldsymbol{\gamma}\cdot\mathbf{w}\cdot\boldsymbol{\gamma}^{\mathrm{T}}}\widetilde{H}\mathrm{e}^{\boldsymbol{\gamma}\cdot\mathbf{w}\cdot\boldsymbol{\gamma}^{\mathrm{T}}}$
$\displaystyle=i\boldsymbol{\gamma}\cdot\left(\mathbf{W}^{\mathrm{T}}\cdot\mathbf{h}\cdot\mathbf{W}\right)\cdot\boldsymbol{\gamma}^{\mathrm{T}}$
(16) $\displaystyle=-2i\sum_{j=1}^{n}\lambda_{j}\gamma_{2j-1}\gamma_{2j}$ (17)
$\displaystyle\mathrm{e}^{-\boldsymbol{\gamma}\cdot\mathbf{w}\cdot\boldsymbol{\gamma}^{\mathrm{T}}}\widetilde{H}\mathrm{e}^{\boldsymbol{\gamma}\cdot\mathbf{w}\cdot\boldsymbol{\gamma}^{\mathrm{T}}}$
$\displaystyle=2\sum_{j=1}^{n}\lambda_{j}Z_{j}$ (18)
Note that the exact diagonalization can be performed with reference to the
_quadratics_ in the Majorana fermion modes only. To completely solve the
system, it is only necessary to diagonalize $\mathbf{h}$ classically, find a
generating matrix $\mathbf{w}$, and diagonalize $\widetilde{H}$ using an
exponential of quadratics with regard to some fermionization like Eq. (10).
Eigenstates of $\widetilde{H}$ can be found by acting
$\mathrm{e}^{\boldsymbol{\gamma}\cdot\mathbf{w}\cdot\boldsymbol{\gamma}^{\mathrm{T}}}$
on a computational basis state $\left|\mathbf{x}\right\rangle$ for
$\mathbf{x}\in\\{0,1\\}^{\times n}$. The associated eigenvalue is
$\displaystyle E_{\mathbf{x}}=2\sum_{j=1}^{n}(-1)^{x_{j}}\lambda_{j}$ (19)
Therefore, systems of the form in Eq. (11) may be considered _exactly
solvable_ classically, since their exact diagonalization is reduced to exact
diagonalization on a _poly_ $(n)$-sized matrix $\mathbf{h}$.
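Numerically, the block-diagonalization in Eq. (14) follows from a real Schur
decomposition. The sketch below is our own and assumes SciPy, a real
antisymmetric input, and no zero modes, so that the $2\times 2$ blocks align
at even offsets; the brute-force spectrum of Eq. (19) is exponential in $n$
and included for illustration only.

```python
import numpy as np
from itertools import product
from scipy.linalg import schur

def mode_energies(h):
    """Return W and the lambda_j of Eq. (14) for real antisymmetric h."""
    T, W = schur(h, output="real")       # W^T h W is block diagonal
    lams = np.abs(np.diag(T, k=1)[::2])  # one entry per 2x2 block
    return W, lams

def spectrum(lams):
    """All eigenvalues of H-tilde via the sign patterns in Eq. (19)."""
    return sorted(2 * sum(s * l for s, l in zip(signs, lams))
                  for signs in product((1, -1), repeat=len(lams)))
```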
## 3 Fundamental Theorem
As mentioned previously, we seek to characterize the full set of Jordan-
Wigner-like transformations, generalizing Eq. (10). To be more precise, we ask
for the conditions under which there exists a mapping
$\phi:V\mapsto\widetilde{V}^{\times 2}$, for some set $\widetilde{V}$ (the
fermion modes), effecting
$\displaystyle\sigma^{\boldsymbol{j}}\mapsto
i\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})}\mathrm{,}$
(20)
for $\phi_{1}(\boldsymbol{j})$, $\phi_{2}(\boldsymbol{j})\in\widetilde{V}$,
and such that
$\displaystyle[\\![\sigma^{\boldsymbol{j}},\sigma^{\boldsymbol{k}}]\\!]=[\\![\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})},\gamma_{\phi_{1}(\boldsymbol{k})}\gamma_{\phi_{2}(\boldsymbol{k})}]\\!]$
(21)
for all pairs, $\boldsymbol{j}$ and $\boldsymbol{k}$. Such a mapping induces a
term-by-term _free-fermionization_ of the Hamiltonian (1) to one of the form
(11) such that
$\displaystyle G(H)\simeq G(\widetilde{H})\mathrm{.}$ (22)
Again, $G(H)$ is the frustration graph of $H$.
From the canonical anticommutation relations, Eq. (9), and the distribution
rule Eq. (4), we see that scalar commutators between quadratic Majorana-
fermion operators are given by
$\displaystyle[\\![\gamma_{\mu}\gamma_{\nu},\gamma_{\alpha}\gamma_{\beta}]\\!]=(-1)^{|(\mu,\nu)\cap(\alpha,\beta)|}$
(23)
Eqs. (22) and (23) can be restated graph theoretically as saying that $G(H)$
is the graph whose vertex set is the edge set of the fermion hopping graph
$R$, and vertices of $G(H)$ are neighboring if and only if the associated
edges of $R$ share exactly one vertex. Such a graph is called the _line graph_
of $R$.
###### Definition 1 (Line Graphs).
The line graph $L(R)\equiv(E,F)$ of a root graph $R\equiv(V,E)$ is the graph
whose vertex set is the edge set of $R$ and whose edge set is given by
$\displaystyle F\equiv\\{(e_{1},e_{2})\ |\ e_{1},e_{2}\in E,\ |e_{1}\cap
e_{2}|=1\\}$ (24)
That is, vertices are neighboring in $L(R)$ if the corresponding edges in $R$
are incident at a vertex.
Notice that $L(R)$ is connected if and only if $R$ is. With these definitions
in hand, our first main result can be stated simply as
###### Theorem 1 (Existence of free-fermion solution).
An injective map $\phi$ as defined in Eq. (20) and Eq. (21) exists for the
Hamiltonian $H$ as defined in Eq. (1) if and only if there exists a root graph
$R$ such that
$\displaystyle G(H)\simeq L(R),$ (25)
where R is the hopping graph of the free-fermion solution.
| With Twins | Without twins
---|---|---
Forbidden Graphs | (a) | (d)
Twin-Free, $L(R)$ | (b) |
Root $R$ | (c) | (e)
Table 2: A graph is a line graph if and only if it does not contain any of the
nine forbidden graphs in (a), (d), and (e) as an induced subgraph [47]. Of
these nine graphs, the three in (a) contain twin vertices, highlighted. If
these three graphs are induced subgraphs of a frustration graph such that
these highlighted vertices are twins in the larger graph, then the twins can
be removed by restricting onto a fixed mutual eigenspace of their products,
which correspond to constants of motion of the Hamiltonian. (b) The twin-free
restrictions of the graphs in (a), with all but one highlighted vertex from
(a) removed. These graphs are the line graphs of the graphs in (c). In Ref.
[48], it was shown that only five graphs contain the forbidden subgraphs in
(e) and none of those in (a) or (d). Finally, this set was further refined in
Ref. [49] to a set of three forbidden subgraphs for 3-connected line graphs of
minimum degree at least seven, though we do not display these graphs here.
###### Proof.
The proof can be found in Section 6.1. ∎
The intuition for this result is that the root graph $R$ is the graph where
the vertices are fermions and the edges are the bilinears that appear in the
Hamiltonian $H$. The result reveals a correspondence between a
characterization of line graphs and a characterization of free-fermion spin
models, as not every graph can be expressed as the line graph of some root. We
must however note that, strictly speaking, the existence of this mapping alone
does not guarantee a free-fermion solution, since the “Lie-homomorphism"
constraint, Eq. (21), does not fix the _sign_ of the terms in the free-fermion
Hamiltonian. Choosing a sign for each term is equivalent to _orienting_ the
root graph, since multiplying by a sign is equivalent to making the exchange
$\phi_{1}(\boldsymbol{j})\leftrightarrow\phi_{2}(\boldsymbol{j})$ in Eq. (20).
Different orientations may not faithfully reproduce the properties of $H$, but
we will see that such an orientation can always be chosen. The line graph
condition in Eq. (25) is therefore necessary and sufficient for a free-fermion
solution to exist. Before turning to further implications of Theorem 1, let us
first detail some properties of line graphs.
Line graphs are closely related to so-called _intersection graphs_ ,
originally studied by Erdős [50] and others (see, for example, Ref. [51]). An
intersection graph $G\equiv(V,E)$ is a graph whose vertex set, $V\subseteq
2^{S}$, consists of distinct subsets of some set $S$. Two vertices, $u$ and
$v$, are neighboring in $G$ if their intersection is nonempty ($|v\cap w|\neq
0$). A line graph is a special case of an intersection graph where every
vertex corresponds to a subset of size at most two. When we specify that
$\phi$ be injective, we are requiring that no distinct vertices have identical
subsets, and our definition of a free-fermion solution Eq. (25) identically
coincides with that of a line graph. Since terms in $H$ can thus intersect by
at most one Majorana mode, collections of terms containing a given mode are
all neighboring in $G(H)$, so this mode corresponds to a _clique_ , or
complete subgraph, of $G(H)$. This characterization of line graphs was first
given by Krausz [52] and bears stating formally.
###### Definition 2 (Krausz decomposition of line graphs).
Given a line graph $G\simeq L(R)$, there exists a partition of the edges of
$G$ into cliques such that every vertex appears in at most two cliques.
Cliques in $G(H)$ can therefore be identified with the individual Majorana
modes in a free-fermion solution of $H$. If a term belongs to only one clique,
we can ensure our resulting fermion Hamiltonian is quadratic by taking the
second clique for that term to be a clique of no edges, as we will see in
several examples below. The existence of a Krausz decomposition is utilized in
a linear-time algorithm to recognize line graphs by Roussopoulos [38], though
the earliest such algorithm for line-graph recognition was given by Lehot
[39]. A dynamic solution was later given by Degiorgi and Simon [40]. These
algorithms are optimal and constructive, and so can be applied to a given spin
model to provide an exact free-fermion solution.
We next turn to the _hereditary property_ of line graphs, for which we require
the following definition:
###### Definition 3 (Induced subgraphs).
Given a graph $G\equiv(V,E)$, an induced subgraph of $G$ by a subset of
vertices $V^{\prime}\subset V$, is a graph
$G[V^{\prime}]\equiv(V^{\prime},E^{\prime})$ such that for any pair of
vertices $u$, $v\in V^{\prime}$, $(u,v)\in E^{\prime}$ if and only if
$(u,v)\in E$ in $G$.
An induced subgraph of $G$ can be constructed by removing the subset of
vertices $V/V^{\prime}$ from $G$, together with all edges incident to any
vertex in this subset. Line graphs are a _hereditary class_ of graphs in the
sense that any induced subgraph of a line graph is also a line graph. This
coincides with our intuition that removing a term from a free-fermion
Hamiltonian does not change its free-fermion solvability. Conversely,
Hamiltonians for which no free-fermion solution exists are accompanied by
“pathological" structures in their frustration graphs, which obstruct a free-
fermion description no matter how we try to impose one. This is captured by
the forbidden subgraph characterization of Beineke [47] and later refined by
others [48, 49].
###### Corollary 1.1 (Beineke no-go theorem).
A given spin Hamiltonian $H$ has a free-fermion solution if and only if its
frustration graph $G(H)$ does not contain any of nine forbidden subgraphs,
shown in Table 2 (a), (d), and (e), as an induced subgraph.
These forbidden subgraphs can be interpreted as collections of
“frustrating" terms. At least one of the terms must be assigned to a fermion
interaction in every possible assignment from Pauli operators to fermions.
Correspondingly, ignoring these terms by removing their corresponding vertices
from the frustration graph may remove a forbidden subgraph and cause the
Hamiltonian to become solvable. The terms which we need to remove in this way
need not be unique. In the next section, we discuss one such strategy for
removing vertices such that our solution will remain faithful to the original
spin Hamiltonian by exploiting symmetries.
## 4 Symmetries
An important class of symmetries involves twin vertices in the frustration
graph.
###### Definition 4 (Twin Vertices).
Given a graph $G\equiv(V,E)$, vertices $u$, $v\in V$ are twin vertices if, for
every vertex $w\in V$, $(u,w)\in E$ if and only if $(v,w)\in E$.
Twin vertices have exactly the same neighborhood, and are thus never neighbors
in a frustration graph, which contains no self edges due to the fact that
every operator commutes with itself. Sets of twin vertices are the subject of
our first lemma.
###### Lemma 1 (Twin vertices are constants of motion).
Suppose a pair of terms $\sigma^{\boldsymbol{j}}$ and
$\sigma^{\boldsymbol{k}}$ in $H$ correspond to twin vertices in $G(H)$, then
the product $\sigma^{\boldsymbol{j}}\sigma^{\boldsymbol{k}}$ is a nontrivial
Pauli operator commuting with every term in the Hamiltonian. Distinct such
products therefore commute with each other.
###### Proof.
The statement follows straightforwardly from the definition of twin vertices:
every term in $H$ (including $\sigma^{\boldsymbol{j}}$ and
$\sigma^{\boldsymbol{k}}$ themselves) either commutes with both
$\sigma^{\boldsymbol{j}}$ and $\sigma^{\boldsymbol{k}}$ or anticommutes with
both of these operators. Terms in $H$ therefore always commute with the
product $\sigma^{\boldsymbol{j}}\sigma^{\boldsymbol{k}}$. This product is
furthermore a nontrivial Pauli operator, for if
$\sigma^{\boldsymbol{j}}\sigma^{\boldsymbol{k}}=I$, then
$\boldsymbol{j}=\boldsymbol{k}$, and we would not identify these Paulis with
distinct vertices in $G(H)$. Constants of motion generated this way must
commute with one another, since they commute with every term in the
Hamiltonian and are themselves products of Hamiltonian terms. They therefore
generate an abelian subgroup of the symmetry group of the Hamiltonian. ∎
Let the symmetry subgroup generated by products of twin vertices in this way
be denoted $\mathcal{S}$. We can leverage these symmetries to remove twin
vertices from the frustration graph $G(H)$. To do this, choose a minimal
generating set $\\{\sigma^{\boldsymbol{s}}\\}$ of Pauli operators for
$\mathcal{S}$ and choose a $\pm 1$ eigenspace for each. Let
$(-1)^{x_{\boldsymbol{s}}}$ be the eigenvalue associated to the generator
$\sigma^{\boldsymbol{s}}\in\mathcal{S}$, for $x_{\boldsymbol{s}}\in\\{0,1\\}$.
We restrict to the subspace defined as the mutual $+1$ eigenspace of the
stabilizer group
$\displaystyle\mathcal{S}_{\boldsymbol{x}}=\langle(-1)^{x_{\boldsymbol{s}}}\sigma^{\boldsymbol{s}}\rangle$
(26)
For a pair of twin vertices corresponding to Hamiltonian terms
$\sigma^{\boldsymbol{j}}$ and $\sigma^{\boldsymbol{k}}$, we let
$\displaystyle\sigma^{\boldsymbol{j}}\sigma^{\boldsymbol{k}}\equiv(-1)^{d_{\boldsymbol{j},\boldsymbol{k}}}\left[\prod_{\boldsymbol{s}\in
S_{\boldsymbol{j},\boldsymbol{k}}}(-1)^{x_{\boldsymbol{s}}}\sigma^{\boldsymbol{s}}\right]$
(27)
where $d_{\boldsymbol{j},\boldsymbol{k}}\in\\{0,1\\}$ specifies the
appropriate sign factor, and $S_{\boldsymbol{j},\boldsymbol{k}}$ is the subset
of generators of $\mathcal{S}$ such that
$\displaystyle\bigoplus_{\boldsymbol{s}\in
S_{\boldsymbol{j},\boldsymbol{k}}}\boldsymbol{s}=\boldsymbol{j}\oplus\boldsymbol{k}$
(28)
where “$\oplus$” denotes addition modulo 2 here. In the stabilizer subspace of
$\mathcal{S}_{\boldsymbol{x}}$, we can make the substitution
$\displaystyle\sigma^{\boldsymbol{k}}\rightarrow(-1)^{d_{\boldsymbol{j},\boldsymbol{k}}}\sigma^{\boldsymbol{j}}$
(29)
effectively removing the vertex $\boldsymbol{k}$ from $G(H)$.
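Twin vertices are exactly the vertices with identical neighborhoods, so
detecting them takes only a few lines; a short sketch:

```python
from collections import defaultdict

def twin_classes(G):
    """Group vertices of a networkx graph G by neighborhood. Classes of
    size > 1 are sets of mutual twins; by Lemma 1, pairwise products of
    the corresponding Pauli terms are constants of motion."""
    by_nbhd = defaultdict(list)
    for v in G:
        by_nbhd[frozenset(G[v])].append(v)
    return [vs for vs in by_nbhd.values() if len(vs) > 1]
```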
Twin vertices capture the cases where a free-fermion solution for $H$ exists,
but is necessarily non-injective. Indeed, note that we are careful in our
statement of Theorem 1 to specify that our condition Eq. (25) is necessary and
sufficient when $\phi$ is injective. If we instead relax our requirement that
vertices of a line graph correspond to distinct subsets of size two in our
earlier discussion of intersection graphs, then we are allowing for line
graphs of graphs with multiple edges, or _multigraphs_. However, our
definition of $G(\widetilde{H})$ will differ from the line graph of a
multigraph for pairs of vertices corresponding to identical edges, which must
be adjacent in the line graph of a multigraph, but will be nonadjacent in
$G(\widetilde{H})$ from Eq. (23). Such vertices will nevertheless be twin
vertices in $G(\widetilde{H})$ due to the graph-isomorphism constraint, Eq.
(22). Therefore, if no _injective_ mapping $\phi$ satisfying Theorem 1 exists,
a many-to-one free-fermion solution exists only when twin vertices are
present. Lemma 1 allows us to deal with this non-injective case by removing
twin vertices until we obtain the line graph of a simple graph when possible.
The particular way we choose to perform this removal cannot affect the overall
solvability of the model, since the frustration graph with all twin vertices
removed is an induced subgraph of any frustration graph with only a proper
subset of such vertices removed. A model which is solvable by free fermions
this way is therefore solvable in all of its symmetry sectors.
Finally, we can see when a non-injective free-fermion solution may be possible
from the forbidden subgraph characterization, Corollary 1.1. As seen in Table
2, some of the forbidden subgraphs shown in (a) themselves contain twin
vertices. If these forbidden subgraphs are connected to the global frustration
graph such that their twin vertices remain twins in the larger graph, then
they may be removed, possibly allowing for a solution of the full Hamiltonian
by free fermions. When the twins are removed from the forbidden subgraphs,
they become line graphs as shown in Table 2 (b) and (c). An example of a model
which can be solved this way is the Heisenberg-Ising model introduced in Ref.
[1].
We next proceed to identify the remaining Pauli symmetries for a Hamiltonian
satisfying Theorem 1. For this, we invoke the natural partition of the Pauli
group $\mathcal{P}$ into the subgroup $\mathcal{P}_{H}$, again defined as that
generated by Hamiltonian terms
$\\{\sigma^{\boldsymbol{j}}\\}_{\boldsymbol{j}\in V}$, and the Pauli operators
outside this subgroup, $\mathcal{P}_{\perp}\equiv\mathcal{P}/\mathcal{P}_{H}$.
Note that the latter set does not form a group in general since, for example,
single-qubit Paulis may lie outside of $\mathcal{P}_{H}$ yet multiply together
to give operators in $\mathcal{P}_{H}$. A subgroup of the symmetries of the
Hamiltonian is the center $\mathcal{Z}(\mathcal{P}_{H})$ of $\mathcal{P}_{H}$,
the set of $n$-qubit Pauli operators in $\mathcal{P}_{H}$ which commute with
every element of $\mathcal{P}_{H}$ and therefore with every term in the
Hamiltonian. To characterize this group, we need two more definitions.
###### Definition 5 (Cycle subgroup).
A cycle of a graph $G\equiv(V,E)$ is a subset of its edges, $Y\subseteq E$,
such that every vertex has an even number of incident edges from the
subset. If a Pauli Hamiltonian satisfies Eq. (25) for some root graph $R$, we
define its cycle subgroup $Z_{H}\subseteq\mathcal{P}_{H}$ as the abelian Pauli
subgroup generated by the cycles $\\{Y_{i}\\}_{i}$ of $R$,
$\displaystyle
Z_{H}=\bigl{\langle}\Pi_{\\{\boldsymbol{j}|\phi(\boldsymbol{j})\in
Y_{i}\\}}\sigma^{\boldsymbol{j}}\bigr{\rangle}_{i}.$ (30)
Since
$\displaystyle\prod_{\\{\boldsymbol{j}|\phi(\boldsymbol{j})\in
Y_{i}\\}}\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})}=\pm
I$ (31)
we have, from Eq. (21) and the definition of $\phi$, that the elements of
$Z_{H}$ commute with every term in the Hamiltonian and thus with each other
(since they are products of Hamiltonian terms). That is,
$Z_{H}\subseteq\mathcal{Z}(\mathcal{P}_{H})$. Notice that the definition of
the generators for $Z_{H}$ in Eq. (30) may sometimes yield operators
proportional to identity.
A familiar symmetry of free-fermion Hamiltonians is the _parity operator_
$\displaystyle P\equiv
i^{\frac{1}{2}|\widetilde{V}|(|\widetilde{V}|-1)}\prod_{k\in\widetilde{V}}\gamma_{k}$
(32)
which commutes with every term in the Hamiltonian since each term is quadratic
in the Majorana modes. The phase factor is chosen such that $P$ is Hermitian.
Here, we define this operator in terms of Pauli Hamiltonian terms through a
combinatorial structure known as a T-join.
###### Definition 6 (Parity operator).
A T-join of a graph $G\equiv(V,E)$ is a subset of edges, $T\subseteq E$, such
that an odd number of edges from $T$ is incident to every vertex in $V$. If a
Pauli Hamiltonian satisfies Eq. (25) for some root graph $R$ such that the
number of vertices in $R$ is even, we define the parity operator as
$\displaystyle P\equiv i^{d}\prod_{\boldsymbol{j}\in
T}\sigma^{\boldsymbol{j}}$ (33)
where the product is taken over a T-join of $R$, and $d\in\\{0,1,2,3\\}$
specifies the phase necessary to agree with Eq. (32).
Here we have
$\displaystyle i^{d}\prod_{\boldsymbol{j}\in
T}i\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})}$
$\displaystyle=i^{\frac{1}{2}|\widetilde{V}|(|\widetilde{V}|-1)}\prod_{\mu\in\widetilde{V}}\gamma_{\mu}=P\mathrm{,}$
(34)
since every fermion mode will be hit an odd number of times in the T-join.
Unlike with the cycle subgroup, $P$ is never proportional to the identity in
the fermion description, though it may still be proportional to the identity
in the Pauli description (up to stabilizer equivalences). In this case, only
solutions for the free-fermion Hamiltonian in a fixed-parity subspace will be
physical. We will see several examples of this in the next section.
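Both structures can be extracted from the root graph with standard tools. The
sketch below uses networkx's `cycle_basis` for the generators in Eq. (30) and
builds a (not necessarily minimum) T-join with $T=\widetilde{V}$ by pairing
vertices and taking symmetric differences of shortest-path edge sets; this
simple construction is our own and is valid when $R$ is connected with an
even number of vertices.

```python
import networkx as nx

def cycle_generators(R):
    """One closed edge list per basis cycle of R, as in Eq. (30)."""
    return [list(zip(c, c[1:] + c[:1])) for c in nx.cycle_basis(R)]

def all_vertex_t_join(R):
    """A T-join with T = V(R): every vertex meets an odd number of edges."""
    nodes = list(R)
    assert len(nodes) % 2 == 0, "the parity operator needs an even mode count"
    join = set()
    for u, v in zip(nodes[::2], nodes[1::2]):
        path = nx.shortest_path(R, u, v)
        for e in zip(path, path[1:]):
            join ^= {frozenset(e)}  # XOR: endpoints gain odd degree
    return join
```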
When no T-join exists, we cannot form $P$ as a product of Hamiltonian terms.
In fact, $P\in\mathcal{Z}(\mathcal{P}_{H})$ only when $|\widetilde{V}|$ is
even. Now with these definitions in hand, we are ready to state our second
theorem.
###### Theorem 2 (Symmetries are cycles and parity).
Given a Hamiltonian satisfying Eq. (25) such that the number of vertices
$|\widetilde{V}|$ in the root graph is odd, then we have
$\displaystyle\mathcal{Z}(\mathcal{P}_{H})=Z_{H}.$ (35)
If the number of vertices in the root graph is even, then we have
$\displaystyle\mathcal{Z}(\mathcal{P}_{H})=\left\langle Z_{H},P\right\rangle.$
(36)
###### Proof.
The proof can be found in Section 6.2. ∎
The Pauli symmetries of the Hamiltonian outside of
$\mathcal{Z}(\mathcal{P}_{H})$ may be thought of as “logical" or “gauge"
qubits, and this characterization allows for a simple accounting of these
qubits. Suppose we express a spin Hamiltonian $H$ on $n$ qubits as a free-
fermion Hamiltonian on the hopping graph $R=(\widetilde{V},\widetilde{E})$,
and let $|\mathcal{Z}(\mathcal{P}_{H})|$ be the number of independent
generators of $\mathcal{Z}(\mathcal{P}_{H})$. The number of logical qubits
$n_{L}$ of the model is given by
$\displaystyle
n_{L}\equiv\begin{cases}n-\left[\frac{1}{2}(|\widetilde{V}|-1)+|\mathcal{Z}(\mathcal{P}_{H})|\right]&|\widetilde{V}|\
\mathrm{odd}\\\
n-\left[\frac{1}{2}(|\widetilde{V}|-2)+|\mathcal{Z}(\mathcal{P}_{H})|\right]&|\widetilde{V}|\
\mathrm{even}\end{cases}.$ (37)
This follows from the fact that the $\mathds{F}_{2}$-rank of the adjacency
matrix of $G(H)$ is twice the number of qubits spanned by the fermionic
degrees of freedom in the model, and also the number of vertices of the root
graph $R$ up to a constant shift.
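Since Eq. (37) is pure bookkeeping, it is convenient to evaluate it programmatically. Below is a minimal Python sketch; the function and argument names are our own, not notation from the text.

```python
def logical_qubits(n, n_root_vertices, n_center_generators):
    """Evaluate Eq. (37): logical qubits from physical qubits n, root-graph
    vertices |V~|, and independent generators of Z(P_H)."""
    shift = 1 if n_root_vertices % 2 == 1 else 2
    return n - ((n_root_vertices - shift) // 2 + n_center_generators)

# Open 1-d chain of Section 5.2: |V~| = 2n and |Z(P_H)| = 1 give n_L = 0 (Eq. (65)).
assert logical_qubits(10, 20, 1) == 0
```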
[Table 3 graphics omitted: three root graphs $R$ (left column) paired with their line graphs $L(R)$ (right column); the images are not reproduced in this text version.]
Table 3: The Whitney isomorphism theorem [45] guarantees that the edge
automorphisms exchanging $e$ and $e^{\prime}$ in the graphs $R$ in the left
column, or corresponding vertices in their line graphs on the right, which
cannot be realized by any vertex automorphism of $R$, are the only such cases.
Finally, we note that there may be additional symmetries, such as translation
invariance, if the coefficients $h_{\boldsymbol{j}}$ themselves satisfy a
symmetry. Our characterization will allow us to say something about this
situation when the associated symmetry transformation is a Clifford
operator—that is, a unitary operator in the normalizer of the Pauli
group—commuting with the Pauli symmetries in $\mathcal{Z}(\mathcal{P}_{H})$,
such as, e.g., a spatial translation. The following statement follows from a
theorem by Whitney [45] (and extended to infinite graphs in [53]).
###### Corollary 1.2 (Clifford Symmetries and Whitney Isomorphism).
Let
$\widetilde{H}=i\boldsymbol{\gamma}\cdot\mathbf{h}\cdot\boldsymbol{\gamma}^{\mathrm{T}}$
be a free-fermion Hamiltonian with single-particle Hamiltonian $\mathbf{h}$ in
a fixed symmetry sector of $\mathcal{Z}(\mathcal{P}_{H})$. Then any unitary
Clifford symmetry $U$ such that $U^{\dagger}\widetilde{H}U=\widetilde{H}$
induces a signed permutation symmetry $\mathbf{u}$ such that
$\mathbf{h}=\mathbf{u}^{\mathrm{T}}\cdot\mathbf{h}\cdot\mathbf{u}$, except for
when $U$ induces one of the three edge isomorphisms shown in Table 3.
###### Proof.
This follows from the Whitney isomorphism theorem: except for the three cases
shown in Table 3, any adjacency-preserving permutation of the vertices of
$G(\widetilde{H})$ is induced by an adjacency-preserving permutation of the
vertices of $R$. A Clifford symmetry $U$ acts as a signed permutation of the
Hamiltonian terms which preserves $\widetilde{H}$. Suppose the associated
unsigned permutation is not one of the exceptional cases, and so is induced by
a permutation $\pi$ on the vertices of $R$. This gives
$\displaystyle\widetilde{H}$ $\displaystyle=U^{\dagger}\widetilde{H}U$ (38)
$\displaystyle=i\sum_{(j,k)\in\widetilde{E}}h_{jk}\left(U^{\dagger}\gamma_{j}U\right)\left(U^{\dagger}\gamma_{k}U\right)$
(39)
$\displaystyle=i\sum_{(j,k)\in\widetilde{E}}(-1)^{x_{j}+x_{k}}h_{jk}\gamma_{\pi(j)}\gamma_{\pi(k)}$
(40)
where $x_{j}\in\\{0,1\\}$ designates the sign associated to the permutation of
vertex $j\in\widetilde{V}$. By unitarity, this sign must depend on $j$ alone,
since $U^{\dagger}\gamma_{j}U$ can only depend on $j$. Let $\mathbf{u}$ be a
single-particle transition matrix defined as
$\displaystyle u_{jk}=(-1)^{x_{j}}\delta_{k\pi(j)}.$ (41)
Then we can reinterpret Eq. (40) in the single-particle picture as
$\displaystyle
i\boldsymbol{\gamma}\cdot\mathbf{h}\cdot\boldsymbol{\gamma}^{\mathrm{T}}$
$\displaystyle=i\boldsymbol{\gamma}\cdot\left(\mathbf{u}^{\mathrm{T}}\cdot\mathbf{h}\cdot\mathbf{u}\right)\cdot\boldsymbol{\gamma}^{\mathrm{T}}.$
(42)
By linear independence, Eq. (42) therefore implies
$\displaystyle\mathbf{h}=\mathbf{u}^{\mathrm{T}}\cdot\mathbf{h}\cdot\mathbf{u}$
(43)
and the claim follows. ∎
See section 5.1 for a simple example of an exceptional Hamiltonian realizing a
frustration graph shown in Table 3. We now complete our characterization of
free-fermion solutions by choosing an orientation for every edge in the root
graph over a restricted subspace determined by the constants of motion.
### 4.1 Orientation and Full Solution
As discussed previously, the Lie-homomorphism condition Eq. (21) does not
fully constrain the free-fermion solution of a given Pauli Hamiltonian. This
is because we are free to choose a direction to each edge in the root graph by
exchanging $\phi_{1}(\boldsymbol{j})\leftrightarrow\phi_{2}(\boldsymbol{j})$,
which is equivalent to changing the sign of the term
$i\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})}$ in
$\widetilde{H}$ corresponding to $\sigma^{\boldsymbol{j}}$ in $H$. A related
ambiguity corresponds to the cycle symmetry subgroup $Z_{H}$: we are free to
choose a symmetry sector over which to solve the Hamiltonian $H$ by choosing a
mutual $\pm 1$-eigenspace of independent nontrivial generators of this group.
It will turn out that both ambiguities are resolved simultaneously.
First suppose we have a Hamiltonian $H$ satisfying Eq. (25) for some root
graph $R\equiv(\widetilde{V},\widetilde{E})$. Construct a spanning tree
$\Upsilon\equiv(\widetilde{V},\widetilde{E}^{\prime})$ of $R$, defined as:
###### Definition 7 (Spanning Tree).
Given a connected graph $G\equiv(V,E)$, a spanning tree
$\Upsilon\equiv(V,E^{\prime})\subseteq G$ is a connected subgraph of $G$ such
that $E^{\prime}$ contains no cycles.
This can be performed in linear time in $|\widetilde{V}|$. Designate a
particular vertex $v\in\widetilde{V}$ as the root of this tree. Each vertex
$u\in\widetilde{V}$ has a unique path $p(u,v)\subseteq\widetilde{E}$ in
$\Upsilon$ to the root, the path $p(v,v)$ being empty. Choose an arbitrary
direction for each edge in $\widetilde{E}^{\prime}$ (we will see shortly to
what extent this choice is important).
Our choice of spanning tree determines a basis of _fundamental cycles_ for the
binary cycle space of $R$ and thus a generating set of Paulis for the cycle
subgroup $Z_{H}$. To see this, note that for each edge
$\phi(\boldsymbol{j})\in\widetilde{E}/\widetilde{E}^{\prime}$, there is a
unique cycle of $R$ given by
$\displaystyle Y_{\boldsymbol{j}}\equiv p[\phi_{1}(\boldsymbol{j}),v]\cup
p[\phi_{2}(\boldsymbol{j}),v]\cup\phi(\boldsymbol{j}).$ (44)
Let $\sigma^{\boldsymbol{y}(\boldsymbol{j})}$ be the cycle subgroup generator
associated to $Y_{\boldsymbol{j}}$, defined by
$\displaystyle\boldsymbol{y}(\boldsymbol{j})=\bigoplus_{\\{\boldsymbol{z}|\phi(\boldsymbol{z})\in
Y_{\boldsymbol{j}}\\}}\boldsymbol{z}$ (45)
such that
$\displaystyle\sigma^{\boldsymbol{y}(\boldsymbol{j})}=i^{d}\prod_{\\{\boldsymbol{z}|\phi(\boldsymbol{z})\in
Y_{\boldsymbol{j}}\\}}\sigma^{\boldsymbol{z}}$ (46)
where $d\in\\{0,1,2,3\\}$ again designates the appropriate phase. The set of
such $Y_{\boldsymbol{j}}$ contains $|\widetilde{E}|-|\widetilde{V}|+1$ cycles
and forms an independent generating set for all the cycles of $R$ under
symmetric difference. The corresponding set of
$\sigma^{\boldsymbol{y}(\boldsymbol{j})}$ is therefore an independent
generating set of the cycle subgroup up to signs, since individual Pauli
operators either commute or anticommute and square to the identity.
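A fundamental cycle basis of this kind is readily computed with standard tools. As a hedged sketch (assuming the `networkx` library is available; the example root graph is our own):

```python
import networkx as nx

# Example root graph R: a 4-cycle with one chord (4 vertices, 5 edges).
R = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])

basis = nx.cycle_basis(R)   # one fundamental cycle per non-tree edge
assert len(basis) == R.number_of_edges() - R.number_of_nodes() + 1   # = 2 here
print(basis)                # cycles as vertex lists, e.g. [[1, 2, 0], [3, 0, 2]]
```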
In a similar fashion as with twin-vertex symmetries, we restrict to a mutual
$\pm 1$ eigenspace of the cycle-subgroup generators, designated by a binary
string $\boldsymbol{x}\in\\{0,1\\}^{\times|\widetilde{E}|-|\widetilde{V}|+1}$
over the $\boldsymbol{j}$ such that
$\phi(\boldsymbol{j})\in\widetilde{E}/\widetilde{E}^{\prime}$. That is, we
restrict to the mutual $+1$ eigenspace of the stabilizer group
$\displaystyle
Z_{H,\boldsymbol{x}}\equiv\bigl{\langle}(-1)^{x_{\boldsymbol{j}}}\sigma^{\boldsymbol{y}(\boldsymbol{j})}\bigr{\rangle}.$
(47)
If Eq. (45) gives $\boldsymbol{y}(\boldsymbol{j})=\mathbf{0}$ for any
$\boldsymbol{j}$, then we take the corresponding $x_{\boldsymbol{j}}=0$. We
then simply choose the direction for the edge $\phi(\boldsymbol{j})$ such that
$\displaystyle(-1)^{x_{\boldsymbol{j}}}i^{d}\left[\prod_{\\{\boldsymbol{z}|\phi(\boldsymbol{z})\in
Y_{\boldsymbol{j}}\\}}i\gamma_{\phi_{1}(\boldsymbol{z})}\gamma_{\phi_{2}(\boldsymbol{z})}\right]=+I$
(48)
where $d$ is as defined in Eq. (46). This ensures that the product of Majorana
hopping terms around a fundamental cycle $Y_{\boldsymbol{j}}$ agrees with the
corresponding Pauli product over the restricted subspace (i.e. up to
equivalencies by stabilizers in the group $Z_{H,\boldsymbol{x}}$). By the Lie-
homomorphism constraint Eq. (21), all products of Majorana hopping terms
around a cycle of $R$ therefore agree with their corresponding Pauli products
over this subspace, and so the multiplication relations of the Paulis are
respected by their associated fermion hopping terms up to stabilizer
equivalencies. Since we have exactly as many elements $Y_{\boldsymbol{j}}$ in
our fundamental cycle basis as undirected edges $\phi(\boldsymbol{j})$, such
an orientation can always be chosen.
$\sigma^{i}_{1}\backslash\sigma^{j}_{2}$ | $I$ | $X$ | $Y$ | $Z$
---|---|---|---|---
$I$ | $P\equiv i\gamma_{1}\gamma_{2}\gamma_{3}\gamma_{4}\gamma_{5}\gamma_{6}$ | $i\gamma_{3}\gamma_{5}$ | $i\gamma_{2}\gamma_{5}$ | $i\gamma_{2}\gamma_{3}$
$X$ | $i\gamma_{4}\gamma_{6}$ | $i\gamma_{1}\gamma_{2}$ | $-i\gamma_{1}\gamma_{3}$ | $i\gamma_{1}\gamma_{5}$
$Y$ | $-i\gamma_{1}\gamma_{6}$ | $-i\gamma_{2}\gamma_{4}$ | $i\gamma_{3}\gamma_{4}$ | $i\gamma_{4}\gamma_{5}$
$Z$ | $-i\gamma_{1}\gamma_{4}$ | $i\gamma_{2}\gamma_{6}$ | $-i\gamma_{3}\gamma_{6}$ | $i\gamma_{5}\gamma_{6}$
[Table 4 right panel omitted: the 2-qubit frustration graph $L(K_{6})$; image not reproduced in this text version.]
Table 4: (Left) Fermionization of the two-qubit Pauli algebra
$\mathcal{P}_{2}=\\{\sigma^{i}_{1}\otimes\sigma^{j}_{2}\\}_{(i,j)\neq(0,0)}$
by six fermion modes. The graph isomorphism $G(\mathcal{P}_{2})\simeq
L(K_{6})$ reflects the Lie-algebra isomorphism between $\mathfrak{su}(4)$ and
$\mathfrak{spin}(6)$. Though the scalar-commutation relations are reproduced
by quadratics in $\\{\gamma_{\mu}\\}_{\mu=1}^{6}$, the one-sided
multiplication relations are only recovered upon projecting onto the $+1$
eigenspace of $P\equiv
i\gamma_{1}\gamma_{2}\gamma_{3}\gamma_{4}\gamma_{5}\gamma_{6}$ on the fermion
side of the mapping. (Right) The graph $L(K_{6})$, with vertices labeled by a
particular satisfying Pauli assignment. Edges are colored to identify the six
$K_{5}$ subgraphs in the Krausz decomposition of this graph, corresponding to
the six fermion modes. Each vertex belongs to exactly two such subgraphs, as
must be the case for a line graph. This graphical correspondence was first
observed in Ref. [46] in the language of the Dirac algebra.
Why are we free then, to choose an arbitrary sign for each free-fermion term
in our original spanning tree? This choice is actually equivalent to a choice
of signs on the definitions of the individual Majorana modes themselves and so
amounts to a choice of orientation for the coordinate basis in which we write
$\mathbf{h}$. To see this, choose a fiducial orientation for $R$ satisfying
Eq. (48), and suppose our particular free-fermion solution—not necessarily
oriented this way—corresponds to the mapping
$\displaystyle\sigma^{\boldsymbol{j}}\mapsto
i(-1)^{x_{\boldsymbol{j}}}\gamma_{\phi_{1}(\boldsymbol{j})}\gamma_{\phi_{2}(\boldsymbol{j})}$
(49)
for $\phi(\boldsymbol{j})\in\widetilde{E}^{\prime}$ and with
$x_{\boldsymbol{j}}\in\\{0,1\\}$ designating the edge-direction of
$\phi(\boldsymbol{j})$ relative to the fiducial orientation. We have
$\displaystyle(-1)^{x_{\boldsymbol{j}}}$
$\displaystyle=(-1)^{r_{\phi_{1}(\boldsymbol{j})}+r_{\phi_{2}(\boldsymbol{j})}}$
(50)
where
$\displaystyle r_{u}=\sum_{\\{\boldsymbol{k}|\phi(\boldsymbol{k})\in
p(u,v)\\}}x_{\boldsymbol{k}}.$ (51)
Since the symmetric difference of $p[\phi_{1}(\boldsymbol{j}),v]$ and
$p[\phi_{2}(\boldsymbol{j}),v]$ is the edge
$\phi(\boldsymbol{j})\in\widetilde{E}^{\prime}$, all sign factors on the right
side of Eq. (50) cancel except for $(-1)^{x_{\boldsymbol{j}}}$.
We can then absorb $(-1)^{r_{\phi_{1}(\boldsymbol{j})}}$ and
$(-1)^{r_{\phi_{2}(\boldsymbol{j})}}$ onto the definitions of
$\gamma_{\phi_{1}(\boldsymbol{j})}$ and $\gamma_{\phi_{2}(\boldsymbol{j})}$,
respectively. We furthermore see that imposing Eq. (48) gives an edge-
direction for $\phi(\boldsymbol{j})\in\widetilde{E}/\widetilde{E}^{\prime}$
that differs from that of the fiducial orientation by the associated sign
factor $(-1)^{r_{\phi_{1}(\boldsymbol{j})}+r_{\phi_{2}(\boldsymbol{j})}}$,
which remains consistent with a redefinition of the signs on the individual
Majorana modes. Letting $\mathbf{h}$ be the single-particle Hamiltonian for
the fiducial orientation, such a redefinition corresponds to conjugating
$\mathbf{h}$ by a $\pm 1$ diagonal matrix. As no scalar quantity of
$\mathbf{h}$ can depend on this choice, this redefinition corresponds to a
gauge freedom.
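This gauge freedom is easy to check numerically. The following sketch (assuming numpy; the random instance is purely illustrative) confirms that conjugating a real antisymmetric single-particle matrix $\mathbf{h}$ by a $\pm 1$ diagonal matrix leaves the single-particle spectrum unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((6, 6))
h = m - m.T                                   # real antisymmetric single-particle matrix
D = np.diag(rng.choice([-1.0, 1.0], size=6))  # re-sign individual Majorana modes

# i*h is Hermitian, so eigvalsh applies; the two spectra agree.
assert np.allclose(np.linalg.eigvalsh(1j * h), np.linalg.eigvalsh(1j * (D @ h @ D)))
```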
Proceeding this way, we can solve the effective Hamiltonian
$\displaystyle H_{\boldsymbol{x}}\equiv
H\prod_{\\{\boldsymbol{j}|\phi(\boldsymbol{j})\in\widetilde{E}/\widetilde{E}^{\prime}\\}}\left(\frac{I+(-1)^{x_{\boldsymbol{j}}}\sigma^{\boldsymbol{y}(\boldsymbol{j})}}{2}\right)$
(52)
sector-by-sector over each stabilizer eigenspace designated by
$\boldsymbol{x}$. If we also need to remove twin vertices from $G(H)$ before
it is a line graph, we project onto the mutual $+1$ eigenspace of the
stabilizer group $\mathcal{S}_{\boldsymbol{x}}$ defined previously in Section
4 as well. Finally, if the parity operator $P$ is trivial in the Pauli
description, then only a fixed-parity eigenspace in the fermion description
will be physical.
In the next section, we will see how known free-fermion solutions fit into
this characterization and demonstrate how our method can be used to find new
free-fermion solvable models, for which we give an example.
## 5 Examples
### 5.1 Small Systems
The frustration graph of the single-qubit Paulis $\\{X,Y,Z\\}$ is $K_{3}$, the
complete graph on three vertices. This graph is the line graph of not one, but
two non-isomorphic graphs: the so-called ‘claw’ graph $K_{1,3}$, and $K_{3}$
itself (see Table 1). By the Whitney isomorphism theorem [45], $K_{3}$ is the
only graph which is not the line graph of a unique graph. This ambiguity
results in the existence of two distinct free-fermion solutions of a single
qubit Hamiltonian, which we will hereafter refer to as “even" (labeled “0")
and “odd" (labeled “1") fermionizations
$\displaystyle\begin{cases}X_{0}=i\gamma_{0}\gamma_{1}&X_{1}=i\gamma_{2}\gamma_{3}\\\
Y_{0}=i\gamma_{1}\gamma_{2}&Y_{1}=i\gamma_{0}\gamma_{3}\\\
Z_{0}=i\gamma_{0}\gamma_{2}&Z_{1}=-i\gamma_{1}\gamma_{3}\end{cases}\mathrm{.}$
(53)
In the even fermionization, no T-join of the root graph $K_{3}$ exists since
there are only three fermion modes $\\{\gamma_{0},\gamma_{1},\gamma_{2}\\}$.
The orientation of the root graph is constrained by the identity $XYZ=iI$. In
the odd fermionization, there are four fermion modes
$\\{\gamma_{0},\gamma_{1},\gamma_{2},\gamma_{3}\\}$, and so a T-join does
exist for the root graph $K_{1,3}$. It is the set of all edges of this graph.
The parity operator is trivial in the Pauli description however, and so the
constraint $XYZ=iI$ is enforced by restricting to the $+1$ eigenspace of
$P\equiv-\gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}$ in the fermion description.
We are free to choose the orientation of the root graph $K_{1,3}$ however we
like in this case, since it contains no cycles, though this choice will affect
what we call the physical eigenspace of $P$. By virtue of the line-graph
construction and our choice of orientation, both fermionizations respect the
single-qubit Pauli multiplication relations, up to stabilizer equivalencies in
some cases.
We have made the choice to label the Paulis in the two fermionizations in a
compatible way, such that
$\displaystyle\sigma^{j}_{0}=P\sigma^{j}_{1}\mathrm{.}$ (54)
This gives
$\displaystyle[\sigma^{j}_{m},\sigma^{k}_{m}]=2i\varepsilon_{jk\ell}\sigma^{\ell}_{0}\mathrm{,}$
(55)
where $m\in\\{0,1\\}$ and $\varepsilon$ is the Levi-Civita tensor. Since an
even number of parity-operator factors appear on the left side of Eq. (55),
the commutator between two Paulis in either fermionization is always a Pauli
in the even fermionization. We can additionally write multiplication relations
between the two fermionizations concisely, as
$\displaystyle\sigma^{j}_{p}\sigma^{k}_{q}=\delta_{jk}P^{p\oplus
q}+i(1-\delta_{jk})\varepsilon_{jk\ell}\sigma_{p\oplus q}^{\ell}\mathrm{.}$
(56)
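Both fermionizations in Eq. (53), together with the relations Eqs. (54)-(55), can be verified numerically. The sketch below (assuming numpy) represents the four Majorana modes on two auxiliary register qubits via Jordan-Wigner; this register is purely a representation device, and is not the physical qubit being fermionized.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# Four Majorana modes represented on two auxiliary register qubits.
g = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)]
for a in range(4):
    for b in range(4):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a], 2 * (a == b) * np.eye(4))

# Even ("0") and odd ("1") fermionizations of Eq. (53), and the parity operator.
X0, Y0, Z0 = 1j * g[0] @ g[1], 1j * g[1] @ g[2], 1j * g[0] @ g[2]
X1, Y1, Z1 = 1j * g[2] @ g[3], 1j * g[0] @ g[3], -1j * g[1] @ g[3]
P = -g[0] @ g[1] @ g[2] @ g[3]

assert np.allclose(X0 @ Y0 @ Z0, 1j * np.eye(4))   # XYZ = iI holds exactly (even)
assert np.allclose(X1 @ Y1 @ Z1, 1j * P)           # ... and only up to P (odd)
assert np.allclose(X0, P @ X1)                     # Eq. (54)
assert np.allclose(X0 @ Y0 - Y0 @ X0, 2j * Z0)     # Eq. (55) with epsilon_xyz = 1
```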
Another exceptional situation arises for 2 qubits, for which the full
frustration graph is the line graph of $K_{6}$, again depicted graphically in
Table 1. This reveals a free-fermion solution for all 2-qubit Hamiltonians by
six fermion modes, listed explicitly in Table 4. We again choose our
orientation by picking a spanning tree of the root graph (for example all
terms containing the mode $\gamma_{5}$), choosing an arbitrary orientation on
this tree, and choosing the remaining orientations by enforcing the condition
in Eq. (48). Since $K_{6}$ has an even number of vertices, there exists a
T-join for this graph, e.g. the terms $\\{XX,YY,ZZ\\}$. The associated parity
operator is trivial however, as $(XX)(YY)(ZZ)=-I$, and so only the $+1$
eigenspace of $P$ in the fermion description will be physical. This solution
reflects the exceptional Lie algebra isomorphism,
$\mathfrak{su}(4)\simeq\mathfrak{spin}(6)$.
Finally, we give an example of a three-qubit Hamiltonian with an exceptional
_symmetry_ , namely
$\displaystyle H=XII+YII+ZXX+ZZZ\mathrm{.}$ (57)
This Hamiltonian has the frustration graph shown in the top right entry of
Table 3, and thus is an exceptional case to Corollary 1.2. A symmetry
transformation exchanging $e$ and $e^{\prime}$ for this Hamiltonian is the
Hadamard gate applied to the second and third qubits, which exchanges the
third and fourth terms, but cannot be realized as any permutation of the
individual Majorana modes in its free-fermion description.
Figure 1: Frustration graph for the general XY model and its root graph, shown
below. Cliques are colored to show the Krausz decomposition, which is the
image of the model under the Jordan-Wigner transform. Vertices in the root
graph are correspondingly colored, and a spanning tree is highlighted.
### 5.2 1-dimensional chains
Shown in Figure 1 is the frustration graph $G(H)$ for the most general
nearest-neighbor Pauli Hamiltonian in 1-d (on open boundary conditions) which
is mapped to a free-fermion Hamiltonian under the Jordan-Wigner
transformation,
$\displaystyle
H=\sum_{j=1}^{n-1}\sum_{\alpha,\beta\in\\{x,y\\}}\mu^{j}_{\alpha\beta}\sigma_{j}^{\alpha}\otimes\sigma_{j+1}^{\beta}+\sum_{j=1}^{n}\nu_{j}Z_{j}.$
(58)
Cliques are colored according to the Krausz decomposition of this graph, which
is easily seen by the free-fermion description. The fermion hopping graph,
$R$, is shown below. Note that the cycle symmetry subgroup $Z_{H}$ for this
model is trivial, as every product of Hamiltonian terms along a cycle in $R$
is the identity. Since the number of vertices in $R$ is even, a T-join does
exist, and the parity operator is in-fact $P=Z^{\otimes n}$. Therefore, we
have $|\mathcal{Z}(\mathcal{P}_{H})|=1$.
An example spanning tree for the root graph is highlighted, taken simply to be
the path along edges $(j,j+1)$ from $\gamma_{1}$ to $\gamma_{2n}$. Including
any additional edge in this tree will form a cycle. A natural orientation for
this tree is to direct every edge from vertex $j+1$ to vertex $j$. Note that
we can recover the Jordan-Wigner transformation from this graphical
description alone. We first adjoin a single fictitious qubit and a single
coupling term to the Hamiltonian, as
$\displaystyle H^{\prime}=\mu_{xx}^{0}X_{0}X_{1}+H.$ (59)
Since the remaining qubits only couple to qubit “0" along the $X$-direction,
all operators in $\mathcal{P}_{H}$ commute on this qubit. Furthermore, this
new term adds one vertex to the black clique at the left boundary of the chain
in Fig. 1. It thus extends the spanning tree of $R$ by one vertex – which we
label $\gamma_{0}$ – due to the fact that this new term only belongs to one
clique (so we take its additional Majorana mode to be a clique of size zero).
It can be easily verified that products of Hamiltonian terms from this new
vertex to any vertex along the chosen spanning tree have the form
$\displaystyle\begin{cases}i\gamma_{0}\gamma_{2j-1}\equiv
X_{0}\bigotimes_{k=1}^{j-1}Z_{k}\otimes X_{j}&\\\ i\gamma_{0}\gamma_{2j}\equiv
X_{0}\bigotimes_{k=1}^{j-1}Z_{k}\otimes Y_{j}&\\\ \end{cases}$ (60)
All such operators share $\gamma_{0}$, so their commutation relations are
unchanged by truncating $\gamma_{0}$. Furthermore, since all operators in
$\mathcal{P}_{H}$ commute on qubit-0, we may truncate this qubit as well
without changing the commutation relations of the operators above to obtain
the Jordan-Wigner transformation
$\displaystyle\begin{cases}\gamma_{2j-1}\equiv\bigotimes_{k=1}^{j-1}Z_{k}\otimes
X_{j}&\\\ \gamma_{2j}\equiv\bigotimes_{k=1}^{j-1}Z_{k}\otimes
Y_{j}&\\\ \end{cases}$ (61)
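A direct numerical check of Eq. (61) is straightforward. The sketch below (assuming numpy; the helper name `jw_majoranas` is ours) builds the Jordan-Wigner Majorana modes for $n=3$ and verifies the canonical anticommutation relations, as well as the fact that a nearest-neighbour hopping term is a 2-local Pauli up to the sign fixed by the edge orientation.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def jw_majoranas(n):
    """Majorana modes of Eq. (61): gamma_{2j-1} = Z...Z X_j, gamma_{2j} = Z...Z Y_j."""
    modes = []
    for j in range(n):
        for op in (X, Y):
            modes.append(reduce(np.kron, [Z] * j + [op] + [I2] * (n - j - 1)))
    return modes

g = jw_majoranas(3)
for a in range(6):
    for b in range(6):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a], 2 * (a == b) * np.eye(8))

# A hopping term between neighbouring modes is a 2-local Pauli, up to the sign
# fixed by the edge orientation: i*gamma_2*gamma_3 = -(X tensor X tensor I).
assert np.allclose(1j * g[1] @ g[2], -reduce(np.kron, [X, X, I2]))
```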
In principle, a similar trick would work in general, but we find it generally
simpler to define Majorana quadratic operators to avoid truncating at a
boundary. Our method is especially convenient when considering the case of
periodic boundary conditions on this model, wherein we add the boundary term
$\displaystyle
H_{\mathrm{boundary}}=\sum_{\alpha,\beta\in\\{x,y\\}}\mu^{n}_{\alpha\beta}\sigma_{1}^{\alpha}\otimes\sigma_{n}^{\beta}$
(62)
to the Hamiltonian in Eq. (58). With this term included, the model has a
nontrivial cycle symmetry given by taking the product of fermion bilinears
around the periodic boundary, and this product is also proportional to the
parity operator $P=Z^{\otimes n}$ in the spin picture. Adding a boundary term
therefore does not change $|\mathcal{Z}(\mathcal{P}_{H})|$, though it does
require that we solve the model over each of the eigenspaces of the cycle
symmetry independently by choosing the sign of the additional terms in the
fermion picture as described in Section 4.1. Let the eigenspace of $Z^{\otimes
n}$ be specified by the eigenvalue $(-1)^{p}$. For each associated free-
fermion model solution, we must then restrict to the $+1$ eigenspace of the
parity operator in the spin picture
$\displaystyle\prod_{j=1}^{n}X_{j}X_{j+1}\mapsto(-i)^{n}(-1)^{p}\prod_{k=1}^{2n}\gamma_{k}\mathrm{.}$
(63)
where index addition is taken modulo $n$. This ensures that our free-fermionic
solution respects the constraint
$\displaystyle\prod_{j=1}^{n}X_{j}X_{j+1}=I\mathrm{.}$ (64)
Figure 2: Frustration graph for the Kitaev honeycomb model (left) and its root
graph (right). Cliques are colored to show the Krausz decomposition.
Interestingly, this model’s root graph is the same as its interaction graph. A
spanning tree of the root is again highlighted.
Notice that solving the two free fermion models together (one for each
eigenspace of $Z^{\otimes n}$) gives $2^{n+1}$ eigenstates, yet restricting to
a fixed-parity sector in each keeps only $2^{n}$ of them, as required.
Finally, we see that this model contains no logical qubits via
$\displaystyle n_{L}=n-\left[\frac{1}{2}(2n-2)+1\right]=0$ (65)
as we might expect.
### 5.3 The Kitaev honeycomb model
Next we consider the Kitaev honeycomb model in two dimensions [12]. This model
has the Hamiltonian
$\displaystyle H=\sum_{\alpha\in\\{x,y,z\\}}\sum_{\alpha-\mathrm{links}\
j}J^{j}_{\alpha}\sigma^{\alpha}_{j}\sigma^{\alpha}_{j+\hat{\alpha}}$ (66)
where each of the $\alpha$ links correspond to one of the compass directions
of the edges of a honeycomb lattice. Once again, the frustration graph with
shaded cliques according to the Krausz decomposition is shown in Fig. 2.
Interestingly, the root graph of this model’s frustration graph is again the
honeycomb lattice. By going backwards, we can see that indeed, any free-
fermion model with trivalent hopping graph can be embedded in a 2-body qubit
Hamiltonian with the same interaction graph. This is because we can find a set
of Pauli operators satisfying any frustration graph whose edges can be
partitioned into triangles by assigning a different single-qubit Pauli to each
of the vertices of every triangle. A term in the Hamiltonian is then the
tensor product of all of the Pauli operators from the triangles to which its
vertex in $G(H)$ belongs.
Unlike in the one-dimensional example, the cycle subgroup of this model,
$Z_{H}$, is nontrivial. This subgroup is generated by the products of
Hamiltonian terms around a hexagonal plaquette of the honeycomb lattice,
denoted $W_{p}$ for plaquette $p$. These cycles are not independent, however,
with constraints between them depending on the boundary conditions of the
lattice. In particular, if the model is on a torus of dimension $L_{x}$ by
$L_{y}$, then the product of all Hamiltonian terms is trivial
$\displaystyle\prod_{\boldsymbol{j}\in
V}\sigma^{\boldsymbol{j}}=(-1)^{L_{x}L_{y}}I$ (67)
In this case, the cycles of the honeycomb lattice are not independent, since
they similarly multiply to the identity. There are thus $L_{x}L_{y}-1$
independent plaquettes on the lattice. There are additionally two
homotopically nontrivial cycles, which are independent as well. Notice that
the edges of the honeycomb lattice itself form a T-join, and so the above
constraint is also the statement that $P$ is furthermore trivial. Therefore,
we have
$\displaystyle|\mathcal{Z}(\mathcal{P}_{H})|=L_{x}L_{y}-1+2=L_{x}L_{y}+1$ (68)
and once again (as first computed in Ref. [54])
$\displaystyle
n_{L}=2L_{x}L_{y}-\left[\frac{1}{2}\left(2L_{x}L_{y}-2\right)+L_{x}L_{y}+1\right]=0.$
(69)
This example also illustrates that quite a large number of symmetries could be
present, and in general this will complicate finding, e.g., the symmetry
sector that contains the ground state.
Figure 3: The frustrated hexagonal gauge 3d color code, proposed in Ref. [44].
This model is based on the 3d gauge color code, whose qubits live on the
vertices of the lattice shown. Gauge generators for the 3d gauge color code
consist of Pauli-$Z$ and Pauli-$X$ operators around both the square and
hexagonal faces of the lattice. Stabilizers of the 3d gauge color code consist
of Pauli-$Z$ and Pauli-$X$ operators on both the cube and “ball" cells. The
frustrated hexagonal gauge 3D color code is given by taking the stabilizers of
the gauge color code together with the hexagonal gauge generators, which
commute with the stabilizers, but not with each other. We see from the colored
hexagonal faces above that the frustration graph of these gauge generators is
a set of disconnected path graphs. Every hexagonal plaquette term anticommutes
with exactly two others—the plaquette terms of the other Pauli type
intersecting it at exactly one qubit—and commutes with all other terms in the
Hamiltonian.
### 5.4 Frustrated Hexagonal Gauge 3D Color Code
Figure 4: The Sierpinski-Hanoi model (left) with its frustration graph,
highlighted, and its root graph (right) with a spanning tree highlighted, for
$k=5$ and local fields absent. Hamiltonian terms are 3-qubit operators acting
on qubits at the vertices of the Sierpinski sieve graph, highlighted in blue.
Cliques of the frustration graph are colored to show the graph’s Krausz
decomposition. Green and orange cells depict generators for the model’s
logical Pauli group. At the interior triangular cells of the lattice are the
3-body generators shown in green. At the interior and exterior edges of the
model are 2-body generators shown in orange. These are obtained from their
adjoining Hamiltonian terms by reflecting the action on the intersection of
their supports (so these generators act differently depending on which edge
they act on). The frustration graph of this model is the Hanoi graph
$H_{3}^{k-1}$. The vertices of this graph are in correspondence to the states
of the towers of Hanoi problem with three towers and $k-1$ discs. The root
graph of $H_{3}^{k-1}$ contains $H_{3}^{k-2}$ as a topological minor.
The frustrated hexagonal gauge 3d color code is a noncommuting Hamiltonian
whose terms consist of the stabilizer generators and a subset of the gauge
generators from the gauge color code. The gauge color code [55, 56, 57, 58]
has a Hamiltonian that is defined in terms of a natural set of gauge
generators as
$\displaystyle H=-\sum_{S\in\square,\varhexagon}\left(J_{X}\bigotimes_{j\in
S}X_{j}+J_{Z}\bigotimes_{j\in S}Z_{j}\right)$ $\displaystyle+\ \text{(boundary
terms)}$ (70)
where “$\square$" and “$\varhexagon$" denote the sets of square and hexagonal
faces on the lattice in Fig. 3, respectively (see Ref. [59] for a detailed
description of this lattice). Here the qubits live on the vertices of the
lattice. Nontrivial boundary conditions are required to restrict the logical
space of this code to a single qubit. We will ignore these boundary conditions
and consider only gauge generators in the bulk of the lattice. The stabilizers
for this model are given by products of $X$ or $Z$ around every elementary
cell, either a cube or a “ball".
We consider a model where we partially restore some of these symmetries.
Namely, we will consider the cube and balls to be “restored” symmetries of the
model, and we will remove the square generators. This leads to the following
gauge Hamiltonian that sums over only hexagonal faces, balls, and cubes,
$\displaystyle
H=-\sum_{S\in\varhexagon,\,\text{cube},\,\text{ball}}\left(J_{X}\bigotimes_{j\in
S}X_{j}+J_{Z}\bigotimes_{j\in S}Z_{j}\right).$ (71)
The cube and ball terms commute with all of the hexagon terms, and so
constitute symmetries of the model. Once we fix a sector for these terms, we
can solve the remaining model by mapping to free-fermions as follows [44].
In Figure 3, we represent a subsection of the qubit lattice of this code,
where qubits live at the tetravalent vertices. Because the cube and ball terms
commute with everything, the frustration graph depends only on the hexagonal
faces, several of which are colored in Figure 3. We see that some of these
faces intersect at exactly one vertex, and so the $X$\- and $Z$-type gauge
generators will anticommute on the associated qubit. These intersection
patterns only occur in 1D chains along the cardinal axes of the lattice. In
particular, every hexagonal face only overlaps with two other hexagonal faces
along these chains and otherwise intersects the other faces at an even number
of qubits. The frustration graph of this model thus decouples into a set of
disconnected paths, which are line graphs, and in fact they are the
frustration graph of the XY-model [1] and the 1-d Kitaev wire [60]. A free-
fermion mapping therefore exists for this model, and this demonstrates an
example of how one might construct subsystem codes with a free-fermion
solution to obtain desired spectral properties. In particular, when
$|J_{X}|\neq|J_{Z}|$ the model in Eq. (71) is gapped. We note that this
observation was made previously in Ref. [44] in the context of quantum error
correcting codes.
Figure 5: Single-Particle spectrum of the Sierpinski-Hanoi model for $k=5$
with an additional local field term present in the symmetry sector for which
all cycles are $+1$. Circled are two critical points where excited bands
become degenerate.
### 5.5 Sierpinski-Hanoi model
Finally, we introduce our own example of a solvable spin model, which was
previously unknown to the best of our knowledge. This model consists of 3-body
$XYZ$-interaction terms on the shaded cells of the Sierpinski triangle, all
with the same orientation, as depicted in Fig. 4. Explicitly, the Hamiltonian
for this model is given by
$\displaystyle
H=\sum_{(i,j,k)\in\color[rgb]{0.63,0.79,0.95}\mbox{\normalsize$\blacktriangle$}}X_{i}Y_{j}Z_{k}+JH_{\text{local}}$
(72)
where $(i,j,k)$ is an ordered triple of qubits belonging to a particular
shaded cell on the lattice and we will define additional on-site terms
$H_{\text{local}}$ in Eq. (77). An instance of the model is parameterized by
$k$, the fractal recursion depth of the underlying Sierpinski lattice, where
$k=1$ is taken to be a single 3-qubit interaction.
Let us first consider the simplified model where $J=0$. Then the frustration
graph of this model is the so-called _Hanoi graph_ $H_{3}^{k-1}$. The vertices
of this graph are labeled by states of the towers of Hanoi problem with $k-1$
discs, and two vertices are neighboring if transitioning between the
corresponding states is an allowed move in the problem. Perhaps surprisingly,
this graph is a line graph, with root and highlighted spanning tree shown in
Fig. 4. Furthermore, the root graph contains $H_{3}^{k-2}$ as a topological
minor, obtained by removing the vertices of degree one and contracting the
vertices of degree two each along one of their two edges.
This model contains
$\displaystyle n=\frac{3}{2}(3^{k-1}+1)$ (73)
physical qubits, and its root graph contains
$\displaystyle|\widetilde{V}|=\begin{cases}2&k=1\\\ \frac{1}{2}\left[5\times
3^{k-2}+3\right]&k>1\end{cases}$ (74)
vertices. A T-join for this graph therefore exists only for even $k$ and
$k=1$, and the parity operator is never trivial when a T-join exists. The root
graph also contains
$\displaystyle|Z_{H}|=\sum_{j=0}^{k-3}3^{j}=\begin{cases}0&k\leq 2\\\
\frac{1}{2}\left(3^{k-2}-1\right)&k>2\end{cases}$ (75)
fundamental cycles. None of the generators of the cycle subgroup are trivial
since every qubit is acted upon by at most two anti-commuting operators. The
number of logical qubits in this model is therefore
$\displaystyle n_{L}=\begin{cases}2&k=1\\\ \frac{1}{4}\left[11\times
3^{k-2}+8+(-1)^{k}\right]&k>1,\end{cases}$ (76)
and so this Hamiltonian encodes logical qubits at a constant rate of
$\frac{11}{18}$ in the infinite $k$ limit. Perhaps unsurprisingly, these
logical qubits live on the boundaries of the fractal, and we can obtain a set
of generators for the logical Pauli group of this model as shown in Fig. 4. We
can encode logical quantum information in this model by picking symplectic
pairs of generators from this group, which anticommute with one another yet
commute with the remaining generators in the group. The remaining such
generators can be used as gauge qubits for the logical qubit we wish to
protect, and the free-fermion Hamiltonian of the model can be used for error
suppression.
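The counting formulas Eqs. (73)-(76) are easily tabulated. The following sketch (the function name is ours) also exhibits the approach of the encoding rate $n_{L}/n$ to $\nicefrac{{11}}{{18}}$:

```python
def sierpinski_hanoi_counts(k):
    """Tabulate Eqs. (73)-(76): physical qubits n, root-graph vertices |V~|,
    fundamental cycles |Z_H|, and logical qubits n_L, for recursion depth k."""
    n = 3 * (3 ** (k - 1) + 1) // 2
    nV = 2 if k == 1 else (5 * 3 ** (k - 2) + 3) // 2
    nZ = 0 if k <= 2 else (3 ** (k - 2) - 1) // 2
    nL = 2 if k == 1 else (11 * 3 ** (k - 2) + 8 + (-1) ** k) // 4
    return n, nV, nZ, nL

for k in (1, 2, 5, 10):
    n, nV, nZ, nL = sierpinski_hanoi_counts(k)
    print(k, n, nV, nZ, nL, nL / n)   # the rate n_L / n approaches 11/18 = 0.6111...
```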
We are also free to add an anisotropic local-field term to a subset of the
qubits without breaking solvability
$\displaystyle
H_{\mathrm{local}}=\sum_{i\in\color[rgb]{0.63,0.79,0.95}\mbox{\normalsize$\blacktriangle$}\mathbf{-}\color[rgb]{0.63,0.79,0.95}\mbox{\normalsize$\blacktriangle$}}\sigma_{i}^{j_{i}}$
(77)
The sum is taken over all qubits corresponding to _black_ edges connecting two
shaded cells in Fig. 4, and $j_{i}$ denotes the third Pauli type, distinct from
the two Paulis acting on qubit $i$ through the interaction terms. The effect of these local-field
terms is to couple every black vertex in the root graph shown in Fig. 4,
except for those at the three corners, to a dedicated fermion mode. We do not
depict these additional modes to avoid cluttering the figure. These terms also
do not affect the symmetries of the model, except possibly to add the parity
operator to $\mathcal{Z}(\mathcal{P}_{H})$ when the number of vertices in the
original graph was odd, as the number of vertices in the graph with local
field terms present will always be even. The parity operator can then be
constructed as the product of all of the Hamiltonian terms, since the root
graph only has vertices of degrees 1 and 3.
In Fig. 5, we display the single-particle spectrum of the Sierpinski-Hanoi
model as a function of the local field $J$ for $k=5$ in the sector for which
all of the cycle symmetries are in their mutual $+1$ eigenspace. We highlight
two critical points where excited energy levels become degenerate to within
our numerical precision. We observe that the locations of these points are not
system-size-independent, but rather asymptotically approach $J=0$ as the
system size is increased. We conjecture that this is connected to the
emergence of scale symmetry, which the model possesses in the thermodynamic
limit, yet not for any finite size. It would be intriguing if certain physical
features of this symmetry could be realized at the critical points at finite
size, potentially opening the door to simulating scale-invariant systems on a
finite-sized quantum computer.
## 6 Proofs of Main Theorems
### 6.1 Proof of Theorem 1
We restate Theorem 1 for convenience.
###### Theorem 1, restated (Existence of free-fermion solution).
An injective map $\phi$ as defined in Eq. (20) and Eq. (21) exists for the
Hamiltonian $H$ as defined in Eq. (1) if and only if there exists a root graph
$R$ such that
$\displaystyle G(H)\simeq L(R),$ (78)
where R is the hopping graph of the free-fermion solution.
###### Proof.
If $\phi$ exists, define $R=(V,E)$, where
$E\equiv\\{(\phi_{1}(\boldsymbol{j}),\phi_{2}(\boldsymbol{j}))|\phi_{1}(\boldsymbol{j}),\phi_{2}(\boldsymbol{j})\in
V,\boldsymbol{j}\in E\\}$. If and only if
$|(\phi_{1}(\boldsymbol{j}),\phi_{2}(\boldsymbol{j}))\cap(\phi_{1}(\boldsymbol{k}),\phi_{2}(\boldsymbol{k}))|=1$,
then the vertices corresponding to $\boldsymbol{j}$ and $\boldsymbol{k}$ are
neighboring in $G$ by Eq. (21). Thus, $G(H)\simeq L(R)$ and a mapping $\phi$
exists only if $R$ does.
If there exists a graph $R\equiv(V,E)$ such that $G\simeq L(R)$, take the
Krausz decomposition of $G(H)$. Namely, partition the edges of $G(H)$ as
$F=\\{C_{1},\dots,C_{|V|}\\}$, where each $C_{i}$ constitutes a clique in $G$
and such that every vertex in $G$ appears in at most two $C_{i}$. The cliques
in this partitioning correspond to the vertices $V$ of $R$. For each vertex
$\boldsymbol{j}$, define $\phi(\boldsymbol{j})$ to be the pair of cliques in
which $\boldsymbol{j}$ appears. Since the cliques partition the edges of $G$,
then if vertices $\boldsymbol{j}$ and $\boldsymbol{k}$ are neighboring in $G$,
they must appear in exactly one clique together, and thus
$|(\phi_{1}(\boldsymbol{j}),\phi_{2}(\boldsymbol{j}))\cap(\phi_{1}(\boldsymbol{k}),\phi_{2}(\boldsymbol{k}))|=1$.
Thus, $\phi$ satisfies Eq. (21). Furthermore, $\phi$ is injective, since if
there are two vertices $\boldsymbol{j}$, $\boldsymbol{k}\in G$ such that
$\phi(\boldsymbol{j})=\phi(\boldsymbol{k})$, then $\boldsymbol{j}$ and
$\boldsymbol{k}$ appear in the same two cliques, but since the Krausz
decomposition is a partition of the edges, this would require that
$\boldsymbol{j}$ and $\boldsymbol{k}$ neighbor by two edges. However, the
definition of $G$ guarantees that pairs of vertices can only neighbor by at
most one edge, and so this is impossible. Therefore $\phi$ is injective. ∎
### 6.2 Proof of Theorem 2
Once again, we restate our theorem for convenience
###### Theorem 2, restated (Symmetries are cycles and parity).
Given a Hamiltonian satisfying Eq. (78) such that the number of vertices
$|\widetilde{V}|$ in the root graph is odd, then we have
$\displaystyle\mathcal{Z}(\mathcal{P}_{H})=Z_{H}.$ (79)
If the number of vertices in the root graph is even, then we have
$\displaystyle\mathcal{Z}(\mathcal{P}_{H})=\left\langle Z_{H},P\right\rangle.$
(80)
###### Proof.
Let $G\equiv(E,F)\simeq L(R)$ be the connected line graph of a
connected root graph $R=(V,E)$, and let $G$ have adjacency matrix
$\mathbf{A}$. We will need the following well-known factorization of a line
graph adjacency matrix $\mathbf{A}$
$\displaystyle\mathbf{A}=\mathbf{B}\mathbf{B}^{\mathrm{T}}\ \mathrm{(mod\ 2)}$
(81)
where $\mathbf{B}$ is the edge-vertex incidence matrix of $R$. That is,
$\mathbf{B}$ is a $|E|\times|V|$ matrix such that
$\displaystyle B_{\boldsymbol{j}l}=\begin{cases}1&l\in\boldsymbol{j}\\\
0&\mathrm{otherwise}\end{cases}$ (82)
for all $\boldsymbol{j}\in E$ and $l\in V$. We can interpret $\mathbf{B}$ as
defining the map $\phi$ via
$\displaystyle\phi:\sigma^{\boldsymbol{j}}\mapsto\prod_{l\in
V}\gamma_{l}^{B_{\boldsymbol{j}l}}$ (83)
That is, $\phi_{1}(\boldsymbol{j})$ and $\phi_{2}(\boldsymbol{j})$ are the
indices of the nonzero elements in the row labeled by $\boldsymbol{j}$ in
$\mathbf{B}$. This then defines the adjacency matrix $\mathbf{A}$ through the
scalar commutator as
$\displaystyle[\\![\prod_{l\in V}\gamma_{l}^{B_{\boldsymbol{j}l}},\prod_{m\in
V}\gamma_{m}^{B_{\boldsymbol{k}m}}]\\!]$ $\displaystyle=\prod_{l,m\in
V}[\\![\gamma_{l}^{B_{\boldsymbol{j}l}},\gamma_{m}^{B_{\boldsymbol{k}m}}]\\!]$
(84) $\displaystyle=\prod_{l,m\in
V}(-1)^{(1-\delta_{lm})B_{\boldsymbol{j}l}B_{\boldsymbol{k}m}}$ (85)
$\displaystyle[\\![\prod_{l\in V}\gamma_{l}^{B_{\boldsymbol{j}l}},\prod_{m\in
V}\gamma_{m}^{B_{\boldsymbol{k}m}}]\\!]$
$\displaystyle=(-1)^{(\mathbf{B}\mathbf{B}^{\mathrm{T}})_{\boldsymbol{j}\boldsymbol{k}}+\left(\sum_{l}B_{\boldsymbol{j}l}\right)\left(\sum_{l}B_{\boldsymbol{k}l}\right)}$
(86) $\displaystyle(-1)^{A_{\boldsymbol{j}\boldsymbol{k}}}$
$\displaystyle=(-1)^{(\mathbf{B}\mathbf{B}^{\mathrm{T}})_{\boldsymbol{j}\boldsymbol{k}}}$
(87)
From the third to the fourth line, we replaced the left-hand side with the
definition of $\mathbf{A}$ and used the fact that the rows of $\mathbf{B}$
have exactly two nonzero elements. By the distributive property of the scalar
commutator Eq. (4), we can extend the above equation to products of
Hamiltonian terms
$\displaystyle\prod_{\boldsymbol{j}\in E}\left(\prod_{l\in
V}\gamma_{l}^{B_{\boldsymbol{j}l}}\right)^{v_{\boldsymbol{j}}}=\pm\prod_{l\in
V}\gamma_{l}^{\left(\mathbf{B}^{\mathrm{T}}\cdot\mathbf{v}\right)_{l}}\mathrm{,}$
(88)
where $\mathbf{v}\in\\{0,1\\}^{\times|E|}$, as
$\displaystyle[\\![\prod_{l\in
V}\gamma_{l}^{B_{\boldsymbol{j}l}},\prod_{\boldsymbol{k}\in
E}\left(\prod_{m\in
V}\gamma_{m}^{B_{\boldsymbol{k}m}}\right)^{v_{\boldsymbol{k}}}]\\!]=(-1)^{\left(\mathbf{B}\mathbf{B}^{\mathrm{T}}\cdot\mathbf{v}\right)_{\boldsymbol{j}}}$
(89)
since linear combinations of rows of $\mathbf{B}$ over $\mathds{F}_{2}$ will
have even-many ones. Every element of $\mathcal{P}_{H}$ is a (non-unique)
linear combination of rows of $\mathbf{B}$ over $\mathds{F}_{2}$, and so to
characterize the elements of $\mathcal{Z}(\mathcal{P}_{H})$, it is sufficient
to find a spanning set of the kernel of $\mathbf{A}$,
$\displaystyle\mathbf{A}\cdot\mathbf{v}=\mathbf{B}\mathbf{B}^{\mathrm{T}}\cdot\mathbf{v}=\mathbf{0}\
\mathrm{(mod\ 2)}$ (90)
It is again well-known that the $\mathds{F}_{2}$-kernel of
$\mathbf{B}^{\mathrm{T}}$ is the cycle space of $R$, and this specifies the
cycle subgroup $Z_{H}$ as being contained in $\mathcal{Z}(\mathcal{P}_{H})$.
All that is left is therefore to find all $\mathbf{v}$ such that
$\mathbf{B}^{\mathrm{T}}\cdot\mathbf{v}$ is in the kernel of $\mathbf{B}$.
Since we have assumed $G$ is connected, it is easy to see that the only
element in this kernel is $\mathbf{1}$, the all-ones vector. Thus,
$\mathbf{v}$ will also be in the kernel of $\mathbf{A}$ if it defines a T-join
of $G$. If $|V|$ is even, then we can construct a T-join by first pairing the
vertices along paths of $G$. We can then ensure that each edge appears at most
once in the T-join by taking the symmetric difference of all paths. If $|V|$
is odd, then no T-join exists. Indeed, assume that a T-join $T$ does exist for
$|V|$ odd, and let $\widetilde{G}=(V,T)\subseteq G$ be the subgraph of $G$
containing exactly the edges from the T-join. By construction $\widetilde{G}$
contains all the vertices of $G$ and has odd degree for every vertex, though
it may no longer be connected. Let these degrees be $\\{d_{j}\\}_{j\in V}$,
then by the handshaking lemma
$\displaystyle\sum_{j=1}^{|V|}d_{j}=2|T|\mathrm{.}$ (91)
However, the left side must be odd since we have assumed the degree of every
vertex in $\widetilde{G}$ is odd, and the number of vertices is also odd, and
so we have a contradiction. ∎
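The factorization Eq. (81) underlying this proof can be checked directly on small instances. A sketch assuming numpy (the example root graph is ours):

```python
import numpy as np
from itertools import combinations

# Example root graph R: a triangle with one pendant edge (4 vertices, 4 edges).
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (0, 2), (2, 3)]

B = np.zeros((len(E), len(V)), dtype=int)   # edge-vertex incidence matrix, Eq. (82)
for row, (u, w) in enumerate(E):
    B[row, u] = B[row, w] = 1

A = (B @ B.T) % 2   # Eq. (81): the diagonal entries 2 vanish mod 2
for j, k in combinations(range(len(E)), 2):
    shared = len(set(E[j]) & set(E[k]))
    assert A[j, k] == (shared == 1)   # edges of R adjacent in L(R) iff they share a vertex
```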
## 7 Discussion
We have seen how the tools of graph theory can be leveraged to solve a wide
class of spin models via mapping to free fermions, and given an explicit
procedure for constructing the free-fermion solution when one exists. A major
remaining open question, however, concerns the characterization of free-
fermion solutions beyond the generator-to-generator mappings we consider here.
That is, if $G(H)$ is not a line graph and no removal of twin vertices will
make it so, then it may _still_ be possible for a free-fermion solution for
$H$ to exist thanks to the continuum of locally equivalent Pauli-bases into
which $H$ may be expanded. Our fundamental theorem does not rule out the
possibility that special such bases may exist. The problem of finding such
bases is equivalent to finding specific unitary rotations of $H$ for which the
$G(H)$ again becomes a line graph. These rotations must be outside of the
Clifford group, since the frustration graph is a Clifford invariant. Their
existence may therefore depend on specific algebraic relationships between the
Pauli coefficients $h_{\boldsymbol{j}}$ in the Hamiltonian, since the
existence of a free-fermionization is a spectral invariant. We expect such
transformations will be hard to find in general, though perhaps progress can
be made for single-qubit rotations on 2-local Hamiltonians in a similar vein
as in Ref. [61] for stoquasticity. Recently, a local
spin-$\nicefrac{{1}}{{2}}$ model with a free-fermion solution – despite no
such generator-to-generator solution existing – has been found in Ref. [62].
An investigation of models which may be fermionized by these more general
transformations is therefore an interesting subject of future work.
It is natural to ask whether our results could have implications for
simulating quantum systems and quantum computation. We expect our
characterization to shed some light on the inverse problem of finding fermion-
to-qubit mappings, such as the Bravyi-Kitaev superfast encoding [63], Bravyi-
Kitaev transform [64], and generalized superfast encoding [65]. It is possible
to achieve further encodings by introducing ancillary fermion modes, as seen
in the Verstraete-Cirac mapping [7] and the contemporaneous mapping introduced
by Ball [66]. Encodings can be further improved through tailoring to specific
symmetries [67], connectivity structures [68, 69], and through the application
of Fenwick trees [70]. Recently, a treelike mapping was shown to achieve
optimal average-case Pauli-weight in Ref [71]. In their “Discussion” section,
the authors remark that an interesting future direction for their work would
involve introducing ancillary qubits to their mapping. We expect our
classification of the symmetries of free-fermion spin models to help guide
this investigation, though further work is required to fully characterize the
logical symmetry groups which can be realized by these models.
Finally, our characterization highlights the possibility of a “free-fermion
rank" for Hamiltonians as an important measure of classical simulability.
Namely, if there is no free-fermion solution for a given Hamiltonian, we can
still group terms into collections such that each collection independently has
such a solution. An interesting natural question for future work is whether
the minimal number of such collections required can be interpreted as a
quantum resource in an analogous way to the fermionic Gaussian rank [72] or
stabilizer rank [73] for states.
## Acknowledgements
We thank Samuel Elman, Ben Macintosh, Ryan Mann, Nick Menicucci, Andrew
Doherty, Stephen Bartlett, Sam Roberts, Alicia Kollár, Deniz Stiegemann,
Sayonee Ray, Chris Jackson, Jonathan Gross, and Nicholas Rubin for valuable
discussions throughout this project. This work was supported by the Australian
Research Council via EQuS project number CE170100009.
## References
* Lieb _et al._ [1961] E. Lieb, T. Schultz, and D. Mattis, Annals of Physics 16, 407 (1961).
* Jordan and Wigner [1928] P. Jordan and E. Wigner, Zeitschrift für Physik 47, 631 (1928).
* Fradkin [1989] E. Fradkin, Phys. Rev. Lett. 63, 322 (1989).
* Wang [1991] Y. R. Wang, Phys. Rev. B 43, 3786 (1991).
* Huerta and Zanelli [1993] L. Huerta and J. Zanelli, Phys. Rev. Lett. 71, 3622 (1993).
* Batista and Ortiz [2001] C. D. Batista and G. Ortiz, Phys. Rev. Lett. 86, 1082 (2001).
* Verstraete and Cirac [2005] F. Verstraete and J. I. Cirac, Journal of Statistical Mechanics: Theory and Experiment 2005, P09012 (2005).
* Nussinov _et al._ [2012] Z. Nussinov, G. Ortiz, and E. Cobanera, Phys. Rev. B 86, 085415 (2012).
* Chen _et al._ [2018] Y.-A. Chen, A. Kapustin, and Đ. Radičević, Annals of Physics 393, 234 (2018).
* Backens _et al._ [2019] S. Backens, A. Shnirman, and Y. Makhlin, Scientific reports 9, 2598 (2019).
* Tantivasadakarn [2020] N. Tantivasadakarn, arXiv e-prints , arXiv:2002.11345 (2020), arXiv:2002.11345 [cond-mat.str-el] .
* Kitaev [2006] A. Kitaev, Annals of Physics 321, 2 (2006).
* Knill [2001] E. Knill, ArXiv e-prints (2001), arXiv:quant-ph/0108033 .
* Terhal and DiVincenzo [2002] B. M. Terhal and D. P. DiVincenzo, Phys. Rev. A 65, 032325 (2002).
* Van Den Nest [2011] M. Van Den Nest, Quantum Info. Comput. 11, 784 (2011).
* Brod [2016] D. J. Brod, Phys. Rev. A 93, 062332 (2016).
* Jozsa and Miyake [2008] R. Jozsa and A. Miyake, Proc. R. Soc. A 464, 3089 (2008).
* Brod and Galvão [2011] D. J. Brod and E. F. Galvão, Phys. Rev. A 84, 022310 (2011).
* Bravyi [2006] S. Bravyi, Phys. Rev. A 73, 042313 (2006).
* Hebenstreit _et al._ [2019] M. Hebenstreit, R. Jozsa, B. Kraus, S. Strelchuk, and M. Yoganathan, Phys. Rev. Lett. 123, 080503 (2019).
* Brod and Childs [2014] D. J. Brod and A. M. Childs, Quant. Info. Comput. 14, 901 (2014).
* Valiant [2002] L. G. Valiant, SIAM Journal on Computing 31, 1229 (2002).
* Cai and Choudhary [2006] J.-Y. Cai and V. Choudhary, in _Proceedings of the Third International Conference on Theory and Applications of Models of Computation_, TAMC’06 (Springer-Verlag, Berlin, Heidelberg, 2006) pp. 248–261.
* Cai _et al._ [2007] J. Cai, V. Choudhary, and P. Lu, in _Twenty-Second Annual IEEE Conference on Computational Complexity (CCC’07)_ (2007) pp. 305–318.
* Valiant [2008] L. G. Valiant, SIAM Journal on Computing 37, 1565 (2008).
* Papadimitriou [1994] C. H. Papadimitriou, in _Encyclopedia of Computer Science_ (John Wiley and Sons Ltd., Chichester, UK, 1994) pp. 260–265.
* Kasteleyn [1961] P. Kasteleyn, Physica 27, 1209 (1961).
* Temperley and Fisher [1961] H. N. V. Temperley and M. E. Fisher, Philosophical Magazine 6, 1061 (1961).
* Planat and Saniga [2008] M. Planat and M. Saniga, Quant. Inf. Comput. 8, 127 (2008), arXiv:quant-ph/0701211 [quant-ph] .
* Jena _et al._ [2019] A. Jena, S. Genin, and M. Mosca, arXiv e-prints , arXiv:1907.07859 (2019), arXiv:1907.07859 [quant-ph] .
* Verteletskyi _et al._ [2020] V. Verteletskyi, T.-C. Yen, and A. F. Izmaylov, The Journal of Chemical Physics 152, 124114 (2020).
* Zhao _et al._ [2019] A. Zhao, A. Tranter, W. M. Kirby, S. F. Ung, A. Miyake, and P. Love, arXiv e-prints , arXiv:1908.08067 (2019), arXiv:1908.08067 [quant-ph] .
* Izmaylov _et al._ [2019] A. F. Izmaylov, T.-C. Yen, R. A. Lang, and V. Verteletskyi, Journal of Chemical Theory and Computation 16, 190 (2019).
* Yen _et al._ [2020] T.-C. Yen, V. Verteletskyi, and A. F. Izmaylov, Journal of Chemical Theory and Computation 16, 2400 (2020).
* Gokhale _et al._ [2019] P. Gokhale, O. Angiuli, Y. Ding, K. Gui, T. Tomesh, M. Suchara, M. Martonosi, and F. T. Chong, arXiv e-prints , arXiv:1907.13623 (2019), arXiv:1907.13623 [quant-ph] .
* Crawford _et al._ [2019] O. Crawford, B. van Straaten, D. Wang, T. Parks, E. Campbell, and S. Brierley, arXiv e-prints , arXiv:1908.06942 (2019), arXiv:1908.06942 [quant-ph] .
* Bonet-Monroig _et al._ [2019] X. Bonet-Monroig, R. Babbush, and T. E. O’Brien, arXiv e-prints , arXiv:1908.05628 (2019), arXiv:1908.05628 [quant-ph] .
* Roussopoulos [1973] N. D. Roussopoulos, Information Processing Letters 2, 108 (1973).
* Lehot [1974] P. G. H. Lehot, J. ACM 21, 569 (1974).
* Degiorgi and Simon [1995] D. G. Degiorgi and K. Simon, in _Graph-Theoretic Concepts in Computer Science_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 1995) pp. 37–48.
* Kollár _et al._ [2019a] A. J. Kollár, M. Fitzpatrick, and A. A. Houck, Nature 571, 45 (2019a).
* Kollár _et al._ [2019b] A. J. Kollár, M. Fitzpatrick, P. Sarnak, and A. A. Houck, Communications in Mathematical Physics , online only (2019b).
* Boettcher _et al._ [2019] I. Boettcher, P. Bienias, R. Belyansky, A. J. Kollár, and A. V. Gorshkov, arXiv e-prints , arXiv:1910.12318 (2019), arXiv:1910.12318 [quant-ph] .
* Jochym-O’Connor _et al._ [2019] T. Jochym-O’Connor, S. Roberts, S. Bartlett, and J. Preskill, “Frustrated hexagonal gauge 3d color code,” (2019), 5th International Conference on Quantum Error Correction (QEC 2019).
* Whitney [1932] H. Whitney, American Journal of Mathematics 54, 150 (1932).
* Goodmanson [1996] D. M. Goodmanson, American Journal of Physics 64, 870 (1996).
* Beineke [1970] L. W. Beineke, Journal of Combinatorial Theory 9, 129 (1970).
* Šoltés [1994] Ľ. Šoltés, Discrete Mathematics 132, 391 (1994).
* Yang _et al._ [2002] Y. Yang, J. Lin, and C. Wang, Discrete Mathematics 252, 287 (2002).
* Erdős _et al._ [1966] P. Erdős, A. W. Goodman, and L. Pósa, Canadian Journal of Mathematics 18, 106 (1966).
* Harary [1971] F. Harary, _Graph Theory_, Addison Wesley series in mathematics (Addison-Wesley, 1971).
* Krausz [1943] J. Krausz, Matematikai és Fizikai Lapok 50 (1943).
* Bednarek [1985] A. Bednarek, Discrete Mathematics 56, 83 (1985).
* Suchara _et al._ [2011] M. Suchara, S. Bravyi, and B. Terhal, Journal of Physics A: Mathematical and Theoretical 44, 155301 (2011).
* Bombín [2016] H. Bombín, New Journal of Physics 18, 043038 (2016).
* Bombín [2015] H. Bombín, Phys. Rev. X 5, 031043 (2015).
* Kubica and Beverland [2015] A. Kubica and M. E. Beverland, Phys. Rev. A 91, 032330 (2015).
* Bombín [2015] H. Bombín, New Journal of Physics 17, 083002 (2015).
* Brown _et al._ [2016] B. J. Brown, N. H. Nickerson, and D. E. Browne, Nature Communications 7, 12302 (2016).
* Kitaev [2001] A. Y. Kitaev, Physics-Uspekhi 44, 131 (2001).
* Klassen and Terhal [2019] J. Klassen and B. M. Terhal, Quantum 3, 139 (2019).
* Fendley [2019] P. Fendley, Journal of Physics A: Mathematical and Theoretical 52, 335002 (2019).
* Bravyi and Kitaev [2002] S. B. Bravyi and A. Y. Kitaev, Ann. Phys. (N. Y.) 298, 210 (2002).
* Seeley _et al._ [2012] J. T. Seeley, M. J. Richard, and P. J. Love, The Journal of Chemical Physics 137, 224109 (2012).
* Setia _et al._ [2019] K. Setia, S. Bravyi, A. Mezzacapo, and J. D. Whitfield, Phys. Rev. Research 1, 033033 (2019).
* Ball [2005] R. C. Ball, Phys. Rev. Lett. 95, 176407 (2005).
* Bravyi _et al._ [2017] S. Bravyi, J. M. Gambetta, A. Mezzacapo, and K. Temme, arXiv e-prints , arXiv:1701.08213 (2017), arXiv:1701.08213 [quant-ph] .
* Steudtner and Wehner [2018] M. Steudtner and S. Wehner, New Journal of Physics 20, 063010 (2018).
* Jiang _et al._ [2019] Z. Jiang, J. McClean, R. Babbush, and H. Neven, Phys. Rev. Applied 12, 064041 (2019).
* Havlíček _et al._ [2017] V. Havlíček, M. Troyer, and J. D. Whitfield, Phys. Rev. A 95, 032332 (2017).
* Jiang _et al._ [2019] Z. Jiang, A. Kalev, W. Mruczkiewicz, and H. Neven, arXiv e-prints , arXiv:1910.10746 (2019), arXiv:1910.10746 [quant-ph] .
* Bravyi and Gosset [2017] S. Bravyi and D. Gosset, Communications in Mathematical Physics 356, 451 (2017).
* Bravyi _et al._ [2019] S. Bravyi, D. Browne, P. Calpin, E. Campbell, D. Gosset, and M. Howard, Quantum 3, 181 (2019).
} | arxiv-papers | # A Non-Iterative Transformation Method for a Class of Free Boundary Value
Problems Governed by ODEs
Riccardo Fazio111Corresponding author: e-mail: <EMAIL_ADDRESS>; home-page:
http://mat521.unime.it/fazio and Salvatore Iacono
Department of Mathematics and Computer Science
University of Messina
Viale F. Stagno D’Alcontres 31, 98166 Messina, Italy
###### Abstract
The aim of this work is to point out that the class of free boundary problems
governed by second order autonomous ordinary differential equations can be
transformed to initial value problems. Interest in the numerical solution of
free boundary problems arises because these are always nonlinear problems. The
theoretical content of this paper is original: results already available in
literature are related to the invariance properties of scaling or spiral
groups of point transformations but here we show how it is also possible to
use e invariance properties of a translation group. We test the proposed
algorithm by solving three problems: a problem describing a rope configuration
against an obstacle, a dynamical problem with a nonlinear force, and a problem
related to the optimal length estimate for tubular flow reactors.
Key Words. Ordinary differential equations, free boundary problems, initial
value methods, non-iterative transformation method, translation group of point
transformations.
AMS Subject Classifications. 65L10, 65L99, 34B15, 34B99.
## 1 Introduction
Free boundary value problems (BVPs) occur in all branches of applied
mathematics and science. The oldest problem of this type was formulated by
Isaac Newton, in book II of his great “Principia Mathematica” of 1687, by
considering the optimal nose-cone shape for the motion of a projectile subject
to air resistance, see Edwards [1] or Fazio [2].
In the classical numerical treatment of a free BVP a preliminary reduction to
a BVP is introduced by considering a new independent variable; see, Stoer and
Bulirsch [3, p. 468], Ascher and Russell [4], or Ascher, Mattheij and Russell
[5, p. 471]. By rewriting a free BVP as a BVP it becomes evident that the
former is always a nonlinear problem; the first to point out the nonlinearity
of free BVPs was Landau [6]. Therefore, in that way free BVPs are recast as standard BVPs. In
this paper we show that free BVPs invariant with respect to a translation
group can be solved non-iteratively by solving a related initial value problem
(IVP). Therefore in this way those free BVPs are indeed IVPs. Moreover, we are
able to characterize a class of free BVPs that can be solved non-iteratively
by solving related IVPs.
The non-iterative numerical solution of BVPs is a subject of past and current
research. Several different strategies are available in literature for the
non-iterative solution of BVPs: superposition [5, pp. 135-145], chasing [7,
pp. 30-51], and adjoint operators method [7, pp. 52-69] that can be applied
only to linear models; parameter differentiation [7, pp. 233-288] and
invariant imbedding [8] can be applied also to nonlinear problems. In this
context transformation methods (TMs) are founded on group invariance theory,
see Bluman and Cole [9], Dresner [10], or Bluman and Kumei [11]. These methods
are initial value methods because they achieve the numerical solution of BVPs
through the solution of related IVPs.
The first application of a non-iterative TM was given by Töpfer in [12] for
the Blasius problem, without any consideration of group invariance theory.
Töpfer’s algorithm is quoted in several books on fluid dynamics, see, for
instance, Goldstein [13, pp. 135-136]. Acrivos, Shah and Petersen [14] first
and Klamkin [15] later extended Töpfer’s method respectively to a more general
problem and to a class of problems. Along the lines of the work of Klamkin,
for a given problem Na [16, 17] showed the relation between the invariance
properties, with respect to a linear group of transformation (the scaling
group), and the applicability of a non-iterative TM. Na and Tang [18] proposed
a non-iterative TM based on the spiral group and applied it to a non-linear
heat generation model. Belford [19] first, and Ames and Adams [20, 21] later
defined non-iterative TMs for eigenvalue problems. A review paper was written
by Klamkin [22]. Extensions of non-iterative TM, by requiring the invariance
of one and of two or more physical parameters when they are involved in the
mathematical model, were respectively proposed by Na [23] and by Scott,
Rinschler and Na [24]; see also Na [7, Chapters 8 and 9]. A survey book,
written by Na [7, Chs 7-9] on the numerical solution of BVP, devoted three
chapters to numerical TMs.
As far as free BVPs are concerned, non-iterative and iterative TMs were
proposed by Fazio and Evans [25]. Fazio [26] has shown that we can extend the
applicability of non-iterative TMs by rewriting a given free BVP using a
variables transformation obtained by linking two different invariant groups.
However, non-iterative TMs are applicable only to particular classes of BVPs
so that they have been considered as ad hoc methods, see Meyer [8, pp. 35-36],
Na [7, p. 137] or Sachdev [27, p. 218].
The transformation of BVPs to IVPs has also a theoretical relevance. In fact,
existence and uniqueness results can be obtained as a consequence of the
invariance properties. For instance, for the Blasius problem, a simple
existence and uniqueness theorem was given by J. Serrin [28] as reported by
Meyer [29, pp. 104-105] or Hastings and McLeod [30, pp. 151-153]. Moreover,
using scaling invariance properties the error analysis of the truncated
boundary formulation of the Blasius problem was developed by Rubel [31]. On
this topic a first application of a numerical test, defined within group
invariance theory, to verify the existence and uniqueness of the solution of a
free BVPs was considered by Fazio in [32]. A formal definition of the
mentioned numerical test can be found in [33].
In this paper we consider the class of free BVPs governed by second order
autonomous differential equations, and define, for these problems, a non-
iterative TM using the invariance properties of a translation group. As far as
applications of the proposed algorithm are concerned, we solve three problems.
First we test our method with a problem describing a rope configuration
against an obstacle, where we compare the obtained numerical results with the
exact solution. Then we solve a dynamical problem with a nonlinear force, and
a problem related to the optimal length estimate for tubular flow reactors,
where in both cases our results are compared to numerical data available in
literature. Finally, the last section is concerned with concluding remarks
pointing out limitations and possible extensions of the proposed approach.
## 2 The non-iterative TM
Let us consider the class of second order free BVPs given by
$\displaystyle\frac{d^{2}u}{dx^{2}}=\Omega\left(u,\frac{du}{dx}\right)\
,\qquad x\in(0,s)$ (2.1) $\displaystyle A_{1}u(0)+A_{2}\frac{du}{dx}(0)=A_{3}\
,$ (2.2) $\displaystyle u(s)=B\quad,\quad\frac{du}{dx}(s)=C\ ,$ (2.3)
where $A_{i}$, for $i=1,2,3$, $B$ and $C$ are arbitrary constants, and $s>0$
is an unknown free boundary. The differential equation (2.1) and the two free
boundary conditions (2.3) are invariant with respect to the translation group
$\displaystyle x^{*}=x+\mu\quad;\quad s^{*}=s+\mu\quad;\quad u^{*}=u\ .$ (2.4)
By using this invariance, we can define the following non-iterative algorithm
for the numerical solution of (2.1)-(2.3):
* •
we fix freely a value of $s^{*}$;
* •
we integrate backwards from $s^{*}$ to $x_{0}^{*}$ the following auxiliary IVP
$\displaystyle\frac{d^{2}u^{*}}{dx^{*2}}=\Omega\left(u^{*},\frac{du^{*}}{dx^{*}}\right)$ (2.5)
$\displaystyle u^{*}(s^{*})=B\ ,\qquad\frac{du^{*}}{dx^{*}}(s^{*})=C\ ,$
using an event locator in order to find $x_{0}^{*}$ such that
$A_{1}u^{*}(x_{0}^{*})+A_{2}\frac{du^{*}}{dx^{*}}(x_{0}^{*})=A_{3}\ ;$ (2.6)
* •
finally, through the invariance property, we can deduce the similarity
parameter
$\mu=x_{0}^{*}\ ,$ (2.7)
from which we get the unknown free boundary
$s=s^{*}-\mu\ .$ (2.8)
The missing initial conditions are given by
$u(0)=u^{*}(x_{0}^{*})\
,\qquad\frac{du}{dx}(0)=\frac{du^{*}}{dx^{*}}(x_{0}^{*})\ .$ (2.9)
Let us now define a simple event locator suited to the class of problems
(2.1)-(2.3). We consider first the case where
$A_{1}u^{*}(s^{*})+A_{2}\frac{du^{*}}{dx^{*}}(s^{*})<A_{3}\ .$ (2.10)
We can integrate the auxiliary IVP (2.5) with a constant step size $\Delta
x^{*}$ until at a given mesh point $x_{k}^{*}$ we get
$A_{1}u^{*}(x_{k}^{*})+A_{2}\frac{du^{*}}{dx^{*}}(x_{k}^{*})>A_{3}\ ,$ (2.11)
and repeat the last step with the smaller step size
$\Delta x_{0}^{*}=\Delta x^{*}\frac{\displaystyle
A_{3}-A_{1}u^{*}(x_{k-1}^{*})-A_{2}\frac{du^{*}}{dx^{*}}(x_{k-1}^{*})}{\displaystyle
A_{1}u^{*}(x_{k}^{*})+A_{2}\frac{du^{*}}{dx^{*}}(x_{k}^{*})-A_{1}u^{*}(x_{k-1}^{*})-A_{2}\frac{du^{*}}{dx^{*}}(x_{k-1}^{*})}\
.$ (2.12)
In defining the last step size in equation (2.12) we use a linear
interpolation. As a consequence, we have that $x_{0}^{*}\approx
x_{k}^{*}-\Delta x_{0}^{*}$. We notice that the condition imposed by this
event locator converges to the correct condition (2.6) as the step size goes
to zero, cf. the second column of table 2.
The other case
$A_{1}u^{*}(s^{*})+A_{2}\frac{du^{*}}{dx^{*}}(s^{*})>A_{3}\ ,$ (2.13)
can be treated in a similar way. Of course, also in this second case the last
step size is smaller than the previous ones.
In the next section we apply the proposed non-iterative TM to three problems.
The reported numerical results were computed by the classical fourth-order
Runge-Kutta’s method, reported by Butcher [34, p. 166], coupled with the event
locator defined above.
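To make the computational scheme concrete, the following is a minimal Python sketch of the non-iterative TM: a backward fourth-order Runge-Kutta integration of the auxiliary IVP (2.5) combined with the linear-interpolation event locator (2.12). It assumes only NumPy; the function names and the closing usage example, which anticipates the obstacle problem of Section 3, are ours and serve purely as an illustration.

```python
import numpy as np

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step for y' = f(x, y).
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve_free_bvp(Omega, A1, A2, A3, B, C, s_star, dx):
    # Backward integration (dx < 0) of the auxiliary IVP (2.5) from the
    # freely chosen s*, stopped by the event locator (2.12).
    f = lambda x, y: np.array([y[1], Omega(y[0], y[1])])
    g = lambda y: A1 * y[0] + A2 * y[1]            # boundary functional in (2.2)
    x, y = s_star, np.array([B, C], dtype=float)
    g_old = g(y)
    for _ in range(10**7):
        y_new = rk4_step(f, x, y, dx)
        g_new = g(y_new)
        if (g_old - A3) * (g_new - A3) <= 0.0:     # condition (2.6) is crossed
            dx0 = dx * (A3 - g_old) / (g_new - g_old)   # smaller step (2.12)
            y = rk4_step(f, x, y, dx0)
            mu = x + dx0                           # similarity parameter (2.7)
            return s_star - mu, y[0], y[1]         # s from (2.8), data (2.9)
        x, y, g_old = x + dx, y_new, g_new
    raise RuntimeError("no boundary crossing detected")

# Obstacle problem of Section 3: theta = 0.1, u(0) = u0 = 1 (A1 = 1, A2 = 0).
theta = 0.1
s, u0, du0 = solve_free_bvp(lambda u, du: theta * np.sqrt(1.0 + du**2),
                            A1=1.0, A2=0.0, A3=1.0, B=0.0, C=0.0,
                            s_star=0.0, dx=-0.0125)
print(s, du0)   # approx. 4.43568 and -0.45826, cf. Table 1
```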
## 3 The obstacle problem on a string
For the obstacle problem on a string, depicted on figure 1 within the
$(x,u)$-plane where the $x$ axis is taken overlying to the obstacle, we have
to consider the following mathematical model, see Collatz [35] or Glashoff and
Werner, [36]
$\displaystyle\frac{d^{2}u}{dx^{2}}=\theta\left[1+\left(\frac{du}{dx}\right)^{2}\right]^{1/2}\
,\qquad x\in(0,s)$ $\displaystyle u(0)=u_{0}\ ,\qquad u(s)=\frac{du}{dx}(s)=0\
,$
where the positive value of $\theta$ depends on the string properties. In this
problem we have to find the position of a uniform string of finite length $L$
under the action of gravity. The string has fixed end points, say $(0,u_{0})$
and $(b,0)$, where $u_{0}>0$ and $b>0$. Furthermore, we assume that the
condition $L^{2}>\left(u_{0}^{2}+b^{2}\right)$ is fulfilled; this condition
allows us to define a free boundary $s$ for this problem, where $s$ is the
position at which the rope detaches from the obstacle.
The free BVP (3) was solved by the first author in [37] by iterative methods,
namely a shooting method and the iterative extension of the TM derived by
using the invariance with respect to a scaling group.
The exact solution of the free BVP (3) is given by
$\displaystyle u(x)=\theta^{-1}\left[\cosh\left(\theta\left(x-s\right)\right)-1\right]\ ,$ (3.2)
$\displaystyle s=\theta^{-1}\ln\left[\theta u_{0}+1+\left(\left(\theta u_{0}+1\right)^{2}-1\right)^{1/2}\right]\ ,$
from this we easily find
$\frac{du}{dx}(0)=\sinh(-\theta s)=\frac{1}{2}\left(e^{-\theta s}-e^{\theta
s}\right)\ ,$ (3.3)
and, therefore, for $\theta=0.1$ and $u_{0}=1$ from equations (3.2)-(3.3) we get
the values
$s=4.4356825433\ ,\qquad\frac{du}{dx}(0)=-0.458257569\ ,$ (3.4)
that are correct to nine decimal places.
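These reference values are easily reproduced from the closed-form expressions; the short Python check below (variable names ours) evaluates (3.2) and (3.3) directly.

```python
import math

theta, u0 = 0.1, 1.0
c = theta * u0 + 1.0                               # equals cosh(theta * s)
s = math.log(c + math.sqrt(c**2 - 1.0)) / theta    # free boundary, from (3.2)
du0 = math.sinh(-theta * s)                        # initial slope, from (3.3)
print(f"s = {s:.9f}, du/dx(0) = {du0:.9f}")
# prints s = 4.435682543 and du/dx(0) = -0.458257569, matching (3.4)
```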
Table 1: Convergence of numerical results for $\theta=0.1$. $\Delta x$ | $\frac{du}{dx}(0)$ | $e_{r}$ | $s$ | $e_{r}$
---|---|---|---|---
$-0.1$ | $-0.458227362$ | $6.59\mbox{D}-05$ | $4.435407932$ | $6.19\mbox{D}-05$
$-0.05$ | $-0.458250809$ | $1.47\mbox{D}-05$ | $4.435621088$ | $1.39\mbox{D}-05$
$-0.025$ | $-0.458255551$ | $4.40\mbox{D}-06$ | $4.435664194$ | $4.14\mbox{D}-06$
$-0.0125$ | $-0.458257313$ | $5.59\mbox{D}-07$ | $4.435680211$ | $5.26\mbox{D}-07$
$-0.00625$ | $-0.458257463$ | $2.31\mbox{D}-07$ | $4.435681576$ | $2.18\mbox{D}-07$
$-0.003125$ | $-0.458257538$ | $6.74\mbox{D}-08$ | $4.435682258$ | $6.43\mbox{D}-08$
$-0.0015625$ | $-0.458257565$ | $8.52\mbox{D}-09$ | $4.435682504$ | $9.05\mbox{D}-09$
Let us consider a convergence numerical test for our non-iterative TM. Table 1
reports the obtained numerical results for the missing initial condition and
the free boundary value for the free BVP (3) with $\theta=0.1$ and $u_{0}=1$,
as well as the corresponding relative errors denoted by $e_{r}$. An example of
the numerical solutions is shown in figure 1.
Figure 1: Picture of the numerical solution obtained for $\theta=0.1$,
$u_{0}=1$, and $b=4.5$; the obstacle is superimposed on the $x$ axis and is
displayed by a black solid line.
For this numerical solution, we applied a large step size in order to show the
mesh used and to emphasize how our event locator reduces the last step.
## 4 A dynamical free BVP
Suppose a particle of unitary mass is moving against a nonlinear force, given
by $-1-u-\left(\frac{du}{dx}\right)^{2}$, from the origin $u=0$ to a final
position $u=1$, our goal is to determine the duration of motion $s$ and the
initial velocity that assures that the particle is momentarily at rest at
$u=1$; see Meyer [8, pp. 97-99]. This problem can be formulated as follows
$\displaystyle\frac{d^{2}u}{dx^{2}}=-1-u-\left(\frac{du}{dx}\right)^{2}\
,\qquad x\in(0,s)$ $\displaystyle u(0)=0\ ,\qquad u(s)=1\
,\qquad\frac{du}{dx}(s)=0\ ,$
where $u$ and $x$ are the particle position and the time variable,
respectively; on the right-hand side of the governing differential equation we
have the nonlinear force acting on the particle and $s$ is the free boundary.
In table 2 we propose a numerical convergence test for decreasing values of
the step size.
Table 2: Convergence of numerical results for the free BVP dynamical model. $\Delta x$ | $u(0)$ | $\frac{du}{dx}(0)$ | $s$
---|---|---|---
$-0.1$ | $1.16\mbox{D}-02$ | $3.212263787$ | $0.867662139$
$-0.05$ | $3.54\mbox{D}-03$ | $3.240676696$ | $0.870143219$
$-0.025$ | $4.61\mbox{D}-04$ | $3.2516023692$ | $0.871089372$
$-0.0125$ | $1.90\mbox{D}-04$ | $3.252564659$ | $0.871172452$
$-0.00625$ | $5.42\mbox{D}-05$ | $3.253049203$ | $0.871214290$
$-0.003125$ | $9.25\mbox{D}-06$ | $3.253209165$ | $0.871228100$
$-0.0015625$ | $3.43\mbox{D}-06$ | $3.253229900$ | $0.871229890$
$-0.00078125$ | $5.12\mbox{D}-07$ | $3.253240276$ | $0.871230785$
$-0.000390625$ | $2.01\mbox{D}-07$ | $3.253241381$ | $0.871230881$
$-0.0001953125$ | $4.62\mbox{D}-08$ | $3.253241934$ | $0.871230929$
The obtained results can be contrasted with those reported by Meyer [8, pp.
97-99] where, by using the invariant imbedding method, he found $s=1.2651$ but
a value of $u(0)=0.0163$ instead of $u(0)=0$ as prescribed by the free BVP
(4). The behaviour of the solution can be seen in the figure 2.
Figure 2: Numerical solution for the dynamical free BVP (4) obtained with
$\Delta x=0.0125$.
Again we applied a large step size in order to show how our event locator
reduces the last step.
A free BVP similar to (4) was considered by Na [7, p. 88] where the nonlinear
force was replaced by $-u\exp(-u)$. However, in this case it is possible to
prove [33], using the conservation of energy principle, that the free BVP has
countable infinite many solutions, with the missing initial conditions given
by
$\frac{du}{dx}(0)=\pm 0.726967811\ .$ (4.2)
## 5 Length estimation for tubular flow reactors
Roughly speaking, a tubular flow chemical reactor can be seen as a device
where a material A, introduced on one side, undergoes a chemical reaction along
its passage inside the reactor, so that at the exit we get a product
B plus a residual part of A; see figure 3. An $n$th order chemical reactor is
usually indicated with the notation A${}^{n}\rightarrow$ B.
Figure 3: Schematic set-up of a tubular flow reactor.
A free BVP for a tubular reactor can be formulated as
$\displaystyle\frac{d^{2}u}{dx^{2}}=N_{Pe}\left(\frac{du}{dx}+Ru^{n}\right)\
,$ $\displaystyle u(0)-\frac{1}{N_{Pe}}\frac{du}{dx}(0)=1\ ,\qquad u(s)=\tau\
,\quad\frac{du}{dx}(s)=0\ ,$
where $u(x)$ is the ratio between the concentration of the reactant A at a
distance $x$ and the concentration of it at $x=0$, $N_{Pe}$, $R$, $n$ and
$\tau$ are, the Peclet group, the reaction rate group, the order of the
chemical reaction and the residual fraction of reactant A at exit,
respectively. Moreover, $N_{Pe}$ and $R$ are both greater than zero. Finally,
for the free BVP (5), the free boundary $s$ is the length of the flow reactor
we are trying to estimate.
This is an engineering problem that consists in determining the optimal length
of a tubular flow chemical reactor with axial mixing and has already been
treated by Fazio in [32], through an iterative TM, whereas Fazio in [38] made
a comparison between the results obtained with a shooting method and the upper
bound of the free boundary value obtained by a non-iterative TM.
Here, for the sake of comparing the numerical results, we fix the parameters
as follows: $N_{Pe}=6$, $R=2$, $n=2$, and $\tau=0.1$. We apply the algorithm
outlined above to the numerical solution of the free BVP (5). Table 3 shows a
numerical convergence test for decreasing values of the step size.
Table 3: Convergence of numerical results. $N_{pe}=6$, $R=2$, $n=2$, and $\tau=0.1$. $\Delta x$ | $u(0)$ | $\frac{du}{dx}(0)$ | $s$
---|---|---|---
$-0.1$ | $0.829314641$ | $-1.008175212$ | $5.117905669$
$-0.05$ | $0.830537187$ | $-1.010745699$ | $5.119104349$
$-0.025$ | $0.831147822$ | $-1.012077034$ | $5.119707352$
$-0.0125$ | $0.831227636$ | $-1.012251496$ | $5.119786158$
$-0.00625$ | $0.831267467$ | $-1.012338738$ | $5.119825502$
$-0.003125$ | $0.831271635$ | $-1.012347868$ | $5.119829619$
$-0.0015625$ | $0.831273719$ | $-1.012352436$ | $5.119831678$
$-0.00078125$ | $0.831274182$ | $-1.012353449$ | $5.119832135$
$-0.000390625$ | $0.831274327$ | $-1.012353767$ | $5.119832278$
$-0.0001953125$ | $0.831274348$ | $-1.012353814$ | $5.119832299$
The obtained numerical results are reported in table 4 and compared with
numerical results available in literature.
Table 4: Comparison of numerical results for the tubular flow reactor model. | $u(0)$ | $\frac{du}{dx}(0)$ | $s$
---|---|---|---
iterative TM [32] | $0.831280$ | $-1.012298$ | $5.121648$
shooting method [38] | $0.831274$ | $-1.012354$ | $5.119832$
non-iterative TM | $0.831274$ | $-1.012354$ | $5.119832$
As is easily seen, the computed values are in good agreement with the ones
found in [32] and [38]. The behaviour of the solution can be seen in figure 4.
Figure 4: Numerical solution for length estimation of a tubular flow reactor
obtained with $\Delta x=0.1$.
Once again, we used a large step size to make clear how our event locator
reduces the last step size.
## 6 Conclusion
In closing, we can outline some further implications arising from this
work. First of all, the algorithm proposed in this paper can be extended to
free BVPs governed by a system of first order autonomous differential
equations belonging to the general class of problems
$\displaystyle{\displaystyle\frac{d{\bf u}}{dx}}={\bf q}\left({\bf u}\right)\
,\quad x\in[0,\infty)\ ,$ $\displaystyle u_{j}(0)=u_{j0}\ ,\qquad{\bf
u}(s)={\bf u}_{s}\ ,$
where ${\bf u}(x),{\bf u}_{s}\in\mathbb{R}^{d}$, ${\bf q}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$,
with $d\geq 1$, $j\in\\{1,\dots,d\\}$, $u_{j0}$ and all components of ${\bf
u}_{s}$ are given constants and $s$ is the free boundary.
Moreover, our algorithm can be applied by using an integrator from the MATLAB
ODE suite written by Shampine and Reichelt [39], and available with the latest
releases of MATLAB, with the event locator option command set in
options = odeset(’Events’,@name)
where “name” is an external, problem dependent, event function.
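An analogous facility exists in Python through the `events` argument of SciPy's `solve_ivp`. As a hedged illustration (assuming SciPy is available; the integration window of 10 units and the tolerances are arbitrary choices of ours), the obstacle problem of Section 3 can be treated as follows.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta, u0, s_star = 0.1, 1.0, 0.0

def rhs(x, y):
    # Obstacle problem as a first-order system: u' = v, v' = theta*sqrt(1 + v^2).
    return [y[1], theta * np.sqrt(1.0 + y[1] ** 2)]

def boundary(x, y):
    # Zero of A1*u + A2*u' - A3; here simply u - u0, as in condition (2.6).
    return y[0] - u0

boundary.terminal = True        # stop the backward integration at the event

sol = solve_ivp(rhs, (s_star, s_star - 10.0), [0.0, 0.0], events=boundary,
                max_step=0.0125, rtol=1e-10, atol=1e-12)
x0_star = sol.t_events[0][0]    # the located point x0*
print("s =", s_star - x0_star)  # approx. 4.435682543, cf. (3.4)
```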
As mentioned in the introduction, the first application of a non-iterative TM
was defined by Töpfer in [12] more than a century ago. In this paper, by
considering the invariance with respect to a translation group, we have
investigated a possible way to solve a large class of free BVPs by a non-
iterative TM.
However, it is a simple matter to show a differential equation not admitting
any group of transformations: e.g. the differential equation considered by
Bianchi [40, pp. 470-475]. Consequently, it is easy to realize that non-
iterative TMs cannot be extended to every BVP. Therefore, non-iterative TMs
are ad hoc methods. Their applicability depends on the invariance properties
of the governing differential equation and the given boundary conditions.
On the other hand, free BVPs governed by the most general second order
differential equation, in normal form, can be solved iteratively by extending
a scaling group via the introduction of a numerical parameter so as to recover
the original problem as the introduced parameter goes to one, see Fazio [25,
26, 33, 41]. The extension of this iterative TM to problems in boundary layer
theory has been considered in [42, 43, 44, 45]. Moreover, a further extension
to the sequence of free BVPs obtained by a semi-discretization of parabolic
moving boundary problems was reported in [46].
## References
* [1] C.H. Edwards. Newton’s nose-cone problem. Mathematica J., 7:64–71, 1997.
* [2] R. Fazio. A non-iterative transformation method for Newton's free boundary problem. Int. J. Non-Linear Mech., 59:23–27, 2014.
* [3] J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Springer-Verlag, Berlin, 1980.
* [4] U. M. Ascher and R. D. Russell. Reformulation of boundary value problems into “standard” form. SIAM Rev., 23:238–254, 1981.
* [5] U. M. Ascher, R. M. M. Mattheij, and R. D. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Prentice Hall, Englewood Cliffs, New Jersey, 1988.
* [6] H. G. Landau. Heat conduction in melting solid. Quart. Appl. Math., 8:81–94, 1950.
* [7] T. Y. Na. Computational Methods in Engineering Boundary Value Problems. Academic Press, New York, 1979.
* [8] G. H. Meyer. Initial Value Methods for Boundary Value Problems; Theory and Application of Invariant Imbedding. Academic Press, New York, 1973.
* [9] G. W. Bluman and J. D. Cole. Similarity Methods for Differential Equations. Springer, Berlin, 1974.
* [10] L. Dresner. Similarity Solutions of Non-linear Partial Differential Equations, volume 88 of Research Notes in Math. Pitman, London, 1983.
* [11] G. W. Bluman and S. Kumei. Symmetries and Differential Equations. Springer, Berlin, 1989.
* [12] K. Töpfer. Bemerkung zu dem Aufsatz von H. Blasius: Grenzschichten in Flüssigkeiten mit kleiner Reibung. Z. Math. Phys., 60:397–398, 1912.
* [13] S. Goldstein. Modern Developments in Fluid Dynamics. Clarendon Press, Oxford, 1938.
* [14] A. Acrivos, M. J. Shah, and E. E. Petersen. Momentum and heat transfer in laminar boundary-layer flows of non-newtonian fluids past external surfaces. AIChE J., 6:312–317, 1960.
* [15] M. S. Klamkin. On the transformation of a class of boundary value problems into initial value problems for ordinary differential equations. SIAM Rev., 4:43–47, 1962.
* [16] T. Y. Na. Transforming boundary conditions to initial conditions for ordinary differential equations. SIAM Rev., 9:204–210, 1967.
* [17] T. Y. Na. Further extension on transforming from boundary value to initial value problems. SIAM Rev., 20:85–87, 1968.
* [18] T. Y. Na and S. C. Tang. A method for the solution of heat conduction with nonlinear heat generation. Z. Angew. Math. Mech., 49 $\frac{1}{2}$:45–52, 1969.
* [19] G. G. Belford. An initial value problem approach to the solution of eigenvalue problems. SIAM J. Numer. Anal., 6:99–103, 1969.
* [20] W. F. Ames and E. Adams. Exact shooting and eigenparameter problems. Nonlinear Anal., 1:75–82, 1976.
* [21] W. F. Ames and E. Adams. Non-linear boundary and eigenvalue problems for the Emden-Fowler equations by group methods. Int. J. Non-linear Mech., 14:35–42, 1979.
* [22] M. S. Klamkin. Transformation of boundary value problems into initial value problems. J. Math. Anal. Appl., 32:308–330, 1970.
* [23] T. Y. Na. An initial value method for the solution of a class of nonlinear equations in fluid mechanics. J. Basic Engrg. Trans. ASME, 92:503–509, 1970.
* [24] T. C. Scott, G. L. Rinschler, and T. Y. Na. Further extensions of an initial value method applied to certain nonlinear equations in fluid mechanics. J. Basic Engrg. Trans. ASME, 94:250–251, 1972.
* [25] R. Fazio and D. J. Evans. Similarity and numerical analysis for free boundary value problems. Int. J. Computer Math., 31:215–220, 1990; 39:249, 1991.
* [26] R. Fazio. Normal variables transformation method applied to free boundary value problems. Int. J. Computer Math., 37:189–199, 1990.
* [27] P. L. Sachdev. Nonlinear Ordinary Differential Equations and their Applications. Marcel Dekker, New York, 1991.
* [28] J. Serrin. Existence theorems for some compressible boundary layer problems. In Proc. of the Conference on Qualitative Theory of Nonlinear Differential and Integral Equations, volume 5 of SIAM studies in Applied Mathematics, pages 35–42, 1970.
* [29] R. E. Meyer. Introduction to Mathematical Fluid Dynamics. Wiley, New York, 1971.
* [30] S. P. Hastings and J. B. McLeod. Classical Methods in Ordinary Differential Equations With Applications to Boundary Value Problems, volume 129 of Graduate Studies in Mathematics. American Mathematical Society, Providence, 2012.
* [31] L. A. Rubel. An estimation of the error due to the truncated boundary in the numerical solution of the Blasius equation. Quart. Appl. Math., 13:203–206, 1955.
* [32] R. Fazio. The iterative transformation method and length estimation for tubular flow reactors. Appl. Math. Comput., 42:105–110, 1991.
* [33] R. Fazio. A numerical test for the existence and uniqueness of solution of free boundary problems. Appl. Anal., 66:89–100, 1997.
* [34] J. C. Butcher. The Numerical Analysis of Ordinary Differential Equations, Runge-Kutta and General Linear Methods. Whiley, Chichester, 1987.
* [35] L. Collatz. Monotonicity of free boundary value problems. In A. Dold and B. Eckmann, editors, Numerical Analysis, pages 31–45. Springer, Berlin, 1980. Lecture Notes in Mathematics, v. 773.
* [36] K. Glashoff and B. Werner. Inverse monotonicity of monotone L-operators with applications to quasilinear and free boundary problems. J. Math. Anal. Appl., 72:89–105, 1979.
* [37] R. Fazio. A free boundary test problem for a non-iterative transformation method and a shooting method. Atti Accad. Peloritana Pericolanti Cl. Sci. Fis. Mat. Natur., LXVIII:141–151, 1991.
* [38] R. Fazio. Numerical length estimation for tubular flow reactors. J. Comput. Appl. Math., 41:313–321, 1992.
* [39] L. F. Shampine and M. W. Reichelt. The MATLAB ODE suite. SIAM J. Sci. Comput., 18:1–22, 1997.
* [40] L. Bianchi. Lezioni sulla Teoria dei Gruppi Continui di Trasformazioni. Spoerri, Pisa, 1918.
* [41] R. Fazio. A similarity approach to the numerical solution of free boundary problems. SIAM Rev., 40:616–635, 1998.
* [42] R. Fazio. The Falkner-Skan equation: numerical solutions within group invariance theory. Calcolo, 31:115–124, 1994.
* [43] R. Fazio. A novel approach to the numerical solution of boundary value problems on infinite intervals. SIAM J. Numer. Anal., 33:1473–1483, 1996.
* [44] R. Fazio. Numerical transformation methods: Blasius problem and its variants. Appl. Math. Comput., 215:1513–1521, 2009.
* [45] R. Fazio. Blasius problem and Falkner-Skan model: Töpfer’s algorithm and its extension. Comput. & Fluids, 73:202–209, 2013.
* [46] R. Fazio. The iterative transformation method: numerical solution of one-dimensional parabolic moving boundary problems. Int. J. Computer Math., 78:213–223, 2001.
|
2024-09-04T02:54:59.350369 | 2020-03-11T19:05:24 | 2003.05489 | {
"authors": "Thomas Gerard, Christopher Parsonson, Zacharaya Shabka, Polina Bayvel,\n Domani\\c{c} Lavery, Georgios Zervas",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26176",
"submitter": "Thomas Gerard",
"url": "https://arxiv.org/abs/2003.05489"
} | arxiv-papers | SWIFT: Scalable Ultra-Wideband Sub-Nanosecond Wavelength Switching for Data
Centre Networks
Thomas Gerard*, Christopher Parsonson, Zacharaya Shabka, Polina Bayvel,
Domaniç Lavery and Georgios Zervas
Optical Networks Group, Dept. of Electronic and Electrical Engineering,
University College London, London, UK, WC1E 7JE.
<EMAIL_ADDRESS>
###### Abstract
We propose a time-multiplexed DS-DBR/SOA-gated system to deliver low-power
fast tuning across S-/C-/L-bands. Sub-ns switching is demonstrated, supporting
122$\times$50 GHz channels over 6.05 THz using AI techniques.
OCIS codes: 140.3600 Lasers, tunable, 060.6718 Switching, circuit, 060.1155
All-optical networks
## 1 Introduction
The most common data center network (DCN) packet length is $<$256 bytes which
translates to 20 ns slots in 100G links [1]. Optical circuit switching (OCS)
aims to transform DCNs but needs to operate at packet
speed and granularity [2]. Recent breakthroughs have brought OCS closer to
reality. A hardware-based OCS scheduling algorithm has demonstrated
synchronous scheduling of up to 32,768 nodes within 2.3 ns [2]. A clock phase
caching method has enabled clock and data recovery in less than 625 ps,
supporting 10,000 remote nodes [1]. Yet, energy-efficient, sub-ns, many-
channel optical switching remains a challenge. Wideband fast tuneable lasers
have demonstrated switching on ns timescales [4, 5], and as low as 500 ps but
over limited bandwidths [6]. Static laser diodes (LDs) gated by semiconductor
optical amplifiers (SOAs) have achieved 912 ps 10-90% rise-fall times with
$\sim$2 ns settling time ($\pm 5\%$ of the target value) [3]; however, the
power consumption and device count limit the scalability of this approach. A
similar method used an optical comb where each wavelength was filtered then
gated by an SOA [7]; the power consumption and device count therefore also
increase linearly with number of channels, limiting scalability (see Fig.
1(b)).
In this paper, we introduce SWIFT: a modular system with Scalable Wideband
Interconnects for Fast Tuning. SWIFT combines pairs of optimised widely
tuneable lasers (TLs), multiplexing their wavelength reconfiguration on packet
timescales. The lasers are gated by pairs of fast switching SOAs, resulting in
wideband, sub-ns switching. The modular design of SWIFT (Fig. 1(a)) shows that
just two lasers and two SOAs cover each optical transmission band. SWIFT power
consumption is, therefore, practically independent of channel count; Fig. 1(b)
shows that SWIFT becomes more power efficient than alternative sub-ns
switching sources beyond 8$\times$50 GHz spaced channels.
Fig. 1: (a) Modular SWIFT architecture across S-, C- and L-bands. (b) Power
consumption comparison of laser switch designs vs. no. of channels, using data
reported in [8]. (c) PULSE DCN architecture with SWIFT modules (in red). (d)
Comparison of switching times (reported rise (solid) and estimated settling
(faded)) against no of channels for different switch systems.
The SWIFT modules can be deployed as transmitters in DCN architectures such as
PULSE [8], as shown in Fig. 1(c). In this architecture, each node has $x$
SWIFT transmitters (highlighted in red), each local pod has $N$ nodes, and
$x^{2}$ star couplers enable there to be $x$ source and $x$ destination pods.
Thus, the PULSE network’s number of end-points scales with $N\times x$, where $N$
is limited by the number of wavelength channels. The large number of channels
supported by SWIFT therefore allows for significant scalability in the PULSE
DCN [2].
The concept of time-multiplexed, fast tuneable lasers was proposed in [8, 9],
but faced the challenge of optimising multiple lasers and SOAs for reliable
fast tuning. SWIFT overcomes this by applying artificial intelligence (AI)
techniques to the devices, enabling autonomous optimisation. This has allowed
us to demonstrate, for the first time, a time-multiplexed, gated laser tuning
system that can tune over 6.05 THz of bandwidth and consistently switch in 547
ps or better to support 20 ns timeslots. SWIFT outperforms other fast
switching systems in terms of rise time, settling time and channel count, as
shown in Fig 1(d).
## 2 Experimental Setup
The setup used to demonstrate SWIFT is shown in Fig. 2(a). A pair of
commercial Oclaro (now Lumentum) digital-supermode distributed Bragg reflector
(DS-DBR) lasers were driven by 250 MS/s arbitrary waveform generators (AWGs)
with 125 MHz bandwidth. Detailed IV measurements were used to map supplied
voltage to desired current. Each laser was connected to a commercial InPhenix
SOA, supporting 69 nm of bandwidth with typical characteristics of 7 dB noise
figure, 20 dB gain, and 10 dBm saturation power. Each SOA was driven with a 45
mA current source modulated by a 12 GS/s AWG with $\pm$0.5 V output and
amplified to $\pm$4 V using an electrical amplifier. All four optical devices
were held at 25∘C using temperature controllers. The SOAs were coupled
together and passed to a digital coherent receiver (50 GS/s, 22 GHz bandwidth)
and a digital sampling oscilloscope (50 GS/s, 30 GHz bandwidth), which
provided optimisation feedback to the DS-DBR lasers and to the SOAs
respectively.
Fig. 2: (a) Experimental setup of time-multiplexed SWIFT tuneable lasers (TL)
gated by SOAs. (b) TL frequency offset (FO) of worst-case current swing w/ &
w/o optimiser. (c) CDF of all worst-case laser switch combinations w/ & w/o
optimiser.
## 3 Results and Discussion
### 3.1 Regression optimised laser switching
Fast wavelength switching can be achieved by applying ‘pre-emphasis’ to the
drive sections of an integrated semiconductor laser. Until recently, pre-
emphasis values had to be carefully tuned by hand for select samples then
extrapolated [4]. Here, we apply a linear regression optimiser to
automatically calculate the pre-emphasis values for reliable fast tuning. We
measured the output of the DS-DBR laser during a switching event using the
coherent receiver, then used the instantaneous frequency response as the error
term within a linear regression optimiser to iteratively update the applied
pre-emphasis values [5]. Fig 2(b) shows an example of the laser’s switching
response before and after application of the optimiser. We applied this
optimiser to 21 of the 122 supported channels, testing the extremes of lasing
frequency and drive current, covering 462 any-to-any switching events across
6.05 THz (1524.11-1572.48 nm). Fig. 2(c) shows the cumulative distribution of
the time taken to reach $\pm$5 GHz of the target wavelength. We measure a
worst case switch time of 14.7 ns, and a worst case frequency offset after 20
ns of $-$4.5 GHz. This indicates that SWIFT is potentially suitable for burst
mode coherent detection, as 28 GBd dual-polarisation quadrature phase shift
keying is tolerant of frequency offsets up to $\pm$7 GHz [10].
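Purely to illustrate the flavour of such a regression-based update (the actual optimiser and its experimental feedback loop are described in [5]), the toy Python loop below fits the measured frequency error as a linear function of a scalar pre-emphasis amplitude and steps to the predicted zero; the stand-in `measure` function and every constant are ours, not a model of the device.

```python
import numpy as np

rng = np.random.default_rng(1)
true_best = 3.2   # "unknown" ideal pre-emphasis amplitude of the toy laser
measure = lambda a: 5.0 * (a - true_best) + 0.05 * rng.normal()  # offset, GHz

amps = [0.0, 1.0]                 # two initial probe amplitudes
errs = [measure(a) for a in amps]
for _ in range(20):
    # Regress frequency error on amplitude, then move to the amplitude
    # predicted to give zero residual frequency offset.
    slope, intercept = np.polyfit(amps, errs, 1)
    amps.append(-intercept / slope)
    errs.append(measure(amps[-1]))
print(f"amplitude: {amps[-1]:.3f}, residual offset: {errs[-1]:.3f} GHz")
```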
### 3.2 Particle swarm optimised SOA switching
SOA driving signals must also be optimised to approach their theoretical
rise/fall times of $\sim 100$ ps. Previous optimisation attempts considered
neither settling times nor the ability to automate the optimisation of
driving conditions for 1,000s of different SOAs in real DCNs [11, 12]. To
solve this, PSO (a population-based metaheuristic for optimising continuous
nonlinear functions by combining swarm theory with evolutionary programming)
was used in this work to optimise the SOA driving signals. PSO has previously
been applied to proportional-integral-derivative (PID) tuning in control
theory [13], but has not yet been used as an autonomous method for optical
switch control. In the optimisation, $n=160$ particles (driving signals) were
initialised in an $m=240$ (number of points in the signal) hyperdimensional
search space and iteratively ‘flown’ through the space by evaluating each
particle’s position with a fitness function $f$, defined as the mean squared
error between the drive signals’ corresponding optical outputs (recorded on
the oscilloscope) and an ideal target ‘set point’ (SP) with 0 overshoot,
settling time and rise time. As shown in Fig. 3(a) and (b), the $\pm$ 5%
settling time (effective switching time) of the SOA was reduced from 3.72 ns
(when driven by a simple square driving signal) to 547 ps, with the 10-90%
rise time also reduced from 697 ps to 454 ps. The PSO routine required no
prior knowledge of the SOA, therefore provides a flexible, automated and
scalable method for optimising SOA gating.
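The essentials of the routine can be sketched in a few lines of Python. In the minimal example below, a toy first-order low-pass response stands in for the SOA (it is not a device model); $n=160$ particles and $m=240$ signal points match the text, while the inertia and acceleration constants are generic choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, iters = 240, 160, 100          # signal points, particles, PSO iterations

def soa_output(drive):
    # Toy first-order dynamics standing in for the SOA response.
    out, y = np.empty_like(drive), 0.0
    for i, d in enumerate(drive):
        y += 0.2 * (d - y)           # low-pass behaviour: rise/settling time
        out[i] = y
    return out

set_point = np.ones(m)               # ideal gate: no overshoot, rise or settle
fitness = lambda d: np.mean((soa_output(d) - set_point) ** 2)

pos = rng.uniform(0.0, 2.0, size=(n, m))   # particle positions = drive signals
vel = np.zeros((n, m))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, m)), rng.random((n, m))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 2.0)     # keep drive within the DAC range
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best fitness (MSE to set point):", pbest_f.min())
```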
Fig. 3: SOA outputs showing (a) step & (b) PSO rise & settling times, (c)
SWIFT system output, (d) $\lambda$-to-$\lambda$ 90-90% switching times, (e)
optical spectrum of the 21 worst-case channels, (f) frequency offset (FO) of
DSDBR (top) & SWIFT (bottom).
### 3.3 SWIFT module demonstration
After optimisation, the DS-DBR lasers were driven with 12.5 MHz pre-
emphasised square waves, resulting in $\leq$40 ns bursts on each wavelength.
The lasers were driven 20 ns out of phase, so that one lased while the other
reconfigured. The SOAs were driven by 25 MHz PSO-optimised signals, resulting
in 20 ns gates, and aligned to block the first 15 ns and last 5 ns of each
laser burst, yielding four wavelength bursts of 20 ns each (see Fig. 2(a)).
Fig. 3(c) shows the oscilloscope output for the most difficult switching
instance, where DS-DBR laser 1 switched from 1572.48 nm to 1524.11 nm,
incurring a large rear current swing of 45 mA. The oscilloscope shows a flat
intensity response across each wavelength for 20 ns bursts, thereby providing
twice the granularity reported in [3]. Packet-to-packet power variations are
due to slight variations in laser wavelength power; these can be addressed by
applying slot specific SOA drive currents (not possible in our setup).
Measuring switch time by the 90-90% transition time, we report switch times
for the four transitions of 771, 812, 521, and 792 ps, respectively. These are
shown in Fig. 3(d). Furthermore, Fig. 3(f) shows the coherent receiver output
of the four wavelength slots with and without gating. The observed frequency
ripples are a result of the low sample rate of our 250 MS/s AWG, which introduces
Fourier components to the driving square wave; these can be easily suppressed
by using a higher sample rate. Despite this, each slot stays within 5 GHz of
its target.
We repeated this process for each of the channels under test. Fig. 3(e) shows
the optical spectrum for all channels, all undergoing gated switching. We
measured a worst case value for the side mode suppression ratio of 35 dB,
optical power output of 0.8 dBm for a single wavelength (at 1572.48 nm) and
corresponding extinction ratio of 22 dB. The fully time-multiplexed optical
output power of SWIFT was $>$6 dBm. This represents the largest number of sub-
ns switching channels from a single sub-system ever reported, supporting
122$\times{}$50 GHz spaced channels.
In conclusion, we propose a scalable, low power, tuneable wavelength subsystem
capable of sub-ns switching. Using pairs of time-multiplexed tuneable lasers,
gated by SOAs, we have experimentally demonstrated switching times of less
than 900 ps for 122 x 50 GHz channels. Reliable and fast tuning was achieved
for each laser and SOA using regression and particle swarm optimisation AI
techniques. This enables automated, device-specific optimisation and
represents a critically important technology in OCS architectures, potentially
transforming DCN architectures.
This work is supported by EPSRC (EP/R035342/1), IPES CDT, iCASE and Microsoft
Research.
## References
* [1] K. Clark, et al., “Sub-Nanosecond Clock and Data Recovery in an Optically-Switched Data Centre Network”, ECOC, pdp, 2018.
* [2] J. Benjamin, et al., “Scaling PULSE Datacenter Network Architecture and Scheduling Optical Circuits in Sub-$\mu$seconds”, OFC, W1F.3, 2020.
* [3] K. Shi, et al., “System Demonstration of Nanosecond Wavelength Switching with Burst-mode PAM4 Transceiver,” ECOC, pdp, 2019.
* [4] J. Simsarian, et al., “Less than 5-ns wavelength switching with an SG-DBR laser”, PTL, 18(4), 2006.
* [5] T. Gerard et al., “Packet Timescale Wavelength Switching Enabled by Regression Optimisation,” arXiv:2002.11640v1 [eess.SP] 2020.
* [6] Y. Ueda, et al., “Electro-Optically Tunable Laser with $<$10-mW Tuning Power Dissipation and High-Speed $\lambda$-Switching for Coherent Networks”, ECOC, pdp, 2019.
* [7] S. Lange et al., “Sub-Nanosecond Optical Switching Using Chip-Based Soliton Microcombs,” in _OFC_ , W2A.4, 2020.
* [8] J. Benjamin, et al., “PULSE: Optical Circuit Switched Data Center Architecture Operating at Nanosecond Timescales”, arXiv:2002.04077v1 [cs.N1], 2020.
* [9] N. Ryan, et al., “A 10Gbps Optical Burst Switching Network Incorporating Ultra-fast (5ns) Wavelength Switched Tunable Laser”, ICSO, 2008.
* [10] J. Simsarian, “Fast-Tuning Coherent Burst-Mode Receiver for Metropolitan Networks”, PTL, 26(8), 2014.
* [11] C. Gallep and E. Conforti, “Reduction of semiconductor optical amplifier switching times by preimpulse step-injected current technique,”, _PTL_ , 14(7), 2002.
* [12] R. C. Figueiredo et al., “Hundred-Picoseconds Electro-Optical Switching With Semiconductor Optical Amplifiers Using Multi-Impulse Step Injection Current,”, _JLT_ , 13(1), 2015.
* [13] D. H. Kusuma et al., “The comparison of optimization for active steering control on a vehicle using PID controller based on artificial intelligence techniques,” in _International Seminar on Applications for Technology of Information and Communication_ , 2016.
|
2024-09-04T02:54:59.358682 | 2020-03-11T19:15:47 | 2003.05492 | {
"authors": "Philippe Gagnon and Florian Maire",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26177",
"submitter": "Philippe Gagnon",
"url": "https://arxiv.org/abs/2003.05492"
} | arxiv-papers |
# Lifted samplers for partially ordered discrete state-spaces
Philippe Gagnon 1, Florian Maire 1
###### Abstract
A technique called lifting is employed in practice for avoiding that the
Markov chains simulated for sampling backtrack too often. It consists in
lifting the state-space to include direction variables for guiding these
chains. Its implementation is direct when the probability mass function
targeted is defined on a totally ordered set, such as that of a univariate
random variable taking values on the integers. In this paper, we adapt this
technique to the situation where only a partial order can be established and
explore its benefits. Important applications include simulation of systems
formed from binary variables, such as those described by the Ising model, and
variable selection when the marginal model probabilities can be evaluated, up
to a normalising constant. To accommodate for the situation where one does not
have access to these marginal model probabilities, a lifted trans-dimensional
sampler for partially ordered model spaces is introduced. We show through
theoretical analyses and empirical experiments that the lifted samplers
outperform their non-lifted counterparts in some situations, and this at no
extra computational cost. The code to reproduce all experiments is available
online.111See the ArXiv page of this paper.
1Department of Mathematics and Statistics, Université de Montréal.
Keywords: Bayesian statistics; binary random variables; Ising model; Markov
chain Monte Carlo methods; Peskun-Tierney ordering; trans-dimensional
samplers; variable selection.
## 1 Introduction
### 1.1 Partially ordered state-spaces
A partially ordered set $\bm{\mathcal{X}}$ is such that there exists a
reflexive, antisymmetric, and transitive binary relation defined through a set
$\bm{\mathcal{R}}$ in which it is possible to establish that
$\mathbf{x}\leq\mathbf{y}$ and $\mathbf{y}\geq\mathbf{x}$ as a consequence of
$(\mathbf{x},\mathbf{y})\in\bm{\mathcal{R}}$ for
$\mathbf{x},\mathbf{y}\in\bm{\mathcal{X}}$. In such a set, pairs are
comparable when either $\mathbf{x}<\mathbf{y}$ or $\mathbf{y}<\mathbf{x}$, and
are incomparable when neither $\mathbf{x}<\mathbf{y}$ nor
$\mathbf{y}<\mathbf{x}$. This represents the difference with totally ordered
sets such as $\operatorname{\mathbb{R}}$ or $\mathbb{N}$ for which every pair
of different elements is comparable. We refer the reader to Trotter (1992) for
more details on partially ordered sets.
An important example of such sets is when any $\mathbf{x}\in\bm{\mathcal{X}}$
can be written as a vector $\mathbf{x}:=(x_{1},\ldots,x_{n})$ for which each
component $x_{i}$ can be of two types, say Type A or Type B, denoted by
$x_{i}\in\\{\text{A},\text{B}\\}$. In this case, $\bm{\mathcal{R}}$ can be set
to
$\bm{\mathcal{R}}:=\\{(\mathbf{x},\mathbf{y})\in\bm{\mathcal{X}}\times\bm{\mathcal{X}}:n_{\text{A}}(\mathbf{x})\leq
n_{\text{A}}(\mathbf{y})\\},$
where $n_{\text{A}}(\mathbf{x})$ is the number of components of Type A in
$\mathbf{x}$:
$n_{\text{A}}(\mathbf{x})=\sum_{i=1}^{n}\mathds{1}_{x_{i}=\text{A}}$. The
function $n_{\text{B}}(\mathbf{x})$ is defined analogously. Note that
$n=n_{\text{A}}(\mathbf{x})+n_{\text{B}}(\mathbf{x})$ for all $\mathbf{x}$ and
that the order can be symmetrically established by instead considering
$n_{\text{B}}(\mathbf{x})$ and $n_{\text{B}}(\mathbf{y})$.
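In code, this relation reduces to comparing counts of Type A components; a minimal Python illustration (function names ours) for vectors over $\\{\text{A},\text{B}\\}$:

```python
def n_A(x):
    # Number of Type A components in the vector x.
    return sum(xi == "A" for xi in x)

def less_than(x, y):
    # x < y in the order induced by R: strictly fewer Type A components.
    return n_A(x) < n_A(y)

x, y, z = ("A", "B", "B"), ("A", "A", "B"), ("B", "A", "B")
print(less_than(x, y))                     # True : x < y, a comparable pair
print(less_than(x, z) or less_than(z, x))  # False: x and z are incomparable
```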
Two main statistical problems fit within this framework: simulation of binary
random variables such as graphs or networks and variable selection. Indeed,
for the former, $\bm{\mathcal{X}}$ can be parameterized such that
$\bm{\mathcal{X}}=\\{-1,+1\\}^{n}$, where for example for an Ising model,
$x_{i}\in\\{-1,+1\\}$ represents the state of a spin, or, for networks,
$\bm{\mathcal{X}}=\mathcal{M}_{n}(\\{0,1\\})$ where $x_{i,j}\in\\{0,1\\}$
indicates whether nodes $i$ and $j$ are connected or not. For variable
selection, $\bm{\mathcal{X}}=\\{0,1\\}^{n}$ and $x_{i}\in\\{0,1\\}$ indicates
whether or not the $i$th covariate is included in the model characterised by
the vector $\mathbf{x}\in\bm{\mathcal{X}}$.
### 1.2 Sampling on partially ordered state-spaces
Let $\pi$ be a probability distribution defined on a measurable space
$(\bm{\mathcal{X}},\bm{\mathsf{X}})$ where $\bm{\mathcal{X}}$ is a partially
ordered set and $\bm{\mathsf{X}}$ a sigma algebra on $\bm{\mathcal{X}}$ and
consider the problem of sampling from $\pi$. We assume that it is not possible
to generate independent draws from $\pi$. Given the complex dependency
structure of modern statistical models such as graphical models and the
intractable nature of some distributions that arise, for instance, in Bayesian
statistics, this is a realistic assumption. We turn to Markov chains based
sampling methods which typically rely on an ergodic stochastic process
$\\{\mathbf{X}(m)\,:\,m\in\mathbb{N}\\}$ whose limiting distribution is $\pi$.
A typical Markov chain based sampler, such as the Glauber dynamics for
graphical models or the tie-no-tie sampler for network models, selects
uniformly at random one of the coordinates of $\mathbf{x}$, say $x_{i}$, and
proposes to flip its value from B to A (if $x_{i}=\text{A}$), and accept or
reject this move according to a prescribed probability that guarantees that
the Markov chain limiting distribution is $\pi$. Such moves are often rejected
when the mass concentrates on a manifold of the ambient space. Zanella (2019)
recently proposed a locally informed generic approach for which the
probability to select the $i$th coordinate depends on the relative mass of the
resulting proposal, i.e. $\pi(\mathbf{y})/\pi(\mathbf{x})$, aiming at
proposing less naive moves. Yet, the sampler is of reversible
Metropolis–Hastings (MH, (Metropolis et al., 1953; Hastings, 1970)) type,
implying that the chain may often go back to recently visited states. When
this is the case, the state-space is explored through a random walk, a process
exhibiting a diffusive behaviour.
The lifting technique, which can be traced back to Gustafson (1998) and even
to Horowitz (1991), aim at producing Markov chains which do not suffer from
such a behaviour. This is achieved by introducing a momentum variable
$\nu\in\\{-1,+1\\}$ which provides the process
$\\{\mathbf{X}(m)\,:\,m\in\mathbb{N}\\}$ with some memory of its past in order
to avoid backtracking. To remain Markovian, the process is thus enlarged to
$\\{(\mathbf{X},\nu)(m)\,:\,m\in\mathbb{N}\\}$ and the momentum variable is
flipped at random times according to a prescribed dynamic which guarantees
that, marginally, the process $\\{\mathbf{X}(m)\,:\,m\in\mathbb{N}\\}$ retains
its limiting distribution. Lifted Markov chains have been quantitatively
studied in Chen et al. (1999) and Diaconis et al. (2000) and have been shown
to reduce, sometimes dramatically, the mixing time of random walks on groups.
Over the past few years, there has been a renewed interest in lifting
techniques in the computational statistics community: in addition to speeding
up the mixing time of random walk Markov chains, they are also suspected to
reduce asymptotic variances of empirical averages of observables of interests,
see Andrieu and Livingstone (2019) for some precise results. We also refer to
Gagnon and Doucet (2019), Syed et al. (2019) and Neal (2020) for examples
where popular Markov chain Monte Carlo (MCMC) algorithms such as reversible
jump (Green, 1995), parallel tempering (Geyer, 1991) and slice sampling (Neal,
2003) have seen their performance improved in some situations by considering
their lifted version. Remarkably, those lifted samplers are implemented at no
additional computational cost over their non-lifted counterparts, and also
with no additional implementation difficulty.
All these successful applications of the lifting technique have been carried
out in contexts where $\bm{\mathcal{X}}$ is one-dimensional (Gustafson, 1998)
or exhibits a one-dimensional parameter which plays a central role in the
sampling scheme: the annealing parameter in Syed et al. (2019), the model
indicator reflecting the size/complexity of the nested models in Gagnon and
Doucet (2019), and the height of the level-lines
$\\{\pi(\mathbf{X}(m))\,:\,m\in\mathbb{N}\\}$ in Neal (2020). When such a one-
dimensional feature does not exist, there does not exist a straightforward way
of lifting the state-space without facing issues of reducibility or obtaining
inefficient samplers. A possibility, deemed as naive in Gustafson (1998), is
to lift each marginal component of the state-space and update them one at the
time. When the state-space is uncountable, it is possible to construct a
persistent walk by introducing bounces at random event times which change the
direction of propagation (see Vanetti et al. (2017)). However, when the state-
space is countable and partially ordered, such an approach is infeasible.
The objective of this paper is to present and analyse generic methods based on
the lifting technique to sample from a given probability mass function (PMF)
with a partially ordered countable support. In particular, we break free from
the requirement of having a one-dimensional parameter by exploiting the local
one-dimensional neighborhood structure induced by the partial order on
$\bm{\mathcal{X}}$: the neighbourhood of $\mathbf{x}$, denoted by
$\mathcal{N}(\mathbf{x})$, is separated into two parts where one comprises
states with $\mathbf{y}>\mathbf{x}$, denoted by
$\mathcal{N}_{+1}(\mathbf{x})$, and the other one comprises states with
$\mathbf{y}<\mathbf{x}$, denoted by $\mathcal{N}_{-1}(\mathbf{x})$
(considering that $\mathcal{N}(\mathbf{x})$ is only composed of states that
can be compared with $\mathbf{x}$). Looking for instance at the variable
selection example, the partial order is defined by means of the model sizes, or
in other words, the number of covariates included in the models. If the
momentum is $\nu=+1$, the chain is forced to attempt visiting models with more
variables until a move is rejected, then $\nu$ is flipped to $\nu=-1$. As a
consequence, the momentum variable remains one-dimensional while the Markov
chain is often irreducible and efficiently explores the state-space. An
illustration showing the benefit of this approach is provided at Figure 1.
Again, we stress that the typical lifted sampler is implemented at no
additional computational cost over its non-lifted counterpart, and also with
no additional implementation difficulty.
(Left panel: random walk behaviour, ESS = 0.12 per iteration; right panel: persistent movement, ESS = 0.33 per iteration.)
Figure 1: Trace plots for the statistic of number of covariates included in
the model for a MH sampler with a locally informed proposal distribution
(discussed in more details in Section 3.2) and its lifted counterpart, when
applied to solve a real variable selection problem (presented in Section 4.2);
ESS stands for effective sample size
### 1.3 Overview of our contributions
In this paper, we focus on the simulation of two-dimensional Ising models and
variable selection problems, without restricting ourselves to these examples
of applications when we present the samplers and analyse them. For these
examples, a generic sampler that we study corresponds to the discrete-time
version of a specific sampler independently developed in Power and Goldman
(2019), a paper in which the focus is rather on exploiting any structure of
$\bm{\mathcal{X}}$ identified by users. The structure identified here is, in a
sense, that $\bm{\mathcal{X}}$ exhibits a partial order. We consequently do
not claim originality for the samplers that will be presented. Our
contributions are the following:
* •
statement of the sampling problem within the specific framework of partially
ordered discrete state-spaces (so that it becomes straightforward to implement
a sampler using the lifting technique in this framework);
* •
identification of situations in which the lifted samplers are expected to
outperform their non-lifted counterparts, based on theoretical analyses and
numerical experiments;
* •
introduction of a trans-dimensional lifted sampler useful, among others, for
variable selection when it is not possible to integrate out the parameters.
### 1.4 Organisation of the paper
The generic algorithm is first presented in Section 2. We next analyse in
Section 3 two important versions with uniform proposal distributions and
locally informed ones, allowing to establish that they can outperform their
non-lifted counterparts under some assumptions. In Section 4, we show the
difference in empirical performance for an Ising model (Section 4.1) and a real
variable selection problem (Section 4.2). In Section 5, we consider that
$\bm{\mathcal{X}}$ is a model space and propose a lifted trans-dimensional
sampler allowing to simultaneously achieve model selection and parameter
estimation. The paper finishes in Section 6 with retrospective comments and
possible directions for future research.
## 2 Generic algorithm
The sampler that we present is an MCMC algorithm that generates proposals
belonging to a subset of $\mathcal{N}(\mathbf{x})$ chosen according to a
“direction” $\nu\in\\{-1,+1\\}$, when the current state is
$\mathbf{x}\in\bm{\mathcal{X}}$. In particular, the proposals belong to
$\mathcal{N}_{+1}(\mathbf{x}):=\\{\mathbf{y}\in\mathcal{N}(\mathbf{x}):\mathbf{y}>\mathbf{x}\\}\subseteq\mathcal{N}(\mathbf{x})$
when $\nu=+1$ or
$\mathcal{N}_{-1}(\mathbf{x}):=\\{\mathbf{y}\in\mathcal{N}(\mathbf{x}):\mathbf{y}<\mathbf{x}\\}\subseteq\mathcal{N}(\mathbf{x})$
when $\nu=-1$, where $\mathcal{N}_{-1}(\mathbf{x})$ and
$\mathcal{N}_{+1}(\mathbf{x})$ thus denote two subsets of
$\mathcal{N}(\mathbf{x})$ such that
$\mathcal{N}_{-1}(\mathbf{x})\cup\mathcal{N}_{+1}(\mathbf{x})=\mathcal{N}(\mathbf{x})$.
More formally, assuming that the Markov chain state is
$\mathbf{x}\in\bm{\mathcal{X}}$, the proposal distribution
$q_{\mathbf{x},\nu}$ has its support restricted to
$\mathcal{N}_{\nu}(\mathbf{x})$. There exist natural candidates for such
distributions, as will be explained in the following. This makes the
implementation of the proposed sampler almost straightforward; once the
neighbourhood structure has been specified. In our framework, the partial
ordering induces a natural neighbourhood structure.
The sampler is based on the well known technique of lifting: the state-space
$\bm{\mathcal{X}}$ is lifted to an extended state-space
$\bm{\mathcal{X}}\times\\{-1,+1\\}$ such that the marginal and the conditional
distributions of the direction variable $\nu$ is the uniform distribution on
$\\{-1,+1\\}$. The algorithm, which bares a strong resemblance with the guided
walk (Gustafson, 1998), is now presented in Algorithm 1.
Algorithm 1 A lifted sampler for partially ordered discrete state-spaces
1. 1.
Generate $\mathbf{y}\sim q_{\mathbf{x},\nu}$ and $u\sim\mathcal{U}[0,1]$.
2. 2.
If
$\displaystyle
u\leq\alpha_{\nu}(\mathbf{x},\mathbf{y}):=1\wedge\frac{\pi(\mathbf{y})\,q_{\mathbf{y},-\nu}(\mathbf{x})}{\pi(\mathbf{x})\,q_{\mathbf{x},\nu}(\mathbf{y})},$
(1)
set the next state of the chain to $(\mathbf{y},\nu)$. Otherwise, set it to
$(\mathbf{x},-\nu)$.
3. 3.
Go to Step 1.
If $\bm{\mathcal{X}}$ is finite, there exists a boundary, in the sense that
there may be no mass beyond a state $\mathbf{x}$ when the direction followed
is $\nu$. For instance in variable selection, the posterior probability of a
model with more covariates than the maximum number is zero. Algorithm 1 may
thus seem incomplete, in the sense that it is not explicitly specified what to
do on the boundary. We in fact consider that $q_{\mathbf{x},\nu}$ is defined
even on the boundary, in which case it generates a point outside of
$\bm{\mathcal{X}}$. This point will be automatically rejected (because its
mass is 0) and the chain will remain at $\mathbf{x}$ and the direction will be
reversed. Note that this is a technical requirement. In practice, one can
simply skip Step 1 in this case and set the next state to $(\mathbf{x},-\nu)$.
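To fix ideas, here is a minimal Python sketch of Algorithm 1 with uniform proposals over the directional neighbourhoods (the setting of Section 3.1); the callables `pi` and `neighbours` are placeholders that a user must supply, and the boundary is handled exactly as described above (an empty neighbourhood plays the role of a proposal outside $\bm{\mathcal{X}}$, which is automatically rejected).

```python
import random

def lifted_sampler(pi, neighbours, x0, n_iter, seed=1):
    """Algorithm 1 with q_{x,nu} = U{N_nu(x)}.

    pi         : unnormalised target PMF (callable)
    neighbours : neighbours(x, nu) returns the list N_nu(x)
    """
    rng = random.Random(seed)
    x, nu = x0, rng.choice([-1, +1])   # direction drawn from U{-1,+1}
    chain = []
    for _ in range(n_iter):
        forward = neighbours(x, nu)
        if not forward:                # boundary: automatic rejection, reverse direction
            nu = -nu
            chain.append(x)
            continue
        y = rng.choice(forward)        # y ~ q_{x,nu}
        backward = neighbours(y, -nu)  # support of q_{y,-nu}; contains x by construction
        # acceptance probability (1) with uniform proposals:
        # 1 ∧ pi(y) q_{y,-nu}(x) / (pi(x) q_{x,nu}(y))
        if rng.random() <= min(1.0, pi(y) * len(forward) / (pi(x) * len(backward))):
            x = y                      # accept and keep direction nu
        else:
            nu = -nu                   # reject and reverse direction
        chain.append(x)
    return chain
```

For binary vectors ordered componentwise, for instance, `neighbours(x, +1)` would return the states obtained by flipping a single $-1$ to $+1$, and `neighbours(x, -1)` the reverse moves.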
It is possible to establish the correctness of the algorithm through that of a
more general version based on the lifted algorithm presented in Andrieu and
Livingstone (2019). Before presenting this more general version, which has
interesting features, we introduce necessary notation. Let
$\rho_{\nu}:\bm{\mathcal{X}}\to[0,1]$, for $\nu\in\\{-1,+1\\}$, be a user-
defined function for which we require that:
$\displaystyle 0\leq\rho_{\nu}(\mathbf{x})\leq
1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}}),$ (2)
$\displaystyle\rho_{\nu}(\mathbf{x})-\rho_{-\nu}(\mathbf{x})=T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}}),$
(3)
where we have set, for all
$(\mathbf{x},\nu)\in\bm{\mathcal{X}}\times\\{-1,+1\\}$,
$\displaystyle
T_{\nu}(\mathbf{x},\bm{\mathcal{X}}):=\sum_{\mathbf{x}^{\prime}\in\bm{\mathcal{X}}}q_{\mathbf{x},\nu}(\mathbf{x}^{\prime})\,\alpha_{\nu}(\mathbf{x},\mathbf{x}^{\prime})=\sum_{\mathbf{x}^{\prime}\in\mathcal{N}_{\nu}(\mathbf{x})}q_{\mathbf{x},\nu}(\mathbf{x}^{\prime})\,\alpha_{\nu}(\mathbf{x},\mathbf{x}^{\prime}).$
(4)
These conditions make the algorithm valid and are thus considered satisfied in
the sequel. Finally, let $Q_{\mathbf{x},\nu}$ be a PMF such that
$Q_{\mathbf{x},\nu}(\mathbf{x}^{\prime})\propto
q_{\mathbf{x},\nu}(\mathbf{x}^{\prime})\,\alpha_{\nu}(\mathbf{x},\mathbf{x}^{\prime})$.
The more general algorithm is now presented in Algorithm 2.
Algorithm 2 A more general lifted sampler for partially ordered discrete
state-spaces
1. 1.
Generate $u\sim\mathcal{U}[0,1]$.
1. (i)
If $u\leq T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$, generate $\mathbf{y}\sim
Q_{\mathbf{x},\nu}$ and set the next state of the chain to $(\mathbf{y},\nu)$;
2. (ii)
if $T_{\nu}(\mathbf{x},\bm{\mathcal{X}})<u\leq
T_{\nu}(\mathbf{x},\bm{\mathcal{X}})+\rho_{\nu}(\mathbf{x})$, set the next
state of the chain to $(\mathbf{x},-\nu)$;
3. (iii)
if $u>T_{\nu}(\mathbf{x},\bm{\mathcal{X}})+\rho_{\nu}(\mathbf{x})$, set the
next state of the chain to $(\mathbf{x},\nu)$.
2. 2.
Go to Step 1.
###### Proposition 1.
The transition kernel of the Markov chain
$\\{(\mathbf{X},\nu)(m):m\in\operatorname{\mathbb{N}}\\}$ simulated by
Algorithm 2 admits $\pi\otimes\mathcal{U}\\{-1,+1\\}$ as invariant
distribution.
###### Proof.
See Section 7. ∎
It is interesting to notice that $T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$
represents the probability to accept a proposal when the current state is
$(\mathbf{x},\nu)$. In Algorithm 2, we thus first decide if we accept a
proposal or not, and if it is the case, in Step 1.(i), we randomly select the
value of the proposal (using the conditional distribution). It can be readily
checked that choices for $\rho_{\nu}$ include
$\rho_{\nu}(\mathbf{x})=1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ and
$\rho_{\nu}(\mathbf{x})=\max\\{0,T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})\\}$.
If $\rho_{\nu}(\mathbf{x})=1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$, the
condition for Case (iii) of Step 1 is never satisfied, and the algorithm
either accepts the proposal and keeps the same direction, or the proposal is
rejected and the direction is reversed. In this case, one can show that
Algorithm 2 corresponds to Algorithm 1, which ensures the correctness of
Algorithm 1 as well. Setting $\rho_{\nu}(\mathbf{x})$ otherwise than
$\rho_{\nu}(\mathbf{x})=1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ allows the
chain, in Case (iii) of Step 1, to keep following the same direction even when
the proposal is rejected. Intuitively, this is desirable when the rejection is due
to “bad luck”, and not because there is no mass in the direction followed. The
function $\rho_{\nu}(\mathbf{x})$ aims at incorporating this possibility in
the sampler.
In the typical MCMC framework, when one wants to sample from a probability
density function, it is not possible to directly compute
$T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ as it requires computing an integral
with respect to this density function. In such a case, it is therefore usually
not possible to set $\rho_{\nu}(\mathbf{x})$ otherwise than
$1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$. This contrasts with our discrete
state-space framework in which it is often possible to directly compute
$T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$. A corollary of Theorem 3.15 in Andrieu
and Livingstone (2019) states that the best choice of function $\rho_{\nu}$ in
terms of asymptotic variance is
$\displaystyle\rho_{\nu}^{*}(\mathbf{x}):=\max(0,T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})),$
(5)
and that the worst choice is
$\rho_{\nu}^{\text{w}}(\mathbf{x}):=1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$.
###### Corollary 1.
If $\bm{\mathcal{X}}$ is finite, then for any function $\rho_{\nu}$ and
$f\in\mathcal{L}_{2}^{*}(\bar{\pi}):=\\{f:\int d\bar{\pi}\,f^{2}<\infty\text{ and }f(\mathbf{x},-1)=f(\mathbf{x},+1)\text{ for all }\mathbf{x}\\}$,
$\mathrm{var}(f,P_{\rho^{*}})\leq\mathrm{var}(f,P_{\rho})\leq\mathrm{var}(f,P_{\rho^{\text{w}}}),$
where $\bar{\pi}:=\pi\otimes\mathcal{U}\\{-1,+1\\}$,
$\mathrm{var}(f,P_{\rho}):=\mathbb{V}\mathrm{ar}f(\mathbf{X},\nu)+2\sum_{k>0}\left\langle
f,P_{\rho}^{k}f\right\rangle$ and $P_{\rho}$ is the transition kernel
simulated by Algorithm 2, $\left\langle f,P_{\rho}^{k}f\right\rangle$ being
the inner product, i.e. $\left\langle f,P_{\rho}^{k}f\right\rangle:=\int
d\bar{\pi}fP_{\rho}^{k}f$.
###### Proof.
See Section 7. ∎
The price to pay for using $\rho_{\nu}^{*}$ instead of
$\rho_{\nu}^{\text{w}}$, for instance, is that the algorithm is more
complicated to implement because it is required to systematically compute
$T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ at each iteration (it is also sometimes
required to compute $T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})$). Using
$\rho_{\nu}^{*}$ also comes with an additional computational cost (which is
discussed in Section 3.3). In our numerical experiments, it is seen that the
gain in efficiency of using Algorithm 2 with $\rho_{\nu}^{*}$ over Algorithm 1
is not significant. One may thus opt for simplicity and implement Algorithm 1.
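To illustrate what this extra work amounts to, here is a sketch (with the same placeholder callables `pi` and `neighbours` as in the sketch of Algorithm 1 above, and uniform proposals) of the quantities needed by Algorithm 2 with $\rho_{\nu}^{*}$:

```python
def T(pi, neighbours, x, nu):
    """T_nu(x, X) in (4): probability of accepting a proposal from (x, nu)."""
    forward = neighbours(x, nu)
    total = 0.0
    for y in forward:
        backward = neighbours(y, -nu)
        alpha = min(1.0, pi(y) * len(forward) / (pi(x) * len(backward)))
        total += alpha / len(forward)      # q_{x,nu}(y) = 1/|N_nu(x)|
    return total

def rho_star(pi, neighbours, x, nu):
    """Optimal switching function (5)."""
    return max(0.0, T(pi, neighbours, x, -nu) - T(pi, neighbours, x, nu))

def algorithm2_step(pi, neighbours, x, nu, rng):
    """One iteration of Algorithm 2 with rho = rho*."""
    t_fwd = T(pi, neighbours, x, nu)
    u = rng.random()
    if t_fwd > 0.0 and u <= t_fwd:         # Step 1.(i): accept, draw y ~ Q_{x,nu}
        forward = neighbours(x, nu)
        weights = [min(1.0, pi(y) * len(forward) / (pi(x) * len(neighbours(y, -nu))))
                   for y in forward]
        return rng.choices(forward, weights=weights)[0], nu
    if u <= t_fwd + rho_star(pi, neighbours, x, nu):
        return x, -nu                      # Step 1.(ii): reverse direction
    return x, nu                           # Step 1.(iii): keep direction
```

Each call to `T` enumerates a directional neighbourhood, which is precisely the additional cost discussed above.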
## 3 Analysis of specific samplers
In the previous section, we presented generic algorithms with conditions
ensuring that they are valid. A necessary input to implement them is the
proposal distribution $q_{\mathbf{x},\nu}$ to be used. In this section, we
explore two natural choices and analyse their asymptotic variances. We start
in Section 3.1 with the common situation where the proposal is picked
uniformly at random. As mentioned in the introduction, this choice may lead to
poor mixing. We thus go on in Section 3.2 to a distribution incorporating
information about the target. This section finishes with a brief discussion in
Section 3.3 on the computational costs associated with these choices in
regular MH samplers and their lifted counterparts.
### 3.1 Uninformed uniform proposal
In the reversible MH sampler, it is common to set the proposal distribution,
denoted by $q_{\mathbf{x}}$ for this algorithm, to
$q_{\mathbf{x}}:=\mathcal{U}\\{\mathcal{N}(\mathbf{x})\\}$. In our framework,
the analogous proposal distribution is naturally defined as
$q_{\mathbf{x},\nu}:=\mathcal{U}\\{\mathcal{N}_{\nu}(\mathbf{x})\\}$. In this
case, the acceptance probability (1) of a proposal becomes
$\alpha_{\nu}(\mathbf{x},\mathbf{y})=1\wedge\frac{\pi(\mathbf{y})\,|\mathcal{N}_{\nu}(\mathbf{x})|}{\pi(\mathbf{x})\,|\mathcal{N}_{-\nu}(\mathbf{y})|}.$
For ease of presentation of the analysis, consider again the important example
described in Section 1.1 where each component $x_{i}$ of
$\mathbf{x}=(x_{1},\ldots,x_{n})$ can be of two types. In this section, we
highlight the dependency on $n$ (the dimension) of the state-space and target
because it will be relevant in our analysis. We thus write $\pi_{n}$ for the
target and $\bm{\mathcal{X}}_{n}$ for the state-space, where each state is of
the form $\mathbf{x}:=(x_{1},\ldots,x_{n})$ with
$x_{i}\in\\{\text{A},\text{B}\\}$. From now on, consider that $\text{A}=-1$
and $\text{B}=+1$, corresponding to the case of the Ising model. We note that
there is no loss of generality in considering this special case within the
important example.
In a MH sampler, one sets
$\mathcal{N}(\mathbf{x}):=\\{\mathbf{y}\in\bm{\mathcal{X}}_{n}:\sum_{i}|x_{i}-y_{i}|=2\\}=\\{\mathbf{y}\in\bm{\mathcal{X}}_{n}:\exists
j\text{ such that }y_{j}=-x_{j}\\}$, so that the algorithm proposes to flip a
single bit at each iteration. It thus chooses uniformly at random which bit to
flip. Therefore, the size of the neighbourhoods in this sampler is constant
for any $\mathbf{x}$ and is given by $n$. This implies that the acceptance
probability in this sampler, denoted by $\alpha(\mathbf{x},\mathbf{y})$,
reduces to
$\alpha(\mathbf{x},\mathbf{y})=1\wedge\pi(\mathbf{y})/\pi(\mathbf{x})$.
In the lifted case, the acceptance probability can be rewritten as
$\displaystyle\alpha_{\nu}(\mathbf{x},\mathbf{y})=1\wedge\frac{\pi(\mathbf{y})\,n_{-\nu}(\mathbf{x})}{\pi(\mathbf{x})\,n_{\nu}(\mathbf{y})}.$
(6)
Indeed,
$\mathcal{N}_{\nu}(\mathbf{x}):=\\{\mathbf{y}\in\bm{\mathcal{X}}_{n}:\exists
j\text{ such that }y_{j}=-x_{j}=\nu\\}$, which implies that
$|\mathcal{N}_{\nu}(\mathbf{x})|=n_{-\nu}(\mathbf{x})$ (with the analogous
implication for $\mathcal{N}_{-\nu}(\mathbf{y})$). The acceptance probability
$\alpha_{\nu}$ thus depends on an additional term
$n_{-\nu}(\mathbf{x})/n_{\nu}(\mathbf{y})$ compared to that in the MH sampler.
This term may have a negative impact by decreasing the acceptance
probability. This represents in fact the price to pay for using the lifted
sampler Algorithm 2 (including Algorithm 1 as a special case): the reversible
sampler is allowed to backtrack, which makes the sizes of the neighbourhoods
constant, whereas it is the opposite for Algorithm 2. The size of the
neighbourhoods diminishes in the lifted sampler as the chain moves further in
a direction (making the neighbourhoods in the reverse direction bigger and
bigger).
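As a concrete check of this size effect for states $\mathbf{x}\in\\{-1,+1\\}^{n}$, here is a small sketch (the explicit list construction is only meant for illustration at moderate $n$); it is also a valid implementation of the `neighbours` callable used in the sketches above:

```python
def neighbours(x, nu):
    """N_nu(x): states obtained by flipping one component equal to -nu to nu."""
    return [x[:i] + (nu,) + x[i + 1:] for i, xi in enumerate(x) if xi == -nu]

x = (-1, -1, +1, -1)
print(len(neighbours(x, +1)))  # 3 = n_{-1}(x): "up" moves shrink as bits turn to +1
print(len(neighbours(x, -1)))  # 1 = n_{+1}(x)
```

The printed sizes are exactly the factors $n_{-\nu}(\mathbf{x})$ entering the acceptance probability (6).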
The impact is alleviated when $n$ is large and the mass of $\pi_{n}$
concentrates in the interior of $\bm{\mathcal{X}}_{n}$ in an area where
$|n_{-\nu}(\mathbf{x})-n/2|\leq\kappa$, where $\kappa$ is a positive integer.
Indeed, $n_{\nu}(\mathbf{y})=n_{\nu}(\mathbf{x})+1=n-n_{-\nu}(\mathbf{x})+1$,
which implies that $\alpha_{\nu}(\mathbf{x},\mathbf{y})\approx
1\wedge\pi(\mathbf{y})/\pi(\mathbf{x})=\alpha(\mathbf{x},\mathbf{y})$. In
fact, in an ideal situation where
$|\mathcal{N}_{-1}(\mathbf{x})|=|\mathcal{N}_{+1}(\mathbf{x})|=|\mathcal{N}(\mathbf{x})|/2$
for all $\mathbf{x}\in\bm{\mathcal{X}}_{n}$ (which is impossible for the
important example with types A and B), a corollary of Theorem 3.17 in Andrieu
and Livingstone (2019) establishes that Algorithm 2 with any function
$\rho_{\nu}$ dominates the MH algorithm in terms of asymptotic variances. In
particular, Algorithm 1 dominates the MH algorithm. Before presenting this
corollary, we define the transition kernel simulated by Algorithm 2 with
$q_{\mathbf{x},\nu}:=\mathcal{U}\\{\mathcal{N}_{\nu}(\mathbf{x})\\},\mathbf{x}\in\bm{\mathcal{X}}_{n},\nu\in\\{-1,+1\\}$,
as $P_{\rho,n}$ and that simulated by the MH sampler with
$q_{\mathbf{x}}:=\mathcal{U}\\{\mathcal{N}(\mathbf{x})\\},\mathbf{x}\in\bm{\mathcal{X}}_{n}$,
as $P_{\text{MH},n}$.
###### Corollary 2.
If
(a)
$\bm{\mathcal{X}}_{n}$ is finite,
(b)
$|\mathcal{N}_{-1}(\mathbf{x})|=|\mathcal{N}_{+1}(\mathbf{x})|=|\mathcal{N}(\mathbf{x})|/2=n^{*}/2$
for all $\mathbf{x}\in\bm{\mathcal{X}}_{n}$, where $n^{*}$ is a positive
integer that does not depend on $\mathbf{x}$ (but that may depend on $n$),
then for any $f\in\mathcal{L}_{2}^{*}(\bar{\pi})$ and $n$,
$\mathrm{var}(f,P_{\rho,n})\leq\mathrm{var}(f,P_{\text{MH},n}).$
###### Proof.
See Section 7. ∎
In light of the above, one might expect the inequality to hold approximately
(up to an error term) when the mass is highly concentrated on the points
$\mathbf{x}$ that are not too far from the centre of the domain, where the
notion of centrality is defined in terms of the distance between
$n_{-1}(\mathbf{x})$ or $n_{+1}(\mathbf{x})$ and $n/2$, suggesting that the
lifted sampler outperforms the reversible MH algorithm in this situation. The
rest of the section is dedicated to introducing conditions under which this
statement is true.
The key argument in proving Corollary 2 is to show that
$\displaystyle
q_{\mathbf{x}}(\mathbf{y})\,\alpha(\mathbf{x},\mathbf{y})=(1/2)\,q_{\mathbf{x},+1}(\mathbf{y})\,\alpha_{+1}(\mathbf{x},\mathbf{y})+(1/2)\,q_{\mathbf{x},-1}(\mathbf{y})\,\alpha_{-1}(\mathbf{x},\mathbf{y}),$
(7)
for all $\mathbf{x}$ and $\mathbf{y}$. Indeed, once this is done, Theorem 3.17
in Andrieu and Livingstone (2019) can be directly applied, which yields the
result. This in fact implies that once one has designed a lifted sampler, it
is possible to identify its non-lifted counterpart through (7), and to
establish that the latter is inferior. In particular, we can establish that
the sampler that flips a coin at each iteration to decide which PMF to use
between $q_{\mathbf{x},+1}$ and $q_{\mathbf{x},-1}$ to generate a proposal
$\mathbf{y}$, which is then subject to approval using
$\alpha_{+1}(\mathbf{x},\mathbf{y})$ or $\alpha_{-1}(\mathbf{x},\mathbf{y})$,
is inferior. Denote by $P_{\text{rev.},n}$ the Markov kernel simulated by this
algorithm. Now, what we would ideally do is to show a Peskun-Tierney ordering
(Peskun, 1973; Tierney, 1998) between $P_{\text{rev.},n}$ and
$P_{\text{MH},n}$ to establish the domination of the lifted sampler over the
reversible MH algorithm. Such an ordering is difficult to obtain as one needs
to show that for any pair $(\mathbf{x},\mathbf{y})$ such that
$\mathbf{x}\neq\mathbf{y}$, $P_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq
P_{\text{MH},n}(\mathbf{x},\mathbf{y})$.
A more general ordering is presented in Zanella (2019): if
$P_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq\omega
P_{\text{MH},n}(\mathbf{x},\mathbf{y})$ for all $\mathbf{x}\neq\mathbf{y}$,
then
$\mathrm{var}(f,P_{\text{rev.},n})\leq\mathrm{var}(f,P_{\text{MH},n})/\omega+((1-\omega)/\omega)\mathbb{V}\mathrm{ar}f(\mathbf{X})$,
where $\omega$ is a positive constant. Note that $f$ is a function of $\nu$ as
well, but because of the restriction $f(\mathbf{x},-1)=f(\mathbf{x},+1)$,
$\nu$ can be treated as a constant. We show in this section that if
$P_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq\omega
P_{\text{MH},n}(\mathbf{x},\mathbf{y})$ with $\omega$ close to 1 for all
$\mathbf{x}\neq\mathbf{y}$ belonging to a specific set having a high
probability, then
$\mathrm{var}(f,P_{\text{rev.},n})\leq\mathrm{var}(f,P_{\text{MH},n})+\text{small
error term}$ (under regularity conditions).
We start by defining this set:
$\displaystyle\bm{\mathcal{X}}_{\varphi(n)}:=\\{\mathbf{x}\in\bm{\mathcal{X}}_{n}:n/2-\varphi(n)\leq
n_{-1}(\mathbf{x}),n_{+1}(\mathbf{x})\leq n/2+\varphi(n)\\},$ (8)
where $\varphi$ is a function such that $0\leq\varphi(n)<n/2$ and, for
$\epsilon>0$, there exists $N>0$ such that for all $n\geq N$, we have that
$\varphi(n)/n<\epsilon$. For $\mathbf{x}\neq\mathbf{y}$ belonging to
$\bm{\mathcal{X}}_{\varphi(n)}$, if we consider $\omega_{n}$ now a function of
$n$, it is possible to establish that: for $\epsilon>0$, there exists a $N>0$
such that for all $n\geq N$,
$P_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq\omega_{n}P_{\text{MH},n}(\mathbf{x},\mathbf{y})$
with $1-\omega_{n}<\epsilon$. The explicit form of $\omega_{n}$ is
$\displaystyle\omega_{n}:=\left(1+\frac{\varphi(n)}{n/2}\right)^{-2}\left(1-\frac{\varphi(n)}{n/2}\right).$
(9)
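For instance, with $n=10{,}000$ and $\varphi(n)=\sqrt{n}=100$, we obtain $\omega_{n}=(1.02)^{-2}(0.98)\approx 0.94$; more generally, $\omega_{n}\to 1$ whenever $\varphi(n)/n\to 0$.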
One can imagine that if the mass concentrates on
$\bm{\mathcal{X}}_{\varphi(n)}$, the chains do not often leave this set, and
when they do they do not take too much time to come back, then the asymptotic
variances should not be too different from those of chains with stationary
distribution $\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$, which assigns null
mass to $\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}$ and is thus the
normalised version of $\pi_{n}$ on $\bm{\mathcal{X}}_{\varphi(n)}$:
$\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}(\mathbf{x}):=\begin{cases}\pi_{n}(\mathbf{x})\big{/}\sum_{\mathbf{x}^{\prime}\in\bm{\mathcal{X}}_{\varphi(n)}}\pi_{n}(\mathbf{x}^{\prime})&\text{if}\quad\mathbf{x}\in\bm{\mathcal{X}}_{\varphi(n)},\cr
0&\text{if}\quad\mathbf{x}\notin\bm{\mathcal{X}}_{\varphi(n)}.\end{cases}$
This is what we obtain assuming such a behaviour for the chains generated by
$P_{\text{rev.},n}$ and $P_{\text{MH},n}$. We also require that the chains
generated by the Markov kernels with the same proposal mechanisms as
$P_{\text{rev.},n}$ and $P_{\text{MH},n}$, but whose stationary distribution
is $\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$, mix sufficiently well (in a
sense that will be made precise). We denote by $\tilde{P}_{\text{rev.},n}$ and
$\tilde{P}_{\text{MH},n}$ these Markov kernels.
###### Theorem 1.
Pick $f\in\mathcal{L}_{2}^{*}(\bar{\pi})$ and consider that all states
$\mathbf{x}\in\bm{\mathcal{X}}_{n}$ are such that
$\mathbf{x}=(x_{1},\ldots,x_{n})$ with $x_{i}\in\\{-1,+1\\}$. If, for
$\epsilon>0$, there exists $N>0$ such that for all $n\geq N$ it is possible to
choose a constant that may depend on $n$, $\varrho(n)$, with
(a)
$\sum_{k=\varrho(n)+1}^{\infty}\left\langle f,P^{k}f\right\rangle<\epsilon$,
for all
$P\in\\{P_{\text{rev.},n},\tilde{P}_{\text{rev.},n},P_{\text{MH},n},\tilde{P}_{\text{MH},n}\\}$,
(b)
$\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cup_{m=0}^{k}A_{m}^{\mathsf{c}}(\bm{\mathcal{X}}_{\varphi(n)-1})}]<\epsilon$,
for all $P\in\\{P_{\text{rev.},n},P_{\text{MH},n}\\}$, where it is considered
that the chain starts at stationarity and evolves using $P$ (and
$\mathbb{E}[f(\mathbf{X}(k))]=0$ without loss of generality) and
$A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1}):=\\{\mathbf{X}(m)\in\bm{\mathcal{X}}_{\varphi(n)-1}\\}$
(see (8) for the definitions of $\bm{\mathcal{X}}_{\varphi(n)-1}$ and
$\varphi(n)$),
(c)
$(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cap_{m=0}^{k}A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1})}]<\epsilon$,
for all $P\in\\{P_{\text{rev.},n},P_{\text{MH},n}\\}$,
(d)
$(1/\omega_{n}-1)\mathrm{var}(f,\tilde{P}_{\text{MH},n})<\epsilon$ and
$((1-\omega_{n})/\omega_{n})\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}f(\mathbf{X})<\epsilon$,
where $\omega_{n}$ is defined in (9) and
$\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}$ denotes
a variance computed with respect to
$\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$,
then there exists a positive constant $\kappa$ such that
$\mathrm{var}(f,P_{\rho,n})\leq\mathrm{var}(f,P_{\text{rev.},n})\leq\mathrm{var}(f,P_{\text{MH},n})+\kappa\epsilon.$
###### Proof.
See Section 7. ∎
There are several assumptions involved in Theorem 1. But, to put this into
perspective, Assumptions (c) and (d) are automatically verified if
$\mathrm{var}(f,\tilde{P}_{\text{MH},n})$ and
$\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}f(\mathbf{X})$
are bounded by constants that do not depend on $n$. Assumptions (a) and (b)
are really the crucial ones.
### 3.2 Locally informed proposal
In this section, we discuss and analyse the use of locally-balanced proposal
distributions, as defined by Zanella (2019) in the reversible MH framework as
$\displaystyle
q_{\mathbf{x}}(\mathbf{y}):=g\left(\frac{\pi_{n}(\mathbf{y})}{\pi_{n}(\mathbf{x})}\right)\bigg{/}c_{n}(\mathbf{x}),\quad\mathbf{y}\in\mathcal{N}(\mathbf{x}),$
(10)
where $c_{n}(\mathbf{x})$ represents the normalising constant, i.e.
$c_{n}(\mathbf{x}):=\sum_{\mathbf{x}^{\prime}\in\mathcal{N}(\mathbf{x})}g\left(\pi_{n}(\mathbf{x}^{\prime})/\pi_{n}(\mathbf{x})\right)$,
and $g$ is a strictly positive continuous function such that $g(x)/g(1/x)=x$.
To simplify, we reuse the notation of Section 3.1 for the proposal
distribution, for the proposal distribution of the lifted sampler and for the
Markov kernels; in this section, it will be implicit that the proposal
distributions are informed proposals and that the Markov kernels are those
induced by these informed proposals. Note also that we again highlight the
dependencies on $n$ of some terms that will appear in our analysis.
With such a function $g$ in (10), the acceptance probability in the MH
algorithm is given by
$\alpha(\mathbf{x},\mathbf{y})=1\wedge\frac{\pi_{n}(\mathbf{y})\,q_{\mathbf{y}}(\mathbf{x})}{\pi_{n}(\mathbf{x})\,q_{\mathbf{x}}(\mathbf{y})}=1\wedge\frac{c_{n}(\mathbf{x})}{c_{n}(\mathbf{y})}.$
Zanella (2019) shows that $c_{n}(\mathbf{x})/c_{n}(\mathbf{y})\longrightarrow
1$ as $n\longrightarrow\infty$ under some assumptions. More precisely, the
author defines $\mathbf{x}:=(x_{1},\ldots,x_{n})$ and considers that at any
given iteration, only a small fraction of the $n$ components is proposed to
change values. The result holds when the random variables
$(X_{1},\ldots,X_{n})$ exhibit a structure of conditional independence, which
implies that the normalising constants $c_{n}(\mathbf{x})$ and
$c_{n}(\mathbf{y})$ share a lot of terms. This is again a consequence of the
backtracking of the reversible sampler and is thus in contrast with what we
observe for the lifted algorithm.
Two choices for $g$ are $g(x)=\sqrt{x}$ and $g(x)=x/(1+x)$, the latter being
called the Barker proposal in reference to Barker (1965)’s acceptance
probability choice. The advantage of the latter choice is that it is a bounded
function of $x$, which stabilises the normalising constants and thus the
acceptance probability. This is shown in Zanella (2019) and in the continuous
random variable case in Livingstone and Zanella (2019). We use this function
in our numerical analyses.
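As a sketch (reusing the placeholder `pi` and `neighbours` callables from Section 3.1, with the Barker choice $g(t)=t/(1+t)$), one iteration of Algorithm 1 with the informed proposal could look as follows; working only with ratios $\pi_{n}(\mathbf{y})/\pi_{n}(\mathbf{x})$ means the normalising constant of $\pi_{n}$ is never needed.

```python
def barker_weight(t):
    """g(t) = t / (1 + t); bounded, which stabilises the normalising constants."""
    return t / (1.0 + t)

def informed_constant(pi, neighbours, x, nu):
    """c_{n,nu}(x): normalising constant of the informed proposal q_{x,nu}."""
    return sum(barker_weight(pi(y) / pi(x)) for y in neighbours(x, nu))

def informed_step(pi, neighbours, x, nu, rng):
    """One iteration of Algorithm 1 with the locally informed proposal."""
    cand = neighbours(x, nu)
    if not cand:                           # boundary: automatic rejection
        return x, -nu
    weights = [barker_weight(pi(y) / pi(x)) for y in cand]
    c_fwd = sum(weights)                   # c_{n,nu}(x)
    y = rng.choices(cand, weights=weights)[0]
    c_back = informed_constant(pi, neighbours, y, -nu)   # c_{n,-nu}(y)
    # acceptance probability: 1 ∧ c_{n,nu}(x) / c_{n,-nu}(y), cf. (11) below
    if rng.random() <= min(1.0, c_fwd / c_back):
        return y, nu
    return x, -nu
```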
The proposal distribution in the lifted sampler is given by
$q_{\mathbf{x,\nu}}(\mathbf{y}):=g\left(\frac{\pi_{n}(\mathbf{y})}{\pi_{n}(\mathbf{x})}\right)\bigg{/}c_{n,\nu}(\mathbf{x}),\quad\mathbf{y}\in\mathcal{N}_{\nu}(\mathbf{x}),$
where $c_{n,\nu}(\mathbf{x})$ is the normalising constant. In this case,
$\displaystyle\alpha_{\nu}(\mathbf{x},\mathbf{y}):=1\wedge\frac{\pi_{n}(\mathbf{y})\,q_{\mathbf{y},-\nu}(\mathbf{x})}{\pi_{n}(\mathbf{x})\,q_{\mathbf{x},\nu}(\mathbf{y})}=1\wedge\frac{c_{n,\nu}(\mathbf{x})}{c_{n,-\nu}(\mathbf{y})}.$
(11)
There are two main differences with the reversible sampler. Firstly, the
normalising constants $c_{n,\nu}(\mathbf{x})$ and $c_{n,-\nu}(\mathbf{y})$ are
sums with (in general) not the same number of terms. Consider again, for ease
of presentation of the analysis, the important example described in Section
1.1 where each component $x_{i}$ of $\mathbf{x}=(x_{1},\ldots,x_{n})$ can be
of two types and more specifically the special case of Ising model. We know
that in this case $c_{n,\nu}(\mathbf{x})$ is a sum of $n_{-\nu}(\mathbf{x})$
terms (see (6)), whereas $c_{n,-\nu}(\mathbf{y})$ is a sum of
$n_{\nu}(\mathbf{y})$ terms. The second main difference is that, in the MH
sampler, it is proposed to flip one of the $n_{-1}(\mathbf{x})$ components to
$+1$ or one of the $n_{+1}(\mathbf{x})$ components to $-1$, and
$c_{n}(\mathbf{x})$ is formed from these proposals. The constant
$c_{n}(\mathbf{y})$ is also formed from proposals to flip components to $+1$
or $-1$ with $n_{-1}(\mathbf{y})$ and $n_{+1}(\mathbf{y})$ close to
$n_{-1}(\mathbf{x})$ and $n_{+1}(\mathbf{x})$ (and this is why the ratio of
the two constants converges to 1 provided that there exists a structure of
conditional independence). In contrast, $c_{n,\nu}(\mathbf{x})$ is formed from
proposals to flip one of the $n_{-\nu}(\mathbf{x})$ components to $\nu$ and
$c_{n,-\nu}(\mathbf{y})$ from proposals to flip one of the
$n_{\nu}(\mathbf{y})=n_{\nu}(\mathbf{x})+1$ components to $-\nu$; the
compositions of these constants are thus fundamentally opposite. There is
therefore no guarantee that
$c_{n,\nu}(\mathbf{x})/c_{n,-\nu}(\mathbf{y})\longrightarrow 1$ even under the
conditions stated in Zanella (2019).
Nevertheless, as in the previous section, there exists an ideal situation in
which the lifted sampler outperforms the reversible MH algorithm.
###### Corollary 3.
If
(a)
$\bm{\mathcal{X}}_{n}$ is finite,
(b)
$c_{n,-1}(\mathbf{x})=c_{n,+1}(\mathbf{x})=c_{n}(\mathbf{x})/2=c_{n}^{*}/2$
for all $\mathbf{x}\in\bm{\mathcal{X}}_{n}$, where $c_{n}^{*}$ is a positive
constant that does not depend on $\mathbf{x}$ (but that may depend on $n$),
then for any $f\in\mathcal{L}_{2}^{*}(\bar{\pi})$ and $n$,
$\mathrm{var}(f,P_{\rho,n})\leq\mathrm{var}(f,P_{\text{MH},n}).$
###### Proof.
Analogous to that of Corollary 2. ∎
A sufficient condition for Assumption (b) to be verified is:
$|\mathcal{N}_{-1}(\mathbf{x})|=|\mathcal{N}_{+1}(\mathbf{x})|=|\mathcal{N}(\mathbf{x})|/2=n^{*}/2$
(Assumption (b) in Corollary 2) and
$\frac{1}{n^{*}/2}\sum_{\mathbf{x}^{\prime}\in\mathcal{N}_{-1}(\mathbf{x})}g\left(\frac{\pi_{n}(\mathbf{x}^{\prime})}{\pi_{n}(\mathbf{x})}\right)=\frac{1}{n^{*}/2}\sum_{\mathbf{x}^{\prime}\in\mathcal{N}_{+1}(\mathbf{x})}g\left(\frac{\pi_{n}(\mathbf{x}^{\prime})}{\pi_{n}(\mathbf{x})}\right)=\frac{1}{n^{*}}\sum_{\mathbf{x}^{\prime}\in\mathcal{N}(\mathbf{x})}g\left(\frac{\pi_{n}(\mathbf{x}^{\prime})}{\pi_{n}(\mathbf{x})}\right)=\mu,$
for all $\mathbf{x}\in\bm{\mathcal{X}}_{n}$, where $\mu$ is a positive
constant. We thus notice that the acceptance probabilities can be directly
rewritten in terms of averages when
$|\mathcal{N}_{-1}(\mathbf{x})|=|\mathcal{N}_{+1}(\mathbf{x})|=|\mathcal{N}(\mathbf{x})|/2=n^{*}/2$
and that an additional condition to Assumption (b) in Corollary 2 is
sufficient in the locally informed case for ordering the asymptotic variances.
This establishes a connection with the uniform case.
As in the previous section, it is possible to derive conditions under which
the inequality in Corollary 3 holds approximately. They are based as before on
the definition of a set, which in this case involves states $\mathbf{x}$ that
are such that $c_{n,-1}(\mathbf{x})$ and $c_{n,+1}(\mathbf{x})$ are close to
$c_{n}^{*}/2$. These states do not have to be in an area such that
$n_{-1}(\mathbf{x})$ and $n_{+1}(\mathbf{x})$ are close to $n/2$, but in
return the mass has to be (in some sense) evenly spread out in the area to
which they belong.
We now define this set:
$\displaystyle\bm{\mathcal{X}}_{\varphi(n)}:=\\{\mathbf{x}\in\bm{\mathcal{X}}_{n}:c_{n}^{*}/2-\varphi(n)\leq
c_{n,-1}(\mathbf{x}),c_{n,+1}(\mathbf{x}),c_{n}(\mathbf{x})/2\leq
c_{n}^{*}/2+\varphi(n)\\},$ (12)
where $\varphi$ is in this section a function such that
$0\leq\varphi(n)<c_{n}^{*}/2$ and, for $\epsilon>0$, there exists $N>0$ such
that for all $n\geq N$, we have that $\varphi(n)/c_{n}^{*}<\epsilon$. To
simplify, we consider that for any
$\mathbf{x},\mathbf{y}\in\bm{\mathcal{X}}_{\varphi(n)}$ there exists a
probable path from $\mathbf{x}$ to $\mathbf{y}$ generated by $P_{\text{MH},n}$
(and marginally for $P_{\rho,n}$) with all intermediate states belonging to
$\bm{\mathcal{X}}_{\varphi(n)}$ as well. We now define a restricted version of
$\bm{\mathcal{X}}_{\varphi(n)}$ for which from any state
$\mathbf{x}\in\bm{\mathcal{X}}_{\varphi(n)}$, all the possible proposals
$\mathbf{y}$ belong to $\bm{\mathcal{X}}_{\varphi(n)}$ as well; denote this
set by $\bm{\mathcal{X}}_{\varphi(n)}^{0}$. For $\mathbf{x}\neq\mathbf{y}$
belonging to $\bm{\mathcal{X}}_{\varphi(n)}$, it is possible to establish
that: for $\epsilon>0$, there exists a $N>0$ such that for all $n\geq N$,
$P_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq\omega_{n}P_{\text{MH},n}(\mathbf{x},\mathbf{y})$
with $1-\omega_{n}<\epsilon$. The explicit form of $\omega_{n}$ is
$\displaystyle\omega_{n}:=\left(1+\frac{\varphi(n)}{c_{n}^{*}/2}\right)^{-3}\left(1-\frac{\varphi(n)}{c_{n}^{*}/2}\right)^{3}.$
(13)
We are now ready to present the analogous result to Theorem 1, in which here
$\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$ is the normalised version of
$\pi_{n}$ on $\bm{\mathcal{X}}_{\varphi(n)}$ defined in (12).
###### Theorem 2.
Pick $f\in\mathcal{L}_{2}^{*}(\bar{\pi})$ and consider that all states
$\mathbf{x}\in\bm{\mathcal{X}}_{n}$ are such that
$\mathbf{x}=(x_{1},\ldots,x_{n})$ with $x_{i}\in\\{-1,+1\\}$. If, for
$\epsilon>0$, there exists $N>0$ such that for all $n\geq N$ it is possible to
choose a constant that may depend on $n$, $\varrho(n)$, with
(a)
$\sum_{k=\varrho(n)+1}^{\infty}\left\langle f,P^{k}f\right\rangle<\epsilon$,
for all
$P\in\\{P_{\text{rev.},n},\tilde{P}_{\text{rev.},n},P_{\text{MH},n},\tilde{P}_{\text{MH},n}\\}$,
(b)
$\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cup_{m=0}^{k}A_{m}^{\mathsf{c}}(\bm{\mathcal{X}}_{\varphi(n)}^{0})}]<\epsilon$,
for all $P\in\\{P_{\text{rev.},n},P_{\text{MH},n}\\}$, where it is considered
that the chain starts at stationarity and evolves using $P$ (and
$\mathbb{E}[f(\mathbf{X}(k))]=0$ without loss of generality) and
$A_{m}(\bm{\mathcal{X}}_{\varphi(n)}^{0}):=\\{\mathbf{X}(m)\in\bm{\mathcal{X}}_{\varphi(n)}^{0}\\}$,
(c)
$(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cap_{m=0}^{k}A_{m}(\bm{\mathcal{X}}_{\varphi(n)}^{0})}]<\epsilon$,
for all $P\in\\{P_{\text{rev.},n},P_{\text{MH},n}\\}$,
(d)
$(1/\omega_{n}-1)\mathrm{var}(f,\tilde{P}_{\text{MH},n})<\epsilon$ and
$((1-\omega_{n})/\omega_{n})\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}f(\mathbf{X})<\epsilon$,
where $\omega_{n}$ is defined in (13),
then there exists a positive constant $\kappa$ such that
$\mathrm{var}(f,P_{\rho,n})\leq\mathrm{var}(f,P_{\text{rev.},n})\leq\mathrm{var}(f,P_{\text{MH},n})+\kappa\epsilon.$
###### Proof.
Analogous to that of Theorem 1. ∎
It is possible to establish a connection with Theorem 1 as we did for
Corollary 3 with Corollary 2. Consider indeed that the set
$\bm{\mathcal{X}}_{\varphi(n)}$ is comprised of states $\mathbf{x}$ with
$n_{-1}(\mathbf{x})$ and $n_{+1}(\mathbf{x})$ close to $n/2$ and
$(1/n_{+1}(\mathbf{x}))\sum_{\mathbf{x}^{\prime}\in\mathcal{N}_{-1}(\mathbf{x})}g(\pi(\mathbf{x}^{\prime})/\pi(\mathbf{x}))$,
$(1/n_{-1}(\mathbf{x}))\sum_{\mathbf{x}^{\prime}\in\mathcal{N}_{+1}(\mathbf{x})}g(\pi(\mathbf{x}^{\prime})/\pi(\mathbf{x}))$
and
$(1/n)\sum_{\mathbf{x}^{\prime}\in\mathcal{N}(\mathbf{x})}g(\pi(\mathbf{x}^{\prime})/\pi(\mathbf{x}))$
all close to a positive constant $\mu$. We thus notice that, once we make
precise what we mean by “close to”, this special case fits within the
definition of $\bm{\mathcal{X}}_{\varphi(n)}$ (12), under an additional
condition compared to the definition of the set in the previous section (8).
### 3.3 Computational costs
We provide in this section an overview of the computational costs associated
with using the proposal distributions described in Sections 3.1 and 3.2. The
uniform distribution is the least expensive: at each iteration, one has to
generate from a uniform and then evaluate the acceptance probability which
requires the computation of a ratio $\pi(\mathbf{y})/\pi(\mathbf{x})$.
Consider that the cost of the latter is the important one, in the sense that
all the other costs are comparatively negligible. The approach of Zanella
(2019) thus costs twice as much, if we assume that the cost of computing any
ratio is the same and that the ratios
$\pi(\mathbf{x}^{\prime})/\pi(\mathbf{x})$ for all $\mathbf{x}^{\prime}$ in
the neighbourhood are all computed in parallel. Indeed, these ratios are
necessary to generate the proposal $\mathbf{y}$, but once the latter has been
generated, the process has to be repeated for the reverse move. This is true
for the reversible MH sampler and Algorithm 1. Therefore, if the informed
proposal leads to a sampler at least twice as effective (in terms of ESS for
instance), then it is beneficial. It is the case in all our numerical
experiments. Note that in light of the above, implementing the reversible MH
sampler or Algorithm 1 costs essentially the same.
Algorithm 2 is more costly. When used with a uniform distribution and
$\rho:=\rho^{*}$ (5), at each iteration, $|\mathcal{N}_{\nu}(\mathbf{x})|$
ratio evaluations are required to compute
$T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ (4), and it is afterwards required to
compute $T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})$ from time to time. This makes
the implementation cost somewhere in between that of Algorithm 1 with a
uniform and Algorithm 1 using the approach of Zanella (2019). When Algorithm 2
is used with an informed proposal and $\rho:=\rho^{*}$ (5), the cost may
explode. Consider that parallel computing is used to compute any normalising
constant $c_{\nu}(\mathbf{x})$, but that the normalising constants are
computed sequentially, then at each iteration it is required to compute at
least $1+|\mathcal{N}_{\nu}(\mathbf{x})|$ normalising constants compared with
two in Algorithm 1. The computation time per iteration will thus roughly be at
least $(1+|\mathcal{N}_{\nu}(\mathbf{x})|)/2$ times larger.
## 4 Numerical experiments
We first consider in Section 4.1 the simulation of an Ising model, used as a
toy example in which we can control the dimension, the roughness of the target
and where the mass concentrates, to show how the performance of the lifted and
non-lifted samplers varies when these parameters change. In Section 4.2, we
evaluate their performance when employed to solve a real variable selection
problem.
### 4.1 Ising model
For the two-dimensional model, the ambient space $(V_{\eta},E_{\eta})$ is an
$\eta\times\eta$ square lattice regarded here as a square matrix in which each
element takes either the value $-1$ or $+1$. We write each state as a vector
as before: $\mathbf{x}=(x_{1},\ldots,x_{n})$, where $n=\eta^{2}$. The states
can be encoded, for instance, as follows: the values of the components on the
first line are $x_{1},\ldots,x_{\eta}$, those on the second line
$x_{\eta+1},\ldots,x_{2\eta}$, and so on. The PMF is given by
$\pi(\mathbf{x})=\frac{1}{Z}\exp\left(\sum_{i}\alpha_{i}x_{i}+\lambda\sum_{\langle
ij\rangle}x_{i}x_{j}\right),$
where $\alpha_{i}\in\operatorname{\mathbb{R}}$ and $\lambda>0$ are known
parameters, $Z$ is the normalising constant and the notation $\langle
ij\rangle$ indicates that sites $i$ and $j$ are nearest neighbours.
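For reference, here is a sketch of how $\pi(\mathbf{y})/\pi(\mathbf{x})$ can be evaluated for a single-bit flip under this PMF; only the local field around the flipped site is needed, so $Z$ is never computed. Free boundary conditions and the row-major encoding above are assumptions of this sketch.

```python
def log_ratio_flip(x, alpha, lam, i, eta):
    """log pi(y)/pi(x) where y flips site i of the eta x eta lattice x.

    x     : sequence of -1/+1 values of length eta*eta (row-major)
    alpha : external field values, same length as x
    lam   : interaction strength lambda > 0
    """
    r, c = divmod(i, eta)
    nbr_sum = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < eta and 0 <= cc < eta:      # free boundary conditions
            nbr_sum += x[rr * eta + cc]
    # flipping x_i negates the terms alpha_i x_i and lambda x_i x_j for j ~ i
    return -2.0 * x[i] * (alpha[i] + lam * nbr_sum)

# the ratio pi(y)/pi(x) used in the acceptance probabilities would then be
# math.exp(log_ratio_flip(x, alpha, lam, i, eta))
```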
We first consider a base target distribution for which $n=50^{2}$, the spatial
correlation is moderate and more precisely $\lambda:=0.5$, and which has the
external field (the values for the $\alpha_{i}$’s) presented in Figure 2.
Figure 2: External field of the base target
We generated the $\alpha_{i}$ independently as follows:
$\alpha_{i}=-\mu+\epsilon_{i}$ if the column index is smaller than or equal to
$\ell=\lfloor\eta/2\rfloor$ and $\alpha_{i}=\mu+\epsilon_{i}$ otherwise, where
$\mu=1$, the $\epsilon_{i}$ are independent uniform random variables on the
interval $(-0.1,+0.1)$ and $\lfloor\cdot\rfloor$ is the floor function. In the
simulation study, while keeping the other parameters fixed, we gradually
increase $\eta$ from 50 to 500 to observe the impact of dealing with larger
systems. Next, while keeping the other parameters fixed (with $\eta=50$), we
gradually increase the value of $\mu$ from 1 to 3. This has for effect of
increasing the contrast so that it is clearer which values the $x_{i}$ should
take, thus making the target rougher and concentrated on fewer configurations.
One could vary $\lambda$ as well, but this would also make the target rougher
and concentrated on fewer configurations.
We observe the impact on Algorithm 1 with uniform and informed proposal
distributions, and their non-lifted MH counterparts. For such a simulation
study, it would simply take too long to obtain the results for Algorithm 2 with
$\rho_{\nu}^{*}$ (5). We tried to vary the value of $\ell$ to observe what
happens with the acceptance rates for the uniform lifted sampler when the
ratios $n_{-\nu}(\mathbf{x})/n_{\nu}(\mathbf{y})$ (see (6)) are far from 1.
The impact in this example is however not the one expected: the acceptance
rates increase instead of deteriorating. To see why, consider for instance
that $\ell=5$. The ratios $n_{-\nu}(\mathbf{x})/n_{\nu}(\mathbf{y})$ with
$\nu=-1$ are on average around 9 (45 columns with $+1$’s and 5 with $-1$’s).
With $\nu=-1$, it is proposed to flip a bit from $+1$ to $-1$, and
$\pi(\mathbf{y})/\pi(\mathbf{x})$ is thus multiplied by a factor of around 9.
It is likely that this bit is on the yellow side (see Figure 2). For such a
move, $\pi(\mathbf{y})/\pi(\mathbf{x})$ is often around $\exp(-2(1+0.5\times
4))=0.002$. Therefore, it is more likely to accept this move compared to in
the reversible MH sampler (in the MH sampler $\pi(\mathbf{y})/\pi(\mathbf{x})$
is not multiplied by 9). When $\nu=+1$, the multiplicative factor is thus
around $1/9$, but it is relatively likely that the proposal will be to flip a
bit from $-1$ to $+1$ on the yellow side, because there are some bits with the
value $-1$ on this larger side, and this move is often automatically accepted
(because $\pi(\mathbf{y})/\pi(\mathbf{x})$ is often around $1/0.002$). Note
that these conclusions are not in contradiction with our analysis of Section
3.1, because what we observed here is specific to the Ising model. We do not
present the results because the graph is uninteresting: the performance is
essentially constant for the informed samplers and that of the uniform ones is
so low that we do not see the ESS vary.
We present the other results in Figure 3. They are based on 1,000 independent
runs of 100,000 iterations for each algorithm and each value of $\mu$ and
$\eta$, with burn-ins of 10,000. For each run, an ESS per iteration is
computed for $f(\mathbf{x},-1)=f(\mathbf{x},+1)=\sum_{i}x_{i}$ and then the
results are averaged out. This function is proportional to the magnetisation.
Monitoring such a statistic is relevant as a quicker variation of its value
(leading to a higher ESS) indicates that the whole state-space is explored
more quickly.
For the base target (represented by the starting points on the left of the
graphs in Figure 3), the mass is concentrated on a manifold of several
configurations, which allows for persistent movement for informed samplers.
The lifted one indeed takes advantage of this; it is approximately 7 times
more efficient than its non-lifted counterpart. The gap widens as $\eta$
increases; it is approximately 20 and 70 times more efficient when $\eta$ is
3.2 and 10 times larger (i.e. when $n$ is 10 and 100 times larger),
respectively. We observed that the ratio of ESSs increases linearly with
$\eta$. The non-informed samplers both perform poorly (the lines are on top of
each other).
As $\mu$ increases, the target becomes rougher and concentrated on fewer
configurations. When the roughness and concentration level are too severe, the
performance of the lifted informed sampler stagnates, whereas that of the non-
lifted MH sampler continues to improve. There are two reasons for this.
Firstly, the acceptance rates deteriorate more rapidly for the lifted than the
non-lifted sampler (as a consequence of the difference in the acceptance
probability, see (11)). Secondly, when the mass is concentrated on few
configurations, there is little room for persistent movement for the lifted
sampler. The latter thus loses its advantage. Again, the non-informed samplers
both perform poorly (the lines are on top of each other).
Figure 3: ESS per iteration of
$f(\mathbf{x},-1)=f(\mathbf{x},+1)=\sum_{i}x_{i}$ for Algorithm 1 with uniform
and informed proposal distributions and their non-lifted MH counterparts when:
(a) $\eta$ increases from 50 to 500 and the other parameters are kept fixed
($\mu=1$, $\lambda=0.5$ and $\ell=25$); (b) $\mu$ increases from 1 to 3 and
the other parameters are kept fixed ($\eta=50$, $\lambda=0.5$ and $\ell=25$)
### 4.2 Variable selection: US crime data
A study of crime rate was first presented in Ehrlich (1973); an expanded and
corrected version then appeared in Vandaele (1978), in which corrected data
were provided. The topic was, more precisely, the connection between crime
rate and 15 covariates (some were added by Vandaele (1978)), such as the
percentage of males aged between 14 and 23 and the mean years of schooling in
a given state. The data were aggregated by state and were from 47 U.S. states
in 1960. They were analysed in several statistics papers (see, e.g., Raftery
et al. (1997)) and are available in the R package MASS.
The data are modelled using the usual linear regression with normal errors.
Here we set the prior distribution of the regression coefficients and scaling
of the errors to be, conditionally on a model, the non-informative Jeffreys
prior. It is proved in Gagnon (2019) that a simple modification to the uniform
prior on the model random variable (represented here by $\mathbf{X}$) prevents
the Jeffreys–Lindley (Lindley, 1957; Jeffreys, 1967) paradox from arising.
With the resulting likelihood function and prior density on the parameters,
the latter can be integrated out. It is thus possible to evaluate the exact
marginal posterior probability of any of the $2^{15}=$ 32,768 models, up to a
normalising constant. This allows us to implement the MH sampler with the
locally informed proposal distribution of Zanella (2019) and its lifted
counterparts (Algorithm 1 and Algorithm 2 with $\rho_{\nu}^{*}$ (5)). In the
previous statistical studies, it was noticed that the mass is diffused over
several models, so that we expect the lifted chains to exhibit persistent
movement (as seen in Figure 1) and to perform well. To simplify the
presentation, we do not show the performance of the uniform samplers because,
as in the previous section, it is very poor.
The performances of the algorithms are summarised in Figure 4. The results are
based on 1,000 independent runs of 10,000 iterations for each algorithm, with
burn-ins of 1,000. Each run is started from a distribution which approximates
the target. On average, Algorithm 1 and Algorithm 2 with $\rho_{\nu}^{*}$ are
$2.7$ and $3.3$ times more efficient than their non-lifted counterpart,
respectively. The benefits of persistent movement thus compensate for a
decrease in acceptance rates; the rate indeed decreases from 0.92 for the MH
sampler to 0.71 for Algorithm 1 and Algorithm 2 with $\rho_{\nu}^{*}$ (5).
Figure 4: ESS per iteration for
$f(\mathbf{x},-1)=f(\mathbf{x},+1)=\sum_{i}x_{i}$ of 1,000 independent runs
for the MH sampler with the locally informed proposal distribution and its
lifted counterparts (Algorithm 1 and Algorithm 2 with $\rho_{\nu}^{*}$)
## 5 Lifted trans-dimensional sampler
In this section, we introduce a trans-dimensional algorithm which is a non-
reversible version of the popular reversible jump (RJ) algorithms introduced
by Green (1995). We consider that $\bm{\mathcal{X}}$ is a model space and
$\mathbf{X}$ a model indicator. The latter indicates, for instance, with 0’s
and 1’s which covariates are included in the model in variable selection
contexts as in Section 4.2. Such an algorithm is useful when it is not
possible to integrate out the parameters, in contrast to the linear regression
with normal errors and suitable priors. Examples of such situations include
analyses based on linear regression with super heavy-tailed errors ensuring
whole robustness (Gagnon et al., 2018) and generalised linear models and
generalised linear mixed models (Forster et al., 2012).
The parameters of a given model $\mathbf{x}$ are denoted by
$\bm{\theta}_{\mathbf{x}}\in\bm{\Theta}_{\mathbf{x}}$. Trans-dimensional
algorithms are MCMC methods that one uses to sample from a target distribution
$\pi$ defined on a union of sets
$\cup_{\mathbf{x}\in\bm{\mathcal{X}}}\\{\mathbf{x}\\}\times\bm{\Theta}_{\mathbf{x}}$,
which corresponds in Bayesian statistics to the joint posterior of the model
indicator $\mathbf{X}$ and the parameters of Model $\mathbf{X}$,
$\bm{\theta}_{\mathbf{X}}$. Such a posterior allows to jointly infer about
$(\mathbf{X},\bm{\theta}_{\mathbf{X}})$, or in other words, simultaneously
achieve model selection and parameter estimation. In this section, we assume
for simplicity that the parameters of all models are continuous random
variables.
Given the current state of the Markov chain
$(\mathbf{x},\bm{\theta}_{\mathbf{x}})$, a trans-dimensional sampler generates
the next state by first proposing a model candidate $\mathbf{y}\sim
q_{\mathbf{x}}(\mathbf{y})$ and then a proposal for its corresponding
parameter values. When $\mathbf{y}=\mathbf{x}$, we say that a parameter update
is proposed, whereas we say that a model switch is proposed when
$\mathbf{y}\neq\mathbf{x}$. Note that $\mathbf{x}\in\mathcal{N}(\mathbf{x})$,
in contrast to the previous sections. This is to allow parameter updates. When
a parameter update is proposed, we allow any fixed-dimensional methods to be
used; we only require that the Markov kernels leave the conditional
distributions $\pi(\,\cdot\mid\mathbf{x})$ invariant. When a model switch is
proposed, a vector of auxiliary variables
$\mathbf{u}_{\mathbf{x}\mapsto\mathbf{y}}$ is typically generated and this is
followed by a proposal mechanism leading to
$(\bm{\theta}^{\prime}_{\mathbf{y}},\mathbf{u}_{\mathbf{y}\mapsto\mathbf{x}})$,
where $\bm{\theta}^{\prime}_{\mathbf{y}}$ is the proposal for the parameter
values in Model $\mathbf{y}$. We require the whole proposal mechanism for
$\bm{\theta}^{\prime}_{\mathbf{y}}$ to be valid in an RJ framework, in the
sense that the model switch transitions are reversible in this framework. The
non-reversibility in the lifted trans-dimensional sampler lies in the
transitions for the $\mathbf{x}$ variable during model switches. More
precisely, $\mathbf{y}$ is generated from $q_{\mathbf{x},\nu}(\mathbf{y})$
instead, but the proposal mechanism for $\bm{\theta}^{\prime}_{\mathbf{y}}$
during model switches is the same. We consider that the acceptance probability
of these model switches in RJ is given by
$\displaystyle\alpha_{\text{RJ}}((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}})):=1\wedge\frac{q_{\mathbf{y}}(\mathbf{x})}{q_{\mathbf{x}}(\mathbf{y})}\,r((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}})),$
(14)
where the function $r$ depends on the method and may depend on several other
variables.
The algorithm is now presented in Algorithm 3. In it, we consider that the
current model $\mathbf{x}$ always belongs to the neighbourhood
$\mathcal{N}_{\nu}(\mathbf{x})$, regardless of the current direction $\nu$,
and that $q_{\mathbf{x},-1}(\mathbf{x})=q_{\mathbf{x},+1}(\mathbf{x})$, which
will typically be the case in practice.
Algorithm 3 A lifted trans-dimensional sampler for partially ordered model
spaces
1. 1.
Generate $\mathbf{y}\sim q_{\mathbf{x},\nu}$, a PMF with support restricted to
$\mathcal{N}_{\nu}(\mathbf{x})$.
2. 2.(a)
If $\mathbf{y}=\mathbf{x}$, attempt a parameter update using a MCMC kernel of
invariant distribution $\pi(\,\cdot\mid\mathbf{x})$ while keeping the current
value of the model indicator $\mathbf{x}$ and direction $\nu$ fixed.
3. 2.(b)
If $\mathbf{y}\neq\mathbf{x}$, attempt a model switch from Model $\mathbf{x}$
to Model $\mathbf{y}$. Generate $\bm{\theta}^{\prime}_{\mathbf{y}}$ using a
method which is valid in RJ and $u\sim\mathcal{U}[0,1]$. If
$\displaystyle
u\leq\alpha_{\text{NRJ}}((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}})):=1\wedge\frac{q_{\mathbf{y},-\nu}(\mathbf{x})}{q_{\mathbf{x},\nu}(\mathbf{y})}\,r((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}})),$
(15)
set the next state of the chain to
$(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}},\nu)$. Otherwise, set it to
$(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)$.
4. 3.
Go to Step 1.
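A high-level Python sketch of Algorithm 3 follows; the callables `q` (with `draw` and `pmf` methods for $q_{\mathbf{x},\nu}$), `propose_theta` (any RJ-valid mechanism returning $\bm{\theta}^{\prime}_{\mathbf{y}}$ and the ratio $r$) and `param_update` (any kernel leaving $\pi(\,\cdot\mid\mathbf{x})$ invariant) are placeholders standing in for problem-specific machinery, not part of the algorithm's specification.

```python
def algorithm3_step(q, propose_theta, param_update, state, rng):
    """One iteration of Algorithm 3 (lifted trans-dimensional sampler)."""
    x, theta, nu = state
    y = q.draw(x, nu, rng)                        # Step 1: y ~ q_{x,nu}
    if y == x:                                    # Step 2.(a): parameter update,
        return x, param_update(x, theta, rng), nu #   model and direction kept fixed
    theta_y, r = propose_theta(x, theta, y, rng)  # Step 2.(b): model switch
    # acceptance probability (15): 1 ∧ [q_{y,-nu}(x) / q_{x,nu}(y)] r(...)
    accept = min(1.0, (q.pmf(y, -nu, x) / q.pmf(x, nu, y)) * r)
    if rng.random() <= accept:
        return y, theta_y, nu                     # accept: keep direction
    return x, theta, -nu                          # reject: reverse direction
```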
###### Proposition 2.
The transition kernel of the Markov chain
$\\{(\mathbf{X},\bm{\theta}_{\mathbf{X}},\nu)(m):m\in\operatorname{\mathbb{N}}\\}$
simulated by Algorithm 3 admits $\pi\otimes\mathcal{U}\\{-1,+1\\}$ as invariant
distribution.
###### Proof.
See Section 7. ∎
The main difficulty with the implementation of trans-dimensional samplers is
the construction of efficient proposal schemes for
$(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}})$ during model switches. Gagnon
(2019) discusses this in depth in the RJ framework. The author proposes a
scheme and proves that it is possible to arbitrarily approach an asymptotic
regime in which one is able to generate $\bm{\theta}^{\prime}_{\mathbf{y}}$
from $\pi(\,\cdot\mid\mathbf{y})$ (the correct conditional distribution) and
evaluate exactly the ratios of marginal probabilities
$\pi(\mathbf{y})/\pi(\mathbf{x})$ (and is therefore able to adequately
construct $q_{\mathbf{x},\nu}$). In particular, for this scheme,
$r((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}}))$
is a consistent estimator of $\pi(\mathbf{y})/\pi(\mathbf{x})$. We refer the
reader to that paper for the details.
We thus conclude that the marginal behaviour of
$\\{(\mathbf{X},\nu)(m):m\in\operatorname{\mathbb{N}}\\}$ is the same as that
of the stochastic process generated by Algorithm 1 in the asymptotic regime
and considering only iterations for which model switches are proposed. All
conclusions previously drawn thus hold, at least approximatively. In
particular, one may analyse the same data as in Section 4.2, but using the
super heavy-tailed regression of Gagnon et al. (2018) for robust inference and
outlier detection. The results would be essentially the same because, as
mentioned in Raftery et al. (1997), “standard diagnostic checking (see, e.g.,
Draper and Smith (1981)) did not reveal any gross violations of the
assumptions underlying normal linear regression” and the robust method is
designed to lead to similar results in the absence of outliers. For brevity,
we thus omit further analysis of Algorithm 3 and do not illustrate its
performance. We nevertheless highlight that it is important for
$r((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}^{\prime}_{\mathbf{y}}))$
to be a low variance estimator of $\pi(\mathbf{y})/\pi(\mathbf{x})$ as
persistent movement may be interrupted otherwise, as shown in Gagnon and
Doucet (2019).
## 6 Discussion
In this paper, we presented and analysed generic algorithms allowing
straightforward sampling from any PMF $\pi$ with a support $\bm{\mathcal{X}}$
on which a partial order can be established. The algorithms rely on the
technique of lifting. We showed that these are expected to perform well when
the shape of the target (the level of concentration of the mass) allows for
persistent movement. This is true even when the target concentrates on a
manifold of the ambient space in the case where the lifting technique is
combined with locally informed proposal distributions (provided that the shape
of the manifold allows for persistent movement).
The samplers are in particular useful for the simulation of binary random
variables and variable selection. Algorithm 1 can be directly employed for the
latter when the parameters of the models can be integrated out. A lifted
trans-dimensional sampler for partially ordered model spaces has been
introduced in Section 5 for, among other applications, variable selection when it is not
possible to integrate out the parameters. We believe it would be interesting
to continue this line of research by taking steps towards automatic generic
samplers using the technique of lifting for any discrete state-space.
## References
* Andrieu and Livingstone (2019) Andrieu, C. and Livingstone, S. (2019) Peskun-Tierney ordering for Markov chain and process Monte Carlo: beyond the reversible scenario. arXiv:1906.06197.
* Barker (1965) Barker, A. A. (1965) Monte Carlo calculations of the radial distribution functions for a proton-electron plasma. Austral. J. Phys., 18, 119–134.
* Chen et al. (1999) Chen, F., Lovász, L. and Pak, I. (1999) Lifting Markov chains to speed up mixing. In Proceedings of the thirty-first annual ACM symposium on Theory of computing, 275–281.
* Diaconis et al. (2000) Diaconis, P., Holmes, S. and Neal, R. M. (2000) Analysis of a nonreversible Markov chain sampler. Ann. Appl. Probab., 726–752.
* Draper and Smith (1981) Draper, N. R. and Smith, H. (1981) Applied regression analysis (2nd ed.). New York: Wiley.
* Ehrlich (1973) Ehrlich, I. (1973) Participation in illegitimate activities: a theoretical and empirical analysis. J. Polit. Econ., 81, 521–567.
* Forster et al. (2012) Forster, J. J., Gill, R. C. and Overstall, A. M. (2012) Reversible jump methods for generalised linear models and generalised linear mixed models. Stat. Comput., 22, 107–120.
* Gagnon (2019) Gagnon, P. (2019) A step further towards automatic and efficient reversible jump algorithms. arXiv:1911.02089.
* Gagnon et al. (2018) Gagnon, P., Desgagné, A. and Bédard, M. (2018) A new Bayesian approach to robustness against outliers in linear regression. Bayesian Anal. Advance publication.
* Gagnon and Doucet (2019) Gagnon, P. and Doucet, A. (2019) Non-reversible jump algorithms for Bayesian nested model selection. arXiv:1911.01340.
* Geyer (1991) Geyer, C. J. (1991) Markov chain Monte Carlo maximum likelihood. Interface Proceedings.
* Green (1995) Green, P. J. (1995) Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732.
* Gustafson (1998) Gustafson, P. (1998) A guided walk Metropolis algorithm. Stat. Comput., 8, 357–364.
* Hastings (1970) Hastings, W. K. (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97–109.
* Horowitz (1991) Horowitz, A. M. (1991) A generalized guided Monte carlo algorithm. Phys. Lett. B, 268, 247–252.
* Jeffreys (1967) Jeffreys, H. (1967) Theory of Probability. Oxford Univ. Press, London.
* Lindley (1957) Lindley, D. V. (1957) A statistical paradox. Biometrika, 44, 187–192.
* Livingstone and Zanella (2019) Livingstone, S. and Zanella, G. (2019) On the robustness of gradient-based MCMC algorithms. arXiv:1908.11812.
* Metropolis et al. (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953) Equation of state calculations by fast computing machines. J. Chem. Phys., 21, 1087.
* Neal (2003) Neal, R. M. (2003) Slice sampling. Ann. Statist., 705–741.
* Neal (2020) — (2020) Non-reversibly updating a uniform [0, 1] value for Metropolis accept/reject decisions. arXiv:2001.11950.
* Peskun (1973) Peskun, P. (1973) Optimum Monte-Carlo sampling using Markov chains. Biometrika, 60, 607–612.
* Power and Goldman (2019) Power, S. and Goldman, J. V. (2019) Accelerated sampling on discrete spaces with non-reversible Markov processes. arXiv:1912.04681.
* Raftery et al. (1997) Raftery, A. E., Madigan, D. and Hoeting, J. A. (1997) Bayesian model averaging for linear regression models. J. Amer. Statist. Assoc., 92, 179–191.
* Roberts and Rosenthal (2004) Roberts, G. O. and Rosenthal, J. S. (2004) General state space Markov chains and MCMC algorithms. Probab. Surv., 1, 20–71.
* Syed et al. (2019) Syed, S., Bouchard-Côté, A., Deligiannidis, G. and Doucet, A. (2019) Non-reversible parallel tempering: an embarassingly parallel MCMC scheme. arXiv:1905.02939.
* Tierney (1998) Tierney, L. (1998) A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab., 8, 1–9.
* Trotter (1992) Trotter, W. T. (1992) Combinatorics and partially ordered sets: Dimension theory, vol. 59. Johns Hopkins University Press Baltimore.
* Vandaele (1978) Vandaele, W. (1978) Participation in illegitimate activities; Ehrlich revisited. In Deterrence and incapacitation, 270–335. Washington, D.C.: National Academy of Sciences Press.
* Vanetti et al. (2017) Vanetti, P., Bouchard-Côté, A., Deligiannidis, G. and Doucet, A. (2017) Piecewise-deterministic Markov chain Monte Carlo. arXiv:1707.05296.
* Zanella (2019) Zanella, G. (2019) Informed proposals for local MCMC in discrete spaces. To appear in J. Amer. Statist. Assoc.
## 7 Proofs
###### Proof of Proposition 1.
It suffices to prove that the probability to reach the state
$(\mathbf{y},\nu^{\prime})$ in one step is equal to the probability of this
state under the target:
$\displaystyle\sum_{\mathbf{x},\nu}\pi(\mathbf{x})\,(1/2)\,P((\mathbf{x},\nu),(\mathbf{y},\nu^{\prime}))=\pi(\mathbf{y})\,(1/2),$
(16)
where $P$ is the transition kernel.
The probability to reach the state $(\mathbf{y},\nu^{\prime})$ from some
$(\mathbf{x},\nu)$ is given by:
$\displaystyle P((\mathbf{x},\nu),(\mathbf{y},\nu^{\prime}))$
$\displaystyle=T_{\nu}(\mathbf{x},\bm{\mathcal{X}})\,Q_{\mathbf{x},\nu}(\mathbf{y})\,\mathds{1}(\nu=\nu^{\prime})$
$\displaystyle\qquad+\mathds{1}(\nu=-\nu^{\prime},\mathbf{x}=\mathbf{y})\left[(\rho_{\nu}(\mathbf{x})+T_{\nu}(\mathbf{x},\bm{\mathcal{X}}))-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})\right]$
$\displaystyle\qquad+\mathds{1}(\nu=\nu^{\prime},\mathbf{x}=\mathbf{y})\left[1-\rho_{\nu}(\mathbf{x})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})\right]$
$\displaystyle=q_{\mathbf{x},\nu}(\mathbf{y})\,\alpha_{\nu}(\mathbf{x},\mathbf{y})\,\mathds{1}(\nu=\nu^{\prime})$
$\displaystyle\qquad+\mathds{1}(\nu=-\nu^{\prime},\mathbf{x}=\mathbf{y})\,\rho_{\nu}(\mathbf{x})$
$\displaystyle\qquad+\mathds{1}(\nu=\nu^{\prime},\mathbf{x}=\mathbf{y})\left[1-\rho_{\nu}(\mathbf{x})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})\right].$
We have that
$\displaystyle\pi(\mathbf{x})\,(1/2)\,P((\mathbf{x},\nu),(\mathbf{y},\nu^{\prime}))$
$\displaystyle=(1/2)\,\pi(\mathbf{y})\,q_{\mathbf{y},-\nu^{\prime}}(\mathbf{x})\,\alpha_{-\nu^{\prime}}(\mathbf{y},\mathbf{x})\,\mathds{1}(-\nu^{\prime}=-\nu)$
$\displaystyle\qquad+(1/2)\,\pi(\mathbf{y})\,\mathds{1}(-\nu^{\prime}=\nu,\mathbf{y}=\mathbf{x})\,\rho_{-\nu^{\prime}}(\mathbf{y})$
$\displaystyle\qquad+(1/2)\,\pi(\mathbf{y})\mathds{1}(-\nu^{\prime}=-\nu,\mathbf{y}=\mathbf{x})\left[1-\rho_{-\nu^{\prime}}(\mathbf{y})-T_{-\nu^{\prime}}(\mathbf{y},\bm{\mathcal{X}})\right]$
$\displaystyle=(1/2)\,\pi(\mathbf{y})\,T_{-\nu^{\prime}}(\mathbf{y},\bm{\mathcal{X}})\,Q_{\mathbf{y},-\nu^{\prime}}(\mathbf{x})\mathds{1}(-\nu^{\prime}=-\nu)$
$\displaystyle\qquad+(1/2)\,\pi(\mathbf{y})\,\mathds{1}(-\nu^{\prime}=\nu,\mathbf{y}=\mathbf{x})\left[(\rho_{-\nu^{\prime}}(\mathbf{y})+T_{-\nu^{\prime}}(\mathbf{y},\mathcal{X}))-T_{-\nu^{\prime}}(\mathbf{y},\mathcal{X})\right]$
$\displaystyle\qquad+(1/2)\,\pi(\mathbf{y})\,\mathds{1}(-\nu^{\prime}=-\nu,\mathbf{y}=\mathbf{x})\left[1-\rho_{-\nu^{\prime}}(\mathbf{y})-T_{-\nu^{\prime}}(\mathbf{y},\mathcal{X})\right],$
where we used the definition of $\alpha$ for the first term and that
$\rho_{\nu}(\mathbf{x})-\rho_{-\nu}(\mathbf{x})=T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$
for the third term. Notice the sum on the right-hand side (RHS) is equal to
the probability to reach some $(\mathbf{x},-\nu)$, starting from
$(\mathbf{y},-\nu^{\prime})$:
$(1/2)\,\pi(\mathbf{y})\,P((\mathbf{y},-\nu^{\prime}),(\mathbf{x},-\nu))$.
Therefore,
$\displaystyle\sum_{\mathbf{x},\nu}\pi(\mathbf{x})\,(1/2)\,P((\mathbf{x},\nu),(\mathbf{y},\nu^{\prime}))$
$\displaystyle=\sum_{\mathbf{x},\nu}(1/2)\,\pi(\mathbf{y})\,P((\mathbf{y},-\nu^{\prime}),(\mathbf{x},-\nu))$
$\displaystyle=(1/2)\,\pi(\mathbf{y}),$
which concludes the proof. ∎
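As a sanity check, condition (16) can be verified numerically on a toy example. The sketch below is ours and not part of the original argument: it builds the lifted kernel on a small ring, assuming deterministic neighbour proposals $q_{\mathbf{x},\nu}(\mathbf{x}+\nu)=1$ and the switching function $\rho_{\nu}(\mathbf{x})=1-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$ (flip the direction whenever a move is rejected), a valid choice since it satisfies $\rho_{\nu}(\mathbf{x})-\rho_{-\nu}(\mathbf{x})=T_{-\nu}(\mathbf{x},\bm{\mathcal{X}})-T_{\nu}(\mathbf{x},\bm{\mathcal{X}})$.

```python
import numpy as np

# Toy check of Eq. (16) on a ring of n states; rho_nu(x) = 1 - T_nu(x, X) is an
# assumed (valid) switching function: the direction flips whenever a move is rejected.
n = 5
rng = np.random.default_rng(0)
pi = rng.random(n)
pi /= pi.sum()                               # arbitrary target on {0, ..., n-1}

P = np.zeros((2 * n, 2 * n))                 # lifted state (x, nu), indexed as 2x + (nu + 1) / 2
for x in range(n):
    for i, nu in enumerate((-1, +1)):
        s = 2 * x + i
        y = (x + nu) % n                     # deterministic directional proposal
        a = min(1.0, pi[y] / pi[x])          # alpha_nu(x, y) reduces to a Metropolis ratio here
        P[s, 2 * y + i] += a                 # accept: move to y, keep direction nu
        P[s, 2 * x + (1 - i)] += 1.0 - a     # reject: stay at x, flip direction

pi_bar = np.repeat(pi / 2.0, 2)              # pi(x) * (1/2) on the lifted space
assert np.allclose(pi_bar @ P, pi_bar)       # the LHS of Eq. (16) equals pi(y) * (1/2)
```

Here acceptance preserves the direction while rejection flips it, mirroring the two indicator cases in the expansion of $P((\mathbf{x},\nu),(\mathbf{y},\nu^{\prime}))$ above.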
We now present a lemma that will be useful in the next proofs.
###### Lemma 1.
Let $Q$ be the Markov kernel of the Markov chain simulated by Algorithm 2, for
any valid switching function $\rho_{\nu}$, $\nu\in\\{-1,1\\}$. Assume that
$\bm{\mathcal{X}}$ is finite. Then, for any
$f\in\mathcal{L}_{2}^{*}(\bar{\pi})$,
$\lim_{\lambda\to
1}\sum_{k>0}\lambda^{k}\langle\,f,Q^{k}f\,\rangle=\sum_{k>0}\langle\,f,Q^{k}f\,\rangle\,.$
(17)
###### Proof.
For each $f\in\mathcal{L}_{2}^{*}(\bar{\pi})$, define the sequence of
functions $S_{n}:\lambda\mapsto\sum_{0<k\leq
n}\lambda^{k}\langle\,f,Q^{k}f\,\rangle$ defined for $\lambda\in[0,1)$ and its
limit $S(\lambda)=\sum_{k>0}\lambda^{k}\langle\,f,Q^{k}f\,\rangle$ (the
dependence of $S_{n}$ and $S$ on $f$ and $Q$ is implicit). We now show that
the partial sum $S_{n}$ converges uniformly to $S$ on $[0,1)$, and since for
each $n\in\mathbb{N}$, the function
$\lambda\to\lambda^{n}\langle\,f,Q^{n}f\,\rangle$ admits a limit when
$\lambda\to 1$, we have that $S$ admits a limit when $\lambda\to 1$, given by
$\lim_{\lambda\to 1}S(\lambda)=S(1)=\sum_{k>0}\langle\,f,Q^{k}f\,\rangle\,,$
which is Eq. (17). First, note that
$\sup_{\lambda\in[0,1)}\left|S_{n}(\lambda)-S(\lambda)\right|=\sup_{\lambda\in[0,1)}\left|\sum_{k>n}\lambda^{k}\langle\,f,Q^{k}f\,\rangle\right|\leq\sup_{\lambda\in[0,1)}\sum_{k>n}\lambda^{k}\left|\langle\,f,Q^{k}f\,\rangle\right|=\sum_{k>n}\left|\langle\,f,Q^{k}f\,\rangle\right|\,.$
(18)
Thus, to prove that
$\sup_{\lambda\in[0,1)}\left|S_{n}(\lambda)-S(\lambda)\right|\to 0$, it is
sufficient to prove that the series
$\sum_{k>0}\left|\langle\,f,Q^{k}f\,\rangle\right|$ converges.
By bilinearity of the inner product and by linearity of the iterated operators
$Q,Q^{2},\ldots$, it can be checked that for any linear mapping $\phi$ on
$\mathcal{L}_{2}^{\ast}(\bar{\pi})$
$\sum_{k=1}^{\infty}\left|\left\langle
f,Q^{k}f\right\rangle\right|<\infty\Leftrightarrow\sum_{k=1}^{\infty}\left|\left\langle\phi(f),Q^{k}\phi(f)\right\rangle\right|<\infty\,.$
(19)
Since $\bm{\mathcal{X}}$ is finite, if $f\in\mathcal{L}_{2}^{\ast}(\bar{\pi})$
then $\sup|f|<\infty$. As a consequence, we may use
$\phi(f):=(f-\bar{\pi}f)/\sup|f|$ and $\bar{\pi}f:=\int f\mathrm{d}\bar{\pi}$.
In the following we denote by $\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})$ the
subset of $\mathcal{L}_{2}^{\ast}(\bar{\pi})$ such that
$\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi}):=\left\\{f\in\mathcal{L}_{2}^{\ast}(\bar{\pi})\,:\,\bar{\pi}f=0\,,\;\sup|f|\leq
1\right\\}\,.$
By Eq. (19), we only need to check that the series
$\sum_{k>0}\left|\langle\,f,Q^{k}f\,\rangle\right|$ converges for each
$f\in\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})$. Since $\bm{\mathcal{X}}$ is
finite, $Q$ is uniformly ergodic and there exist constants $\varrho\in(0,1)$
and $C\in(0,\infty)$ such that for any $t\in\mathbb{N}$,
$\sup_{(x,\nu)\in\bm{\mathcal{X}}\times\\{-1,+1\\}}\|\delta_{x,\nu}Q^{t}-\bar{\pi}\|_{\mathrm{tv}}\leq
C\varrho^{t}\,,\ $ (20)
where for any signed measure $\mu$, $\|\mu\|_{\mathrm{tv}}$ denotes its total
variation. On the one hand, note that for each
$f\in\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})$
$\langle\,f,Q^{k}f\,\rangle=\mathbb{E}f(X,\nu)Q^{k}f(X,\nu)\leq\mathbb{E}|f(X,\nu)||Q^{k}f(X,\nu)|\,,$
(21)
and on the other hand, we have that for any
$(x,\nu)\in\bm{\mathcal{X}}\times\\{-1,1\\}$,
$|Q^{k}f(x,\nu)|=|Q^{k}f(x,\nu)-\bar{\pi}f|\leq\sup_{f\in\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})}|Q^{k}f(x,\nu)-\bar{\pi}f|\,.$
(22)
But $\|\mu\|_{\mathrm{tv}}=(1/2)\sup_{g:\bm{\mathcal{X}}\to[-1,1]}|\mu g|$,
see for instance (Roberts and Rosenthal, 2004, Proposition 3). Since
$f\in\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})$, $|f|\leq 1$ and we have by
inclusion that for all $(x,\nu)\in\bm{\mathcal{X}}\times\\{-1,1\\}$
$|Q^{k}f(x,\nu)|\leq\sup_{f\in\mathcal{L}_{2}^{\ast,0,1}(\bar{\pi})}|Q^{k}f(x,\nu)-\bar{\pi}f|\leq\sup_{g:\bm{\mathcal{X}}\times\\{-1,1\\}\to[-1,1]}|Q^{k}g(x,\nu)-\bar{\pi}g|\leq
2\|\delta_{x,\nu}Q^{k}-\bar{\pi}\|_{\mathrm{tv}}\,.$ (23)
Taking the supremum over all $(x,\nu)\in\bm{\mathcal{X}}\times\\{-1,1\\}$ in
Eq. (23), and combining with Eq. (20) yields
$\sup_{(x,\nu)\in\bm{\mathcal{X}}\times\\{-1,1\\}}|Q^{k}f(x,\nu)|\leq
2C\varrho^{k}\,.$
Plugging this into Eq. (21), we have
$\left|\langle\,f,Q^{k}f\,\rangle\right|\leq
2C\mathbb{E}|f(X,\nu)|\varrho^{k}\,,$ (24)
which is clearly summable. As a consequence, $S_{n}$ converges uniformly to
$S$ on $[0,1)$, which concludes the proof. ∎
###### Proof of Corollary 1.
The result of Theorem 3.15 in Andrieu and Livingstone (2019) holds in our
framework, implying that
$\mathrm{var}_{\lambda}(f,P_{\rho^{*}})\leq\mathrm{var}_{\lambda}(f,P_{\rho})\leq\mathrm{var}_{\lambda}(f,P_{\rho}^{\text{w}}),$
where
$\mathrm{var}_{\lambda}(f,P_{\rho}):=\mathbb{V}\mathrm{ar}f(\mathbf{X},\nu)+2\sum_{k>0}\lambda^{k}\left\langle
f,P_{\rho}^{k}f\right\rangle$ with $\lambda\in[0,1)$. Lemma 1 allows us to conclude. ∎
###### Proof of Corollary 2.
The proof is an application of Theorem 3.17 in Andrieu and Livingstone (2019)
which allows us to establish that
$\mathrm{var}_{\lambda}(f,P_{\rho})\leq\mathrm{var}_{\lambda}(f,P_{\text{MH}}).$
We will thus be able to conclude using Lemma 1.
In order to apply Theorem 3.17, we must verify that
$q_{\mathbf{x}}(\mathbf{y})\,\alpha(\mathbf{x},\mathbf{y})=(1/2)\,q_{\mathbf{x},+1}(\mathbf{y})\,\alpha_{+1}(\mathbf{x},\mathbf{y})+(1/2)\,q_{\mathbf{x},-1}(\mathbf{y})\,\alpha_{-1}(\mathbf{x},\mathbf{y}),$
for all $\mathbf{x}$ and $\mathbf{y}$. This is straightforward to verify under
the assumptions of Corollary 2:
$\displaystyle(1/2)\,q_{\mathbf{x},+1}(\mathbf{y})\,\alpha_{+1}(\mathbf{x},\mathbf{y})+(1/2)\,q_{\mathbf{x},-1}(\mathbf{y})\,\alpha_{-1}(\mathbf{x},\mathbf{y})$
$\displaystyle\qquad=\frac{1}{2}\frac{1}{(|\mathcal{N}(\mathbf{x})|/2)}\left(1\wedge\frac{\pi(\mathbf{y})}{\pi(\mathbf{x})}\right)\mathds{1}_{\mathbf{y}\in\mathcal{N}_{+1}(\mathbf{x})}+\frac{1}{2}\frac{1}{(|\mathcal{N}(\mathbf{x})|/2)}\left(1\wedge\frac{\pi(\mathbf{y})}{\pi(\mathbf{x})}\right)\mathds{1}_{\mathbf{y}\in\mathcal{N}_{-1}(\mathbf{x})}$
$\displaystyle\qquad=\frac{1}{|\mathcal{N}(\mathbf{x})|}\left(1\wedge\frac{\pi(\mathbf{y})}{\pi(\mathbf{x})}\right)\left(\mathds{1}_{\mathbf{y}\in\mathcal{N}_{+1}(\mathbf{x})}+\mathds{1}_{\mathbf{y}\in\mathcal{N}_{-1}(\mathbf{x})}\right)=q_{\mathbf{x}}(\mathbf{y})\,\alpha(\mathbf{x},\mathbf{y}).$
∎
###### Proof of Theorem 1.
We first prove that
$\mathrm{var}(f,P_{\rho,n})\leq\mathrm{var}(f,P_{\text{rev.},n})$. This is
done as in the proof of Corollary 2.
We now analyse $\mathrm{var}(f,P_{\text{rev.},n})$:
$\mathrm{var}(f,P_{\text{rev.},n})=\mathbb{V}\mathrm{ar}f(\mathbf{X})+2\sum_{k>0}\left\langle
f,P_{\text{rev.},n}^{k}f\right\rangle=\mathbb{V}\mathrm{ar}f(\mathbf{X})+2\sum_{k>0}\text{Cov}_{P_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))],$
where we omitted the dependence on $\nu$ because, as we mentioned in Section
3.1, it can be treated as a constant as a consequence of the restrictions on
$f$. In the expression above, it is considered that the chain starts at
stationarity and evolves using $P_{\text{rev.},n}$. We consider without loss
of generality that $\mathbb{E}[f(\mathbf{X}(k))]=0$ (for any $k$).
We first write
$\displaystyle\mathbb{V}\mathrm{ar}f(\mathbf{X})$
$\displaystyle=\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}]$
(25)
$\displaystyle=\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})^{2}]-\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]$
(26)
$\displaystyle\qquad+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}],$
(27)
where $\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}$ denotes an
expectation with respect to $\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$.
Also,
$\displaystyle\sum_{k>0}\text{Cov}_{P_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))]=\sum_{k=1}^{\varrho(n)}\text{Cov}_{P_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))]+\sum_{k=\varrho(n)+1}^{\infty}\text{Cov}_{P_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))],$
where $\varrho(n)$ is chosen according to the statement of Theorem 1;
therefore the second term on the RHS can be made as small as we want. For the
first term, we have
$\displaystyle\sum_{k=1}^{\varrho(n)}\text{Cov}_{P_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))]$
$\displaystyle=\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))]$
$\displaystyle=\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cap_{m=0}^{k}A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1})}]+\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cup_{m=0}^{k}A_{m}^{\mathsf{c}}(\bm{\mathcal{X}}_{\varphi(n)-1})}],$
where
$A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1}):=\\{\mathbf{X}(m)\in\bm{\mathcal{X}}_{\varphi(n)-1}\\}$.
By assumption, the second term on the RHS can be made as small as we want.
We have
$\displaystyle\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cap_{m=0}^{k}A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1})}]$
$\displaystyle=\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{P}_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))]$
$\displaystyle+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\sum_{k=1}^{\varrho(n)}\mathbb{E}_{P_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))\mathds{1}_{\cap_{m=0}^{k}A_{m}(\bm{\mathcal{X}}_{\varphi(n)-1})}],$
where $\tilde{P}_{\text{rev.},n}$ is the Markov kernel whose stationary
distribution is $\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$. This equality
holds because the paths involving transition probabilities
$P_{\text{rev.},n}(\mathbf{x},\mathbf{y})$ with
$\mathbf{x},\mathbf{y}\in\bm{\mathcal{X}}_{\varphi(n)-1}$ have the same
probabilities as those of the chain with stationary distribution
$\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}$. We just have to renormalise the
probabilities of the starting point by dividing by
$\pi(\bm{\mathcal{X}}_{\varphi(n)})$ to complete the argument. Note that, by
assumption, the second term on the RHS can be made as small as we want.
Now,
$\displaystyle\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{P}_{\text{rev.},n}}[f(\mathbf{X}(0))f(\mathbf{X}(k))]=\sum_{k=1}^{\varrho(n)}\text{Cov}_{\tilde{P}_{\text{rev.},n}}[f(\mathbf{X}(0)),f(\mathbf{X}(k))]+\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}.$
By assumption, the sum from $\varrho(n)+1$ to $\infty$ of the covariances is
small as well. If we combine this with (25), we have
$\displaystyle\mathrm{var}(f,P_{\text{rev.},n})$
$\displaystyle=\mathrm{var}(f,\tilde{P}_{\text{rev.},n})+\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]$
$\displaystyle\qquad+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}]+2\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}$
$\displaystyle\leq\mathrm{var}(f,\tilde{P}_{\text{MH},n})/\omega_{n}+((1-\omega_{n})/\omega_{n})\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}f(\mathbf{X})+\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}$
$\displaystyle\qquad+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}]+2\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}$
$\displaystyle=\mathrm{var}(f,\tilde{P}_{\text{MH},n})+\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}]$
$\displaystyle\qquad+2\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+(1/\omega_{n}-1)\mathrm{var}(f,\tilde{P}_{\text{MH},n})+((1-\omega_{n})/\omega_{n})\mathbb{V}\mathrm{ar}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}f(\mathbf{X}),$
using Theorem 2 of Zanella (2019) for the inequality and omitting the terms
that can be made as small as we want. This theorem can be used as a result of
the bound
$\tilde{P}_{\text{rev.},n}(\mathbf{x},\mathbf{y})\geq\omega_{n}\tilde{P}_{\text{MH},n}(\mathbf{x},\mathbf{y})$
with $\omega_{n}$ defined in (9) (see Section 3.1). By assumption, the last
two terms can be made as small as we want.
Now we use the previous arguments in reverse order to show that
$\displaystyle\mathrm{var}(f,\tilde{P}_{\text{MH},n})+\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}+(1-1/\pi(\bm{\mathcal{X}}_{\varphi(n)}))\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}}]+\mathbb{E}[f(\mathbf{X})^{2}\mathds{1}_{\mathbf{X}\in\bm{\mathcal{X}}_{\varphi(n)}^{\mathsf{c}}}]$
$\displaystyle\qquad+2\sum_{k=1}^{\varrho(n)}\mathbb{E}_{\tilde{\pi}_{\bm{\mathcal{X}}_{\varphi(n)}}}[f(\mathbf{X})]^{2}=\mathrm{var}(f,P_{\text{MH},n})+\text{small
error term},$
which yields the result. ∎
###### Proof of Proposition 2.
It suffices to prove that the probability to reach the state
$\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}\in A,\nu^{\prime}$ in one step
is equal to the probability of this state under the target:
$\displaystyle\sum_{\mathbf{x},\nu}\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime}))\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
$\displaystyle=\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\,d\bm{\theta}_{\mathbf{y}}^{\prime},$
(28)
where $P$ is the transition kernel. Note that we abuse notation here by
denoting the measure $d\bm{\theta}_{\mathbf{y}}^{\prime}$ on the left-hand
side (LHS) given that we may in fact use vectors of auxiliary variables to
generate the proposal when switching models, which often do not have the same
dimension as $\bm{\theta}_{\mathbf{y}}^{\prime}$.
We consider two distinct events: a model switch is proposed, that we denote
$S$, and a parameter update is proposed (therefore denoted $S^{\mathsf{c}}$).
We know that the probabilities of these events are
$1-q_{\mathbf{x},\nu}(\mathbf{x})$ and $q_{\mathbf{x},\nu}(\mathbf{x})$,
respectively. We rewrite the LHS of (28) as
$\displaystyle\sum_{\mathbf{x},\nu}\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime}))\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
(29)
$\displaystyle\quad=\sum_{\mathbf{x},\nu}\,(1-q_{\mathbf{x},\nu}(\mathbf{x}))\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S)\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$ (30)
$\displaystyle\qquad+\sum_{\mathbf{x},\nu}\,q_{\mathbf{x},\nu}(\mathbf{x})\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S^{\mathsf{c}})\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}.$
(31)
We analyse the two terms separately. We know that
$P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S^{\mathsf{c}})=\delta_{(\mathbf{x},\nu)}(\mathbf{y},\nu^{\prime})\,P_{S^{\mathsf{c}}}(\bm{\theta}_{\mathbf{x}},\bm{\theta}_{\mathbf{y}}^{\prime}),$
where $P_{S^{\mathsf{c}}}$ is the transition kernel associated with the method
used to update the parameters. Therefore, the second term on the RHS of (29)
is equal to
$\displaystyle\sum_{\mathbf{x},\nu}\,q_{\mathbf{x},\nu}(\mathbf{x})\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S^{\mathsf{c}})\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
$\displaystyle\quad=q_{\mathbf{y},\nu^{\prime}}(\mathbf{y})\,\pi(\mathbf{y})\times(1/2)\int\pi(\bm{\theta}_{\mathbf{y}}\mid\mathbf{y})\left(\int_{A}P_{S^{\mathsf{c}}}(\bm{\theta}_{\mathbf{y}},\bm{\theta}_{\mathbf{y}}^{\prime})\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{y}}.$
We also know that $P_{S^{\mathsf{c}}}$ leaves the conditional distribution
$\pi(\,\cdot\mid\mathbf{y})$ invariant, implying that
$\displaystyle
q_{\mathbf{y},\nu^{\prime}}(\mathbf{y})\,\pi(\mathbf{y})\times(1/2)\int\pi(\bm{\theta}_{\mathbf{y}}\mid\mathbf{y})\left(\int_{A}P_{S^{\mathsf{c}}}(\bm{\theta}_{\mathbf{y}},\bm{\theta}_{\mathbf{y}}^{\prime})\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{y}}$
(32)
$\displaystyle\quad=q_{\mathbf{y},\nu^{\prime}}(\mathbf{y})\,\pi(\mathbf{y})\times(1/2)\int_{A}\pi(\bm{\theta}_{\mathbf{y}}^{\prime}\mid\mathbf{y})\,d\bm{\theta}_{\mathbf{y}}^{\prime}=q_{\mathbf{y},\nu^{\prime}}(\mathbf{y})\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\,d\bm{\theta}_{\mathbf{y}}^{\prime}.$
(33)
For the model switching case (the first term on the RHS of (29)), we use the
fact that there is a connection between
$P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S)$ and the kernel associated to a specific RJ. Consider that
$q_{\mathbf{x}}(\mathbf{y})=(1/2)\,q_{\mathbf{x},-1}(\mathbf{y})+(1/2)\,q_{\mathbf{x},+1}(\mathbf{y})$
for all $\mathbf{x},\mathbf{y}$ and that all other proposal distributions in
RJ are the same as in Algorithm 3 during model switches. In this case,
$\alpha_{\text{RJ}}=\alpha_{\text{NRJ}}$ in the case where at the current
iteration it is chosen to use $q_{\mathbf{x},\nu}$ (which happens with
probability $1/2$) and in the reverse move it is chosen to use
$q_{\mathbf{y},-\nu}$ (which also happens with probability $1/2$).
Consider the case where Model $\mathbf{y}$ is reached from Model
$\mathbf{x}\neq\mathbf{y}$ coming from direction $\nu^{\prime}=\nu$. Given the
reversibility of RJ, the probability to go from Model $\mathbf{x}$ with
parameters in $B$ to Model $\mathbf{y}\neq\mathbf{x}$ with parameters in $A$
is
$\displaystyle\int_{B}\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\left(\int_{A}P_{\text{RJ}}((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}))\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}=\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\left(\int_{B}P_{\text{RJ}}((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}),(\mathbf{x},\bm{\theta}_{\mathbf{x}}))\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime},$
(34)
where $P_{\text{RJ}}$ is the transition kernel of the RJ. Note that
$P_{\text{RJ}}((\mathbf{x},\bm{\theta}_{\mathbf{x}}),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}))=(1/2)\,(1-q_{\mathbf{x},\nu}(\mathbf{x}))\,P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu)\mid
S),$
given that the only difference between the two kernels is that, in RJ, it is
first decided to use $q_{\mathbf{x},\nu}$, which introduces an additional factor
of $1/2$. Analogously,
$P_{\text{RJ}}((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}),(\mathbf{x},\bm{\theta}_{\mathbf{x}}))=(1/2)\,(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\,P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)$. Using this and taking $B$ equal to the whole parameter (and auxiliary)
space in (34), we have
$\displaystyle(1-q_{\mathbf{x},\nu}(\mathbf{x}))\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu)\mid
S)\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
$\displaystyle\qquad=(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\left(\int
P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime}.$
The only other case to consider for model switches is where Model $\mathbf{y}$
is reached from Model $\mathbf{y}$ (because the proposal is rejected) and the
direction is $\nu^{\prime}=-\nu$. The probability of this transition is
$(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\left(1-\sum_{\mathbf{x}\neq\mathbf{y}}\int
P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime}.$
So, the total probability of reaching
$\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime}\in A,\nu^{\prime}$ through a
model switch is (recalling (29)):
$\displaystyle\sum_{\mathbf{x},\nu}\,(1-q_{\mathbf{x},\nu}(\mathbf{x}))\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu^{\prime})\mid
S)\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
$\displaystyle\quad=\sum_{\mathbf{x}\neq\mathbf{y}}(1-q_{\mathbf{x},\nu}(\mathbf{x}))\int\pi(\mathbf{x},\bm{\theta}_{\mathbf{x}})\times(1/2)\left(\int_{A}P((\mathbf{x},\bm{\theta}_{\mathbf{x}},\nu),(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},\nu)\mid
S)\,d\bm{\theta}_{\mathbf{y}}^{\prime}\right)\,d\bm{\theta}_{\mathbf{x}}$
$\displaystyle\qquad+(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\left(1-\sum_{\mathbf{x}\neq\mathbf{y}}\int
P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime}$
$\displaystyle\quad=\sum_{\mathbf{x}\neq\mathbf{y}}(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\left(\int
P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime}$
$\displaystyle\qquad+(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\left(1-\sum_{\mathbf{x}\neq\mathbf{y}}\int
P((\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime},-\nu),(\mathbf{x},\bm{\theta}_{\mathbf{x}},-\nu)\mid
S)\,d\bm{\theta}_{\mathbf{x}}\right)\,d\bm{\theta}_{\mathbf{y}}^{\prime}$
$\displaystyle\quad=(1-q_{\mathbf{y},-\nu}(\mathbf{y}))\int_{A}\pi(\mathbf{y},\bm{\theta}_{\mathbf{y}}^{\prime})\times(1/2)\,d\bm{\theta}_{\mathbf{y}}^{\prime}.$
Combining this with (32) allows us to conclude the proof. ∎
|
2024-09-04T02:54:59.374374 | 2020-03-11T20:31:20 | 2003.05511 | {
"authors": "Tong Bai, Cunhua Pan, Hong Ren, Yansha Deng, Maged Elkashlan, and\n Arumugam Nallanathan",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26178",
"submitter": "Pan Cunhua",
"url": "https://arxiv.org/abs/2003.05511"
} | arxiv-papers | # Resource Allocation for Intelligent Reflecting Surface Aided Wireless
Powered Mobile Edge Computing in OFDM Systems
Tong Bai, Cunhua Pan, Hong Ren, Yansha Deng, Maged Elkashlan, and Arumugam Nallanathan

T. Bai, C. Pan, H. Ren, M. Elkashlan and A. Nallanathan are with the School of
Electronic Engineering and Computer Science, Queen Mary University of London,
London E1 4NS, U.K. (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>;
<EMAIL_ADDRESS>; <EMAIL_ADDRESS>; [email protected]). Y. Deng is with the
Department of Engineering, King’s College London, London, WC2R 2LS, U.K.
(e-mail: [email protected]).
###### Abstract
Wireless powered mobile edge computing (WP-MEC) has been recognized as a
promising technique to provide both enhanced computational capability and
sustainable energy supply to massive low-power wireless devices. However, its
energy consumption becomes substantial, when the transmission link used for
wireless energy transfer (WET) and for computation offloading is hostile. To
mitigate this hindrance, we propose to employ the emerging technique of
intelligent reflecting surface (IRS) in WP-MEC systems, which is capable of
providing an additional link both for WET and for computation offloading.
Specifically, we consider a multi-user scenario where both the WET and the
computation offloading are based on orthogonal frequency-division multiplexing
(OFDM) systems. Built on this model, an innovative framework is developed to
minimize the energy consumption of the IRS-aided WP-MEC network, by optimizing
the power allocation of the WET signals, the local computing frequencies of
wireless devices, both the sub-band-device association and the power
allocation used for computation offloading, as well as the IRS reflection
coefficients. The major challenges of this optimization lie in the strong
coupling between the settings of WET and of computing as well as the
unit-modulus constraint on IRS reflection coefficients. To tackle these issues, the
technique of alternative optimization is invoked for decoupling the WET and
computing designs, while two sets of locally optimal IRS reflection
coefficients are provided for WET and for computation offloading separately
relying on the successive convex approximation method. The numerical results
demonstrate that our proposed scheme is capable of monumentally outperforming
the conventional WP-MEC network without IRSs. Quantitatively, about $80\%$
energy consumption reduction is attained over the conventional MEC system in a
single cell, where $3$ wireless devices are served via $16$ sub-bands, with
the aid of an IRS comprising of $50$ elements.
## I Introduction
### I-A Motivation and Scope
In the Internet-of-Things (IoT) era, a myriad of heterogeneous devices are
envisioned to be interconnected [1]. However, due to the stringent constraints
both on device sizes and on manufacturing cost, many of them have to be
equipped with either life-limited batteries or low-performance processors.
Consequently, if only relying on their local computing, these resource-
constrained devices are incapable of accommodating the applications that
require sustainable and low-latency computation, e.g. wireless body area
networks [2] and environment monitoring [3]. Fortunately, wireless powered
mobile edge computing (WP-MEC)[4, 5, 6, 7, 8, 9, 10, 11, 12, 13], which
incorporates radio frequency (RF) based wireless energy transmission (WET)
[14, 15, 16] and mobile edge computing (MEC) [17, 18], constitutes a promising
solution of this issue. Specifically, at the time of writing, the commercial
RF-based WET has already been capable of delivering $0.05\leavevmode\nobreak\
\rm{mW}$ to a distance of $12-14\leavevmode\nobreak\ \rm{m}$ [14], which is
sufficient to charge many low-power devices, whilst the MEC technique may
provide the cloud-like computing service at the edge of mobile networks [18].
In WP-MEC systems, hybrid access points (HAP) associated with edge computing
nodes are deployed in the proximity of wireless devices, and the computation
of these devices is typically realized in two phases, namely the WET phase and
the computing phase. To elaborate, the batteries of the devices are
replenished by harvesting WET signals from the HAP in the first phase, while
in the computing phase, devices may decide whether to process their
computational tasks locally or offload them to edge computing nodes via the
HAP.
Given that these wireless devices are fully powered by WET in WP-MEC systems,
the power consumption at HAPs becomes substantial, which inevitably increases
the expenditure on energy consumption and may potentially saturate power
rectifiers. At the time of writing, the existing research contributions that
focus on reducing the power consumption mainly rely on the joint optimization
of the WET and of computing [5], as well as cooperative computation offloading
[10, 11]. However, wireless devices are still susceptible to severe channel
attenuation, which limits the performance of WP-MEC systems. To resolve this
issue, we propose to deploy the emerging intelligent reflecting surfaces (IRS)
[19, 20, 21] in the vicinity of devices, for providing an additional
transmission link both for WET and for computation offloading. Then, the power
consumption can be beneficially reduced both for the downlink and for the
uplink. To elaborate, an IRS comprises an IRS controller and a large number
of low-cost passive reflection elements. Regulated by the IRS controller, each
IRS reflection element may adapt both the amplitude and the phase of the
incident signals reflected, for collaboratively modifying the signal
propagation environment. The gain attained by IRSs is based on the combination
of so-called the virtual array gain and the reflection-enabled beamforming
gain [19]. More explicitly, the virtual array gain is achieved by combining
the direct and IRS-reflected links, while the reflection-enabled beamforming
gain is realized by proactively adjusting the reflection coefficients of the
IRS elements. By combining these two types of gain together, IRSs are capable
of reducing the power required both for WET and for computation offloading,
thus improving the energy efficiency of WP-MEC systems. In this treatise, we
aim for providing a holistic scheme to minimize the energy consumption of WP-
MEC systems, relying on IRSs.
### I-B Related Works
The current state-of-the-art contributions are reviewed from the perspectives
of WP-MEC and of IRS-aided networks, as follows.
#### I-B1 Wireless Powered Mobile Edge Computing
This topic has attracted an increasing amount of research attention [4, 5, 6,
7, 8, 9, 10, 11, 12, 13]. Specifically, You _et al._ firstly proposed the WP-
MEC framework [4], where the probability of successful computation was
maximized subject to the constraints both on energy harvesting and on latency.
The single-user system considered in this first trial limits its application
in large-scale scenarios. To overcome this limitation, an energy-
minimization algorithm was proposed for the multi-user scenario [5], where the
devices’ computation offloading was realized by the time division multiple
access (TDMA) technique. Following this, Bi and Zhang maximized the weighted
sum computation rate in a similar TDMA system [6], while an orthogonal
frequency division multiple access (OFDMA) based multi-user WP-MEC system was
investigated in [7]. A holistic online optimization algorithm was proposed for
the WP-MEC in industrial IoT scenarios [8]. In the aforementioned works, the
associated optimization is commonly realized with the aid of the alternative
optimization (AO) method, because the pertinent optimization problems are
usually not jointly convex. This inevitably imposes a delay on decision
making. To mimic this issue, Huang _et al._ proposed a deep reinforcement
learning based algorithm for maximizing the computation rate of WP-MEC systems
[9], which may replace the aforementioned complicated optimization by a pre-
trained look-up table. Furthermore, in systems where both near and far
devices have to be served, the energy consumption at the HAP increases
vastly, because the farther device harvests less energy while a higher
transmit power is required for its computation offloading. Aiming to relieve
this so-called “doubly near-far” issue, the technique of user cooperation was
revisited [10, 11]. At the time of writing, the WET and computation offloading
in WP-MEC systems in the face of hostile communication environments have not
been well addressed. Against this background, we aim for tackling this issue
by invoking IRSs. Let us now continue by reviewing the relevant research
contributions on IRSs as follows.
#### I-B2 IRS-Aided Networks
In order to exploit the potential of IRSs, an increasing number of research
efforts have been devoted to channel modeling [22, 23], analyzing the
impact of limited-resolution phase shifts [24, 25], channel estimation [26,
27] as well as IRS reflection coefficient designs [28, 29, 30, 31]. Inspired
by these impressive research contributions, the beneficial role of IRSs was
evaluated in various application scenarios [32, 33, 34, 35, 36, 37, 38].
Specifically, an IRS was employed in multi-cell communications systems to
mitigate the severe inter-cell interference [32], where an IRS comprising
$100$ reflection elements was shown to be capable of doubling the sum rate of
the multi-cell system. Yang _et al._ investigated an IRS-enhanced OFDMA system
[33], whose common rate was improved from around $2.75\ \rm{bps/Hz}$ to
$4.4\ \rm{bps/Hz}$, with the aid of a
$50$-element IRS. Apart from assisting the aforementioned throughput
maximization in the conventional communications scenario, a sophisticated
design of IRSs may also eminently upgrade the performance of diverse emerging
wireless networks, e.g. protecting data transmission security [34, 35],
assisting simultaneous wireless information and power transfer (SWIPT) [36],
enhancing the user cooperation in wireless powered communications networks
[37], as well as reducing the latency in IRS-aided MEC systems [38]. These
impressive research contributions inspire us to exploit the beneficial role of
IRSs in this momentous WP-MEC scenario.
### I-C Novelty and Contributions
In this paper, an innovative IRS-aided WP-MEC framework is proposed, where we
consider orthogonal frequency-division multiplexing (OFDM) systems for its WET
and devices’ computation offloading. Under this framework, a joint WET and
computing design is conceived for minimizing its energy consumption, by
optimizing the power allocation of the WET signals over OFDM sub-bands, the
local computing frequencies of wireless devices, both the sub-band-device
association and the power allocation used for computation offloading, as well
as the pertinent IRS reflection coefficient design. Let us now detail our
contributions as follows.
* •
_Energy minimization problem formulation for the new IRS-aided WP-MEC design:_
In order to reduce the energy consumption of WP-MEC systems, we employ an IRS
in WP-MEC systems and formulate a pertinent energy minimization problem. Owing
to the coupling effects between the designs of WET and of computing, it is
difficult to find its globally optimal solution. Instead, we provide an
alternative optimization (AO) based solution to approach a locally optimal
solution, by iteratively optimizing settings of WET and of computing.
* •
_WET design:_ The WET setting is realized by alternatively optimizing the
power allocation of energy-carrying signals over OFDM sub-bands and the IRS
reflection coefficients. Specifically, given a set of fixed IRS reflection
coefficients, the power allocation problem can be simplified to be a linear
programming problem, which can be efficiently solved by the existing
optimization software. Given a fixed power allocation, the IRS reflection
coefficient design becomes a feasibility-check problem, the solution of which
is incapable of ensuring a rapid convergence. To tackle this issue, we
reformulate the problem by introducing a number of auxiliary variables, and
provide a locally optimal design of IRS reflection coefficients, with the aid
of several steps of mathematical manipulations and of the successive convex
approximation (SCA) method.
* •
_Computing design:_ The settings at the computing phase are specified by
alternatively optimizing the joint sub-band-device association and power
allocation for devices’ computation offloading, the IRS reflection
coefficients at the computing phase as well as the local computing
frequencies. Specifically, as verified by [39], the duality gap vanishes when
the number of sub-bands exceeds $8$. Hence, we provide a near-optimal solution
for the joint sub-band-device association and power allocation problem,
relying on the Lagrangian duality method. The IRS reflection coefficients are
designed using the similar approach devised for that in the WET phase.
Finally, our analysis reveals that the optimal local computing frequencies can
be obtained by selecting their maximum allowable values.
* •
_Numerical validations:_ Our numerical results validate the benefits of
employing IRSs in WP-MEC systems, and quantify the energy consumption of our
proposed framework in diverse simulation environments, in comparison with two
benchmark schemes.
The rest of the paper is organized as follows. In Section II, we describe the
system model and formulate the pertinent problem. A solution of this problem
is provided in Section III. The numerical results are presented in Section IV.
Finally, our conclusions are drawn in Section V.
Figure 1: An illustration of our IRS-aided WP-MEC system, where $K$ single-
antenna devices are served by a mobile edge computing node via a single-
antenna hybrid access point, with the aid of an $N$-element IRS.
## II System Model and Problem Formulation
As illustrated in Fig. 1, we consider an OFDM-based WP-MEC system, where $K$
single-antenna devices are served by a single-antenna hybrid access point
(HAP) associated with an edge computing node through $M$ equally-divided OFDM
sub-bands. Similar to the assumption in [5, 6, 7], we assume that these
devices do not have any embedded energy supply available, but are equipped
with energy storage devices, e.g. rechargeable batteries or super-capacitors,
for storing the energy harvested from RF signals. As shown in Fig. 2, relying
on the so-called “harvest-then-computing” mechanism [5], the system operates
in a two-phase manner in each time block. Specifically, during the WET phase,
the HAP broadcasts energy-carrying signals to all $K$ devices for replenishing
their batteries, while these $K$ devices process their computing tasks both by
local computing and by computation offloading during the computing phase. We
denote the duration of each time block by $T$, which is chosen to be no larger
than the latency tolerance of MEC applications. The duration of the WET and of
the computing phases are set as $\tau T$ and $(1-\tau)T$, respectively.
Furthermore, to assist the WET and the devices’ computation offloading in this
WP-MEC system, we place an IRS comprising $N$ reflection elements in the
proximity of devices. The reflection coefficients of these IRS reflection
elements are controlled by an IRS controller in a real-time manner, based on
the optimization results provided by the HAP.
Figure 2: An illustration of the harvest-then-offloading protocol, where $\tau
T$ and $(1-\tau)T$ refer to the duration of the WET and the computing phases,
respectively.
Let us continue by elaborating on the equivalent baseband time-domain channel
as follows. We denote the equivalent baseband time-domain channel of the
direct link between the $k$-th device and the HAP, the equivalent baseband
time-domain channel between the $n$-th IRS element and the HAP, and the
equivalent baseband time-domain channel between the $k$-th device and the
$n$-th IRS element by
$\hat{\boldsymbol{h}}^{d}_{k}\in\mathbb{C}^{L^{d}_{k}\times 1}$,
$\hat{\boldsymbol{g}}_{n}\in\mathbb{C}^{L_{1}\times 1}$ and
$\hat{\boldsymbol{r}}_{k,n}\in\mathbb{C}^{L_{2,k}\times 1}$, respectively,
where $L^{d}_{k}$, $L_{1}$ and $L_{2,k}$ represent the respective number of
delay spread taps. Without loss of generality, we assume that the above
channels remain approximately constant over each time block. Furthermore, the
channels are assumed to be reciprocal for the downlink and the uplink.
As for the IRS, we denote the phase shift vector of and the amplitude response
of the IRS reflection elements by
$\boldsymbol{\theta}=[\theta_{1},\theta_{2},\ldots,\theta_{N}]^{T}$ and
$\boldsymbol{\beta}=[\beta_{1},\beta_{2},\ldots,\beta_{N}]^{T}$, respectively,
where we have $\theta_{n}\in[0,2\pi)$ and $\beta_{n}\in[0,1]$. Then, the
corresponding reflection coefficients of the IRS are given by
$\boldsymbol{\Theta}=[\Theta_{1},\Theta_{2},\ldots,\Theta_{N}]^{T}=[\beta_{1}e^{j\theta_{1}},\beta_{2}e^{j\theta_{2}},\ldots,\beta_{N}e^{j\theta_{N}}]^{T}$,
where $j$ represents the imaginary unit and we have $|\Theta_{n}|\leq 1$ for
$\forall n\in\mathcal{N}$. The baseband equivalent time-domain channel of the
reflection link is the convolution of the device-IRS channel, of the IRS
reflection response, and of the IRS-HAP channel. Specifically, the baseband
equivalent time-domain channel reflected by the $n$-th IRS element is
formulated as
$\hat{\boldsymbol{h}}^{r}_{k,n}=\hat{\boldsymbol{r}}_{k,n}\ast\Theta_{n}\ast\hat{\boldsymbol{g}}_{n}=\Theta_{n}\hat{\boldsymbol{r}}_{k,n}\ast\hat{\boldsymbol{g}}_{n}$.
Here, we have $\hat{\boldsymbol{h}}^{r}_{k,n}\in\mathbb{C}^{L^{r}_{k}\times
1}$ and $L^{r}_{k}=L_{1}+L_{2,k}-1$, which denotes the number of delay spread
taps of the reflection channel. Furthermore, we denote the time-domain zero-
padded concatenated device-IRS-HAP channel between the $k$-th device and the
HAP via the $n$-th IRS element by
$\boldsymbol{v}_{k,n}=[(\hat{\boldsymbol{r}}_{k,n}\ast\hat{\boldsymbol{g}}_{n})^{T},0,\ldots,0]^{T}\in\mathbb{C}^{M\times
1}$. Upon denoting
$\boldsymbol{V}_{k}=[\boldsymbol{v}_{k,1},\ldots,\boldsymbol{v}_{k,N}]\in\mathbb{C}^{M\times
N}$, we formulate the composite device-IRS-HAP channel between the $k$-th
device and the HAP as
$\boldsymbol{h}^{r}_{k}=\boldsymbol{V}_{k}\boldsymbol{\Theta}$. Similarly, we
use
$\boldsymbol{h}^{d}_{k}=[(\hat{\boldsymbol{h}}^{d}_{k})^{T},0,\ldots,0]^{T}\in\mathbb{C}^{M\times
1}$ to represent the zero-padded time-domain channel of the direct device-HAP
link. To this end, we may readily arrive at the superposed channel impulse
response (CIR) for the $k$-th device, formulated as
$\displaystyle\boldsymbol{h}_{k}=\boldsymbol{h}^{d}_{k}+\boldsymbol{h}^{r}_{k}=\boldsymbol{h}^{d}_{k}+\boldsymbol{V}_{k}\boldsymbol{\Theta},\quad\forall
k\in\mathcal{K},$ (1)
whose number of delay spread taps is given by
$L_{k}=\max(L^{d}_{k},L^{r}_{k})$. We assume that the number of cyclic
prefixes (CP) is no smaller than the maximum number of delay spread taps for
all devices, so that the inter-symbol interference (ISI) can be eliminated.
Upon denoting the $m$-th row of the $M\times M$ discrete Fourier transform
(DFT) matrix $\boldsymbol{F}_{M}$ by $\boldsymbol{f}^{H}_{m}$, we formulate
the channel frequency response (CFR) for the $k$-th device at the $m$-th sub-
band as
$\displaystyle
C_{k,m}(\boldsymbol{\Theta})=\boldsymbol{f}^{H}_{m}\boldsymbol{h}_{k}=\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta},\quad\forall
k\in\mathcal{K},\forall m\in\mathcal{M}.$ (2)
For ease of exposition, we assume that $\boldsymbol{h}^{d}_{k}$ and
$\boldsymbol{V}_{k}$ are perfectly known at the
HAP. Naturally, this assumption is idealistic. Hence, the algorithm developed
in this paper can be deemed to represent the best-case bound for the energy
performance of realistic scenarios. Since different types of signals are
transmitted in the WET and computing phases, the reflection coefficients of
the IRS require separate designs in these two phases. The models of the WET
and of computing phases are detailed as follows.
### II-A Model of the Wireless Energy Transfer Phase
It is assumed that the capacity of devices’ battery is large enough so that
all the harvest energy can be saved without energy overflow. Let us use
$\boldsymbol{\Theta}^{E}=\big{\\{}\Theta^{E}_{1},\Theta^{E}_{2},\ldots,\Theta^{E}_{N}\big{\\}}$
to represent the IRS reflection-coefficient vector during the WET phase, where
we have $|\Theta^{E}_{n}|\leq 1$ for $\forall n\in\mathcal{N}$. Then, the
composite channel of the $m$-th sub-band for the $k$-th device during the WET
phase $C_{k,m}(\boldsymbol{\Theta}^{E})$ can be obtained by (2). As a benefit
of the broadcasting nature of WET, each device can harvest the energy from the
RF signals transmitted over all $M$ sub-bands. Hence, upon denoting the power
allocation for the energy-carrying RF signals at the $M$ sub-bands during the
WET phase by $\boldsymbol{p}^{E}=[p^{E}_{1},p^{E}_{2},\ldots,p^{E}_{M}]$, we
are readily to formulate the energy harvested by the $k$-th device as [5]
$\displaystyle
E_{k}(\tau,\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})=\sum^{M}_{m=1}\eta\tau
Tp^{E}_{m}\big{|}C_{k,m}(\boldsymbol{\Theta}^{E})\big{|}^{2},$ (3)
where $\eta\in[0,1]$ denotes the efficiency of the energy harvesting at the
wireless devices.
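For concreteness, the sketch below numerically evaluates Eqs. (1)–(3); the dimensions, the random CIR taps and the parameter values ($M$, $N$, $\eta$, $\tau$, $T$, $\boldsymbol{p}^{E}$) are illustrative assumptions rather than the settings used in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 16, 50                                  # sub-bands and IRS elements (assumed)
L_d, L_1, L_2 = 4, 3, 3                        # delay-spread taps of each hop (assumed)

def cir(L):                                    # random complex CIR taps, for illustration only
    return (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

h_d = np.concatenate([cir(L_d), np.zeros(M - L_d)])   # zero-padded direct device-HAP link
V = np.zeros((M, N), dtype=complex)
for n in range(N):                             # cascaded device-IRS-HAP CIR via element n
    v = np.convolve(cir(L_2), cir(L_1))        # r_{k,n} * g_n, length L_1 + L_2 - 1 <= M
    V[:v.size, n] = v

Theta_E = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # |Theta_n| = 1 reflection coefficients
C = np.fft.fft(h_d + V @ Theta_E)              # Eq. (2): C_{k,m} = f_m^H (h^d_k + V_k Theta)

eta, tau, T = 0.8, 0.5, 1.0                    # harvesting efficiency and time split (assumed)
p_E = np.full(M, 1.0 / M)                      # 1 W spread uniformly over the M sub-bands
E_k = np.sum(eta * tau * T * p_E * np.abs(C) ** 2)    # Eq. (3): harvested energy [J]
```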
### II-B Model of the Computing Phase
We consider the data-partitioning based application [40], where a fraction of
the data can be processed locally, while the other part can be offloaded to
the edge node. For a specific time block, we use $L_{k}$ and $\ell_{k}$ to
denote the number of bits to be processed by the $k$-th device and its
computation offloading volume in terms of the number of bits, respectively.
The models of local computing, of computation offloading and of edge computing
are detailed as follows.
#### II-B1 Local Computing
We use $f_{k}$ and $c_{k}$ to represent the computing capability in terms of
the number of central processing unit (CPU) cycles per second and the number
of CPU cycles required to process a single bit, for the $k$-th device,
respectively. The number of bits processed by local computing is readily
calculated as $(1-\tau)Tf_{k}/c_{k}$, and the number of bits to be offloaded
is given by $\ell_{k}=L_{k}-(1-\tau)Tf_{k}/c_{k}$. Furthermore, we assume that
$f_{k}$ is controlled in the range of $[0,f_{max}]$ using the dynamic voltage
scaling model [40]. Upon denoting the computation energy efficiency
coefficient of the processor’s chip by $\kappa$, we formulate the power
consumption of the local computing mode as $\kappa f_{k}^{2}$ for the $k$-th
device [40].
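A quick numeric illustration of this local-computing model, with parameter values that are assumptions made purely for illustration:

```python
tau, T = 0.5, 1.0            # time split and block duration [s] (assumed)
f_k, c_k = 5e8, 1e3          # CPU frequency [cycles/s] and cycles per bit (assumed)
kappa = 1e-28                # computation energy efficiency coefficient (assumed)

bits_local = (1 - tau) * T * f_k / c_k      # 2.5e5 bits processed locally in one block
P_local = kappa * f_k ** 2                  # 0.025 W local-computing power
E_local = (1 - tau) * T * P_local           # 0.0125 J consumed over the computing phase
```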
#### II-B2 Computation offloading
In order to mitigate the co-channel interference, the devices’ computation
offloading is realized relying on the orthogonal frequency-division multiple
access (OFDMA) scheme. In this case, each sub-band is allowed to be used by at
most a single device. We use the binary vector
$\boldsymbol{\alpha}_{k}=[\alpha_{k,1},\alpha_{k,2},\ldots,\alpha_{k,M}]^{T}$
and the non-negative vector
$\boldsymbol{p}^{I}_{k}=[p^{I}_{k,1},p^{I}_{k,2},\ldots,p^{I}_{k,M}]^{T}$ to
represent the association between the sub-band and devices as well as the
power allocation of the $k$-th device to the $M$ sub-bands, respectively,
where we have
$\displaystyle\alpha_{k,m}$ $\displaystyle=\begin{cases}0,&\quad\text{if
}p^{I}_{k,m}=0,\\\ 1,&\quad\text{if }p^{I}_{k,m}>0.\end{cases}$ (4)
The power consumption of computation offloading is given by
$\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})$, where $p_{c}$ represents a
constant circuit power (caused by the digital-to-analog converter, filter,
etc.) [5]. Let us denote the IRS reflection-coefficient vector during the
computation offloading by
$\boldsymbol{\Theta}^{I}=\big{\\{}\Theta^{I}_{1},\Theta^{I}_{2},\ldots,\Theta^{I}_{N}\big{\\}}$,
where $|\Theta^{I}_{n}|\leq 1$ for $\forall n\in\mathcal{N}$. Then, the
composite channel of the $k$-th device at the $m$-th sub-band denoted by
$C_{k,m}(\boldsymbol{\Theta}^{I})$ can be obtained by (2). The corresponding
achievable rate of computation offloading is formulated below for the $k$-th
device
$\displaystyle
R_{k}(\boldsymbol{\alpha}_{k},\boldsymbol{p}^{I}_{k},\boldsymbol{\Theta}^{I})=\sum^{M}_{m=1}\alpha_{k,m}B\log_{2}\bigg{(}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\bigg{)},$
(5)
where $\Gamma$ is the gap between the channel capacity and a specific
modulation and coding scheme. Furthermore, in order to offload all the
$\ell_{k}$ bits within the duration of the computation phase, the achievable
offloading rate has to obey
$R_{k}(\boldsymbol{\alpha}_{k},\boldsymbol{p}^{I}_{k},\boldsymbol{\Theta}^{I})\geq\frac{\ell_{k}}{(1-\tau)T}$.
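The sketch below evaluates Eq. (5) and this latency constraint for one device occupying a single sub-band; the bandwidth, SNR gap, noise power, channel gain and task size are all assumed values chosen purely for illustration.

```python
import numpy as np

M, B = 16, 1e6 / 16          # sub-bands and per-sub-band bandwidth [Hz] (assumed)
Gamma, sigma2 = 1.0, 1e-10   # SNR gap and noise power [W] (assumed)
tau, T, ell_k = 0.5, 1.0, 1e4                 # time split, block length [s], bits to offload

alpha = np.zeros(M)
p_I = np.zeros(M)
alpha[3], p_I[3] = 1.0, 1e-3                  # device k uses sub-band m = 3 with 1 mW
gain = np.full(M, 1e-6)                       # |C_{k,m}(Theta^I)|^2 (assumed)

R_k = np.sum(alpha * B * np.log2(1 + p_I * gain / (Gamma * sigma2)))   # Eq. (5), ~2.2e5 bps
feasible = (1 - tau) * T * R_k >= ell_k       # offloading-latency constraint holds here
```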
#### II-B3 Edge Computing
Invoking the simplified linear model [5], we formulate the energy consumption
at the edge node as
$\vartheta\sum^{K}_{k=1}\ell_{k}=\vartheta\sum^{K}_{k=1}\big{[}L_{k}-(1-\tau)Tf_{k}/c_{k}\big{]}$.
Furthermore, the latency imposed by edge computing comprises two parts. The
first part is caused by processing the computational tasks. Given that edge
nodes typically possess high computational capabilities, this part is
negligible. The second part is induced by sending back the computational
results, which are usually of a small volume. Hence, the duration of sending
the feedback is also negligible. As such, we neglect the latency induced by
edge computing.
### II-C Problem Formulation
In this paper, we aim for minimizing the total energy consumption of the OFDM-
based WP-MEC system, by optimizing the time allocation for WET and computing
phases $\tau$, both the power allocation $\boldsymbol{p}^{E}$ and the IRS
reflection coefficients $\boldsymbol{\Theta}^{E}$ at the WET phase, and the
local CPU frequency at devices $\boldsymbol{f}$, the sub-band-device
association $\\{\boldsymbol{\alpha}_{k}\\}$ and the power allocation
$\\{\boldsymbol{p}_{k}\\}$ as well as the IRS reflection coefficients
$\boldsymbol{\Theta}^{I}$ at the computing phase, subject to the energy
constraint imposed by energy harvesting, the latency requirement of
computation offloading and the sub-band-device association constraint in OFDMA
systems as well as the constraint on IRS reflection coefficients. Since the
batteries of all the wireless devices are replenished by the HAP, their energy
consumption is covered by the energy consumption at the HAP during the WET
phase. Hence, the total energy consumption of the system is formulated as the
summation of the energy consumption both of the WET at the HAP and of the edge
computing, i.e. $\tau
T\sum^{M}_{m=1}p^{E}_{m}+\vartheta\sum^{K}_{k=1}\big{[}L_{k}-(1-\tau)Tf_{k}/c_{k}\big{]}$.
To this end, the energy minimization problem is readily formulated for our
OFDM-based WP-MEC system as
$\displaystyle\mathcal{P}0:\mathop{\min}\limits_{\begin{subarray}{c}\tau,\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E},\boldsymbol{f},\\\
\\{\boldsymbol{\alpha}_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\Theta}^{I}\end{subarray}}\tau
T\sum^{M}_{m=1}p^{E}_{m}+\vartheta\sum^{K}_{k=1}\bigg{[}L_{k}-\frac{(1-\tau)Tf_{k}}{c_{k}}\bigg{]}$
$\displaystyle\text{s.t.}\quad 0<\tau<1,$ (6a) $\displaystyle\quad\quad
p^{E}_{m}\geq 0,\quad\forall m\in\mathcal{M},$ (6b)
$\displaystyle\quad\quad|\Theta^{E}_{n}|\leq 1,\quad\forall n\in\mathcal{N},$
(6c) $\displaystyle\quad\quad 0\leq f_{k}\leq f_{max},\quad\forall
k\in\mathcal{K},$ (6d)
$\displaystyle\quad\quad\alpha_{k,m}\in\\{0,1\\},\quad\forall
k\in\mathcal{K},\quad\forall m\in\mathcal{M},$ (6e)
$\displaystyle\quad\quad\sum^{K}_{k=1}\alpha_{k,m}\leq 1,\quad\forall
m\in\mathcal{M},$ (6f) $\displaystyle\quad\quad p^{I}_{k,m}\geq 0,\quad\forall
k\in\mathcal{K},\quad\forall m\in\mathcal{M},$ (6g)
$\displaystyle\quad\quad|\Theta^{I}_{n}|\leq 1,\quad\forall n\in\mathcal{N},$
(6h) $\displaystyle\quad\quad(1-\tau)T\bigg{[}\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})\bigg{]}\leq
E_{k}(\tau,\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E}),\quad\forall
k\in\mathcal{K},$ (6i)
$\displaystyle\quad\quad(1-\tau)TR_{k}(\boldsymbol{\alpha}_{k},\boldsymbol{p}^{I}_{k},\boldsymbol{\Theta}^{I})\geq
L_{k}-\frac{(1-\tau)Tf_{k}}{c_{k}},\quad\forall k\in\mathcal{K}.$ (6j)
Constraint (6a) restricts the time allocation for the WET and for the
computing phases. Constraints (6b) and (6c) represent the range of the power
allocation and the IRS reflection coefficients at the WET phase, respectively.
Constraint (6d) gives the range of tunable local computing frequencies.
Constraints (6e) and (6f) detail the requirement of sub-band-device association
in OFDMA systems. Constraints (6g) and (6h) restrict the range of the power
allocation and the IRS reflection coefficient at the computing phase,
respectively. Constraint (6i) indicates that the sum energy consumption of
local computing and of computation offloading should not exceed the harvested
energy for each device. Finally, Constraint (6j) implies that the
communication link between the $k$-th device and the HAP is capable of
offloading the corresponding computational tasks within the duration of the
computing phase.
## III Joint Optimization of the Settings in the WET and the Computing Phases
In this section, we propose to solve Problem $\mathcal{P}0$ in a two-step
procedure. Firstly, given a fixed $\hat{\tau}\in(0,1)$, Problem $\mathcal{P}0$
can be simplified as follows
$\displaystyle\mathcal{P}1:\mathop{\min}\limits_{\begin{subarray}{c}\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E},\boldsymbol{f},\\\
\\{\boldsymbol{\alpha}_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\Theta}^{I}\end{subarray}}\hat{\tau}T\sum^{M}_{m=1}p^{E}_{m}+\vartheta\sum^{K}_{k=1}\bigg{[}L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}\bigg{]}$
$\displaystyle\text{s.t.}\quad\text{(6b)–(6j)}$
(7a)
In the second step, we aim to find the optimal $\hat{\tau}$ that minimizes
the OF of Problem $\mathcal{P}0$ via a one-dimensional search. In the rest of
this section, we focus on solving Problem $\mathcal{P}1$. A glance at Problem
$\mathcal{P}1$ reveals that the optimization variables $\boldsymbol{f}$,
$\\{\boldsymbol{\alpha}_{k}\\}$ and $\\{\boldsymbol{p}^{I}_{k}\\}$ are coupled
with $\boldsymbol{p}^{E}$ and $\boldsymbol{\Theta}^{E}$ in Constraint (6i),
which makes the problem difficult to solve. To tackle this issue, the AO
technique is invoked. Specifically, upon initializing the settings of the
computing phase, we may optimize the design of the WET phase while fixing the
time allocation and the computing phase settings. Then, the computing phase
settings can be optimized while fixing the time allocation and the design of
the WET phase. A suboptimal solution is obtained by iteratively optimizing
the designs of the WET and of the computing phases. Let us detail the
initialization as well as the designs of the WET and of the computing phases,
as follows.
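As the sketch below illustrates, the second step amounts to a simple grid search; `solve_P1` is a hypothetical stand-in for the full procedure developed in the remainder of this section (Algorithm 5).

```python
import numpy as np

def search_tau(solve_P1, grid=np.linspace(0.05, 0.95, 19)):
    """One-dimensional search for the time allocation: solve P1 for each
    candidate tau_hat and keep the one with the smallest total energy.
    `solve_P1(tau_hat)` is assumed to return the optimal OF value of P1."""
    best_tau, best_obj = None, np.inf
    for tau_hat in grid:
        obj = solve_P1(tau_hat)     # Algorithm 5 evaluated at this tau_hat
        if obj < best_obj:
            best_tau, best_obj = tau_hat, obj
    return best_tau, best_obj
```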
### III-A Initialization of the Time Allocation and the Computing Phase
In order to ensure that our WET design is a feasible solution of Problem
$\mathcal{P}1$, the initial settings of the computing phase, denoted by
$\boldsymbol{f}^{(0)},\big{\\{}\boldsymbol{\alpha}_{k}^{(0)}\big{\\}},\big{\\{}{\boldsymbol{p}^{I}_{k}}^{(0)}\big{\\}},{\boldsymbol{\Theta}^{I}}^{(0)}$,
should satisfy Constraints (6d), (6e), (6f), (6g), (6h) and (6j). Without any
loss of generality, their initialization is set as follows; a code sketch is
provided after the list.
* •
Local computing frequency $\boldsymbol{f}^{(0)}$: Each element of
$\boldsymbol{f}^{(0)}$ is drawn uniformly at random from the range
$[0,f_{max}]$.
* •
IRS reflection coefficient at the computing phase
${\boldsymbol{\Theta}^{I}}^{(0)}$: The amplitude response
${\beta^{I}_{n}}^{(0)}$ and the phase shift ${\theta^{I}_{n}}^{(0)}$ are drawn
uniformly at random from the ranges $[0,1]$ and $[0,2\pi)$, respectively. Then,
${\boldsymbol{\Theta}^{I}}^{(0)}=\\{{\beta^{I}_{1}}^{(0)}e^{j{\theta^{I}_{1}}^{(0)}},\ldots,{\beta^{I}_{N}}^{(0)}e^{j{\theta^{I}_{N}}^{(0)}}\\}$
can be obtained.
* •
Sub-band-device association at the computing phase
$\big{\\{}\boldsymbol{\alpha}_{k}^{(0)}\big{\\}}$: We reserve a single sub-band
for each device, proceeding sequentially from $k=1$ to $k=K$. For the $k$-th
device, we use $k_{m}^{(0)}$ to denote the sub-band having the maximum
$\big{|}C_{k,m}\big{(}{\boldsymbol{\Theta}^{I}}^{(0)}\big{)}\big{|}^{2}$ over
the as-yet-unassigned sub-bands, and assign this sub-band to the $k$-th device.
* •
Power allocation at the computing phase
$\big{\\{}{\boldsymbol{p}^{I}_{k}}^{(0)}\big{\\}}$: For the $k$-th device, its
power allocation at the computing phase should satisfy Constraint (6j). For
minimizing the energy consumption, we let Constraint (6j) hold with equality.
Then, its initial power allocation is given by
${p^{I^{(0)}}_{k,k_{m}^{(0)}}}=\frac{\Gamma\sigma^{2}\Big{[}2^{\frac{L_{k}}{(1-\hat{\tau})TB}-\frac{f_{k}}{c_{k}B}}-1\Big{]}}{\big{|}C_{k,k^{(0)}_{m}}\big{(}{\boldsymbol{\Theta}^{I}}^{(0)}\big{)}\big{|}^{2}}$.
For the sub-bands associated with the indices $m\neq k^{(0)}_{m}$, we set
${p^{I^{(0)}}_{k,m}}=0$.
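A minimal Python sketch of this initialization is given below; `C_of_Theta` is a hypothetical helper returning the $K\times M$ composite channels $C_{k,m}(\boldsymbol{\Theta})$, and all other names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_computing_phase(K, M, N, f_max, C_of_Theta, L, c,
                         tau_hat, T, B, Gamma, sigma2):
    """Random initialization of Section III-A (sketch)."""
    f0 = rng.uniform(0.0, f_max, K)                # local CPU frequencies
    beta0 = rng.uniform(0.0, 1.0, N)               # IRS amplitude responses
    phi0 = rng.uniform(0.0, 2.0 * np.pi, N)        # IRS phase shifts
    Theta0 = beta0 * np.exp(1j * phi0)
    G = np.abs(C_of_Theta(Theta0)) ** 2            # gains |C_{k,m}(Theta0)|^2
    alpha0 = np.zeros((K, M), dtype=int)
    p0 = np.zeros((K, M))
    free = list(range(M))
    for k in range(K):                             # greedy sub-band assignment
        m_star = max(free, key=lambda m: G[k, m])
        free.remove(m_star)
        alpha0[k, m_star] = 1
        # power that makes Constraint (6j) hold with equality
        expo = L[k] / ((1.0 - tau_hat) * T * B) - f0[k] / (c[k] * B)
        p0[k, m_star] = Gamma * sigma2 * (2.0 ** expo - 1.0) / G[k, m_star]
    return f0, Theta0, alpha0, p0
```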
### III-B Design of the WET Phase While Fixing the Time Allocation and
Computing Settings
Given a fixed time allocation $\hat{\tau}$ and the settings of the computing
phase $\boldsymbol{f}$, $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ and $\boldsymbol{\Theta}^{I}$, we may simplify
Problem $\mathcal{P}1$ as
$\displaystyle\mathcal{P}2\colon\mathop{\min}\limits_{\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E}}\hat{\tau}T\sum^{M}_{m=1}p^{E}_{m}$
$\displaystyle\text{s.t.}\quad\text{(6b), (6c)},$
$\displaystyle\quad\quad(1-\hat{\tau})T\bigg{[}\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})\bigg{]}\leq\sum^{M}_{m=1}\eta\hat{\tau}Tp^{E}_{m}\big{|}C_{k,m}(\boldsymbol{\Theta}^{E})\big{|}^{2},\quad\forall
k\in\mathcal{K}.$ (8a)
Since Constraint (8a) is not jointly convex regarding $\boldsymbol{p}^{E}$ and
$\boldsymbol{\Theta}^{E}$, we optimize one of these two variables while fixing
the other in an iterative manner, relying on the AO technique, as follows.
#### III-B1 Optimizing the Power Allocation of the WET Phase While Fixing the
Settings of the Time Allocation, the Computing Phase and the IRS Reflection
Coefficient at the WET Phase
Given an IRS phase shift design $\boldsymbol{\Theta}^{E}$, Problem
$\mathcal{P}2$ is simplified as
$\displaystyle\mathcal{P}2\text{-}1\colon\mathop{\min}\limits_{\boldsymbol{p}^{E}}\hat{\tau}T\sum^{M}_{m=1}p^{E}_{m}$
$\displaystyle\text{s.t.}\quad\text{(6b), (8a)}.$
(9a)
It can be observed that Problem $\mathcal{P}2\text{-}1$ is a linear
programming problem, which can be readily solved by general-purpose
implementations of interior-point methods, e.g. CVX [41]. The complexity is
given by $\sqrt{M+KM}M[M+KM^{3}+M(M+KM^{2})+M^{2}]$ [42], i.e.
$\mathcal{O}(K^{1.5}M^{4.5})$.
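For illustration, Problem $\mathcal{P}2\text{-}1$ can be expressed in a few lines of CVXPY (used here in place of CVX); the inputs `G` and `E_req` are assumed to be precomputed and all names are placeholders.

```python
import cvxpy as cp

def solve_P2_1(G, E_req, tau_hat, T, eta):
    """LP P2-1: minimize the WET energy subject to (6b) and (8a).
    G[k, m] = |C_{k,m}(Theta^E)|^2; E_req[k] is the computing-phase energy
    on the left-hand side of (8a) for device k."""
    K, M = G.shape
    p_E = cp.Variable(M, nonneg=True)              # constraint (6b)
    harvested = eta * tau_hat * T * (G @ p_E)      # right-hand side of (8a)
    prob = cp.Problem(cp.Minimize(tau_hat * T * cp.sum(p_E)),
                      [harvested >= E_req])
    prob.solve()
    return p_E.value
```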
#### III-B2 Optimizing the IRS Reflection Coefficient at the WET Phase While
Fixing the Settings of the Time Allocation, the Computing Phase and the Power
Allocation at the WET Phase
Given a power allocation at the WET phase $\boldsymbol{p}^{E}$, Problem
$\mathcal{P}2$ becomes a feasibility-check problem, i.e.
$\displaystyle\mathcal{P}2\text{-}2\colon\text{Find }\boldsymbol{\Theta}^{E}$
$\displaystyle\text{s.t.}\quad\text{(6c), (8a)}.$
(10a)
As verified in [28], if one of the sub-problems is a feasibility-check
problem, the iterative algorithm relying on the AO technique suffers from
slow convergence. Specific to the problem considered, the Find operation in
Problem $\mathcal{P}2\text{-}2$ cannot guarantee that the OF of Problem
$\mathcal{P}2$ is further reduced in each iteration. To address this issue,
we reformulate Problem $\mathcal{P}2\text{-}2$ as follows, by introducing a
set of auxiliary variables $\\{\xi_{k}\\}$
$\displaystyle\mathcal{P}2\text{-}2^{\prime}\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{E},\\{\xi_{k}\\}}\sum^{K}_{k=1}\xi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6c)},$
$\displaystyle\quad\quad\xi_{k}+\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})\leq\frac{\sum^{M}_{m=1}\eta\hat{\tau}p^{E}_{m}\big{|}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}^{E}\big{|}^{2}}{1-\hat{\tau}},\quad\forall
k\in\mathcal{K},$ (11a) $\displaystyle\quad\quad\xi_{k}\geq 0,\quad\forall
k\in\mathcal{K}.$ (11b)
It is readily seen that the energy harvested by the wireless devices may
increase after solving Problem $\mathcal{P}2\text{-}2^{\prime}$, which
implies that the channel gain of the reflection link is enhanced. A reduced
power of the energy signals can then be attained when we turn back to solve
Problem $\mathcal{P}2\text{-}1$. As such, a faster convergence can be
obtained. However, Constraint (11a) of Problem
$\mathcal{P}2\text{-}2^{\prime}$ is still non-convex regarding
$\boldsymbol{\Theta}^{E}$. To tackle this issue, we manipulate the
optimization problem in light of [33] as follows.
Firstly, we transform Problem $\mathcal{P}2\text{-}2^{\prime}$ to its
equivalent problem below, by introducing a set of auxiliary variables
$\boldsymbol{y}^{E}$, $\boldsymbol{a}^{E}$ and $\boldsymbol{b}^{E}$
$\displaystyle\mathcal{P}2\text{-}2^{\prime}E1\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{E},\\{\xi_{k}\\},\boldsymbol{y}^{E},\boldsymbol{a}^{E},\boldsymbol{b}^{E}}\sum^{K}_{k=1}\xi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6c), (11b)},$
$\displaystyle\quad\quad\xi_{k}+\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})\leq\frac{\sum^{M}_{m=1}\eta\hat{\tau}p^{E}_{m}y^{E}_{k,m}}{1-\hat{\tau}},\quad\forall
k\in\mathcal{K},$ (12a) $\displaystyle\quad\quad
a^{E}_{k,m}=\Re\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}^{E}\big{\\}},\quad
k\in\mathcal{K},\quad m\in\mathcal{M},$ (12b) $\displaystyle\quad\quad
b^{E}_{k,m}=\Im\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}^{E}\big{\\}},\quad
k\in\mathcal{K},\quad m\in\mathcal{M},$ (12c) $\displaystyle\quad\quad
y^{E}_{k,m}\leq(a^{E}_{k,m})^{2}+(b^{E}_{k,m})^{2},\quad k\in\mathcal{K},\quad
m\in\mathcal{M},$ (12d)
where $\Re\\{\bullet\\}$ and $\Im\\{\bullet\\}$ represent the real and
imaginary parts of $\bullet$, respectively. Following this, the successive
convex approximation (SCA) method [43] is applied for tackling the non-convex
constraint (12d). Specifically, the approximation function is constructed as
follows. The right-hand side of (12d) is lower-bounded by its first-order
approximation at $(\tilde{a}^{E}_{k,m},\tilde{b}^{E}_{k,m})$, i.e.
$(a^{E}_{k,m})^{2}+(b^{E}_{k,m})^{2}\geq\tilde{a}^{E}_{k,m}(2a^{E}_{k,m}-\tilde{a}^{E}_{k,m})+\tilde{b}^{E}_{k,m}(2b^{E}_{k,m}-\tilde{b}^{E}_{k,m})$,
which follows directly from expanding
$(a^{E}_{k,m}-\tilde{a}^{E}_{k,m})^{2}+(b^{E}_{k,m}-\tilde{b}^{E}_{k,m})^{2}\geq 0$;
the equality holds only when $\tilde{a}^{E}_{k,m}=a^{E}_{k,m}$ and
$\tilde{b}^{E}_{k,m}=b^{E}_{k,m}$. Now we consider the following
optimization problem
$\displaystyle\mathcal{P}2\text{-}2^{\prime}E2\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{E},\\{\xi_{k}\\},\boldsymbol{y}^{E},\boldsymbol{a}^{E},\boldsymbol{b}^{E}}\sum^{K}_{k=1}\xi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6c), (11b), (12a), (12b), (12c)},$
$\displaystyle\quad\quad
y^{E}_{k,m}=\tilde{a}^{E}_{k,m}(2a^{E}_{k,m}-\tilde{a}^{E}_{k,m})+\tilde{b}^{E}_{k,m}(2b^{E}_{k,m}-\tilde{b}^{E}_{k,m}),\quad
k\in\mathcal{K},\quad m\in\mathcal{M}.$ (13a)
The OF and all constraints of Problem $\mathcal{P}2\text{-}2^{\prime}E2$ are
now convex (affine, except for the convex modulus constraint (6c)). Hence,
Problem $\mathcal{P}2\text{-}2^{\prime}E2$ is a convex optimization problem,
which can be solved by implementations of interior-point methods, e.g. CVX
[41]. Then, a locally optimal solution of
$\mathcal{P}2\text{-}2^{\prime}$ can be approached by successively updating
$\tilde{a}^{E}_{k,m}$ and $\tilde{b}^{E}_{k,m}$ based on the optimal solution
of Problem $\mathcal{P}2\text{-}2^{\prime}E2$, whose procedure is detailed in
Algorithm 1. The computation complexity of the SCA method is analyzed as
follows. Problem $\mathcal{P}2\text{-}2^{\prime}E2$ involves $2KM$ linear
equality constraints (equivalently $4KM$ inequality constraints) of size
$2N+1$, $K$ linear inequality constraints of size $M+1$, $KM$ linear
inequality constraints of size $3$, $K$ linear inequality constraints of size
$1$, $N$ second-order cone inequality constraints of size $2$. Hence, the
total complexity of Algorithm 1 is given by
$\ln(1/\epsilon)\sqrt{4KM(2N+1)+K(M+1)+3KM+K+2N}(2N+3M+K)\\{4KM(2N+1)^{3}+K(M+1)^{3}+27KM+K+(2N+3M+K)[4KM(2N+1)^{2}+K(M+1)^{2}+9KM+K]+4N+(2N+3M+K)^{2}\\}$
[42], i.e.
$\ln(1/\epsilon)\mathcal{O}(K^{1.5}M^{1.5}N^{4.5}+K^{1.5}M^{2.5}N^{3.5}+K^{1.5}M^{2.5}N^{1.5}+K^{2.5}M^{1.5}N^{3.5}+K^{1.5}M^{4.5}+K^{2.5}M^{2.5}N^{2.5}+K^{2.5}M^{3.5}+K^{3.5}M^{1.5}N^{2.5}+K^{3.5}M^{2.5})$.
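Before summarizing the overall procedure, we sketch a single SCA iteration for Problem $\mathcal{P}2\text{-}2^{\prime}E2$ in Python with CVXPY (in place of CVX); all names (`h_eff`, `V_eff`, `P_comp`, etc.) are hypothetical placeholders, and the snippet is a simplified illustration rather than a verbatim implementation of Algorithm 1.

```python
import cvxpy as cp
import numpy as np

def sca_step(h_eff, V_eff, p_E, P_comp, eta, tau_hat, Theta_tilde):
    """One SCA iteration for P2-2'E2 (sketch).

    h_eff[k, m] : complex scalar f_m^H h_k^d
    V_eff[k, m] : length-N complex vector f_m^H V_k  (K x M x N array)
    p_E         : length-M WET power allocation
    P_comp[k]   : computing-phase power kappa*f_k^2 + sum_m alpha*(p^I + p_c)
    Theta_tilde : current IRS reflection vector (length N, complex)
    """
    K, M = h_eff.shape
    N = Theta_tilde.size
    Theta = cp.Variable(N, complex=True)
    xi = cp.Variable(K, nonneg=True)              # constraint (11b)
    cons = [cp.abs(Theta) <= 1]                   # constraint (6c)
    for k in range(K):
        terms = []
        for m in range(M):
            ch = h_eff[k, m] + V_eff[k, m] @ Theta        # affine in Theta
            a, b = cp.real(ch), cp.imag(ch)
            ch_t = h_eff[k, m] + V_eff[k, m] @ Theta_tilde
            at, bt = ch_t.real, ch_t.imag
            # linearized |ch|^2 around Theta_tilde, cf. (13a)
            y = at * (2 * a - at) + bt * (2 * b - bt)
            terms.append(eta * tau_hat * p_E[m] * y)
        cons.append(xi[k] + P_comp[k]
                    <= cp.sum(cp.hstack(terms)) / (1 - tau_hat))
    cp.Problem(cp.Maximize(cp.sum(xi)), cons).solve()
    return Theta.value
```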
To this end, we summarize the procedure of solving Problem $\mathcal{P}2$ in
Algorithm 2.
Algorithm 1 SCA approach to design $\boldsymbol{\Theta}^{E}$, given the
settings of the time allocation, the computing phase and the power allocation
at the WET phase
0: $t_{max}$, $\epsilon$, $K$, $M$, $N$, $T$, $\eta$, $c_{k}$, $\kappa$,
$f_{max}$, $p_{c}$, $\Gamma$, $L_{k}$, $\\{\boldsymbol{h}^{d}_{k}\\}$,
$\\{\boldsymbol{V}_{k}\\}$, $\hat{\tau}$, $\boldsymbol{P}^{E}$,
$\boldsymbol{f}$, $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$, $\boldsymbol{\Theta}^{I}$ and
$\tilde{\boldsymbol{\Theta}}^{E}$
0: $\boldsymbol{\Theta}^{E}$ 1\. Initialization
Initialize $t_{1}=0$; $\epsilon_{1}=1$; $\xi_{k}=0,\forall k\in\mathcal{K}$
2\. SCA approach to design $\boldsymbol{\Theta}^{E}$
while $t_{1}<t_{\text{max}}$ $\&\&$ $\epsilon_{1}>\epsilon$ do
$\bullet$ Set
$\tilde{a}^{E}_{k,m}=\Re\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\tilde{\boldsymbol{\Theta}}^{E}\big{\\}}$
and
$\tilde{b}^{E}_{k,m}=\Im\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\tilde{\boldsymbol{\Theta}}^{E}\big{\\}},\forall
k\in\mathcal{K},\forall m\in\mathcal{M}$
$\bullet$ Obtain ${\boldsymbol{\Theta}^{E}}$ and $\\{\xi_{k}\\}$ by solving
Problem $\mathcal{P}2\text{-}2^{\prime}E2$ using CVX
$\bullet$ Set
$\epsilon_{1}=\frac{\big{|}\text{obj}\big{(}{\boldsymbol{\Theta}}^{E}\big{)}-\text{obj}\big{(}\tilde{\boldsymbol{\Theta}}^{E}\big{)}\big{|}}{\big{|}\text{obj}\big{(}\boldsymbol{\Theta}^{E}\big{)}\big{|}}$,
$\tilde{\boldsymbol{\Theta}}^{E}\leftarrow\boldsymbol{\Theta}^{E}$,
$t_{1}\leftarrow t_{1}+1$
end while
3\. Output optimal ${\boldsymbol{\Theta}^{E}}^{*}$
${\boldsymbol{\Theta}^{E}}^{*}\leftarrow\tilde{\boldsymbol{\Theta}}^{E}$
Algorithm 2 Alternative optimization of $\boldsymbol{p}^{E}$ and
$\boldsymbol{\Theta}^{E}$, given the settings of the time allocation and the
computing phase
0: $t_{max}$, $\epsilon$, $K$, $M$, $N$, $T$, $\eta$, $c_{k}$, $\kappa$,
$f_{max}$, $p_{c}$, $\Gamma$, $L_{k}$, $\\{\boldsymbol{h}^{d}_{k}\\}$,
$\\{\boldsymbol{V}_{k}\\}$, $\hat{\tau}$, $\boldsymbol{f}$,
$\\{\boldsymbol{\alpha}_{k}\\}$, $\\{\boldsymbol{p}^{I}_{k}\\}$,
$\boldsymbol{\Theta}^{I}$ and $\tilde{\boldsymbol{\Theta}}^{E}$
0: $\boldsymbol{P}^{E}$ and $\boldsymbol{\Theta}^{E}$ 1\. Initialization
$\bullet$ Initialize $t_{2}=0$; $\epsilon_{2}=1$;
${\boldsymbol{\Theta}^{E}}^{(0)}=\tilde{\boldsymbol{\Theta}}^{E}$
$\bullet$ Given ${\boldsymbol{\Theta}^{E}}^{(0)}$, find
${\boldsymbol{P}^{E}}^{(0)}$ by solving Problem $\mathcal{P}2\text{-}1$ via
CVX 2\. Alternative optimization of $\boldsymbol{P}^{E}$ and
$\boldsymbol{\Theta}^{E}$
while $t_{2}<t_{\text{max}}$ $\&\&$ $\epsilon_{2}>\epsilon$ do
$\bullet$ Given ${\boldsymbol{P}^{E}}^{(t_{2})}$ and
$\tilde{\boldsymbol{\Theta}}^{E}={\boldsymbol{\Theta}^{E}}^{(t_{2})}$, find
${\boldsymbol{\Theta}^{E}}^{(t_{2}+1)}$ by solving Problem
$\mathcal{P}2\text{-}2^{\prime}E1$ using Algorithm 1
$\bullet$ Given ${\boldsymbol{\Theta}^{E}}^{(t_{2}+1)}$, find
${\boldsymbol{P}^{E}}^{(t_{2}+1)}$ by solving Problem $\mathcal{P}2\text{-}1$
via CVX
$\bullet$ Set
$\epsilon_{2}=\frac{\big{|}\text{obj}\big{(}{\boldsymbol{p}^{E}}^{(t_{2}+1)},{{\boldsymbol{\Theta}}^{E}}^{(t_{2}+1)}\big{)}-\text{obj}\big{(}{\boldsymbol{p}^{E}}^{(t_{2})},{{\boldsymbol{\Theta}}^{E}}^{(t_{2})}\big{)}\big{|}}{\big{|}\text{obj}\big{(}{\boldsymbol{p}^{E}}^{(t_{2}+1)},{{\boldsymbol{\Theta}}^{E}}^{(t_{2}+1)}\big{)}\big{|}}$,
$t_{2}\leftarrow t_{2}+1$
end while
3\. Output optimal ${\boldsymbol{P}^{E}}^{*}$ and ${\boldsymbol{\Theta}^{E}}^{*}$
${\boldsymbol{P}^{E}}^{*}\leftarrow{\boldsymbol{P}^{E}}^{(t_{2})}$ and
${\boldsymbol{\Theta}^{E}}^{*}\leftarrow{\boldsymbol{\Theta}^{E}}^{(t_{2})}$
### III-C Design of the Computing Phase While Fixing the Time Allocation and
WET Settings
In this subsection, we aim for optimizing the design of the computing phase,
while fixing the time allocation $\hat{\tau}$ and the WET settings
$\boldsymbol{p}^{E}$ and $\boldsymbol{\Theta}^{E}$. In this case, we simplify
Problem $\mathcal{P}1$ as
$\displaystyle\mathcal{P}3\colon\mathop{\min}\limits_{\boldsymbol{f},\\{\boldsymbol{\alpha}_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\Theta}^{I}}\vartheta\sum^{K}_{k=1}\bigg{[}L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}\bigg{]}$
$\displaystyle\text{s.t.}\quad\text{(6d), (6e), (6f), (6g), (6h), (6i)},$
$\displaystyle\quad\quad\sum^{M}_{m=1}\alpha_{k,m}B\log_{2}\Bigg{[}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\Bigg{]}\geq\frac{L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}}{(1-\hat{\tau})T},\quad\forall
k\in\mathcal{K}.$ (14a)
Constraint (14a) is not jointly convex regarding
$\\{\boldsymbol{\alpha}_{k}\\}$, $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$. Hence, it is difficult to find the globally
optimal solution. Alternatively, a suboptimal solution is obtained by
iteratively optimizing $\boldsymbol{f}$, $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ and $\boldsymbol{\Theta}^{I}$, again relying
on the AO technique, as follows.
#### III-C1 Alternative Optimization of the Sub-Band-Device Association and
the Power Allocation as well as the IRS Reflection Coefficient at the
Computing Phase
Given a fixed local CPU frequency setting $\boldsymbol{f}$, the OF of Problem
$\mathcal{P}3$ becomes deterministic. In other words, the optimization of
$\\{\boldsymbol{\alpha}_{k}\\}$, $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$ seems not to contribute to reducing the OF.
However, this is not always true: if a larger feasible set of
$\boldsymbol{f}$ can be obtained by optimizing
$\\{\boldsymbol{\alpha}_{k}\\}$, $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$, a reduced OF may be achieved when we turn back to
optimize $\boldsymbol{f}$. Based on this observation, we formulate the
problem below, by introducing a set of auxiliary variables $\\{\zeta_{k}\\}$
$\displaystyle\mathcal{P}3\text{-}1\colon\mathop{\max}\limits_{\\{\zeta_{k}\\},\\{\boldsymbol{\alpha}_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\Theta}^{I}}\sum^{K}_{k=1}\zeta_{k}$
$\displaystyle\text{s.t.}\quad\text{(6e), (6f), (6g), (6h), (14a)},$
$\displaystyle\quad\quad\zeta_{k}\geq 0,\quad\forall k\in\mathcal{K},$ (15a)
$\displaystyle\quad\quad(1-\hat{\tau})T\bigg{[}\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})+\zeta_{k}\bigg{]}\leq\sum^{M}_{m=1}\eta\hat{\tau}Tp^{E}_{m}\big{|}C_{k,m}(\boldsymbol{\Theta}^{E})\big{|}^{2},\quad\forall
k\in\mathcal{K}.$ (15b)
As specified in (15a), the auxiliary variables $\\{\zeta_{k}\\}$ are
non-negative, and thus a no smaller feasible set of $\boldsymbol{f}$ may be
obtained after solving Problem $\mathcal{P}3\text{-}1$. Given that Constraint
(14a) is not jointly convex regarding $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$, we optimize $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ and $\boldsymbol{\Theta}^{I}$ in two steps
iteratively.
In the first step, we optimize $\\{\zeta_{k}\\},\\{\boldsymbol{\alpha}_{k}\\}$
and $\\{\boldsymbol{p}^{I}_{k}\\}$, while fixing the IRS reflection
coefficient $\boldsymbol{\Theta}^{I}$. In this case, Problem
$\mathcal{P}3\text{-}1$ can be simplified as
$\displaystyle\mathcal{P}3\text{-}1a\colon\mathop{\max}\limits_{\\{\zeta_{k}\\},\\{\boldsymbol{\alpha}_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\}}\sum^{K}_{k=1}\zeta_{k}$
$\displaystyle\text{s.t.}\quad\text{(6e), (6f), (6g), (14a), (15a), (15b)}.$
(16a)
Problem $\mathcal{P}3\text{-}1a$ is a combinatorial optimization problem,
where the binary constraint (6e) is non-convex. The classic solution
typically relies on the convex relaxation method [44], where the binary
constraint imposed on $\\{\boldsymbol{\alpha}_{k}\\}$ is relaxed into a
convex constraint by introducing time-sharing variables. However, the
relaxed problem differs from the original one, which may introduce a
relaxation error. To address this issue, a near-optimal solution based on the
Lagrangian duality was proposed in [39], where it is verified that the
duality gap vanishes in systems equipped with more than $8$ sub-bands.
Hence, in this paper, the Lagrangian duality method [45] is invoked for
solving Problem $\mathcal{P}3\text{-}1a$. Specifically, denoting the
non-negative Lagrange
multiplier vectors by
$\boldsymbol{\lambda}=[\lambda_{1},\lambda_{2},\ldots,\lambda_{K}]^{T}$ and
$\boldsymbol{\mu}=[\mu_{1},\mu_{2},\ldots,\mu_{K}]^{T}$, we formulate the
Lagrangian function of Problem $\mathcal{P}3\text{-}1a$ over the domain
$\mathcal{D}$ as
$\displaystyle\mathcal{L}\big{(}\\{\zeta_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\lambda},\boldsymbol{\mu}\big{)}$
$\displaystyle=$
$\displaystyle\sum_{k=1}^{K}\zeta_{k}-\sum_{k=1}^{K}\lambda_{k}\bigg{[}\kappa
f_{k}^{2}+\sum^{M}_{m=1}(p^{I}_{k,m}+p_{c})+\zeta_{k}-\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}\bigg{]}$
(17)
$\displaystyle+\sum_{k=1}^{K}\mu_{k}\Bigg{[}\sum^{M}_{m=1}B\log_{2}\bigg{(}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\bigg{)}-\frac{L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}}{(1-\hat{\tau})T}\Bigg{]},$
where the domain $\mathcal{D}$ is defined as the set of all non-negative
$p^{I}_{k,m}$ for $\forall k\in\mathcal{K}$ and for $\forall m\in\mathcal{M}$
such that for each $m$, only a single $p^{I}_{k,m}$ is positive for
$k\in\mathcal{K}$. Then, the Lagrangian dual function of Problem
$\mathcal{P}3\text{-}1a$ is given by
$\displaystyle
g(\boldsymbol{\lambda},\boldsymbol{\mu})=\mathop{\max}\limits_{\\{\zeta_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\}\in{\mathcal{D}}}\mathcal{L}\big{(}\\{\zeta_{k}\\},\\{\boldsymbol{p}^{I}_{k}\\},\boldsymbol{\lambda},\boldsymbol{\mu}\big{)}.$
(18)
(18) can be reformulated as
$\displaystyle g(\boldsymbol{\lambda},\boldsymbol{\mu})$ $\displaystyle=$
$\displaystyle\sum_{m=1}^{M}\hat{g}_{m}(\boldsymbol{\lambda},\boldsymbol{\mu})+\sum^{K}_{k=1}(1-\lambda_{k})\zeta_{k}-\sum^{K}_{k=1}\lambda_{k}\kappa
f_{k}^{2}$ (19)
$\displaystyle+\sum^{K}_{k=1}\lambda_{k}\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\sum^{K}_{k=1}\frac{\mu_{k}\Big{[}L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}\Big{]}}{(1-\hat{\tau})T},$
where we have
$\displaystyle\hat{g}_{m}(\boldsymbol{\lambda},\boldsymbol{\mu})\triangleq\mathop{\max}\limits_{\\{\boldsymbol{p}^{I}_{k}\\}\in{\mathcal{D}}}\Bigg{\\{}-\sum_{k=1}^{K}\lambda_{k}(p^{I}_{k,m}+p_{c})+\sum_{k=1}^{K}\mu_{k}B\log_{2}\Bigg{[}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\Bigg{]}\Bigg{\\}}.$
(20)
It is readily seen that (20) is concave regarding $p^{I}_{k,m}$. Thus, upon
letting its first-order derivative regarding $p^{I}_{k,m}$ be $0$, we may give
the optimal power of the $m$-th sub-band when it is allocated to the $k$-th
device as
$\displaystyle\hat{p}^{I}_{k,m}(\lambda_{k},\mu_{k})=\bigg{[}\frac{\mu_{k}B}{\lambda_{k}\ln
2}-\frac{\Gamma\sigma^{2}}{|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}\bigg{]}^{+}.$
(21)
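For completeness, the intermediate step behind (21) is as follows. Writing $g_{k,m}\triangleq|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}$, the first-order condition of (20) reads
$\displaystyle-\lambda_{k}+\frac{\mu_{k}B}{\ln 2}\cdot\frac{g_{k,m}}{\Gamma\sigma^{2}+p^{I}_{k,m}g_{k,m}}=0\quad\Longrightarrow\quad p^{I}_{k,m}=\frac{\mu_{k}B}{\lambda_{k}\ln 2}-\frac{\Gamma\sigma^{2}}{g_{k,m}},$
and projecting this stationary point onto the non-negative orthant yields (21).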
Then, $\hat{g}_{m}(\boldsymbol{\lambda},\boldsymbol{\mu})$ can be obtained, by
searching over all possible assignments of the $m$-th sub-band for all the $K$
devices, as follows
$\displaystyle\hat{g}_{m}(\boldsymbol{\lambda},\boldsymbol{\mu})=\max_{k}\Bigg{\\{}-\lambda_{k}\Big{[}\hat{p}^{I}_{k,m}(\lambda_{k},\mu_{k})+p_{c}\Big{]}+\mu_{k}B\log_{2}\Bigg{[}1+\frac{\hat{p}^{I}_{k,m}(\lambda_{k},\mu_{k})|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\Bigg{]}\Bigg{\\}},$
(22)
and the most suitable device is given by the maximizer $k^{*}$ of the
bracketed term in (22). We set $\alpha_{k^{*},m}=1$ and
$p^{I}_{k^{*},m}=\hat{p}^{I}_{k^{*},m}$ as well as $\alpha_{k,m}=0$ and
$p^{I}_{k,m}=0$ for $\forall k\neq k^{*}$. We continue by calculating
$\\{\zeta_{k}\\}$ as follows. From (21), it is observed that $\lambda_{k}$
has to satisfy $\lambda_{k}>0$, $\forall k\in\mathcal{K}$, which implies that
Constraint (15b) is binding for the optimal solution of Problem
$\mathcal{P}3\text{-}1a$. Therefore, $\zeta_{k}$ can be set as
$\displaystyle\zeta_{k}=\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\kappa
f_{k}^{2}-\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c}).$ (23)
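The per-sub-band step, i.e. the inner loop of Algorithm 3, can be sketched as follows; `G_I` is a hypothetical matrix of the gains $|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}$ and, as argued above, all entries of `lam` are assumed positive.

```python
import numpy as np

def allocate_subband(m, lam, mu, G_I, B, Gamma, sigma2, p_c):
    """Evaluate the water-filling power (21) for every device on sub-band m,
    then pick the device maximizing the bracketed term of (22)."""
    p_hat = np.maximum(mu * B / (lam * np.log(2))
                       - Gamma * sigma2 / G_I[:, m], 0.0)   # eq. (21)
    gain = (-lam * (p_hat + p_c)
            + mu * B * np.log2(1.0 + p_hat * G_I[:, m] / (Gamma * sigma2)))
    k_star = int(np.argmax(gain))                            # eq. (22)
    return k_star, p_hat[k_star]
```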
Once all $\hat{g}_{m}(\boldsymbol{\lambda},\boldsymbol{\mu})$ and $\zeta_{k}$
are obtained, $g(\boldsymbol{\lambda},\boldsymbol{\mu})$ can be calculated by
(19). Bearing in mind that the obtained
$g(\boldsymbol{\lambda},\boldsymbol{\mu})$ is not guaranteed to be optimal,
we have to find the set of $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ that
minimizes $g(\boldsymbol{\lambda},\boldsymbol{\mu})$, which can be realized
by the ellipsoid method [45]. More explicitly, the Lagrange
multipliers are iteratively updated following their sub-gradients towards
their optimal settings. The corresponding sub-gradients are given as follows
$\displaystyle s_{\lambda_{k}}=\kappa
f_{k}^{2}+\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})-\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T},$
(24) $\displaystyle
s_{\mu_{k}}=\frac{L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}}{(1-\hat{\tau})T}-\sum_{m=1}^{M}\alpha_{k,m}B\log_{2}\bigg{(}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\bigg{)}.$
(25)
Upon denoting the iteration index by $t$, the Lagrange multipliers are updated
obeying
$\lambda_{k}(t+1)=[\lambda_{k}(t)+\delta_{\lambda}(t)s_{\lambda_{k}}]^{+}$ and
$\mu_{k}(t+1)=[\mu_{k}(t)+\delta_{\mu}(t)s_{\mu_{k}}]^{+}$, where we set
$\delta_{\lambda}(t)=\delta_{\lambda}(1)/t$ and
$\delta_{\mu}(t)=\delta_{\mu}(1)/t$ for ensuring the convergence of the OF. In
the problem considered, the ellipsoid method converges in $\mathcal{O}(K^{2})$
iterations [45, 39]. Within each iteration, the computational complexity is of
$\mathcal{O}(KM)$. Hence, the total computational complexity is given by
$\mathcal{O}(MK^{3})$. The procedure of this Lagrangian duality method is
summarized in Algorithm 3.
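The multiplier update can be sketched as follows; the initial step sizes are hypothetical placeholders, and only the diminishing-step-size update rule is shown (a full ellipsoid-method implementation additionally maintains a shrinking ellipsoid around the optimal multipliers).

```python
import numpy as np

def update_multipliers(lam, mu, s_lam, s_mu, t, d_lam1=0.1, d_mu1=0.1):
    """Projected subgradient update of the Lagrange multipliers, with the
    sub-gradients s_lam and s_mu of (24)-(25) and the diminishing step sizes
    delta(t) = delta(1)/t used in the text."""
    lam_new = np.maximum(lam + (d_lam1 / t) * s_lam, 0.0)
    mu_new = np.maximum(mu + (d_mu1 / t) * s_mu, 0.0)
    return lam_new, mu_new
```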
Algorithm 3 Design of $\\{\boldsymbol{\alpha}_{k}\\}$ and
$\\{\boldsymbol{p}^{I}_{k}\\}$, given the settings of $\hat{\tau}$,
$\boldsymbol{p}^{E}$, $\boldsymbol{\Theta}^{E}$, $\boldsymbol{f}$ and
$\boldsymbol{\Theta}^{I}$
0: $t_{max}$, $\epsilon$, $K$, $M$, $N$, $T$, $\eta$, $c_{k}$, $\kappa$,
$f_{max}$, $p_{c}$, $\Gamma$, $L_{k}$, $\\{\boldsymbol{h}^{d}_{k}\\}$,
$\\{\boldsymbol{V}_{k}\\}$, $\hat{\tau}$, $\boldsymbol{P}^{E}$,
$\boldsymbol{\Theta}^{E}$, $\boldsymbol{\Theta}^{I}$, $\boldsymbol{f}$,
$\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$
0: $\\{\zeta_{k}\\}$, $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ 1\. Initialization
Initialize $t_{3}=0$; $\epsilon_{3}=1$; Calculate $\mathcal{L}^{(0)}$ using
(17) 2\. Optimization of $\\{\zeta_{k}\\}$, $\\{\boldsymbol{\alpha}_{k}\\}$
and $\\{\boldsymbol{p}^{I}_{k}\\}$
while $t_{3}<t_{\text{max}}$ $\&\&$ $\epsilon_{3}>\epsilon$ do
for $m=1$ to $M$ do
$\bullet$ Calculate $\hat{p}^{I}_{k,m}$ using (21) for all $k\in\mathcal{K}$
$\bullet$ Obtain the optimal device $k^{*}$ as the maximizer of the bracketed
term in (22)
$\bullet$ Set $\alpha_{k^{*},m}=1$ and $p^{I}_{k^{*},m}=\hat{p}^{I}_{k^{*},m}$
as well as $\alpha_{k,m}=0$ and $p^{I}_{k,m}=0$ for $\forall k\neq k^{*}$
end for
$\bullet$ Calculate $\zeta_{k}$ using (23)
$\bullet$ Calculate $\mathcal{L}^{(t_{3}+1)}$ using (17)
$\bullet$ Update $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ using the
ellipsoid method
$\bullet$ Set
$\epsilon_{3}=\frac{\big{|}\mathcal{L}^{(t_{3}+1)}-\mathcal{L}^{(t_{3})}\big{|}}{\big{|}\mathcal{L}^{(t_{3}+1)}\big{|}}$,
$t_{3}\leftarrow t_{3}+1$
end while
3\. Output optimal $\\{\zeta_{k}\\}^{*}$, $\\{\boldsymbol{\alpha}_{k}\\}^{*}$
and $\\{\boldsymbol{p}^{I}_{k}\\}^{*}$
$\\{\zeta_{k}\\}^{*}=\\{\zeta_{k}\\}$,
$\\{\boldsymbol{\alpha}_{k}\\}^{*}=\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{*}=\\{\boldsymbol{p}^{I}_{k}\\}$
In the second step, we optimize the IRS reflection coefficient
$\boldsymbol{\Theta}^{I}$, while fixing the settings of the resource
allocation at the computing phase $\\{\boldsymbol{\alpha}_{k}\\}$ and
$\\{\boldsymbol{p}^{I}_{k}\\}$. In this case, Problem $\mathcal{P}3\text{-}1$
becomes the feasibility-check problem below
$\displaystyle\mathcal{P}3\text{-}1b\colon\text{Find }\boldsymbol{\Theta}^{I}$
$\displaystyle\text{s.t.}\quad\text{(6h), (14a)}.$
The problem can be solved using the approach devised in Section III-B2,
detailed as follows. By introducing a set of auxiliary variables
$\\{\chi_{k}\\}$, we transform Problem $\mathcal{P}3\text{-}1b$ into the
problem below
$\displaystyle\mathcal{P}3\text{-}1b^{\prime}\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{I},\\{\chi_{k}\\}}\sum^{K}_{k=1}\chi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6h)},$
$\displaystyle\quad\quad\chi_{k}\geq 0,\quad\forall k\in\mathcal{K},$ (27a)
$\displaystyle\quad\quad\sum^{M}_{m=1}\alpha_{k,m}B\log_{2}\bigg{[}1+\frac{p^{I}_{k,m}|C_{k,m}(\boldsymbol{\Theta}^{I})|^{2}}{\Gamma\sigma^{2}}\bigg{]}\geq\frac{L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}}{(1-\hat{\tau})T}+\chi_{k},\quad\forall
k\in\mathcal{K}.$ (27b)
Constraint (27b) is non-convex regarding $\boldsymbol{\Theta}^{I}$. To
address this issue, we firstly transform Problem
$\mathcal{P}3\text{-}1b^{\prime}$ to its equivalent form, by introducing a
set of auxiliary variables $\boldsymbol{y}^{I}$, $\boldsymbol{a}^{I}$ and
$\boldsymbol{b}^{I}$
$\displaystyle\mathcal{P}3\text{-}1b^{\prime}E1\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{I},\\{\chi_{k}\\},\boldsymbol{y}^{I},\boldsymbol{a}^{I},\boldsymbol{b}^{I}}\sum^{K}_{k=1}\chi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6h), (27a)},$
$\displaystyle\quad\quad\sum^{M}_{m=1}\alpha_{k,m}B\log_{2}\bigg{(}1+\frac{p^{I}_{k,m}y^{I}_{k,m}}{\Gamma\sigma^{2}}\bigg{)}\geq\frac{L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}}{(1-\hat{\tau})T}+\chi_{k},\quad\forall
k\in\mathcal{K},$ (28a) $\displaystyle\quad\quad
a^{I}_{k,m}=\Re\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}^{I}\big{\\}},\quad
k\in\mathcal{K},\quad m\in\mathcal{M},$ (28b) $\displaystyle\quad\quad
b^{I}_{k,m}=\Im\big{\\{}\boldsymbol{f}^{H}_{m}\boldsymbol{h}^{d}_{k}+\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}^{I}\big{\\}},\quad
k\in\mathcal{K},\quad m\in\mathcal{M},$ (28c) $\displaystyle\quad\quad
y^{I}_{k,m}=(a^{I}_{k,m})^{2}+(b^{I}_{k,m})^{2},\quad k\in\mathcal{K},\quad
m\in\mathcal{M}.$ (28d)
Then, upon invoking the SCA method as detailed in Section III-B2, we approach
the locally optimal solution by solving the problem below in a successive
manner
$\displaystyle\mathcal{P}3\text{-}1b^{\prime}E2\colon\mathop{\max}\limits_{\boldsymbol{\Theta}^{I},\\{\chi_{k}\\}}\sum^{K}_{k=1}\chi_{k}$
$\displaystyle\text{s.t.}\quad\text{(6h), (27a), (28a), (28b), (28c)},$
$\displaystyle\quad\quad
y^{I}_{k,m}=\tilde{a}^{I}_{k,m}(2a^{I}_{k,m}-\tilde{a}^{I}_{k,m})+\tilde{b}^{I}_{k,m}(2b^{I}_{k,m}-\tilde{b}^{I}_{k,m}),\quad
k\in\mathcal{K},\quad m\in\mathcal{M}.$ (29a)
Problem $\mathcal{P}3\text{-}1b^{\prime}E2$ is a convex optimization problem,
which can be readily solved with the aid of CVX [41]. The computational
complexity is the same as that given in Section III-B2. Note that the
optimization of $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ and $\boldsymbol{\Theta}^{I}$ not only
contributes to reducing the OF of Problem $\mathcal{P}2$ by slackening its
Constraint (8a), but also leads to a decreased OF of Problem $\mathcal{P}1$.
Hence, we may still reduce the OF of Problem $\mathcal{P}1$ by iteratively
optimizing the settings of the WET phase and the computing phase, even if
$\boldsymbol{f}$ reaches its maximum value.
#### III-C2 Design of CPU Frequencies
Given the settings of the sub-band-device association
$\\{\boldsymbol{\alpha}_{k}\\}$, the power allocation
$\\{\boldsymbol{p}^{I}_{k}\\}$ and the IRS reflection coefficient
$\boldsymbol{\Theta}^{I}$, Problem $\mathcal{P}3$ can be simplified as
$\displaystyle\mathcal{P}3\text{-}2\colon\mathop{\min}\limits_{\boldsymbol{f}}\vartheta\sum^{K}_{k=1}\bigg{[}L_{k}-\frac{(1-\hat{\tau})Tf_{k}}{c_{k}}\bigg{]}$
$\displaystyle\text{s.t.}\quad\text{(6d), (15b)}.$
(30a)
It can be seen that the OF of Problem $\mathcal{P}3\text{-}2$ decreases upon
increasing $\boldsymbol{f}$. Hence, upon denoting
$\displaystyle\hat{f}_{k}=\sqrt{\frac{\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})-\zeta_{k}}{\kappa}},$
(31)
the optimal $\boldsymbol{f}$ can be obtained as:
$\displaystyle f_{k}$ $\displaystyle=\begin{cases}0,&\quad\text{if
}\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})-\zeta_{k}<0,\\\
\hat{f}_{k},&\quad\text{if
}0\leq\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})-\zeta_{k}<\kappa
f_{max}^{2},\\\ f_{max},&\quad\text{if
}\frac{E_{k}(\hat{\tau},\boldsymbol{p}^{E},\boldsymbol{\Theta}^{E})}{(1-\hat{\tau})T}-\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})-\zeta_{k}\geq\kappa
f_{max}^{2}.\end{cases}$ (32)
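A direct transcription of (31)-(32) reads as follows; `p_off` denotes the computing-phase power $\sum^{M}_{m=1}\alpha_{k,m}(p^{I}_{k,m}+p_{c})$ of device $k$, and all names are illustrative.

```python
import numpy as np

def optimal_frequency(E_k, tau_hat, T, kappa, p_off, zeta_k, f_max):
    """Closed-form local CPU frequency of (32)."""
    budget = E_k / ((1.0 - tau_hat) * T) - p_off - zeta_k  # residual power budget
    if budget < 0.0:
        return 0.0                                         # first case of (32)
    return min(np.sqrt(budget / kappa), f_max)             # f_hat of (31), clipped
```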
The procedure of optimizing $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$, $\boldsymbol{\Theta}^{I}$ and $\boldsymbol{f}$
is summarized in Algorithm 4. To this end, we are ready to summarize the
algorithm solving Problem $\mathcal{P}1$ under a given $\hat{\tau}$ in
Algorithm 5; an appropriate $\tau$ is then found with the aid of numerical
results, as detailed in Section IV-A.
Algorithm 4 Alternative optimization of $\boldsymbol{f}$,
$\\{\boldsymbol{\alpha}_{k}\\}$, $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$, given the settings of $\hat{\tau}$,
$\boldsymbol{p}^{E}$ and $\boldsymbol{\Theta}^{E}$
0: $t_{max}$, $\epsilon$, $K$, $M$, $N$, $T$, $\eta$, $c_{k}$, $\kappa$,
$f_{max}$, $p_{c}$, $\Gamma$, $L_{k}$, $\\{\boldsymbol{h}^{d}_{k}\\}$,
$\\{\boldsymbol{V}_{k}\\}$, $\hat{\tau}$, $\boldsymbol{P}^{E}$,
$\boldsymbol{\Theta}^{E}$, $\boldsymbol{f}$, and
$\tilde{\boldsymbol{\Theta}}^{I}$
0: $\\{\boldsymbol{\alpha}_{k}\\}$ $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$ 1\. Initialization
$\bullet$ Initialize $t_{4}=0$; $\epsilon_{4}=1$;
${\boldsymbol{\Theta}^{I}}^{(0)}=\tilde{\boldsymbol{\Theta}}^{I}$
$\bullet$ Given ${\boldsymbol{\Theta}^{I}}^{(0)}$, find
$\\{\boldsymbol{\alpha}_{k}\\}^{(0)}$ and $\\{\boldsymbol{p}^{I}_{k}\\}^{(0)}$
by solving Problem $\mathcal{P}3\text{-}1a$ via Algorithm 3
$\bullet$ Obtain $\boldsymbol{f}^{(0)}$ via (32) and calculate
$\text{obj}\big{(}\boldsymbol{f}^{(0)}\big{)}$ 2\. Alternative optimization of
$\boldsymbol{f}$, $\\{\boldsymbol{\alpha}_{k}\\}$,
$\\{\boldsymbol{p}^{I}_{k}\\}$ and $\boldsymbol{\Theta}^{I}$
while $t_{4}<t_{\text{max}}$ $\&\&$ $\epsilon_{4}>\epsilon$ do
$\bullet$ Given $\\{\boldsymbol{\alpha}_{k}\\}^{(t_{4})}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{4})}$ and
$\tilde{\boldsymbol{\Theta}}^{I}={\boldsymbol{\Theta}^{I}}^{(t_{4})}$, find
${\boldsymbol{\Theta}^{I}}^{(t_{4}+1)}$ by solving Problem
$\mathcal{P}3\text{-}1b^{\prime}E1$ via Algorithm 1
$\bullet$ Given ${\boldsymbol{\Theta}^{I}}^{(t_{4}+1)}$, find
$\\{\boldsymbol{\alpha}_{k}\\}^{(t_{4}+1)}$ and
$\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{4}+1)}$ by solving Problem
$\mathcal{P}3\text{-}1a$ via Algorithm 3
$\bullet$ Obtain $\boldsymbol{f}^{(t_{4}+1)}$ via (32) and calculate
$\text{obj}\big{(}\boldsymbol{f}^{(t_{4}+1)}\big{)}$
$\bullet$ Set
$\epsilon_{4}=\frac{\big{|}\text{obj}\big{(}\boldsymbol{f}^{(t_{4}+1)}\big{)}-\text{obj}\big{(}\boldsymbol{f}^{(t_{4})}\big{)}\big{|}}{\big{|}\text{obj}\big{(}\boldsymbol{f}^{(t_{4}+1)}\big{)}\big{|}}$,
$t_{4}\leftarrow t_{4}+1$
end while
3\. Output optimal $\\{\boldsymbol{\alpha}_{k}\\}^{*}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{*}$ and ${\boldsymbol{\Theta}^{I}}^{*}$
$\\{\boldsymbol{\alpha}_{k}\\}^{*}\leftarrow\\{\boldsymbol{\alpha}_{k}\\}^{(t_{4})}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{*}\leftarrow\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{4})}$
and ${\boldsymbol{\Theta}^{I}}^{*}\leftarrow{\boldsymbol{\Theta}^{I}}^{(t_{4})}$
Algorithm 5 Alternative optimization of the WET and computing phases, given
the time allocation
0: $t_{max}$, $\epsilon$, $K$, $M$, $N$, $T$, $\eta$, $c_{k}$, $\kappa$,
$f_{max}$, $p_{c}$, $\Gamma$, $L_{k}$, $\\{\boldsymbol{h}^{d}_{k}\\}$,
$\\{\boldsymbol{V}_{k}\\}$ and $\hat{\tau}$
0: $\boldsymbol{P}^{E}$, $\boldsymbol{\Theta}^{E}$, $\boldsymbol{f}$,
$\\{\boldsymbol{\alpha}_{k}\\}$ $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$ 1\. Initialization
$\bullet$ Initialize $t_{5}=0$; $\epsilon_{5}=1$;
$\tilde{\boldsymbol{\Theta}}^{E}$
$\bullet$ Initialize $\boldsymbol{f}^{(0)}$,
$\\{\boldsymbol{\alpha}_{k}\\}^{(0)}$, $\\{\boldsymbol{p}^{I}_{k}\\}^{(0)}$
and ${\boldsymbol{\Theta}^{I}}^{(0)}$ following Section III-A
$\bullet$ Given $\boldsymbol{f}^{(0)}$, $\\{\boldsymbol{\alpha}_{k}\\}^{(0)}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{(0)}$ and ${\boldsymbol{\Theta}^{I}}^{(0)}$,
find ${\boldsymbol{P}^{E}}^{(0)}$ and ${\boldsymbol{\Theta}^{E}}^{(0)}$ by
solving Problem $\mathcal{P}2$ via Algorithm 2 2\. Alternative optimization of
$\boldsymbol{P}^{E}$, $\boldsymbol{\Theta}^{E}$, $\boldsymbol{f}$,
$\\{\boldsymbol{\alpha}_{k}\\}$ $\\{\boldsymbol{p}^{I}_{k}\\}$ and
$\boldsymbol{\Theta}^{I}$
while $t_{5}<t_{\text{max}}$ $\&\&$ $\epsilon_{5}>\epsilon$ do
$\bullet$ Given ${\boldsymbol{P}^{E}}^{(t_{5})}$,
${\boldsymbol{\Theta}^{E}}^{(t_{5})}$ and
$\tilde{\boldsymbol{\Theta}}^{I}={\boldsymbol{\Theta}^{I}}^{(t_{5})}$, find
$\boldsymbol{f}^{(t_{5}+1)}$, $\\{\boldsymbol{\alpha}_{k}\\}^{(t_{5}+1)}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{5}+1)}$ and
${\boldsymbol{\Theta}^{I}}^{(t_{5}+1)}$ by solving Problem $\mathcal{P}3$
using Algorithm 4
$\bullet$ Given $\boldsymbol{f}^{(t_{5}+1)}$,
$\\{\boldsymbol{\alpha}_{k}\\}^{(t_{5}+1)}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{5}+1)}$,
${\boldsymbol{\Theta}^{I}}^{(t_{5}+1)}$ and
$\tilde{\boldsymbol{\Theta}}^{E}={\boldsymbol{\Theta}^{E}}^{(t_{5})}$, find
${\boldsymbol{P}^{E}}^{(t_{5}+1)}$ and ${\boldsymbol{\Theta}^{E}}^{(t_{5}+1)}$
by solving Problem $\mathcal{P}2$ via Algorithm 2
$\bullet$ Set
$\epsilon_{5}=\frac{\big{|}\text{obj}^{(t_{5}+1)}-\text{obj}^{(t_{5})}\big{|}}{\big{|}\text{obj}^{(t_{5}+1)}\big{|}}$,
$t_{5}\leftarrow t_{5}+1$
end while
3\. Output optimal ${\boldsymbol{P}^{E}}^{*}$, ${\boldsymbol{\Theta}^{E}}^{*}$,
$\boldsymbol{f}^{*}$, $\\{\boldsymbol{\alpha}_{k}\\}^{*}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{*}$ and ${\boldsymbol{\Theta}^{I}}^{*}$
${\boldsymbol{P}^{E}}^{*}\leftarrow{\boldsymbol{P}^{E}}^{(t_{5})}$,
${\boldsymbol{\Theta}^{E}}^{*}\leftarrow{\boldsymbol{\Theta}^{E}}^{(t_{5})}$,
$\boldsymbol{f}^{*}\leftarrow\boldsymbol{f}^{(t_{5})}$,
$\\{\boldsymbol{\alpha}_{k}\\}^{*}\leftarrow\\{\boldsymbol{\alpha}_{k}\\}^{(t_{5})}$,
$\\{\boldsymbol{p}^{I}_{k}\\}^{*}\leftarrow\\{\boldsymbol{p}^{I}_{k}\\}^{(t_{5})}$
and ${\boldsymbol{\Theta}^{I}}^{*}\leftarrow{\boldsymbol{\Theta}^{I}}^{(t_{5})}$
## IV Numerical Results
In this section, we present pertinent numerical results for evaluating the
performance of our proposed IRS-aided WP-MEC design. A top view of the HAP,
of the wireless devices and of the IRS is shown in Fig. 3, where the HAP’s
coverage radius is $R$ and the IRS is deployed at the cell edge. The
locations of the wireless devices are assumed to obey the uniform
distribution within a circle, whose radius is $r$ and whose location is
specified by $d_{1}$ and $d_{2}$. The default settings are specified in the
block of “System model” in Table I. The efficiency of the energy harvesting
$\eta$ is set to $0.5$. As for the communication channel, we consider both
the small-scale fading and the large-scale path loss. More explicitly, the
small-scale fading is assumed to be independent and identically distributed
(i.i.d.) and obey the complex Gaussian distribution associated with zero mean
and unit variance, while the path loss in $\rm{dB}$ is given by
$\displaystyle\text{PL}=\text{PL}_{0}-10\beta\log_{10}\big{(}\frac{d}{d_{0}}\big{)},$
(33)
where $\text{PL}_{0}$ is the path loss at the reference distance $d_{0}$;
$\beta$ and $d$ denote the path loss exponent and the distance of the
communication link, respectively. Here we use $\beta_{ua}$, $\beta_{ui}$ and
$\beta_{ia}$ to represent the path loss exponents of the links between the
wireless devices and the HAP, between the wireless devices and the IRS, as
well as between the IRS and the HAP, respectively. (We assume that the
channel of the direct link between the HAP and the devices is hostile due to
an obstruction, which can be partially avoided by the IRS-reflection link;
hence, we set a higher value for $\beta_{ua}$.) Furthermore, additive white
Gaussian noise with zero mean and variance $\sigma^{2}$ is imposed both on
the energy signals and on the offloading signals. The default values of the
parameters are set in the block of
of $L_{k}$ and $c_{k}$ are assumed to obey the uniform distribution. The
offloaded tasks are assumed to be computed in parallel by a large number of
CPUs at the edge computing node, where the computing capability of each CPU is
$f_{e}=10^{9}\leavevmode\nobreak\ \rm{cycle/s}$. Then, the energy consumption
at the edge for processing the offloaded computational tasks can be calculated
as $\vartheta=c\kappa f_{e}^{2}=5\times 10^{-8}\leavevmode\nobreak\
\rm{Joule/bit}$.
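The large-scale model (33) and the edge-energy figure $\vartheta$ can be reproduced by the short sketch below; the names are illustrative, and $c=500\ \rm{cycle/bit}$ is taken at the upper end of the range of $c_{k}$.

```python
import numpy as np

def path_loss_db(d, beta, PL0=30.0, d0=1.0):
    """Large-scale path loss of (33) in dB."""
    return PL0 - 10.0 * beta * np.log10(d / d0)

# sanity check of vartheta = c * kappa * f_e^2 quoted in the text
c, kappa, f_e = 500, 1e-28, 1e9        # c assumed at the upper end of c_k
print(c * kappa * f_e ** 2)            # -> 5e-08 Joule/bit, matching Table I
```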
Figure 3: An illustration of the locations of the HAP, of the devices and of the IRS in the IRS-aided WP-MEC system.

Table I: Default simulation parameter settings

Description | Parameter and Value
---|---
System model [27] | $M=16$, $N=30$, $K=3$, $T=10\ \rm{ms}$, $R=12\ \rm{m}$, $d_{1}=11\ \rm{m}$, $d_{2}=1\ \rm{m}$, $r=1\ \rm{m}$
Wireless energy transfer model | $\eta=0.5$
Communication model [33] | $B=312.5\ \rm{KHz}$, $\text{PL}_{0}=30\ \rm{dB}$, $d_{0}=1\ \rm{m}$, $\beta_{ua}=3.5$, $\beta_{ui}=2.2$, $\beta_{ia}=2.2$, $L^{d}_{k}=4$, $L_{1}=2$, $L_{2,k}=3$, $\sigma^{2}=1.24\times 10^{-12}\ \rm{mW}$, $\Gamma=2$
Computing model [5] | $L_{k}=[15,20]\ \rm{Kbit}$, $c_{k}=[400,500]\ \rm{cycle/bit}$, $f_{max}=1\times 10^{8}\ \rm{cycle/s}$, $\kappa=10^{-28}$, $\vartheta=5\times 10^{-8}\ \rm{Joule/bit}$
Convergence criterion | $\epsilon=0.001$
Apart from our algorithms developed in Section III, we also consider two
benchmark schemes for comparison. Let us describe these three schemes as
follows.
* •
_With IRS_ : In this scheme, we optimize both the power allocation
$\boldsymbol{p}^{E}$ and the IRS reflection coefficients
$\boldsymbol{\Theta}^{E}$ at the WET phase, as well as the local CPU
frequency at the devices $\boldsymbol{f}$, the sub-band-device association
$\\{\boldsymbol{\alpha}_{k}\\}$, the power allocation
$\\{\boldsymbol{p}^{I}_{k}\\}$ and the IRS reflection coefficients
$\boldsymbol{\Theta}^{I}$ at the computing phase, relying on Algorithm 5.
* •
_IRS RandPhase_ : The power allocation $\boldsymbol{p}^{E}$ at the WET phase,
as well as the local CPU frequency at the devices $\boldsymbol{f}$, the
sub-band-device association $\\{\boldsymbol{\alpha}_{k}\\}$ and the power
allocation $\\{\boldsymbol{p}^{I}_{k}\\}$ at the computing phase are
optimized with the aid of Algorithm 5, while we skip the design of the IRS
reflection coefficients $\boldsymbol{\Theta}^{E}$ and
$\boldsymbol{\Theta}^{I}$, whose amplitude responses are set to $1$ and whose
phase shifts are randomly set in the range of $[0,2\pi)$ obeying the uniform
distribution.
* •
_Without IRS_ : The composite channel
$\boldsymbol{f}^{H}_{m}\boldsymbol{V}_{k}\boldsymbol{\Theta}$ is set to $0$
both for the WET and for the computation offloading. The power allocation
$\boldsymbol{p}^{E}$ at the WET phase, as well as the local CPU frequency at
the devices $\boldsymbol{f}$, the sub-band-device association
$\\{\boldsymbol{\alpha}_{k}\\}$ and the power allocation
$\\{\boldsymbol{p}^{I}_{k}\\}$ at the computing phase are optimized with the
aid of Algorithm 5, while we skip the optimization of the IRS reflection
coefficients $\boldsymbol{\Theta}^{E}$ and $\boldsymbol{\Theta}^{I}$.
Let us continue by presenting the selection of the time allocation, sub-band
allocation in the WET and the computing phases, as well as the impact of
diverse environment settings, as follows.
### IV-A Selection of the Time Allocation
Figure 4: Simulation results of the total energy consumption versus the time
allocation $\tau$. The parameter settings are specified in Table I.
In order to find an appropriate time allocation for our WP-MEC system, we
depict the total energy consumption (the OF of Problem $\mathcal{P}1$) versus
the time allocation $\tau$ in Fig. 4. It can be seen that the total energy
consumption becomes higher upon increasing $\tau$ for all these three schemes
considered. The reason behind it is explained as follows. For a given volume
of the computational task to be offloaded within the time duration of $T$, an
increase of $\tau$ implies a higher offloading rate required by computation
offloading, while, as seen from (5), the computation offloading rate is a
logarithmic function of the offloading power. Hence, we have
to substantially increase the transmit power of computation offloading to
provide the extra offloading rate required by the increase of $\tau$, which
results in a higher energy consumption at the wireless devices. Furthermore,
since the
energy required by WET is determined by the energy consumption at the wireless
devices, the total energy consumption becomes higher upon increasing $\tau$.
Based on this discussion, it seems that we should select the value of $\tau$
as small as possible. However, this may lead to an upsurge of the power
consumption for WET, which might exceed the maximum allowable transmit power
at the HAP. Therefore, as a compromise, for the environment associated with
the default settings we select $\tau=0.1$, beyond which the total energy
consumption becomes increasingly higher along with $\tau$.
### IV-B Joint Sub-Band and Power Allocation in the WET and Computing Phases
Figure 5: Joint sub-band and power allocation for the WET and the computing
phases, relying on Algorithm 5, where the number of bits to be processed is
set to $20\ \rm{Kbits}$ for each of the three wireless devices. (a) The
channel gain at the WET phase; (b) The joint sub-band and power allocation at
the WET phase; (c) The channel gain at the computing phase; (d) The joint
sub-band and power allocation at the computing phase. The parameter settings
are specified in Table I.
Fig. 5 illustrates the channel gain as well as the joint sub-band and power
allocation both for the WET and the computing phases. Our observations are as
follows. Firstly, as shown in Fig. 5b, only the $5$-th sub-band is activated
for WET. This allocation is jointly determined by the power consumption of
the computing phase and by the channel gain in the WET phase. Specifically,
with reference to Fig. 5d, Device 3 requires the highest power consumption
for computation offloading. Given that the overall performance is dominated
by the device having the highest energy consumption, we may reduce the energy
consumption of WET by activating the sub-band associated with the highest
channel gain of Device 3, which is the $5$-th sub-band as shown in Fig. 5a.
Secondly, with reference to Fig. 5c, it can be observed that the power
allocation in Fig. 5d obeys the water-filling principle for each device,
i.e. a higher power is allocated to the sub-band possessing a higher channel
gain. This corresponds to the power allocation obtained in (21). Thirdly,
comparing Fig. 5a and Fig. 5c, we can see that the channel gains in the WET
and the computing phases are different for each device after we optimize the
IRS reflection coefficients, which consolidates our motivation to conceive
separate IRS designs for the WET and the computing phases.
### IV-C Performance of the Proposed Algorithms
In order to evaluate the benefits of employing an IRS in WP-MEC systems, we
compare the performance of our proposed algorithms with that of the benchmark
schemes, under various settings of the number of IRS reflection elements, of
the device location, of the path loss exponent of the IRS-related channel, and
of the energy consumption per bit at the edge, as follows.
#### IV-C1 Impact of the Number of IRS Reflection Elements
Figure 6: Simulation results of the total energy consumption versus the
number of IRS reflection elements $N$. The rest of the parameters are
specified in Table I.
Figure 7: Simulation results of the total energy consumption versus the
distance between the HAP and the wireless device circle $d_{1}$. Other
parameters are set in Table I.
Fig. 6 shows the simulation results of the total energy consumption versus the
number of IRS reflection elements for the three schemes considered. We have
the following observations. Firstly, the performance gap between the scheme
“Without IRS” and the scheme “IRS RandPhase” increases along with $N$, which
implies that the IRS is capable of assisting the energy consumption reduction
in the WP-MEC system, even without carefully designing the IRS reflection
coefficients. This is due to the so-called virtual array gain induced by the
IRS, as mentioned in Section I. Secondly, the scheme “With IRS” outperforms
the scheme “IRS RandPhase”, which indicates that our sophisticated design of
IRS reflection coefficients may provide the so-called passive beamforming gain
for computation offloading. Note that different from the conventional MEC
systems [38] where WET is not employed, these two types of gain are exploited
twice in WP-MEC systems (during the WET and computing phases, respectively).
As such, IRSs are capable of efficiently reducing the energy consumption in
WP-MEC systems.
#### IV-C2 Impact of the Distance between the Device Circle and the IRS
Fig. 7 presents the simulation results of the total energy consumption versus
the distance $d_{1}$ between the HAP and the wireless device circle. Our
observations are as follows. Firstly, the two IRS-aided schemes do not show
any visible advantage over the scheme of “Without IRS” when we have
$d_{1}<6\ \rm{m}$, which indicates that the IRS has a limited coverage.
Secondly, the benefit of deploying the IRS becomes visible at
$d_{1}>9\ \rm{m}$ in the scheme of “IRS RandPhase”, while the advantage of
the “With IRS” scheme is already notable at $d_{1}=7\ \rm{m}$. This
observation implies that our sophisticated design of the IRS reflection
coefficients is capable of extending the coverage of the IRS.
Figure 8: Simulation results of the total energy consumption versus the path
loss exponent of the IRS reflection link $\beta$, where we set
$\beta_{ui}=\beta_{ia}=\beta$. Other parameters are set in Table I.
Figure 9: Simulation results of the total energy consumption versus the energy
consumption per bit at the edge. Other parameters are set in Table I.
#### IV-C3 Impact of Path Loss Exponent
Fig. 8 depicts the simulation results of the total energy consumption versus
the path loss exponent of the IRS-related links. It can be seen that the
total energy consumption increases upon encountering a higher path loss
exponent, since a higher $\beta$ leads to a lower channel gain of the
IRS-reflected link. This observation provides an important engineering
insight: the locations of IRSs should be carefully selected for avoiding
obstacles.
#### IV-C4 Impact of the Energy Consumption at the Edge
Fig. 9 shows the simulation results of the total energy consumption versus
the energy consumption per bit at the edge node. It can be observed that the
advantage of deploying the IRS is prominent for small values of $\vartheta$,
while the benefit becomes smaller upon increasing $\vartheta$. The reason is
explained as follows. The OF of Problem $\mathcal{P}1$ is the combination of
the energy consumption of WET and of processing the offloaded computational
tasks. If the energy consumption per bit at the edge node is small, the
energy consumption of WET plays the dominant role in the total energy
consumption. In this case, the benefit of employing the IRS is significant.
By contrast, if $\vartheta$ becomes higher, the total energy consumption is
dominated by that at the edge. In this case, although the energy consumption
of WET can be reduced by deploying IRSs, this reduction becomes marginal.
## V Conclusions
To reduce the energy consumption of WP-MEC systems, we have proposed an
IRS-aided WP-MEC scheme and formulated an energy minimization problem. A
sophisticated algorithm has been developed for optimizing the settings both
of the WET and of the computing phases. Our numerical results reveal the
following insights. Firstly, the employment of IRSs is capable of
substantially reducing the energy consumption of the WP-MEC system,
especially when the IRS is deployed in the vicinity of the wireless devices.
Secondly, the energy consumption decreases upon increasing the number of IRS
reflection elements. Thirdly, the locations of IRSs should be carefully
selected for avoiding obstacles. These results inspire us to conceive a
computational rate maximization design for the IRS-aided WP-MEC system as
future work.
|
2024-09-04T02:54:59.388750 | 2020-03-11T20:41:06 | 2003.05514 | {
"authors": "Eleftherios Kastis and Stephen Power",
"full_text_license": null,
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"provenance": "arxiv-papers-0000.json.gz:26179",
"submitter": "Stephen C. Power",
"url": "https://arxiv.org/abs/2003.05514"
} | arxiv-papers | # Projective plane graphs and 3-rigidity
E. Kastis and S.C. Power
Dept. Math. Stats., Lancaster University, Lancaster LA1 4YF, U.K.
<EMAIL_ADDRESS> <EMAIL_ADDRESS>
###### Abstract.
A ${\mathcal{P}}$-graph is a simple graph $G$ which is embeddable in the real
projective plane ${\mathcal{P}}$. A $(3,6)$-tight ${\mathcal{P}}$-graph is
shown to be constructible from one of 8 uncontractible ${\mathcal{P}}$-graphs
by a sequence of vertex splitting moves. Also it is shown that a
${\mathcal{P}}$-graph is minimally generically 3-rigid if and only if it is
$(3,6)$-tight. In particular this characterisation holds for graphs that are
embeddable in the Möbius strip.
2010 Mathematics Subject Classification. 52C25 51E15
Key words and phrases: projective plane, embedded graphs, geometric rigidity
This work was supported by the Engineering and Physical Sciences Research
Council [EP/P01108X/1]
## 1\. Introduction
Let $G$ be the graph of a triangulated sphere. Then an associated bar-joint
framework $(G,p)$ in ${\mathbb{R}}^{3}$ is known to be minimally rigid if the
placement $p$ of the vertices is strictly convex (Cauchy [4]) or if
the placement is generic. The latter case follows from Gluck’s result [12]
that any generic placement is in fact infinitesimally rigid. An equivalent
formulation of Gluck’s theorem asserts that if $G$ is a simple graph which is
embeddable in the sphere then $G$ is minimally 3-rigid if and only if it
satisfies a $(3,6)$-tight sparsity condition. We obtain here the exact
analogue of this for simple graphs that are embeddable in the real projective
plane ${\mathcal{P}}$. The proof rests on viewing these graphs as partial
triangulations and deriving inductive arguments based on edge contractions for
certain admissible edges. Accordingly we may state this result in the
following form. An immediate corollary is that this combinatorial
characterisation also holds for triangulated Möbius strips.
A graph $G$ is _3-rigid_ if its generic bar-joint frameworks in
${\mathbb{R}}^{3}$ are infinitesimally rigid and is _minimally 3-rigid_ if, in
addition, no proper spanning subgraph is 3-rigid.
###### Theorem 1.1.
Let $G$ be a simple graph associated with a partial triangulation of the real
projective plane. Then $G$ is minimally $3$-rigid if and only if $G$ is
$(3,6)$-tight.
Recall that a $(3,6)$-tight graph $G=(V,E)$ is one that satisfies the Maxwell
count $|E|=3|V|-6$ and the sparsity condition $|E^{\prime}|\leq
3|V^{\prime}|-6$ for subgraphs $G^{\prime}$ with at least 3 vertices. In
particular it follows from the Maxwell condition that such a graph falls 3
edges short of a full (possibly nonsimple) triangulation of ${\mathcal{P}}$.
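As a small worked check of these conditions (our illustration, using a graph which reappears in Section 4), consider $K_{5}-e$, the graph obtained from $K_{5}$ by deleting a single edge. Here $|V|=5$ and $|E|=10-1=9$, so that

$|E|=9=3|V|-6,$

and every subgraph $G^{\prime}$ with $|V^{\prime}|\geq 3$ satisfies $|E^{\prime}|\leq 3|V^{\prime}|-6$, since an induced subgraph on $3$ or $4$ vertices has at most $3$ or $6$ edges respectively. Thus $K_{5}-e$ is $(3,6)$-tight.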
The proof of Theorem 1.1 depends heavily on our main result, Theorem 6.1,
which is a purely combinatorial constructive characterisation of the
${\mathcal{P}}$-graphs which are (3,6)-tight. A key step is the identification
of edge contraction moves, for certain edges that lie in two 3-cycle faces,
such that the $(3,6)$-sparsity condition is preserved. This is done in Section
3 by exploiting the implicit topological structure of the graphs. The
associated contraction sequences must terminate and the terminal graphs are
said to be _irreducible_. They have the defining property that every
contractible edge lies on a critical $4$-, $5$- or $6$-cycle. For the
remainder of the proof of Theorem 6.1 we show that an irreducible graph has no
contractible edges (Section 5) and we determine the uncontractible graphs
(Section 4). Determining the uncontractibles requires an extensive case-by-
case analysis leading to the 8 “base” graphs given in Figures 3, 4, 5.
The determination of construction schemes and their base graphs for various
classes of graphs is of general interest, both for embedded graph theory and
for the rigidity of bar-joint frameworks. We note, for example, that Barnette
[1] employed vertex splitting moves for the construction of triangulations of
2-manifolds and showed that there are 2 (full) triangulations of
${\mathcal{P}}$ which are uncontractible. Also, Barnette and Edelson [2], [3]
have shown that all 2-manifolds have finitely many minimal uncontractible
triangulations. Our construction theorem is in a similar spirit to this and we
expect our reduction methods, involving critical cycles and minimum hole
incidence degree, for example, to be useful for more general surface graphs
and for other sparsity classes. In particular, for $(3,6)$-tight
${\mathcal{P}}$-graphs we show that the irreducibles are the uncontractibles
and it would be interesting to determine to what extent this phenomenon is
true for other surfaces and sparsity classes.
We define a _triangulated surface graph_ associated with a classical surface
${\mathcal{M}}$, with or without boundary, and we represent embeddings of these
graphs, and their connected subgraphs (${\mathcal{M}}$-graphs), in terms of
_face graphs_. A face graph is a finite connected planar graph with a
specified pairing of some or all of the edges in the outer boundary.
Identifying the paired edges gives an identification graph $G=(V,E)$ together
with a set $F$ of facial 3-cycles inherited from the finite planar graph. See
Definitions 2.1, 2.2. In Section 3 we identify the obstacles, in terms of
critical cycles of edges, which prevent edge contraction moves from preserving
the sparsity condition. The determination in Section 4 of the 8 uncontractible
${\mathcal{P}}$-graphs is given in several stages, based on the nature of the
“holes” in their partial triangulation. They may have one hole with 6-cycle
boundary, two holes with boundary cycle lengths 5 and 4, or three holes, each
with a 4-cycle boundary. Also we give a useful index for the successive
determination of these uncontractible base graphs, namely the minimum hole
incidence degree $h(G)$ (Definition 4.3).
Since Whiteley’s demonstration [14] that vertex splitting preserves generic
rigidity this construction move has become an important tool in combinatorial
rigidity theory [11]. See for example the more recent studies of generic
rigidity in the case of graphs for modified spheres [7], [8], [5], [13], and
in the case of a partially triangulated torus [6]. The proof of Theorem 1.1
given in Section 6 follows quickly from Whiteley’s theorem, Theorem 6.1, and
the 3-rigidity of the 8 base graphs.
## 2\. Graphs in Surfaces
Let ${\mathcal{M}}$ be a classical surface, possibly with boundary. Then a
_surface graph for ${\mathcal{M}}$_ is a triple $G=(V,E,F)$ where $(V,E)$ is a
simple graph, $F$ is a set of $3$-cycles of edges, called facial 3-cycles, and
where there exists a faithful embedding of $G$ in ${\mathcal{M}}$ for which
the facial 3-cycles correspond to the 3-sided faces in the embedding. A surface
graph for ${\mathcal{M}}$, which we also refer to as an ${\mathcal{M}}$-graph,
can thus be viewed as a simple graph obtained from a full triangulation of
${\mathcal{M}}$ by discarding vertices, edges and faces. Also, $G$ is a
_triangulated surface graph for ${\mathcal{M}}$_ if the union of the embedded
faces is equal to ${\mathcal{M}}$. The following equivalent definition, based
on simplicial complexes rather than surfaces, is combinatorial and so more
elementary.
###### Definition 2.1.
A _triangulated surface graph_ is a graph $G=G(M)=(V,E,F)$ which is simple and
is determined by the $1$-skeleton and the $2$-simplexes of a finite simplicial
complex $M$ where $M$ has the following properties.
1. (i)
$M$ consists of a finite set of $2$-simplexes $\sigma_{1},\sigma_{2},\dots$
together with their $1$-simplexes and $0$-simplexes.
2. (ii)
Every $1$-simplex lies in at most two $2$-simplexes.
Condition (i) implies that each 1-simplex lies in at least one 2-simplex. It
follows that $M$ can be viewed as a _combinatorial surface_ and we define
${\mathcal{M}}={\mathcal{M}}(M)={\mathcal{M}}(G)$ to be the classical
topological surface, possibly with boundary, which is determined by $M$, the
simplicial complex [9]. Evidently, $G$ is a triangulated surface graph for
${\mathcal{M}}$.
Classical compact surfaces are classified up to homeomorphism by combinatorial
surfaces and, moreover, combinatorial surfaces arise from triangulated polygon
graphs (also called triangulated discs) by means of an identification of
certain pairs of boundary edges [9].
We now formally define such labelled triangulated discs which we refer to as
_face graphs_.
###### Definition 2.2.
A _face graph_ for a triangulated surface graph is a pair $(B,\lambda)$ where
$B$ is the planar graph of a triangulated disc and $\lambda$ is a partition of
the _boundary graph_ $\partial B$ of $B$, such that each set of the partition
has $1$ or $2$ edges, and the paired edges of the partition are directed.
A face graph $(B,\lambda)$ defines a simplicial complex $M$, with
$1$-simplexes provided by edges and identified edge pairs, and 2-simplexes
provided by the facial 3-cycles. Also, if the boundary graph of $B$ is a
3-cycle and $\lambda$ is trivial then this 3-cycle defines a 2-simplex of $M$.
If the identification graph $G=B/\lambda$ is a simple graph then $M$ is of the
type given in Definition 2.1, and so $G$ is a triangulated surface graph
$G=(V,E,F)$.
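To make the identification $G=B/\lambda$ concrete, the following minimal Python sketch (an illustration, not part of the paper; the data representation is an assumption) merges the endpoint vertices of each directed pair of boundary edges using a simple union-find, and flags loop edges, whose presence means that the identification graph is not simple.

```python
# Illustrative sketch: form the identification graph B/lambda by merging
# the endpoints of each directed pair of boundary edges.

def find(parent, x):
    # Representative of x, with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def identification_graph(vertices, edges, edge_pairs):
    """vertices: vertex labels; edges: iterable of pairs (u, v);
    edge_pairs: pairs ((u1, v1), (u2, v2)) of directed boundary edges,
    identifying u1 with u2 and v1 with v2."""
    parent = {v: v for v in vertices}
    for (u1, v1), (u2, v2) in edge_pairs:
        parent[find(parent, u1)] = find(parent, u2)
        parent[find(parent, v1)] = find(parent, v2)
    quotient, simple = set(), True
    for u, v in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            simple = False                     # loop edge: quotient not simple
        else:
            quotient.add(frozenset((ru, rv)))  # paired edges collapse to one
    return quotient, simple
```

A fuller simplicity test would also check that no two unpaired edges of $B$ become parallel edges in the quotient; here the edge set, being a set of unordered pairs, silently collapses such coincidences.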
### 2.1. ${\mathcal{M}}$-graphs
We are concerned with simple graphs that can be embedded in a connected classical
surface ${\mathcal{M}}$. More precisely we shall be concerned with embedded
graphs, which we refer to as ${\mathcal{M}}$-graphs, or surface graphs, and we
can define them directly in terms of more general face graphs
$(B_{0},\lambda)$, where $B_{0}\subseteq B$ with $\partial B\subset B_{0}$ and
$(B,\lambda)$ is as in the previous definition. Thus a surface graph has the
form $G=B_{0}/\lambda$ where $B_{0}$ is obtained from $B$ by the removal of
the interior edges of $k$ interior-disjoint triangulated subdiscs of $B$. We
refer to $k$ as the _number of holes_ of the embedded graph $G$. When $k=1$ we
refer to $B_{0}$ as an _annular face graph_.
Figure 1. A face graph $(B_{0},\lambda)$ for a ${\mathcal{P}}$-graph.
Figure 1 shows the annular face graph $(B_{0},\lambda)$ for a surface graph
$G=B_{0}/\lambda$. The labelling of outer boundary edges and vertices
determines how pairs of edges are identified. A planar triangulation of the
interior of the inner 6-cycle gives a face graph $(B,\lambda)$, and
$S=B/\lambda$ is a triangulated surface graph if and only if $S$ is simple. In
view of the identifications the topological surface ${\mathcal{M}}(S)$ is the
real projective plane ${\mathcal{P}}$ and $G$ is a ${\mathcal{P}}$-graph.
In this example the surface graph $G$ happens to be a (fully) triangulated
surface graph for the Möbius strip. However, in general a surface graph may
have “exposed” edges, that is, edges that belong to no facial 3-cycles, and so
in this case the surface graph will not be a triangulated surface graph for
any classical surface with boundary.
## 3\. Contraction moves and (3,6)-sparsity.
Let $G=(V,E,F)$ be a surface graph. An edge of $G$ is of type $FF$ if it is
contained in two facial $3$-cycles and an $FF$ edge is _contractible_ if it is
not contained in any non-facial $3$-cycle. For such an edge $e=uv$ there is a
natural contraction move $G\to G^{\prime}$ on the graph $G$, corresponding to
a contraction of $e$, merging $u$ and $v$ to a single vertex, leading to a
surface graph $G^{\prime}=(V^{\prime},E^{\prime},F^{\prime})$ where
$|V^{\prime}|=|V|-1,|E^{\prime}|=|E|-3,|F^{\prime}|=|F|-2$. We also say that
$G$ is _contractible_ if it has a contractible $FF$ edge.
To define formally the contracted graph $G^{\prime}$, let $e=vw$ be a
contractible $FF$ edge in $G$ and let $avw$ and $bvw$ be the two facial
3-cycles which contain $e$. Then $G^{\prime}$ is obtained from $G$ by an _edge
contraction_ on $e=vw$ if $G^{\prime}$ is obtained by (i) deleting the edges
$aw$ and $bw$, (ii) replacing all remaining edges of the form $xw$ with $xv$,
(iii) deleting the edge $e$ and the vertex $w$. That $G^{\prime}$ is simple
follows from the fact that a contractible $FF$ edge does not lie on a
nonfacial 3-cycle.
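For concreteness, the following minimal Python sketch (ours; the frozenset representation is an assumption) carries out steps (i)-(iii) on an edge list and face list, the hypothesis that $e=vw$ is a contractible $FF$ edge being what guarantees that the result is again simple.

```python
# Sketch of the contraction G -> G' of a contractible FF edge e = vw whose
# two incident facial 3-cycles are avw and bvw (steps (i)-(iii) above).

def contract_ff_edge(edges, faces, v, w, a, b):
    E = set(edges)
    E.discard(frozenset((a, w)))   # (i) delete the edge aw
    E.discard(frozenset((b, w)))   # (i) delete the edge bw
    E.discard(frozenset((v, w)))   # (iii) delete the edge e = vw
    # (ii) replace each remaining edge xw by xv; no parallel edges arise
    # because a contractible FF edge lies on no nonfacial 3-cycle.
    E = {frozenset(v if x == w else x for x in e) for e in E}
    # Drop the two faces avw and bvw and relabel w as v in the others;
    # the vertex w then no longer occurs.
    gone = (frozenset((a, v, w)), frozenset((b, v, w)))
    F = {frozenset(v if x == w else x for x in f) for f in faces if f not in gone}
    return E, F
```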
Given an edge contraction move $G\to G^{\prime}$ we note that the inverse
move, recovering $G$ from $G^{\prime}$, is a _vertex splitting move_ at $v$
which in particular introduces a new vertex $w$ and the new $FF$ edge $vw$.
Such a vertex splitting move $G^{\prime}\to G$, which might be thought of as
being locally planar, creates the new surface graph $G$ for the surface
${\mathcal{M}}$ from a given surface graph $G^{\prime}$ for ${\mathcal{M}}$.
### 3.1. (3,6)-sparse ${\mathcal{P}}$-graphs.
If $G=(V,E)$ is a graph then its _freedom number_ is defined to be
$f(G)=3|V|-|E|$. A graph $G$ is _$(3,6)$-sparse_ if $f(G^{\prime})\geq 6$ for
any subgraph $G^{\prime}$ with at least 3 vertices, and is _$(3,6)$-tight_ if
it is $(3,6)$-sparse and $f(G)=6$. In particular a $(3,6)$-sparse graph is a
simple graph, with no loop edges and no parallel edges.
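These definitions can be verified by brute force for small graphs; the following Python sketch (ours, not from the paper) checks $(3,6)$-sparsity over all induced subgraphs, which suffices because deleting edges from a subgraph only increases its freedom number. The search is exponential in $|V|$ and so is intended only for graphs of the size appearing in this paper.

```python
# Brute-force sketch of the freedom number and of (3,6)-sparsity/tightness.
from itertools import combinations

def freedom(n, edges):
    # f = 3|V| - |E| for a graph with n vertices and the given edges.
    return 3 * n - len(edges)

def is_36_sparse(vertices, edges):
    V = list(vertices)
    E = [frozenset(e) for e in edges]
    for k in range(3, len(V) + 1):
        for S in combinations(V, k):
            Sset = set(S)
            induced = [e for e in E if e <= Sset]
            if freedom(k, induced) < 6:
                return False
    return True

def is_36_tight(vertices, edges):
    V, E = list(vertices), list(edges)
    return freedom(len(V), E) == 6 and is_36_sparse(V, E)

# Example: K_4 is (3,6)-tight, since 3*4 - 6 = 6 = |E(K_4)|.
# is_36_tight(range(4), combinations(range(4), 2))  # -> True
```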
Let $B$ be a triangulated disc such that the boundary cycle $\partial B$ is of
even length $2r$. With the pairing partition $\lambda$ of opposite edges,
directed in cyclic order, the pair $(B,\lambda)$ is a face graph. If
$S=B/\lambda$ is simple then $S$ is a triangulated surface graph for the real
projective plane ${\mathcal{P}}$. Also we observe that the freedom number
$f(B)$ is equal to $6+(2r-3)$. This follows since $B$ may be viewed as a
triangulated sphere (which has freedom number $6$) with $2r-3$ edges removed.
Noting that $S$ is related to $B$ by the loss of $r$ vertices and $r$ edges it
follows that
$f(S)=(3+2r)-3r+r=3.$
Let $G$ be a surface graph for ${\mathcal{P}}$, the real projective plane,
which is determined by an annular face graph $(B_{0},\lambda)$ where the inner
boundary cycle of edges has length $s$. Then $f(G)=f(S)+(s-3)$ and in
particular $G$ satisfies the so-called _Maxwell count_ $f(G)=6$ if and only if
$s=6$.
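As a worked instance of these counts (our arithmetic), take $r=3$ and a single hole with $s=6$, as in Figure 1. Then

$f(B)=6+(2\cdot 3-3)=9,\qquad f(S)=9-3\cdot 3+3=3,\qquad f(G)=3+(6-3)=6,$

so the graph of Figure 1 satisfies the Maxwell count.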
###### Lemma 3.1.
Let $G$ be a triangulated surface graph for the Möbius strip. Then $G$ is a
surface graph for ${\mathcal{P}}$. Also $G$ satisfies the Maxwell count if and
only if the boundary graph $\partial G$ is the graph of a simple 6-cycle.
###### Proof.
Let $G(M)=(V,E,F)$ be a triangulated surface graph given by a finite
simplicial complex $M$ for the Möbius strip, as in Definition 2.1. Then $G(M)$
is determined by a face graph $(B,\mu)$ where $\mu$ is obtained from an
identification of two vertex-disjoint paths in the boundary of $B$, which have
the same length and orientation and which have end vertices $w_{1},w_{2}$ and
$w_{3},w_{4}$ respectively. In Figure 2 the boundary of $B$ is depicted as
rectangular.
Figure 2. A triangulated surface graph for the Möbius strip, viewed as a
${\mathcal{P}}$-graph with one hole.
Augment the planar graph $B$ to obtain a containing planar graph $B_{1}$ which
has 2 additional vertices, $v_{1}$ and $v_{2}$ say, and additional edges
$v_{1}w$ (resp. $v_{2}w$) which are incident to vertices on the boundary path
from $w_{4}$ to $w_{1}$ (resp. $w_{2}$ to $w_{3}$). This defines a
triangulated disc $B_{1}$ which is also indicated in Figure 2. Define a
partition $\lambda$ for $B_{1}$ as the augmentation of $\mu$ by the two
directed edge pairs $v_{1}w_{1},v_{2}w_{3}$ and $w_{2}v_{2},w_{4}v_{1}$ and
let $(B_{1},\lambda)$ be the resulting face graph. Then $H=B_{1}/\lambda$ is a
triangulated surface graph for ${\mathcal{P}}$. Moreover, the faces of $H$
that are incident to the vertex $v_{1}=v_{2}$ in $H$ are the faces of a
triangulated disc and $G$ is a surface graph for ${\mathcal{P}}$. ∎
Let $G$ be a ${\mathcal{P}}$-graph, with $k$ holes. If $G$ satisfies the
Maxwell count $f(G)=6$ then $k=1,2$ or $3$. For $k=1$ a representing face
graph $(B_{0},\lambda)$ for $G$ is annular with a 6-cycle inner boundary. This
inner boundary can intersect and even coincide with the outer boundary of $B$.
For $k=2$ there are two inner boundaries of length $5$ and $4$ corresponding
to the boundaries of the interior disjoint discs defining $G$, while for $k=3$
there are three inner boundaries which are 4-cycles. In particular,
$3\leq|\partial G|\leq 12.$
###### Definition 3.2.
For $k=1,2,3$ the set ${\mathfrak{P}}_{k}$ is the set of $(3,6)$-tight
${\mathcal{P}}$-graphs which have $k$ holes.
While a surface graph is a graph with extra structure we shall informally
refer to the elements of ${\mathfrak{P}}_{k}$ as graphs.
### 3.2. When contracted graphs are $(3,6)$-tight
A contraction move $G\to G^{\prime}$ on a contractible $FF$ edge $e$ of a
surface graph preserves the Maxwell count but need not preserve
$(3,6)$-tightness. We now examine this more closely in the case of a surface
graph for the real projective plane ${\mathcal{P}}$.
Suppose that $G_{1}\subseteq G$ and that both $G_{1}$ and $G$ are in
${\mathfrak{P}}_{1}$. If $e$ is a contractible $FF$ edge of $G$ which lies on
the boundary graph of $G_{1}$ then, since $G_{1}$ contains only one of the
facial 3-cycles incident to $e$, the contraction $G\to G^{\prime}$ for $e$
gives a contraction $G^{\prime}$ which is not $(3,6)$-sparse, since
$f(G_{1}^{\prime})=5$. We shall show that the failure of any contraction to
preserve $(3,6)$-sparsity is due to such a subgraph obstacle.
The following general lemma, which we refer to as the filling in lemma, is
useful for the identification of maximal $(3,6)$-tight subgraphs with specific
properties. See also [6]. In particular this lemma plays a role in the
identification of an obstacle subgraph.
###### Lemma 3.3.
Let $G\in{\mathfrak{P}}_{1}$ and let $H$ be an embedded triangulated disc
graph in $G$.
(i) If $K$ is a $(3,6)$-tight subgraph of $G$ with $K\cap H=\partial H$ then
$\partial H$ is a $3$-cycle graph.
(ii) If $K$ is a $(3,6)$-sparse subgraph of $G$ with $f(K)=7$ and $K\cap
H=\partial H$ then $\partial H$ is either a $3$-cycle or $4$-cycle graph.
###### Proof.
(i) Write $H^{c}$ for the subgraph of $G$ which contains the edges of
$\partial H$ and the edges of $G$ not contained in $H$. Since $G=H^{c}\cup H$
and $H^{c}\cap H=\partial H$ we have
$6=f(G)=f(H^{c})+f(H)-f(\partial H).$
Since $f(H^{c})\geq 6$ we have $f(H)-f(\partial H)\leq 0$. On the other hand,
$6\leq f(K\cup H)=f(K)+f(H)-f(\partial H)$
and $f(K)=6$, and so $f(H)-f(\partial H)=0$. Since a triangulated disc $H$
whose boundary cycle has length $m$ satisfies $f(H)-f(\partial H)=(m+3)-2m=3-m$,
it follows that $\partial H$ is a 3-cycle.
(ii) The same argument, now with $f(K)=7$, leads to the inequality $-1\leq
f(H)-f(\partial H)=3-m$. This implies that $\partial H$ is either a $3$-cycle
or a $4$-cycle graph. ∎
###### Lemma 3.4.
Let $G\in{\mathfrak{P}}_{1}$, let $e$ be a contractible $FF$ edge in $G$, and
let $G^{\prime}$ be the simple graph arising from the contraction move $G\to
G^{\prime}$ associated with $e$. Then either $G^{\prime}\in{\mathfrak{P}}_{1}$
or $e$ lies on the boundary of a subgraph $G_{1}$ of $G$ where
$G_{1}\in{\mathfrak{P}}_{1}$.
###### Proof.
Assume that $G^{\prime}\notin{\mathfrak{P}}_{1}$. It follows that $G^{\prime}$
must fail the $(3,6)$-sparsity count. Thus there exists a subgraph $K$ of $G$
containing $e$ for which the edge contraction results in a graph $K^{\prime}$
satisfying $f(K^{\prime})<6$. Let $e=vw$ and let $c$ and $d$ be the facial
$3$-cycles which contain $e$. If both $c$ and $d$ are subgraphs of $K$ then
$f(K)=f(K^{\prime})<6$, which contradicts the sparsity count for $G$. Thus $K$
must contain at most one of these facial $3$-cycles.
_Case 1_. Suppose first that $K$ is a maximal subgraph among all subgraphs of
$G$ which contain the cycle $c$, do not contain $d$, and for which contraction
of $e$ results in a simple graph $K^{\prime}$ which fails the $(3,6)$-sparsity
count. Note that $f(K)=f(K^{\prime})+1$ which implies $f(K)=6$ and
$f(K^{\prime})=5$. In particular, $K$ is $(3,6)$-tight, and is a connected
graph.
Let $(B_{0},\lambda)$ be a face graph for $G$ with an associated face graph
$(B,\lambda)$ for a triangulated surface graph $S=(V,E,F)$ for
${\mathcal{P}}$ containing $G$. In particular $(B,\lambda)$ provides a
faithful topological embedding $\pi:S\to{\mathcal{P}}$. Let
$X(K)\subset{\mathcal{P}}$ be the closed set $\pi(E(K))$ and let
$\tilde{X}(K)$ be the union of $X(K)$ and the embeddings of the faces for the
facial $3$-cycles belonging to $K$. Finally, let $U_{1},\dots,U_{n}$ be the
maximal connected open sets of the complement of $\tilde{X}(K)$ in
${\mathcal{P}}$.
Note that each such connected open set $U_{i}$ is determined by a set
${\mathcal{U}}_{i}$ of embedded faces of $S$ with the property: each pair of
embedded faces of ${\mathcal{U}}_{i}$ are the endpoints of a path of edge-sharing embedded
faces in ${\mathcal{U}}_{i}$. From the topological nature of ${\mathcal{P}}$
it follows that $U_{i}$ has one of the following 3 properties.
(i) $U_{i}$ is an open disc.
(ii) $U_{i}$ is the interior of a Möbius strip.
(iii) The complement of $U_{i}$ is not connected.
The third property cannot hold since the embedding of $K$ is contained in the
complement of $U_{i}$ and contains the boundary of $U_{i}$, and yet $K$ is a
connected graph.
From the second property it follows that $K$ is a planar graph, since it can
be embedded in the complement of $U_{i}$ and this is a triangulated disc. This
is also a contradiction, since the edge contraction of a contractible $FF$
edge in a planar triangulated graph preserves $(3,6)$-sparsity.
Each set $U_{i}$ is therefore the interior of the closed set determined by an
embedding of a triangulated disc graph in $B$, say $H(U_{i})$. (Indeed, the
facial 3-cycles defining $H(U_{i})$ are those whose ${\mathcal{P}}$-embedding has
interior set contained in $U_{i}$.) We may assume that $U_{1}$ is the open set
that contains the hole of $G$. (More precisely, $U_{1}$ contains the open set
corresponding to the embedded faces for the triangulated disc in $B$ that
determines $B_{0}$.)
For $i>1$ by the filling in lemma, Lemma 3.3, it follows that $\partial
H(U_{i})$ is a 3-cycle. By the maximality of $K$ we have $n=1$ (since adding
the edges and vertices of $S$ interior to these nonfacial 3-cycles gives a
subgraph of $G$ with the same freedom count). Thus, $K$ is a subgraph of $G$
and is equal to the surface graph for ${\mathcal{P}}$ defined by $B$ and the
embedded triangulated disc $H(U_{1})$. Thus, with $G_{1}=K$, the proof is
complete in this case.
_Case 2._ It remains to consider the case for which $K$ contains neither of
the facial $3$-cycles which contain $e$. Thus $f(K)=f(K^{\prime})+2$ and
$f(K)\in\{6,7\}$. Once again we assume that $K$ is a maximal subgraph of $G$
with respect to these properties and consider the complementary components
$U_{1},\dots,U_{n}$. As before each set $U_{i}$ is homeomorphic to a disc and
determines an embedded triangulated disc graph $H(U_{i})$, one of which, say
$H(U_{1})$, contains the triangulated disc which defines $G$. The filling in
lemma and maximality now imply that each boundary of $H(U_{i})$, for $i>1$,
is a $4$-cycle. By the maximality of $K$, we see once again that $n=1$ (since
adding the missing edge for such a $4$-cycle gives a subgraph of $G$ with a
lower freedom count) and the proof is completed as before. ∎
The filling in lemma holds for graphs in
${\mathfrak{P}}_{2},{\mathfrak{P}}_{3}$, with the same proof, and we may
extend Lemma 3.4 to these families of graphs.
###### Lemma 3.5.
Let $G\in{\mathfrak{P}}_{k}$, for $k=1,2$ or $3$, let $e$ be a contractible
$FF$ edge in $G$, and let $G^{\prime}$ be the simple graph arising from the
contraction move $G\to G^{\prime}$ associated with $e$. Then either
$G^{\prime}\in{\mathfrak{P}}_{k}$ or $e$ lies on the boundary of a subgraph
$G_{1}$ of $G$ where $G_{1}\in{\mathfrak{P}}_{l}$, for some $1\leq l\leq k$.
###### Proof.
The proof follows the same pattern as in the case $k=1$. Thus we assume that
$G^{\prime}\notin{\mathfrak{P}}_{k}$ and consider a subgraph $K$ of $G$ which
is maximal amongst all subgraphs which do not contain one (or both, according
to Cases 1 and 2) of the facial 3-cycles incident to $e$ and whose contraction
$K^{\prime}$ has freedom number $f(K^{\prime})=5$ (or $4$). We consider the
open set which is the complement of the embedding in ${\mathcal{P}}$ of $K$
and its facial 3-cycles. (The embedding here is denoted $\tilde{X}(K)$ in the
case $k=1$.) This open set has components $U_{1},\dots,U_{n}$ and each is the
interior of a union of an edge-connected set of ${\mathcal{P}}$-embedded
facial 3-cycles of $S$.
It follows as before, from the topological nature of ${\mathcal{P}}$, from
$(3,6)$-sparsity and from the filling in lemma, that each $U_{j}$ is an open
disc. Moreover, in the case $k=2$ each $U_{j}$ contains at least one of the 2
discs $D_{1},D_{2}$ which define $G$ and so $n$ is 1 or 2 and it follows that
$K$ belongs to ${\mathfrak{P}}_{n}$, as desired. Similarly, for $k=3$, each
$U_{j}$ contains at least one of the 3 discs $D_{1},D_{2},D_{3}$ which define
$G$ and so $n$ is 1, 2 or 3 and $K$ belongs to ${\mathfrak{P}}_{l}$ for some
$1\leq l\leq 3$. ∎
## 4\. The uncontractibles
Let $k=1,2$ or $3$. By the finiteness of a graph $G$ in ${\mathfrak{P}}_{k}$
it is evident that it admits a full reduction sequence
$G=G_{1}\to G_{2}\to\dots\to G_{n}$
where (i) each graph is in ${\mathfrak{P}}_{k}$, (ii) each move $G_{i}\to
G_{i+1}$ is an edge contraction for an $FF$ edge, as before, and (iii) $G_{n}$
is _irreducible_ in ${\mathfrak{P}}_{k}$ in the sense that it admits no edge
contraction to a graph in ${\mathfrak{P}}_{k}$.
Let us say that a surface graph is _uncontractible_ if every $FF$ edge lies on
a nonfacial 3-cycle. An uncontractible graph $G\in{\mathfrak{P}}_{k}$ is
certainly an irreducible graph in ${\mathfrak{P}}_{k}$ but we show in the next
section that the two classes coincide. In Section 4.2 we shall prove that
there are 8 uncontractible graphs but first we establish some useful
properties of the irreducible graphs.
### 4.1. Some properties of irreducible graphs
We say that a $k$-cycle of edges in $G$, $c$ say, is a _planar $k$-cycle_ in
$G$ if there is a face graph $(B_{0},\lambda)$ for $G$, with containing face
graph $(B,\lambda)$ for the triangulated surface graph $B/\lambda$ for
${\mathcal{P}}$, such that $c$ is determined by the boundary cycle $\hat{c}$
of a triangulated disc $D$ in $B$. Note that the holes of $G$ are defined by
embedded triangulated discs $D_{i}$ in $B_{0}$, and so we may say that a
planar cycle $c$ in $G$ _contains a hole of $G$_ if $D$ contains such an
embedded disc $D_{i}$. Also we may say that $c$ _properly contains a hole_ if
there is such an inclusion which is proper.
The following lemma shows that an irreducible (3,6)-tight
${\mathcal{P}}$-graph contains no degree 3 vertex that is incident to an $FF$
edge and no interior vertex that lies on a planar nonfacial 3-cycle.
###### Lemma 4.1.
Let $G$ be a graph in ${\mathfrak{P}}_{k}$, for $k=1,2$ or $3$.
(i) If $e$ is an $FF$ edge incident to a degree 3 vertex then
$G/e\in{\mathfrak{P}}_{k}$.
(ii) If $v$ is an interior vertex of $G$ and $v$ lies on a planar nonfacial
3-cycle then there is a contractible edge $vw$ with
$G/vw\in{\mathfrak{P}}_{k}$.
###### Proof.
For (i) note that since $G$ is simple $e$ is a contractible edge. Write $e=uv$
with facial 3-cycles $uvx$ and $uvy$, with $\deg v=3$. Then $e$ cannot lie on
a critical 4-, 5- or 6-cycle since one of the edges incident to $u$ would
provide an interior chord for this cycle. Also $e$ does not lie on a nonfacial
3-cycle and so (i) follows.
For (ii) let $H$ be the triangulated disc subgraph induced by the faces
incident to $v$, with vertices $v_{1},\dots,v_{n}$ in cyclic order on the
boundary of $H$. Considering the hypothesis, and relabelling, we may assume
that there is an edge $f=v_{1}v_{j}$ with $3\leq j\leq n-2$ so that the edges
$v_{3}v_{4},v_{4}v_{5},\dots,v_{j-1}v_{j},f$ are the boundary edges of a
triangulated disc. It is straightforward to show that one of the vertices
$v_{2},\dots,v_{j-1}$ has degree 3, and so (i) applies. ∎
The next lemma shows that if $G$ is irreducible then there is no critical
$m$-cycle which properly contains an $m$-hole.
###### Lemma 4.2.
Let $G$ be a graph in ${\mathfrak{P}}_{k}$, for $k=1,2$ or $3$, such that
there is a critical $m$-cycle, for $m=4,5$ or $6$, which properly contains an
$m$-cycle hole, so that $G=G_{1}\cup A$ where $A$ is the annular graph
determined by the two $m$-cycles. Then $G$ is constructible from $G_{1}$ by
planar vertex splitting moves.
###### Proof.
Fix $k$ and $m$. Suppose that $|V(G)|=|V(G_{1})|+1.$ Then there is a
degree $3$ vertex on the boundary of the relevant hole of $G$. By Lemma 4.1(i)
$G$ is constructible from $G_{1}$ by a single planar vertex splitting move.
Assume next that the lemma is true whenever $|V(G)|=|V(G_{1})|+j$, for
$j=1,2,\dots,N-1$, and suppose that $|V(G)|=|V(G_{1})|+N$. Let $e$ be an
interior edge of the annular graph $A$. If the contraction $G/e$ is in
${\mathfrak{P}}_{k}$ then it follows from the induction step that $G$ is
constructible from $G_{1}$ by planar vertex splitting moves. So, by Lemma 3.5
we may assume (i), that $e$ lies on a critical $m$-cycle, with associated
graph $G_{1}^{\prime}$, or (ii), that $e$ lies on a nonfacial 3-cycle of $G$. In
the former case we may take $G_{1}^{\prime\prime}$ to be the union of $G_{1}$
and $G_{1}^{\prime}$. Then $G_{1}^{\prime\prime}$ is also in
${\mathfrak{P}}_{k}$. Since $|V(G_{1}^{\prime\prime})|-|V(G_{1})|<N$ and
$|V(G)|-|V(G_{1}^{\prime\prime})|<N$ it follows from the induction step that
the lemma holds for $G$ and $G_{1}$. So we may assume that (ii) holds, and
moreover, in view of Lemma 4.1(ii), that $e$ lies on a nonplanar nonfacial
3-cycle. To complete the proof we observe that this is not possible when $e$
is incident to a vertex on the hole which is not a vertex of the critical
$m$-cycle. ∎
### 4.2. The uncontractible graphs.
We now identify 8 uncontractible $(3,6)$-tight ${\mathcal{P}}$-graphs. Figure
3 gives two uncontractibles specified by face graphs and Figure 4 gives three
further uncontractibles as embedded graphs in ${\mathcal{P}}$. Here
${\mathcal{P}}$ is represented as a disc or a rectangle, with diagonally
opposite points of the boundary identified. The 3 remaining uncontractibles are
given in Figure 5. The notation $G^{h}_{n}$ indicates that $n$ is the number
of vertices and $h=h(G)$ is the minimum _hole incidence degree_ given in the
following definition.
Figure 3. The uncontractible graphs $G^{2}_{3}\in{\mathfrak{P}}_{1}$ and
$G^{3}_{4}\in{\mathfrak{P}}_{3}$.
Figure 4. The uncontractible graphs $G^{0}_{7},$ $G^{2}_{6,\alpha}$ and
$G^{2}_{6,\beta}$ in ${\mathfrak{P}}_{3}$.
Figure 5. The uncontractible graphs $G^{1}_{5}$ in ${\mathfrak{P}}_{2}$ and
$G^{1}_{6,\alpha},$ $G^{1}_{6,\beta}$ in ${\mathfrak{P}}_{3}$.
###### Definition 4.3.
Let $v$ be a vertex of $G=(V,E,F)\in{\mathfrak{P}}_{k}$ for some $k=1,2,3$.
Then (i) $\deg_{F}(v)$ is the number of facial 3-cycles incident to $v$, (ii)
$\deg_{h}(v)=\deg(v)-\deg_{F}(v)$ is the _hole incidence degree_ for $v$, and
(iii) $h(G)$ is the minimum hole incidence degree,
$h(G)=\min_{v}\operatorname{deg}_{h}(v)$.
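These quantities are immediate to compute from an edge list and a face list; the following small Python sketch (ours; the set-based data format is an assumption) tabulates them.

```python
# Sketch of Definition 4.3: deg_F(v) is the number of facial 3-cycles at v,
# deg_h(v) = deg(v) - deg_F(v), and h(G) is the minimum of deg_h over V.

def hole_incidence_degrees(vertices, edges, faces):
    # edges: collection of 2-element sets; faces: collection of 3-element sets.
    deg = {v: sum(1 for e in edges if v in e) for v in vertices}
    deg_F = {v: sum(1 for f in faces if v in f) for v in vertices}
    return {v: deg[v] - deg_F[v] for v in vertices}

def min_hole_incidence_degree(vertices, edges, faces):
    return min(hole_incidence_degrees(vertices, edges, faces).values())
```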
In what follows, we shall usually consider graphs as ${\mathcal{P}}$-graphs,
with facial structure. However, let us note that as graphs: $G^{2}_{3}$ is the
triangle graph $K_{3}$; $G_{4}^{3}$ is $K_{4}$; $G_{7}^{0}$ is the cone graph
over $K_{3,3}$; $G_{5}^{1}$ is $K_{5}-e$. Also the four remaining graphs, each
with 6 vertices, are depletions of $K_{6}$ by 3 edges where these edges (i)
form a copy of $K_{3}$, (ii) are disjoint, (iii) have one vertex shared by 2
edges, (iv) have 2 vertices of degree 1. These graphs account for all possible
$(3,6)$-tight graphs on $n$ vertices for $n=3,4,5,6$, together with 1 of the
26 such graphs for $n=7$. (We remark that for $n=8,9,10$ the number of
$(3,6)$-tight graphs rises steeply, with values 375, 11495, 613092 [10].)
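As a quick check on these descriptions (our arithmetic), the Maxwell count $|E|=3n-6$ gives $|E|=3,6,9,12,15$ for $n=3,4,5,6,7$, while $\binom{n}{2}=3,6,10,15,21$, so the respective deficits from the complete graph are

$\binom{n}{2}-(3n-6)=0,0,1,3,6\quad\text{for }n=3,4,5,6,7.$

This matches the list above: $K_{3}$ and $K_{4}$ are complete, $G_{5}^{1}$ is one edge short of $K_{5}$, the four 6-vertex graphs are three edges short of $K_{6}$, and $G_{7}^{0}$, the cone over $K_{3,3}$, has $9+6=15$ edges, six short of $K_{7}$.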
###### Proposition 4.4.
Let $G$ be an uncontractible graph in ${\mathfrak{P}}_{k}$, for $k=1,2$ or
$3$, which has an interior vertex. Then $k=3$ and $G$ is the hexagon graph
$G^{0}_{7}$.
###### Proof.
Let $z$ be an interior vertex in $G$. Let $X(z)$ be the subgraph of $G$
induced by $z$ and its neighbours. Assume that $z$ has degree $n$ and label
its neighbours, in cyclical order, as $v_{1},v_{2},\dots,v_{n}$. Then $X(z)$
has $n$ edges that are incident to $z$, plus $n$ perimeter edges
$v_{1}v_{2},v_{2}v_{3},\dots,v_{n}v_{1}$, and additional edges between non-
adjacent vertices $v_{1},\dots,v_{n}$. Since $G$ is uncontractible there exist
at least $\left\lceil\frac{n}{2}\right\rceil$ additional edges. It follows now
from the $(3,6)$-sparsity that $\operatorname{deg}z=n\geq 6$. Also, since $G$ is
uncontractible there can be no vertices $v_{i}$ of degree $3$, since otherwise
the $FF$ edge $zv_{i}$ would be a contractible edge. For the same reason each
vertex $v_{i}$ has at least one additional edge $v_{i}v_{j}$ for some $j$.
Suppose that there is an additional edge $v_{i}v_{j}$ such that the
(nonfacial) 3-cycle $zv_{i}v_{j}z$ is a planar 3-cycle. Then $G$ contains a
triangulated disc $D$ with 3-cycle boundary with at least 4 vertices. Such a
graph $D$ has a contractible $FF$ edge with an interior vertex and so this
edge is also contractible in $G$, a contradiction.
Consider one of the additional edges, $v_{i}v_{j}$ with $i<j$, and let
$i^{\prime}\in\\{i+1,\dots,j-1\\}$. We claim that for every additional edge
$v_{i^{\prime}}v_{j^{\prime}}$ we have $j^{\prime}\notin\\{i+1,\dots,j-1\\}$.
Indeed, if this is not the case then there is a non-facial planar 3-cycle $c$
described by the edges
$zv_{i^{\prime}},v_{i^{\prime}}v_{j^{\prime}},v_{j^{\prime}}z$ and by the
previous paragraph this is a contradiction. Thus the additional edges have
this _non-nested_ property. It follows by a simple inductive argument that the
embedded graph $X(z)$ has faces with boundary cycles of length at most 4 since
otherwise there must be perimeter vertices of degree 3. These 4-cycles are
planar 4-cycles and so by Lemma 4.2 there are 3 holes. Thus $n=6$ and $G$ is
the hexagon graph $G^{0}_{7}$. ∎
The next lemma is key to the determination of the uncontractible graphs in
${\mathfrak{P}}_{k}$ for $k=2$ or $3$.
###### Lemma 4.5.
Let $G\in{\mathfrak{P}}_{k}$, for $k=2$ or 3, be an uncontractible (3,6)-tight
graph with no interior vertex and let $v_{1}$ be a vertex with
$\operatorname{deg}_{h}(v_{1})=1$ which lies on the boundary of a 4-cycle hole
of $G$ with edges $v_{1}v_{2},v_{2}v_{3},v_{3}v_{4},v_{4}v_{1}$. Then
$\operatorname{deg}(v_{1})=4$ if $v_{1}$ is not adjacent to $v_{3}$ and
$\operatorname{deg}(v_{1})=5$ otherwise.
###### Proof.
Let $v_{2}=w_{1},w_{2},\dots,w_{n}=v_{4}$ be the neighbours of $v_{1}$ in
cyclical order. Since $\operatorname{deg}_{h}(v_{1})=1$, we also have the
edges $w_{1}w_{2},\dots,w_{n-1}w_{n}$. Note that $\deg(v_{1})\geq 4$ since if
the degree is 3 then the edge $v_{1}w_{2}$ is contractible.
Case (a). $v_{3}\neq w_{i}$, for every $i\in\{2,\dots,n-1\}$.
Suppose that $n\geq 5$. It follows from the uncontractibility that for each
vertex $w_{i},2\leq i\leq n-1,$ there is an associated edge $w_{i}w_{r}$ for
some $1\leq r\leq n$ and an associated edge $w_{i+1}w_{s}$ for some $s>r$.
Since there are at most 3 holes there is an edge $w_{i}w_{i+1}$ for which the
associated cycle through $w_{i},w_{i+1},w_{s},w_{r}$ is triangulated by faces.
We claim that (i) it is a 4-cycle and (ii) it is triangulated by 2 faces. Note
that at most one of the edges $w_{i}w_{s},w_{i+1}w_{r}$ exists. Indeed,
although we can have $K_{4}\to{\mathcal{P}}$ with 3 faces this implies the
existence of a degree 3 vertex and hence a contractible edge incident to it, a
contradiction. If the face of the triangulation which contains $w_{i}w_{i+1}$
has third vertex $w$ not equal to $w_{s}$ or $w_{r}$, then at least one of the
edges $w_{i}w$, $w_{i+1}w$ is contractible, a contradiction. Since an interior
vertex $w$ does not exist the implied cycle is a 4-cycle and (i) and (ii)
hold.
Since $G$ is uncontractible $w_{i}w_{i+1}$ lies in a non-facial 3-cycle. Since
$v_{1}w_{j}$ is also an $FF$ edge for every $j\in\{2,\dots,n-1\}$, it
follows that there are just two candidate non-facial 3-cycles:
$w_{i-1}w_{i}w_{i+1}w_{i-1}$ or $w_{i}w_{i+1}w_{i+2}w_{i}$.
(i):
If $w_{i}w_{i+1}$ lies on the cycle $w_{i-1}w_{i}w_{i+1}w_{i-1}$, then the
4-cycle $w_{i-1}v_{1}w_{r}w_{i+1}w_{i-1}$ contains strictly the hole boundary
$v_{1}v_{2}v_{3}v_{4}v_{1}$, contradicting Lemma 4.2. Note that
this 4-cycle does contain the hole in our sense since the shading in Figure 6
indicates a triangulated disc in ${\mathcal{P}}$ with boundary equal to this
4-cycle.
Figure 6. The
4-cycle $w_{i-1}v_{1}w_{r}w_{i+1}w_{i-1}$ contains strictly the 4-hole
$v_{1}v_{2}v_{3}v_{4}v_{1}$.
(ii):
If $w_{i}w_{i+1}$ lies on the cycle $w_{i}w_{i+1}w_{i+2}w_{i}$, then, noting
that $w_{i+2}w_{i}$ is an edge, we claim that the 5-cycle
$w_{i}v_{1}w_{r}w_{i+1}w_{i+2}w_{i}$ contains all the holes, which is a
contradiction. To see this note that by Lemma 4.2 the 4-cycle
$v_{1}w_{r}w_{i}w_{i+2}v_{1}$ contains no holes. See Figure 7.
Figure 7. The
5-cycle $w_{i}v_{1}w_{r}w_{i+1}w_{i+2}w_{i}$ contains all the holes.
Hence none of the edges $w_{2}w_{3},w_{3}w_{4},\dots,w_{n-2}w_{n-1}$ is an
$FF$ edge. Also, the same holds for the edge $v_{2}w_{2}$, since it cannot lie
on a non-facial 3-cycle. Thus every edge of the form
$v_{2}w_{2},w_{2}w_{3},w_{3}w_{4},\dots,w_{n-1}w_{n}$ is on the boundary of a
hole. Since every edge $v_{1}w_{j}$ is an $FF$ edge, $j=2,\dots,n-1$, it
follows that $G$ contains at least $\left\lceil\frac{n}{2}+1\right\rceil$
holes. Thus, $n=4$.
Case (b). $v_{3}=w_{i_{0}}$, for some $i_{0}\in\{2,\dots,n-1\}$.
We have $\operatorname{deg}(v_{1})\geq 5$ since $G$ is a simple graph. Suppose
that $n\geq 6$. As in case (a) we may assume that there exists an $FF$ edge
$w_{i}w_{i+1}$ with $i>1$ and $i+1<i_{0}$, and with vertex $w_{r}$ as before.
(See Figure 8.) Then, the only possible non-facial 3-cycle for $w_{i}w_{i+1}$
is $v_{3}w_{i}w_{i+1}v_{3}$. However, this would lead to a contradiction since
the 4-cycle $w_{i}v_{3}v_{4}v_{1}w_{i}$ strictly contains the hole
$v_{1}v_{2}v_{3}v_{4}v_{1}$.
Figure 8. The
4-cycle $v_{4}v_{3}v_{1}w_{i}v_{4}$ contains strictly the 4-hole
$v_{1}v_{2}v_{3}v_{4}v_{1}$.
Thus, each $w_{i}w_{i+1}$ is not an $FF$ edge. Similarly, we can argue that
such a 4-cycle would be created if $w_{i_{0}-1}w_{i_{0}}$ were an $FF$ edge.
Thus again we have that $w_{1}w_{2}$ is not an $FF$ edge, since it does not
lie on a non-facial 3-cycle. Thus, the edges
$w_{1}w_{2},w_{2}w_{3},w_{3}w_{4}$ should lie on the boundaries of different
holes, which again contradicts the number of the holes of $G$. Thus
$\operatorname{deg}(v_{1})=5$. ∎
###### Proposition 4.6.
Let $G\in{\mathfrak{P}}_{k}$, for $k=1,2$ or 3, be an uncontractible
(3,6)-tight graph with no interior vertex. If there exists a vertex $v_{1}\in
V(G)$ with $\operatorname{deg}_{h}(v_{1})=1$ then $G$ is one of the graphs
$G^{1}_{6,\alpha},G^{1}_{6,\beta},G^{1}_{5}$.
###### Proof.
Case (a). Assume first that $v_{1}$ lies on the 4-cycle boundary of the hole
$H_{1}$, with vertices $v_{1},v_{2},v_{3},v_{4}$, and let
$v_{2}=w_{1},w_{2},\dots,w_{n}=v_{4}$ be all the neighbours of $v_{1}$. Since
$\operatorname{deg}_{h}(v_{1})=1$ the edges $w_{1}w_{2},\dots,w_{n-1}w_{n}$
exist. Also $\operatorname{deg}(v_{1})\geq 4$ since otherwise $v_{1}w_{2}$ is
a contractible $FF$ edge. There are two subcases.
(i):
$v_{3}\neq w_{i}$, for every $i\in\{1,2,\dots,n\}$.
By Lemma 4.5 we have $\operatorname{deg}(v_{1})=4$. By the uncontractibility
of the edges $v_{1}w_{2}$ and $v_{1}w_{3}$ the edges $w_{2}w_{4}$ and
$w_{1}w_{3}$ must exist. Thus $G$ contains the graph in Figure 9, except
possibly for the edge $v_{3}w_{3}$. It follows that the 4-cycle
$w_{1}w_{2}w_{4}w_{3}w_{1}$ must be the boundary of a 4-hole $H_{2}$, since
otherwise the 5-cycle $v_{1}w_{1}w_{3}w_{2}w_{4}v_{1}$ contains all the holes,
in the sense, as before, of being the boundary of an embedded disc, $B$ say,
which contains the holes. This contradicts $(3,6)$-tightness. We claim now
that the edge $v_{3}w_{2}$ or $v_{3}w_{3}$ must exist, for otherwise there is
a contractible edge in $B$. To see this check that since
$\operatorname{deg}(v_{3})\geq 3$, there exists a vertex $z$ in the interior
of the 5-cycle $v_{3}w_{4}w_{2}w_{3}w_{1}v_{3}$, such that $v_{3}z\in E(G)$.
Since $v_{3}z$ does not lie on a non facial 3-cycle, it follows that it lies
on the boundary of the third 4-hole. Thus, if $v_{3}w_{3}$ is not allowed, we
may assume by symmetry that $w_{1}z$ is an $FF$ edge in $E(G)$, so it lies on
the non-facial 3 cycle $w_{1}zw_{2}w_{1}$. Hence the third hole is described
by the 4-cycle $w_{4}v_{3}zw_{1}w_{4}$. However, this implies that $zw_{3}\in
E(G)$, which is a contractible $FF$ edge, so we have proved the claim. Hence
without loss of generality $G$ contains the subgraph $G^{1}_{6,\alpha}$ as
indicated in Figure 9. Since $G$ is uncontractible it follows that
$G=G^{1}_{6,\alpha}$.
Figure 9. The uncontractible graph
$G^{1}_{6,\alpha}$.
(ii):
$v_{3}=w_{i_{0}}$ for some $i_{0}\in\{3,\dots,n-2\}$.
By Lemma 4.5 $\operatorname{deg}(v_{1})=5$ and so $v_{3}=w_{3}$. Since
$v_{1}w_{2}$ is an $FF$ edge, it follows that $w_{2}w_{4}\in E(G)$ and so $G$
contains the graph $G^{1}_{6,\beta}$ of Figure 10. Since $G$ is
uncontractible it follows that this subgraph is equal to $G$.
Figure 10. The uncontractible graph
$G^{1}_{6,\beta}$.
Case (b). Let $v_{1}$ lie on the boundary of a 5-hole $H$ with boundary edges
$v_{1}v_{2}$, $v_{2}v_{3}$, $v_{3}v_{4}$, $v_{4}v_{5}$, $v_{5}v_{1}$. We may
assume that $\operatorname{deg}_{h}(v_{i})=2$, for every $i=2,3,4,5$, since
otherwise there is a vertex $v$ on a 4-hole of $G$. Since $G$ has two holes it
is straightforward to check that $\operatorname{deg}(v_{1})=4$ and that the
second hole is described by the 4-cycle $v_{2}v_{3}v_{5}v_{4}v_{2}$. Thus we
obtain that $G$ is the uncontractible $(3,6)$-tight graph given by Figure 11.
Figure 11. The uncontractible graph
$G^{1}_{5}$.
∎
Note that in the proof of the previous result we have determined the
uncontractible graphs in the 2-holed case and shown that there is a unique
uncontractible graph, namely $G_{5}^{1}$. The next proposition completes the
proof that there are 8 base graphs.
###### Proposition 4.7.
Let $G\in{\mathfrak{P}}_{k}$, for $k=1,2$ or $3$, be an uncontractible (3,6)-tight graph with
$\operatorname{deg}_{h}(v)\geq 2$ for all $v\in V(G)$. Then $G$ is one of the
four graphs $G^{2}_{6,\alpha},G^{2}_{6,\beta},G^{3}_{4},G_{3}^{2}$.
###### Proof.
Suppose first that $G$ has 2 or 3 holes. Then the hole boundaries have length
4 or 5 and it follows from the simplicity of the graph that every vertex is
common to at least 2 holes. Since there are either 2 or 3 holes it follows
that $|V|\leq 6$.
Case (a). Suppose that $G$ contains at least one $FF$ edge, say $v_{1}v_{2}$,
with non-facial 3-cycle $v_{1}v_{2}v_{3}$, and associated 3-cycle faces
$v_{1}v_{2}v_{4}v_{1}$ and $v_{1}v_{2}v_{5}v_{1}$. We claim that one of the
edges $v_{3}v_{4}$ or $v_{3}v_{5}$ lies in $E(G)$. Suppose, by way of
contradiction, that neither edge exists. Then we show that the edge
$v_{4}v_{5}$ is also absent. Indeed, if $v_{4}v_{5}\in E(G)$, then we have two
planar 5-cycles; $v_{1}v_{4}v_{5}v_{2}v_{3}v_{1}$ and
$v_{1}v_{5}v_{4}v_{2}v_{3}v_{1}$, as in Figure 12.
Figure 12. A subgraph with the 5-cycles $v_{1}v_{4}v_{5}v_{2}v_{3}v_{1}$ and
$v_{1}v_{5}v_{4}v_{2}v_{3}v_{1}$.
By the sparsity condition one of these has a vertex in the interior with 3
incident edges and the other has a single chordal edge in the interior and by
symmetry we may assume that the planar 5-cycle
$v_{1}v_{4}v_{5}v_{2}v_{3}v_{1}$ has the single chordal edge. However, of the
5 possibilities $v_{1}v_{2},v_{2}v_{4},v_{1}v_{5}$ are not available, by the
simplicity of $G$, and the edges $v_{3}v_{4},v_{3}v_{5}$ are absent by
assumption. This contradiction shows that $v_{4}v_{5}$ is indeed absent and
so, since $v_{4},v_{5}$ have degree at least 2, the edges
$v_{4}v_{6},v_{5}v_{6}$ must exist. Now the complement of the two 3-cycle faces
is bounded by two 6-cycles. By the sparsity condition there are now only 2
further edges to add and so there must be a 5-cycle hole, a contradiction, and
so the claim holds.
Without loss of generality we suppose that $v_{3}v_{4}\in E(G)$. Since
$\operatorname{deg}_{h}(v_{2})\geq 2$, it follows that $v_{2}v_{6}\in E(G)$.
Moreover, the edges $v_{6}v_{2}$,$v_{2}v_{3}$ should be on the boundary of a
planar 4-hole $H_{1}$, and this implies that $v_{1}v_{6}\in E(G)$. Similarly
we obtain that the two remaining holes are determined by the cycles
$v_{1}v_{3}v_{4}v_{5}v_{1}$, and $v_{2}v_{5}v_{4}v_{6}v_{2}$. The resulting
(3,6)-tight triangulated surface graph is given in Figure 13 and is the
uncontractible graph $G_{6,\alpha}^{2}$.
Figure 13. The uncontractible graph
with $h(G)=2$ and an $FF$ edge; $G_{6,\alpha}^{2}$.
Case (b). Suppose now $G$ has at least one 3-cycle face, $v_{1}v_{2}v_{3}$,
and no $FF$ edges. Then the edge $v_{1}v_{2}$ is on the boundary of a 4-hole
$H_{1}$, that is determined by the edges $v_{1}v_{2}$, $v_{2}v_{4}$,
$v_{4}v_{5}$ and $v_{5}v_{1}$.
To see that $|V|\neq 5$ note that without loss of generality the edge
$v_{3}v_{4}$ exists and $G$ contains the subgraph shown in Figure 14. Also,
since $v_{5}$ cannot have degree 2 at least one of the edges
$v_{5}v_{3},v_{5}v_{2}$ exists.
Figure 14. A necessary subgraph.
If $v_{5}v_{2}$ exists then the edge $v_{2}v_{3}$ is adjacent to a 4-cycle
hole and $v_{5}v_{3}$ is absent. We note next that the planar 5-cycle
$v_{3}v_{1}v_{5}v_{2}v_{4}v_{3}$ must contain a chord edge (and so provide the
third 4-cycle hole). The only available edge (by simplicity) is $v_{3}v_{5}$.
This however is inadmissible since it introduces a second 3-cycle face
$v_{3}v_{5}v_{1}$ adjacent to $v_{1}v_{2}v_{3}$.
Similarly, if $v_{5}v_{3}$ exists then we have the planar 6-cycle
$v_{3}v_{1}v_{5}v_{3}v_{2}v_{4}v_{3}$ and there must exist a diameter edge to
create the 2 additional 4-cycle holes. As there is no such edge we conclude
that $|V|=6$.
Introducing $v_{6}$ the fact that $v_{2}v_{3}$ and $v_{3}v_{1}$ lie on 4-cycle
hole boundaries leads to the graph $G_{6,\beta}^{2}$ indicated in Figure 15.
Figure 15. The
uncontractible graph $G_{6,\beta}^{2}$, with $h(G)=2$, no $FF$ edge and a
3-cycle face.
Case (c). Let now $G$ be a graph with no 3-cycle faces. Since
$\operatorname{deg}(v)\geq 3$ for each vertex it follows that
$\operatorname{deg}_{h}(v)=3$ and $\deg(v)=3$, for all $v\in V(G)$. Thus
$|V|=4$ and it follows that $G$ is the uncontractible (3,6)-tight graph
$G_{4}^{3}$ given by Figure 16.
Figure 16. The
uncontractible graph $G_{4}^{3}$ with $h(G)=3$.
Case (d). Finally, suppose that $G\in\mathfrak{P}_{1}$. We claim that the
graph has no faces and the surface graph is given by Figure 19.
Assume first that there exists an $FF$ edge, say $v_{1}v_{2}$, that lies on
the faces $v_{1}v_{2}v_{3}v_{1}$ and $v_{1}v_{2}v_{4}v_{1}$. Since the graph
is uncontractible, $v_{1}v_{2}$ lies on a non facial 3-cycle
$v_{1}v_{2}v_{5}v_{1}$. Note that $v_{3}v_{4}\notin E(G)$, since otherwise the
6-hole would lie inside a 5-cycle, either $v_{1}v_{3}v_{4}v_{2}v_{5}v_{1}$ or
$v_{1}v_{4}v_{3}v_{2}v_{5}v_{1}$, contradicting the sparsity of the graph. It
follows that we cannot have $|V(G)|\leq 5$. Indeed, in this case (see Figure
17) $v_{3}v_{5}\in E(G)$, since $\operatorname{deg}(v_{3})\geq 3$, and so
without loss of generality, in view of the symmetry, $v_{1}v_{3}$ is an $FF$
edge. But this edge does not lie on a non-facial 3-cycle, a contradiction.
Figure 17. $|V(G)|\leq 5$ leads to
a contradiction.
Thus $|V(G)|=6$ and it remains to consider two subcases:
1. (i)
$v_{3}v_{5}\in E(G)$. In this case $v_{1}v_{3}$ lies on the non-facial 3-cycle
$v_{1}v_{3}v_{6}v_{1}$. However, this leads to a contradiction, since the
6-hole is contained either in the 5-cycle $v_{5}v_{3}v_{6}v_{1}v_{2}v_{5}$ or
in the 5-cycle $v_{6}v_{3}v_{2}v_{5}v_{1}v_{6}$. Hence by symmetry neither of
the edges $v_{3}v_{5},v_{4}v_{5}$ is allowed.
2. (ii)
$v_{3}v_{6},v_{4}v_{6}\in E(G)$. In this case, indicated in Figure 18, we may
assume that the hole is contained in the planar 6-cycle
$v_{1}v_{5}v_{2}v_{4}v_{6}v_{3}v_{1}$ and that the planar 6-cycle
$v_{1}v_{5}v_{2}v_{3}v_{6}v_{4}v_{1}$ is triangulated. This implies that
$v_{2}v_{3}$ is an $FF$ edge and so lies on non-facial 3-cycle. However, the
only candidate cycle is $v_{3}v_{2}v_{6}v_{3}$ and if $v_{2}v_{6}$ lies in
$E(G)$ then the hole is contained in the 5-cycle
$v_{1}v_{5}v_{2}v_{6}v_{3}v_{1}$, a contradiction.
Figure 18. Edges $v_{3}v_{6},v_{4}v_{6}$ in $G$ lead to a contradiction.
We have shown that no $FF$ edge is allowed. Suppose now that $G$ contains a
face, described by the vertices $v_{1},v_{2}$ and $v_{3}$. Since there are no
$FF$ edges, all edges $v_{1}v_{2},v_{2}v_{3}$ and $v_{1}v_{3}$ lie on the
boundary of the hole. Moreover, since they form a face of the graph, they
cannot form a 3-cycle path in the boundary of the hole. Only 3 edges of the
boundary cycle are left to be determined, so we may assume that the path
$v_{1}v_{2}v_{3}$ lies on the boundary. Therefore, without loss of generality,
there exists a vertex $v_{4}$ on the boundary that connects the two paths,
$v_{1}v_{2}v_{3}$ and $v_{1}v_{3}$, so we obtain the 5-path
$v_{1}v_{3}v_{4}v_{1}v_{2}v_{3}$. But this implies that the remaining edge of
the 6-hole is $v_{1}v_{3}$, which would break the simplicity of the graph.
Hence the graph contains no faces and the proof is complete.
Figure 19. The uncontractible graph $G^{2}_{3}$.
∎
## 5\. The irreducibles
We show that an irreducible $(3,6)$-tight ${\mathcal{P}}$-graph is
uncontractible. Thus, if a graph $G$ in ${\mathfrak{P}}_{k}$, for $k=1,2$ or
$3$, has a contractible edge $e$ (so that $G/e$ is a simple graph) then there
exists a contractible edge $f$, which need not be the edge $e$, such that the
contracted graph is simple and satisfies the sparsity condition for membership
in ${\mathfrak{P}}_{k}$.
Recall that Lemma 3.5 identifies the obstacles to the preservation of
$(3,6)$-sparsity when contracting a contractible edge of
$G\in{\mathfrak{P}}_{k}$, namely that the edge lies on the boundary of a
subgraph of $G$ which is in ${\mathfrak{P}}_{l}$ for some $l\leq k$. For $k=1$
this boundary corresponds to a directed 6-cycle $c$ and we also refer to it in
subsequent proofs as a _critical 6-cycle_. Likewise for $k=2$ or $k=3$ the
edge $e$ lies on the boundary of one of the holes of a subgraph
$G\in{\mathfrak{P}}_{l}$ and we refer to the associated cycle as a _critical
5-cycle_ or _critical 4-cycle_.
###### Proposition 5.1.
Let $G\in{\mathfrak{P}}_{1}$ be irreducible. Then $G$ is uncontractible.
###### Proof.
Suppose that $G$ is irreducible with a contractible edge $e=xy$. By Lemma 3.5
there is a critical 6-cycle $c$, containing $e$, which is the boundary of a
subgraph $G_{1}\in{\mathfrak{P}}_{1}$. Since $c$ properly contains the hole of
$G$ this contradicts Lemma 4.2, completing the proof. ∎
###### Proposition 5.2.
Let $G$ be an irreducible graph in ${\mathfrak{P}}_{k}$, for $k=2$ or $3$.
Then $G$ is uncontractible.
###### Proof.
Suppose that $G$ is irreducible and $e$ is a contractible $FF$ edge in $G$. By
Lemma 3.5 there is a decomposition $G=G_{1}\cup A$ with $e\in\partial G_{1},$
and $G_{1}\in{\mathfrak{P}}_{l},$ for some $l\leq k$.
Figure 20. A ${\mathcal{P}}$-diagram for a critical 6-cycle for $e$.
_Case $k=2,l=1$._ Figure 20 illustrates the planar 6-cycle boundary $c$ of
$G_{1}$ and we assume it includes the contractible edge $e$ and that it
contains the planar 4- and 5-cycle boundaries of the two holes of $G$. Since
$G$ is simple $c$ has 6 distinct vertices.
For the first part of the proof we show that $G$ contains $vv_{1}$, perhaps
after relabelling $v_{1},v_{2}$, that $yvv_{1}v_{2}v_{3}y$ is the boundary of
the 5-cycle hole, and that $xvv_{1}wx$ is the boundary of the 4-cycle hole.
Note that $G_{1}$ is a contractible graph, for otherwise, by the previous
section, $G_{1}=G_{3}^{2}$, with 3 vertices. By Proposition 5.1 $G_{1}$ is
reducible and so there is an $FF$ edge $h$ with
$G_{1}/h\in{\mathfrak{P}}_{1}$. If $h$ lies on a critical 6-cycle $c^{\prime}$
in $G$ then it necessarily lies on a critical 6-cycle in $G_{1}$. This is
because the subpath of $c^{\prime}$ which is interior to $c$ must have the
same length as one of the boundary paths of $c$ between the corresponding
vertices. (Otherwise the 6-cycle hole is contained in a planar cycle of length
at most 5.) Thus, since $G$ is irreducible, $h$ must lie on a nonfacial
3-cycle in $G$ with some edges that are internal to $c$. To avoid sparsity
violation there must be 2 such edges, say $h_{1},h_{2}$. Moreover, since $h$
is a contractible edge in $G_{1}$ the edges $h_{1},h_{2}$ form a diameter of
the 6-cycle $c$. This diameter together with subpaths of $c$, yields two
planar 5-cycles which contain the holes of $G$. Considering the 5-cycle hole,
Lemma 4.2 implies that, perhaps after relabelling, the pair $h_{1},h_{2}$ is
equal to the pair $yv,vv_{1}$ or to a pair $wu,uv_{3}$ for some vertex $u\neq v$ interior to $c$. In the first case $yvv_{1}v_{2}v_{3}y$ is the boundary of the
5-cycle hole and, by a further application of Lemma 4.2, $xvv_{1}wx$ is the
boundary of the 4-cycle hole. The second case cannot occur, since one of the
edges $xv,yv$ must be an $FF$ edge, and one can see that it does not lie on a
nonfacial 3-cycle or a critical 4-, 5- or 6-cycle.
For the next part of the proof we show that $G_{1}$ has no interior vertices.
Let $u$ be an interior vertex of $G_{1}$ and let $f$ be one of its incident
$FF$ edges. Then since $G$ is irreducible, by the hole inclusion lemma, Lemma
4.2, $f$ does not lie on a critical 6-cycle. Also if $f$ lies on a nonfacial
3-cycle then by Lemma 4.1 it lies on a nonplanar nonfacial 3-cycle. It follows
from the $(3,6)$-sparsity of $G$ that $\deg u\geq 6$, and so there are at least
3 distinct nonplanar nonfacial 3-cycles through $u$. However this implies that
every hole of $G$ is contained in a planar 4-cycle, a contradiction.
Figure 21. $G_{1}$ has no interior vertex and $z=v_{1}$.
Since $z$ is not an interior vertex of $G_{1}$ it is equal to $v_{1}$ (see
Figure 20). By Lemma 4.1(i) we have $\deg(v_{2})\geq 4$ and $\deg(v_{3})\geq
4$. Since $G_{1}$ is $(3,6)$-tight it follows that both vertices have degree 4
and that $G$ must have the structure indicated in Figure 21. In particular,
$v_{3}w$ does not lie on a nonfacial 3-cycle or a critical cycle, and so the
contraction $G/v_{3}w$ shows that $G$ is reducible, a contradiction.
_Case $k=2,l=2$._ We argue by contradiction and assume that $G$ is irreducible
and $e$ is a contractible $FF$ edge in $G$ which, by Lemma 3.5, lies on the
boundary of the proper subgraph $G_{1}\in{\mathfrak{P}}_{2}$. Each of the two
holes of $G_{1}$ must contain a hole of $G$, with boundary cycles of the same
length. By Lemma 4.2 this is a contradiction.
_Case $k=3,l=2$._ This case follows similarly. ∎
## 6\. Constructibility and 3-rigidity
Combining results of the previous sections we obtain the following
construction theorem and the proof of Theorem 1.1.
###### Theorem 6.1.
Let $G$ be a simple (3,6)-tight graph which is embeddable in the real
projective plane ${\mathcal{P}}$. Then $G$ is constructible by a finite
sequence of planar vertex splitting moves from at least one of the eight
${\mathcal{P}}$-graphs,
$G_{3}^{2},G_{4}^{3},G_{5}^{1},G_{6,\alpha}^{1},G_{6,\beta}^{1},G_{6,\alpha}^{2},G_{6,\beta}^{2},G_{7}^{0}$.
###### Proof.
As we have observed at the beginning of Section 4 it is evident that $G$ can
be reduced to an irreducible (3,6)-tight ${\mathcal{P}}$-graph, $H$ say, by a
sequence of planar edge contraction moves. By the results of Section 5 the
irreducible graph $H$ is uncontractible, and so, by the results of Section 4,
it is equal to one of the eight uncontractible ${\mathcal{P}}$-graphs. Since a
planar edge-contraction move is the inverse of a planar vertex splitting move
the proof is complete. ∎
###### Proof of Theorem 1.1.
Let $G$ be the graph of a partial triangulation of the real projective plane.
If $G$ is minimally 3-rigid then it is well-known that $G$ is necessarily
$(3,6)$-tight [11].
Suppose on the other hand that $G$ is $(3,6)$-tight. Then, by Theorem 6.1 the
graph $G$ is constructible by planar vertex splitting moves from one of the
eight uncontractible ${\mathcal{P}}$-graphs, each of which has fewer than $8$
vertices. It is well-known that all $(3,6)$-tight graphs with fewer than $8$
vertices are minimally 3-rigid. Since vertex splitting preserves minimal
3-rigidity (Whiteley [14]) it follows that $G$ is minimally 3-rigid. ∎
Acknowledgements. This research was supported by the EPSRC grant EP/P01108X/1
for the project _Infinite bond-node frameworks_ and by a visit to the Erwin
Schroedinger Institute in September 2018 in connection with the workshop on
_Rigidity and Flexibility of Geometric Structures_.
## References
* [1] D. W. Barnette, Generating the triangulations of the projective plane, J. Comb. Theory 33 (1982), 222-230.
* [2] D. W. Barnette and A. Edelson, All orientable 2-manifolds have finitely many minimal triangulations, Israel. J. Math., 62 (1988), 90-98.
* [3] D.W. Barnette, and A.L. Edelson, All 2-manifolds have finitely many minimal triangulations, Israel J. Math., 67 (1989), 123-128.
* [4] A. Cauchy, Sur les polygones et polyèdres. Second Mémoir. J École Polytechn. 9 (1813) 87-99; Oeuvres. T. 1. Paris 1905, pp. 26-38.
* [5] J. Cruickshank, D. Kitson and S.C. Power, The generic rigidity of triangulated spheres with blocks and holes, J. Combin. Theory Ser. B 122 (2017), 550-577.
* [6] J. Cruickshank, D. Kitson and S.C. Power, The rigidity of a partially triangulated torus, Proc. London Math. Soc., 2018, https://doi.org/10.1112/plms.12215
* [7] W. Finbow-Singh and W. Whiteley, Isostatic block and hole frameworks, SIAM J. Discrete Math. 27 (2013) 991-1020.
* [8] W. Finbow-Singh, E. Ross and W. Whiteley, The rigidity of spherical frameworks: Swapping blocks and holes in spherical frameworks, SIAM J. Discrete Math. 26 (2012), 280-304.
* [9] N. D. Gilbert and T. Porter, Knots and Surfaces, Oxford University Press, 1994.
* [10] G. Grassegar, personal communication.
* [11] J. Graver, B. Servatius and H. Servatius, Combinatorial rigidity, Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 1993.
* [12] H. Gluck, Almost all simply connected closed surfaces are rigid, in Geometric Topology, Lecture Notes in Math., no. 438, Springer-Verlag, Berlin, 1975, pp. 225-239.
* [13] T. Jordan and S. Tanigawa, Global rigidity of triangulations with braces, J. of Comb. Theory, Ser. B, 136 (2019), 249-288.
* [14] W. Whiteley, Vertex splitting in isostatic frameworks, Structural Topology, 16 (1990), 23-30.
# ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection
Mohammadreza Salehi,1 Atrin Arya,1 Barbod Pajoum,1 Mohammad Otoofi,1
Amirreza Shaeiri,1 Mohammad Hossein Rohban,1 Hamid R. Rabiee1
###### Abstract
Autoencoders (AE) have recently been widely employed to approach the novelty
detection problem. Trained only on the normal data, the AE is expected to
reconstruct the normal data effectively while failing to regenerate the
anomalous data. Based on this assumption, one could utilize the AE for novelty
detection. However, it is known that this assumption does not always hold.
More specifically, such an AE can often perfectly reconstruct the anomalous
data as well, due to modeling of low-level and generic features in the input.
To address this problem, we propose a novel training algorithm for the AE that
facilitates learning of more semantically meaningful features. For this
purpose, we exploit the fact that adversarial robustness promotes learning of
meaningful features. Therefore, we force the AE to learn such features by
making its bottleneck layer more stable against adversarial perturbations.
This idea is general and can be applied to other autoencoder based approaches
as well. We show that despite using a much simpler architecture in comparison
to the prior methods, the proposed AE outperforms or is competitive to state-
of-the-art on four benchmark datasets and two medical datasets.
## Introduction
Figure 1: Unlike DAE, ARAE that is trained on the normal class, which is the
digit $8$, reconstructs a normal instance when it is given an anomalous digit,
from the class $1$. The first row shows the input images. The second and third
rows show the DAE and ARAE reconstructions of the corresponding inputs,
respectively. ARAE is trained based on bounded $\ell_{\infty}$, $\ell_{2}$,
rotation, and translation perturbations.
In many real-world problems, it is easy to gather normal data from the
operating behavior of a system. However, collecting data from the same system
in situations where it malfunctions or is being used clumsily may be difficult
or even impossible. For instance, in a surveillance camera that captures daily
activity in an environment, almost all frames are related to the normal
behavior. This means that data associated with the anomalous behavior is
difficult to obtain from such cameras. Anomaly/novelty detection refers to the
set of solutions for such settings.
The key point in the definition of anomaly detection is the outlier notion. In
the literature, an outlier is defined as a data point that deviates from the
bulk of the remaining data (Hawkins 1980; Chalapathy and Chawla 2019).
Assuming that the normal data is generated by a distribution, the goal is to
detect whether a new unseen observation is drawn from this distribution or
not. In prior work, AE and Generative Adversarial Network (GAN) were
extensively applied for novelty detection (Sabokrou et al. 2018; Perera,
Nallapati, and Xiang 2019; Schlegl et al. 2017; Akcay, Atapour-Abarghouei, and
Breckon 2018).
In GAN-based approaches, one tries to train a model that could adversarially
generate realistic images from the normal class. This means that if the model
fails to generate a given input image, the input would probably be an
anomalous one. However, GAN-based approaches face some challenges during the
training. These include mode collapse that happens when the generator maps
several inputs to a single image in the output space. In GAN, complete mode
collapse is rare, while a partial collapse occurs more frequently (Goodfellow
2016; Kodali et al. 2017). Further challenges in GAN training include high
sensitivity to the choice of hyperparameters, non-convergence, parameter
oscillation, and non-reproducible results due to unstable training (Martin and
Lon 2017; Salimans et al. 2016).
On the other hand, AE is more convenient to train and gives results that are
easier to reproduce. Therefore, we propose our method based on AE-based
approaches in this paper. An AE, which has learned features that are mostly
unique to the normal class, could reconstruct the normal data perfectly, while
when given anomalous data, it either reconstructs a corrupted or a normal
output; in the former case, the anomalous input is likely to have disjoint
features compared to the normal class, while in the latter, the input may
resemble normal data in some aspects. Note that in both cases, unlike for
the normal data, the reconstruction Mean Squared Error (MSE) is high for the
anomalous data. This means that for such an AE, we could threshold the
reconstruction loss to distinguish the normal vs. anomalous data. One could
alternatively leverage a discriminator that is applied to the reconstructed
image to distinguish between the anomalous and normal data (Sabokrou et al.
2018; Larsen et al. 2015). In any case, as mentioned, an important premise for
the AE to work is that it learns mostly unique features to the normal class.
We call such features “semantically meaningful” or “robust”, contrasted with
generic low level features that are subject to change in presence of noise, in
the rest of the paper.
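As a concrete illustration of this thresholding scheme, the following minimal sketch scores test samples by their reconstruction MSE; the trained `autoencoder` model, array shapes, and the threshold choice are illustrative assumptions rather than details fixed by any of the methods discussed here.

```python
import numpy as np

def anomaly_scores(autoencoder, x):
    """Reconstruction-error anomaly score: mean squared error per sample.

    `autoencoder` is any trained model mapping inputs to reconstructions;
    `x` has shape (n_samples, ...) with pixel values in [0, 1].
    """
    recon = autoencoder.predict(x)
    # Average the squared error over all non-batch dimensions.
    return np.mean((x - recon) ** 2, axis=tuple(range(1, x.ndim)))

# A sample is flagged as anomalous when its score exceeds a threshold,
# chosen e.g. on a validation set; 'threshold' here is a placeholder.
# is_anomalous = anomaly_scores(autoencoder, x_test) > threshold
```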
A common problem in using AE for novelty detection is its generalization
ability to reconstruct some anomaly inputs, when they share common features
with the normal class (Gong et al. 2019; Zong et al. 2018). Although this
generalization property is useful in other contexts, such as restoration (Mao,
Shen, and Yang 2016), it is considered as a drawback in novelty detection. In
other papers (Hasan et al. 2016; Zhao et al. 2017; Sultani, Chen, and Shah
2018), the main underlying assumption behind the AE-based approaches is that
the reconstruction error is high when the model is given an anomalous data,
which as mentioned does not seem to be holding perfectly.
There are two reasons why the main underlying assumption in these methods does
not necessarily hold. First, the model behavior when facing anomalous data
is not observed and is not therefore predictable. Second, the learned latent
space may capture mostly the features that are in common between the normal
and anomalous data. When given the anomalous data, this would likely yield a
perfectly reconstructed anomalous data. To address these issues, we aimed for
a solution that learns an adversarially robust latent space, where the focus
is on learning unique or semantically meaningful features of the normal inputs
and their nuances. This could prevent the decoder from reconstructing the
anomalies.
It is shown in (Madry et al. 2017) that small imperceptible changes in the
input can easily fool a deep neural network classifier. AE’s are subject to
such attacks as well. This stems from the fact that a deep classifier or an AE
would likely learn low level or brittle non-robust features (Ilyas et al.
2019). Low level features could be exploited to reconstruct any given image
perfectly. Hence, the presence of such features seems to violate the main
underlying assumption of the earlier work for novelty detection that is based
on AE. Therefore, we propose to train an adversarially robust AE to overcome
this issue. In Figure 1, reconstructions from DAE and the proposed method are
shown. Here, the normal data is considered to be the number $8$ in the MNIST
dataset and the models are trained only on the normal category. As opposed to
the proposed ARAE, DAE generalizes and reconstructs the number $1$ perfectly.
This is not desired in the novelty detection problem. This means that the
latent space of DAE has learned features that are not necessarily meaningful.
To train a robust AE for the novelty detection task, a new objective function
based on adversarial attacks is proposed. The novel AE which is based on a
simple architecture, is evaluated on MNIST, Fashion-MNIST, COIL-100, CIFAR-10,
and two medical datasets. We will next review existing approaches in more
details, and then describe our proposed idea along with its evaluation. We
demonstrate that despite the simplicity of the underlying model, the proposed
model outperforms or stays competitive with state-of-the-art in novelty
detection. Moreover, we show that our method performs much better compared to
another state-of-the-art method in presence of adversarial examples, which is
more suitable for real-world applications.
## Related work
Figure 2: The training procedure of our method. $L_{latent}$ and $L_{rec.}$
are obtained using the MSE distance and used to form $L_{AE}$.
As explained earlier in the introduction, methods that are used in the
literature are classified into two main categories: (1) modeling the normal
behavior in the latent space; and (2) thresholding the AE reconstruction
error. Of course, a hybrid of these two approaches has also been considered in
the field.
DRAE (Zhou and Paffenroth 2017) takes the second approach, i.e., it is based
on the MSE distance between the AE output and its input. An underlying
assumption in this work is that the training data may contain abnormal
samples. Therefore, the method tries to identify these samples throughout the
training process. It finally uses only the reconstruction error in the test
time.
As an extension to the AE-based methods, in OCGAN (Perera, Nallapati, and
Xiang 2019), a model is introduced in which the AE is trained by using 4 GANs,
a classifier, and the “negative sample mining” technique. Here, both the
encoder and decoder of the AE are considered as generators in the GAN. At the
inference time, the method only uses MSE between the model output and input to
make a prediction. The authors attempted to force the encoder output
distribution to be approximately uniform. They also forced the decoder output
distribution to resemble the normal input distribution in the whole latent
domain. This is expected to result in a higher MSE distance between the
decoder output and input for the abnormal data. This method achieved state-of-
the-art results at the time of presentation.
(Abati et al. 2019) and (Sabokrou et al. 2018) are the other examples in the
AE-based approaches, except that in (Abati et al. 2019), additionally, the
probability distribution over the latent space was obtained for the normal
input data. Then, in the test time, the probability of a sample being normal,
which is called the “surprise score”, is added to the reconstruction error
before the thresholding happens. In (Sabokrou et al. 2018), there is a
possibility of using the discriminator output, which is a real number between
zero and one, as an alternative to the MSE distance in order to find the
anomaly score. This is done by considering the AE as the generator in the GAN
framework.
In (Pidhorskyi, Almohsen, and Doretto 2018), a GAN is initially used to obtain
the latent space, then the probability distribution of the normal class over
the latent space is considered to be as the multiplication of two marginal
distributions, which are learned empirically. (Ruff et al. 2018) (DSVDD) tries
to model the normal latent space with the presumption that all normal data can
be compressed into a hyper-sphere. This framework can be considered as a
combination of deep learning and classical models such as One-class SVM (Chen,
Zhou, and Huang 2001); it has the advantage of extracting more relevant
features from the training data than the classical approach, because the whole
network is trained in an end-to-end procedure.
the latent space. It is assumed that if the test data is normal, then a point
can be found in the latent space such that the corresponding image produced by
the generator is classified as real by the GAN discriminator.
## Method
Figure 3: Samples from the evaluation datasets. For the medical datasets, the
top row samples are anomalous and the bottom row samples are normal.
As we discussed earlier, the main problem of AE is its strong generalization
ability. We observe that DAE does not necessarily learn distinctive features
of the normal class. To remedy this problem, our approach is to force the AE
latent space implicitly to model only unique features of the normal class. To
make this happen, the framework for adversarial robustness, which is proposed
in (Madry et al. 2017; Ilyas et al. 2019), is adopted. We propose to
successively craft adversarial examples and then utilize them to train the AE.
Adversarial examples are considered as those irrelevant small changes in the
input that destabilize the latent encoding. We will next describe the details
of the proposed adversarial training in the following sections. The training
procedure is demonstrated in Figure 2.
### Adversarial Examples Crafting
In a semantically meaningful latent space, two highly perceptually similar
samples should share similar feature encodings. Therefore, searching for a
sample $X^{*}$ that is perceptually similar to a sample $X$, but has a distant
latent encoding from that of $X$, leads us to an adversarial sample. As
opposed to the normal sample $X$, the adversarial sample $X^{*}$ is very
likely to have a high reconstruction loss, thus it would be detected as
abnormal by the AE, despite being perceptually similar to a normal sample.
Therefore, based on this intuition, the following method is used to craft the
adversarial samples.
At the training epoch $i$, we craft a set of adversarial samples
$S^{i}_{(adv)}$ based on the initial training dataset $S$. For this purpose,
we slightly perturb each sample $X\in S$ to craft an adversarial sample
$X^{*}$ that has two properties: (1) $X^{*}$ is perceptually similar to $X$,
through controlling the $\ell_{\infty}$ distance of $X$ and $X^{*}$; (2)
the latent encoding of $X^{*}$ is as far as possible from that of $X$. This is
equivalent to solving the following optimization problem:
$\max_{\delta_{X}}L_{\text{latent}}\quad\mbox{s.t.}\quad{\|\delta_{X}\|}_{\infty}\leq\epsilon$ (1)
$L_{\text{latent}}=\|\mbox{Enc}(X+\delta_{X})-\mbox{Enc}(X)\|^{2}_{2}\ $ (2)
In this formulation, ${\|\ .\ \|}_{p}$ is the $\ell_{p}$-norm, $\epsilon$ is
the attack magnitude, and $X^{*}=X+\delta_{X}$ is the adversarial sample. We
solve this optimization problem for each sample $X\in S$ using the Projected
Gradient Descent (PGD) (Madry et al. 2017) method, to obtain $S^{i}_{(adv)}$.
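A minimal TensorFlow sketch of this crafting step is given below, assuming a Keras `encoder` model with inputs in $[0,1]$; the step size `alpha` and iteration count `steps` are illustrative assumptions, since the text only fixes the attack magnitude $\epsilon$.

```python
import tensorflow as tf

def pgd_latent_attack(encoder, x, eps=0.2, alpha=0.05, steps=10):
    """Craft X* maximizing ||Enc(X*) - Enc(X)||^2 within an l_inf ball (Eqs. 1-2).

    `alpha` and `steps` are illustrative choices; the paper specifies only
    the attack magnitude `eps`.
    """
    z_clean = encoder(x)                                   # clean latent codes
    x_adv = x + tf.random.uniform(tf.shape(x), -eps, eps)  # random start
    x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            latent_loss = tf.reduce_sum((encoder(x_adv) - z_clean) ** 2)
        grad = tape.gradient(latent_loss, x_adv)
        x_adv = x_adv + alpha * tf.sign(grad)  # ascent step on the latent loss
        # Project back onto the l_inf ball around x and the valid pixel range.
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```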
### Autoencoder Adversarial Training
To train the AE using the crafted dataset $S^{i}_{(adv)}$ in the previous
section, we propose the following loss function:
$L_{\text{AE}}=L_{\text{rec.}}+\gamma L_{\text{latent}}$ (3)
where $\gamma$ is a balancing hyperparameter, $L_{\text{latent}}$ refers to
the loss function that is introduced in Eq. 2 and $L_{\text{rec.}}$
corresponds to the following loss function:
$L_{\text{rec.}}=\|X-\mbox{Dec}(\mbox{Enc}(X^{*}))\|^{2}_{2}\ $ (4)
At each step, the AE is trained one epoch on the adversarially crafted samples
using this loss function. In the training procedure, the $L_{\text{rec.}}$
term forces the AE to reconstruct the adversarial samples properly, while the
$L_{\text{latent}}$ term forces the adversarial samples to have closer
representations to that of the corresponding normal samples in the latent
space. We observe that the encoder decreases $L_{\text{latent}}$ to a limited
extent by merely encoding the whole input space into a compact latent space.
Too compact latent space results in a high $L_{\text{rec.}}$, which is not
achievable when the network is trained using $L_{\text{AE}}$. A compact latent
space causes the latent encodings of anomalous data to be close to that of
normal data. Thus for any given input, the generated image is more likely to
be a normal sample. To summarize, the whole training procedure is trying to
solve the following saddle point problem (Wald 1945):
$\begin{gathered}\delta^{*}_{X}:=\operatorname*{arg\,max}_{\|\delta_{X}\|_{\infty}\leq\epsilon}L_{\text{latent}}(X,\delta_{X},W)\\ \min_{W}\operatorname{\mathbb{E}}_{X}\left[\gamma L_{\text{latent}}(X,\delta^{*}_{X},W)+L_{\text{rec.}}(X,\delta^{*}_{X},W)\right]\end{gathered}$ (5)
where $W$ denotes the AE weights. Note that it was shown that adversarial
training cannot be solved in a single shot by Stochastic Gradient Descent
(SGD); one should instead use optimization algorithms such as PGD. This relies
on Danskin's theorem to solve the inner maximization before the outer
minimization (Madry et al. 2017).
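To make the alternation concrete, here is a hedged sketch of one outer iteration of Eq. 5, reusing the `pgd_latent_attack` helper sketched earlier; the Adam optimizer and the use of means instead of sums (a constant rescaling of Eqs. 2 and 4) are our assumptions.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()  # optimizer choice is an assumption

def train_step(encoder, decoder, x, eps=0.2, gamma=0.1):
    """One outer step of Eq. 5: craft the attack, then descend on L_AE (Eq. 3)."""
    x_adv = pgd_latent_attack(encoder, x, eps=eps)       # inner maximization
    weights = encoder.trainable_variables + decoder.trainable_variables
    with tf.GradientTape() as tape:
        z, z_adv = encoder(x), encoder(x_adv)
        l_latent = tf.reduce_mean(tf.square(z_adv - z))          # Eq. 2
        # Eq. 4: reconstruct the *clean* input from the adversarial encoding.
        l_rec = tf.reduce_mean(tf.square(x - decoder(z_adv)))
        l_ae = l_rec + gamma * l_latent                          # Eq. 3
    optimizer.apply_gradients(zip(tape.gradient(l_ae, weights), weights))
    return l_ae
```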
## Experiments
In this section, we evaluate our method, which is denoted by ARAE, and compare
it with state-of-the-art on common benchmark datasets that are used for the
unsupervised novelty detection task. Moreover, we use two medical datasets to
evaluate our method in real-world settings. We show that even though our
method is based on a simple and efficient architecture, it performs
competitively or superior compared to state-of-the-art approaches.
Furthermore, we provide insights about the robustness of our method against
adversarial attacks. The results are based on several evaluation strategies
that are used in the literature. All results that are reported in this paper
are reproducible by our publicly available implementation in the Keras
framework (Chollet 2015): https://github.com/rohban-lab/Salehi_submitted_2020.
### Experimental Setup
#### Baselines
Baseline and state-of-the-art approaches like VAE (Kingma and Welling 2013),
OCSVM (Chen, Zhou, and Huang 2001), AnoGAN (Schlegl et al. 2017), DSVDD (Ruff
et al. 2018), MTQM (Wang, Sun, and Yu 2019), OCGAN (Perera, Nallapati, and
Xiang 2019), LSA (Abati et al. 2019), DAGMM (Zong et al. 2018), DSEBM (Zhai et
al. 2016), GPND (Pidhorskyi, Almohsen, and Doretto 2018), $l_{1}$ thresholding
(Soltanolkotabi, Candes et al. 2012), DPCP (Tsakiris and Vidal 2018),
OutlierPursuit (Xu, Caramanis, and Sanghavi 2010), ALOCC (Sabokrou et al.
2018), LOF (Breunig et al. 2000), and DRAE (Xia et al. 2015) are selected to
be compared with our method. Results of some of these methods were obtained
from (Perera, Nallapati, and Xiang 2019; Wang, Sun, and Yu 2019; Pidhorskyi,
Almohsen, and Doretto 2018).
#### Datasets
We evaluate our method on MNIST (LeCun, Cortes, and Burges 2010), Fashion-
MNIST (Xiao, Rasul, and Vollgraf 2017), COIL-100 (Nene, Nayar, and Murase
1996), CIFAR-10 (Krizhevsky 2009), Head CT - hemorrhage (Kitamura 2018), and
Brain MRI - Tumor (Chakrabarty 2019) datasets. Samples from each dataset are
shown in Figure 3. These datasets differ in size, image shape, complexity and
diversity. Next, we briefly introduce each of these datasets.
* •
MNIST: This dataset contains 70,000 $28\times 28$ grayscale handwritten digits
from 0 to 9.
* •
Fashion-MNIST: A dataset similar to MNIST with 70,000 $28\times 28$ grayscale
images of 10 fashion product categories.
* •
CIFAR-10: This dataset contains 60,000 $32\times 32$ color images of 10
categories.
* •
COIL-100: A dataset of 7200 color images of 100 different object classes. Each
class contains 72 images of one object captured in different poses. We
downscale the images of this dataset to the size $32\times 32$.
* •
Head CT - Hemorrhage: A dataset with 100 normal head CT slices and 100 others
with 4 different kinds of hemorrhage. Each slice comes from a different person
and the image size is $128\times 128$.
* •
Brain MRI - Tumor: A dataset with 253 brain MRI images. 155 of them contain
brain tumors and the remaining 98 are normal. The image size is $256\times 256$.
#### Protocols
To carry out the training-testing procedure, we need to define the data
partitions. For MNIST, Fashion-MNIST, and CIFAR-10, one class is considered as
the normal class and samples from the other classes are assumed to be
anomalous. For COIL-100, we randomly take $n$ classes as the normal classes,
where $n\in\\{1,4,7\\}$, and use the samples from the remaining classes as the
anomalous samples. For the mentioned dataset, this process is repeated 30
times and the results are averaged. For the medical datasets, the brain images
with no damage are considered as the normal class and the rest form the
anomalous class. To form the training and testing data, there are two
protocols that are commonly used in the framework of unsupervised novelty
detection (Pidhorskyi, Almohsen, and Doretto 2018; Perera, Nallapati, and Xiang
2019; Sabokrou et al. 2018), which are as follows:
* •
Protocol 1: The original training-testing splits of the dataset are merged,
shuffled, and $80\%$ of the normal class samples are used to train the model.
The remaining $20\%$ forms a specified portion (denoted as $\tau$) of the
testing data; the rest of the testing data is formed by randomly sampling from
the anomalous classes (see the split sketch after this list).
* •
Protocol 2: The original training-testing splits of the dataset are used to
train and test the model. The training is carried out using the normal samples
and the entire testing data is used for evaluation.
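The following sketch shows how Protocol 1 splits might be built; array inputs, the random seed, and the label convention (1 for anomalous) are assumptions. Protocol 2 needs no such construction, since it keeps the dataset's original splits.

```python
import numpy as np

def protocol_1_split(x_normal, x_anomalous, tau=0.5, seed=0):
    """Protocol 1: 80% of shuffled normal data for training; the held-out 20%
    forms a fraction `tau` of the test set, topped up with anomalous samples."""
    rng = np.random.default_rng(seed)
    x_normal = x_normal[rng.permutation(len(x_normal))]
    n_train = int(0.8 * len(x_normal))
    x_train, x_test_norm = x_normal[:n_train], x_normal[n_train:]
    # Number of anomalies so that normals make up a tau fraction of the test set.
    n_anom = int(len(x_test_norm) * (1 - tau) / tau)
    x_test_anom = x_anomalous[rng.choice(len(x_anomalous), n_anom, replace=False)]
    x_test = np.concatenate([x_test_norm, x_test_anom])
    y_test = np.concatenate([np.zeros(len(x_test_norm)), np.ones(n_anom)])
    return x_train, x_test, y_test  # y = 1 marks anomalous samples
```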
We compare our method to other approaches using Area Under the Curve (AUC) of
the Receiver Operating Characteristics (ROC) curve, the $F_{1}$ score, and the
False Positive Rate (FPR) at $99.5\%$ True Positive Rate (TPR). Here, we let
the positive class be the anomalous one unless otherwise specified.
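A sketch of these three metrics, using scikit-learn and treating anomalies as positive with higher scores meaning more anomalous, is given below; scanning the ROC thresholds for the best $F_{1}$ is our assumption about how the maximizing threshold is found.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score, roc_curve

def report_metrics(y_true, scores, tpr_target=0.995):
    """AUC of the ROC curve, best-threshold F1, and FPR at a fixed TPR.

    `y_true` is 1 for anomalous (positive) samples and `scores` are anomaly
    scores, e.g. reconstruction errors.
    """
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    # Best F1 over candidate thresholds (skip the inflated first threshold).
    best_f1 = max(f1_score(y_true, scores >= t) for t in thresholds[1:])
    fpr_at_tpr = fpr[np.searchsorted(tpr, tpr_target)]  # first TPR >= target
    return auc, best_f1, fpr_at_tpr
```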
#### Architecture and Hyperparameters
Our AE uses a 3-layer fully connected network with layer sizes of
$(512,256,128)$, following the input layer, to encode the input. A decoder,
whose architecture is mirroring that of the encoder, is used to reconstruct
the output. Each layer of the network is followed by a sigmoid activation.
This architecture is used for all the datasets except the medical ones and
CIFAR-10. For the medical datasets and CIFAR-10, we use a convolutional AE
which is explained in (Bergmann et al. 2019). For datasets with complex and
detailed images like COIL-100, Fashion-MNIST, CIFAR-10, and the medical
datasets, the hyperparameter $\epsilon$, which is the maximum perturbation
$\ell_{\infty}$ norm as defined in Eq. 5, is set to $0.05$, while for MNIST it
is set to $0.2$. The hyperparameter $\gamma$, defined in Eq. 5, is always set
to $0.1$.
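A sketch of the fully connected variant in Keras follows; the functional-API layout and naming are ours, and details such as weight initialization or the optimizer are not specified in the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_fc_arae(input_dim=784, sizes=(512, 256, 128)):
    """Fully connected encoder/decoder as described above; every layer
    uses a sigmoid activation."""
    inp = keras.Input(shape=(input_dim,))
    h = inp
    for s in sizes:                       # encoder: 512 -> 256 -> 128
        h = layers.Dense(s, activation="sigmoid")(h)
    encoder = keras.Model(inp, h, name="encoder")

    z = keras.Input(shape=(sizes[-1],))
    g = z
    for s in reversed(sizes[:-1]):        # decoder mirrors the encoder
        g = layers.Dense(s, activation="sigmoid")(g)
    g = layers.Dense(input_dim, activation="sigmoid")(g)
    decoder = keras.Model(z, g, name="decoder")
    return encoder, decoder
```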
### Results
Table 1: AUC values (in percentage) for the medical datasets. The standard deviations of the last 50 epochs’ AUCs are included for the Brain MRI - Tumor dataset.

Dataset | OCGAN | LSA | ARAE
---|---|---|---
Head CT - Hemorrhage | 51.2 | 81.6 | 84.8
Brain MRI - Tumor | 91.7 $\pm 3$ | 95.6 $\pm 1.4$ | 97.0 $\pm 0.5$
Table 2: AUC values (in percentage) on MNIST and FMNIST (Fashion-MNIST). The standard deviations of the last 50 epochs’ AUCs are included for our method on MNIST. The values were obtained for each class using protocol 2.

Dataset | Method | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---
MNIST | VAE | 98.5 | 99.7 | 94.3 | 91.6 | 94.5 | 92.9 | 97.7 | 97.5 | 86.4 | 96.7 | 95.0
OCSVM | 99.5 | 99.9 | 92.6 | 93.6 | 96.7 | 95.5 | 98.7 | 96.6 | 90.3 | 96.2 | 96.0
AnoGAN | 96.6 | 99.2 | 85.0 | 88.7 | 89.4 | 88.3 | 94.7 | 93.5 | 84.9 | 92.4 | 91.3
DSVDD | 98.0 | 99.7 | 91.7 | 91.9 | 94.9 | 88.5 | 98.3 | 94.6 | 93.9 | 96.5 | 94.8
MTQM | 99.5 | 99.8 | 95.3 | 96.3 | 96.6 | 96.2 | 99.2 | 96.9 | 95.5 | 97.7 | 97.3
OCGAN | 99.8 | 99.9 | 94.2 | 96.3 | 97.5 | 98.0 | 99.1 | 98.1 | 93.9 | 98.1 | 97.5
LSA | 99.3 | 99.9 | 95.9 | 96.6 | 95.6 | 96.4 | 99.4 | 98.0 | 95.3 | 98.1 | 97.5
ARAE | 99.8 | 99.9 | 96.0 | 97.2 | 97.0 | 97.4 | 99.5 | 96.9 | 92.4 | 98.5 | 97.5
| | $\pm 0.017$ | $\pm 0.003$ | $\pm 0.2$ | $\pm 0.17$ | $\pm 0.14$ | $\pm 0.1$ | $\pm 0.03$ | $\pm 0.1$ | $\pm 0.3$ | $\pm 0.04$ | $\pm 0.04$
FMNIST | VAE | 87.4 | 97.7 | 81.6 | 91.2 | 87.2 | 91.6 | 73.8 | 97.6 | 79.5 | 96.5 | 88.4
OCSVM | 91.9 | 99.0 | 89.4 | 94.2 | 90.7 | 91.8 | 83.4 | 98.8 | 90.3 | 98.2 | 92.8
DAGMM | 30.3 | 31.1 | 47.5 | 48.1 | 49.9 | 41.3 | 42.0 | 37.4 | 51.8 | 37.8 | 41.7
DSEBM | 89.1 | 56.0 | 86.1 | 90.3 | 88.4 | 85.9 | 78.2 | 98.1 | 86.5 | 96.7 | 85.5
MTQM | 92.2 | 95.8 | 89.9 | 93.0 | 92.2 | 89.4 | 84.4 | 98.0 | 94.5 | 98.3 | 92.8
LSA | 91.6 | 98.3 | 87.8 | 92.3 | 89.7 | 90.7 | 84.1 | 97.7 | 91.0 | 98.4 | 92.2
ARAE | 93.7 | 99.1 | 91.1 | 94.4 | 92.3 | 91.4 | 83.6 | 98.9 | 93.9 | 97.9 | 93.6
Table 3: AUC and $F_{1}$ values on the COIL-100 dataset. The values were obtained using protocol 1 for $n\in\\{1,4,7\\}$ and different $\tau$s, where $n$ and $\tau$ represent the number of normal classes and the testing data portion of the normal samples, respectively.

Parameters | Metric | OutlierPursuit | DPCP | $l_{1}$ thresholding | GPND | ARAE
---|---|---|---|---|---|---
$n=1$, $\tau=50\%$ | AUC | 0.908 | 0.900 | 0.991 | 0.968 | 0.998
| $F_{1}$ | 0.902 | 0.882 | 0.978 | 0.979 | 0.993
$n=4$, $\tau=75\%$ | AUC | 0.837 | 0.859 | 0.992 | 0.945 | 0.997
| $F_{1}$ | 0.686 | 0.684 | 0.941 | 0.960 | 0.973
$n=7$, $\tau=85\%$ | AUC | 0.822 | 0.804 | 0.991 | 0.919 | 0.993
| $F_{1}$ | 0.528 | 0.511 | 0.897 | 0.941 | 0.941
Table 4: Mean AUC values (in percentage) on CIFAR-10 using protocol 2.

Metric | OCSVM | OCGAN | LSA | ARAE
---|---|---|---|---
AUC | 67.8 | 73.3 | 73.1 | 71.7
We present our AUC results for MNIST and Fashion-MNIST in Table 2. The table
contains AUC values for each class as the normal class, which were achieved
using protocol 2. Moreover, we report our results on the COIL-100 dataset in
Table 3. This table contains AUC and $F_{1}$ values for $n\in\\{1,4,7\\}$,
where $n$ is the number of normal classes. We use protocol 1 for this dataset.
For each $n\in\\{1,4,7\\}$, the percentage of the normal samples in the
testing data ($\tau$) is defined in the table. The $F_{1}$ score is reported
for the threshold value that is maximizing it. As shown in Tables 2 and 3, we
achieve state-of-the-art results in all of these datasets while using a
simpler architecture compared to other state-of-the-art methods, such as
OCGAN, LSA, and GPND. Moreover, the results in Table 3 indicate that our
method performs well when having multiple classes as normal. It also shows
that the number of normal classes has little effect on our method’s performance.
We also report our mean AUC results for the CIFAR-10 dataset using protocol 2,
excluding the classes with AUC near 0.5 or below, in Table 4. Consider a
classifier that labels each input as normal with probability $p$. By varying
$p$ between 0 and 1, we can plot a ROC curve and compute its AUC. We observe
that this method achieves an AUC of 0.5, so improvements below or near 0.5 are
not meaningful (see (Zhu, Zeng, and Wang 2010) for more details).
Consequently, classes 1, 3, 5, 7, and 9, which had AUC values below 0.6, were
excluded. As shown in the table, we get competitive results compared to
other state-of-the-art approaches.
The AUC values of our method on the medical datasets are reported in Table 1.
We used $90\%$ of the normal data for training and the rest in addition to the
anomalous data were used to form the testing data. Our method clearly
outperforms other state-of-the-art approaches, which shows the effectiveness
of our method on medical real-world tasks, where the dataset might be small
and complex.
To show the stability of our training procedure, we compute the standard
deviation of AUCs for the last 50 epochs of training. These values are
reported for our method on MNIST in Table 2 and for all the methods on Brain
MRI - Tumor in Table 1. From these tables, one can see the high stability of
our training procedure. Moreover, it is apparent that our method is much more
stable than other methods on the Brain MRI - Tumor dataset.
We also evaluate our method using the $F_{1}$ score on the MNIST dataset. In
this experiment, the normal class is the positive one. We use protocol 1 and
vary $\tau$ between $50\%$ and $90\%$. We use $20\%$ of the training samples
and sample from the anomalous classes to form a validation set with the same
normal samples percentage as the testing data. This validation set is used to
find the threshold that maximizes the $F_{1}$ score. As shown in Figure 4, we
achieve slightly lower $F_{1}$ scores compared to those of GPND. However, this
figure shows the low impact of the percentage of anomalous data on our method
performance.
Furthermore, FPR values at $99.5\%$ TPR on the MNIST dataset using protocol 2,
for ARAE and LSA are compared in Figure 5. One can see that despite having
equal AUCs, ARAE has lower FPR values compared to LSA and that it can reduce
the FPR value by more than $50\%$ in some cases.
#### Adversarial Robustness
To show the robustness of our model against adversarial attacks, we use PGD
(Madry et al. 2017) with the $\epsilon$ parameter set to $0.05$ and $0.1$ on
the reconstruction loss, to craft adversarial samples from the normal samples
of the testing data. The normal samples of the testing data are replaced by
the adversarial ones. The AUC results for this testing data are reported in
Table 5 on the class 8 of the MNIST dataset, using protocol 2. As shown in the
table, our method is significantly more robust against adversarial samples
compared to LSA.
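For this evaluation, the attack maximizes the reconstruction loss itself rather than the latent loss; a minimal sketch is below, with the step size and iteration count again being illustrative assumptions.

```python
import tensorflow as tf

def pgd_reconstruction_attack(encoder, decoder, x, eps=0.05, alpha=0.01,
                              steps=10):
    """PGD that maximizes the reconstruction MSE of the perturbed input,
    used here to attack the normal test samples."""
    x_adv = tf.identity(x)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            recon = decoder(encoder(x_adv))
            loss = tf.reduce_sum(tf.square(x_adv - recon))
        grad = tape.gradient(loss, x_adv)
        # Ascent step, then projection onto the l_inf ball and pixel range.
        x_adv = tf.clip_by_value(x_adv + alpha * tf.sign(grad), x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```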
### Ablation
Table 5: AUC values for the attacked models. The values are reported for class 8 of MNIST using protocol 2.

Parameters | LSA | ARAE
---|---|---
$\epsilon=0.05$ | 0.56 | 0.86
$\epsilon=0.1$ | 0.17 | 0.76
Table 6: AUC values (in percentage) on MNIST using protocol 2. The results are reported for both one class and two classes as the normal data. Results for other variants of our method are reported.

Method | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Mean
---|---|---|---|---|---|---|---|---|---|---|---
DAE | 99.6 | 99.9 | 93.9 | 93.5 | 96.4 | 94.3 | 99.0 | 95.8 | 89.1 | 97.5 | 95.9
ARAE | 99.8 | 99.9 | 96.0 | 97.2 | 97.0 | 97.4 | 99.5 | 96.9 | 92.4 | 98.5 | 97.5
ARAE-A | 99.1 | 99.7 | 95.2 | 96.7 | 97.7 | 98.3 | 99.2 | 97.1 | 95.6 | 96.8 | 97.5
ARAE-R | 99.3 | 99.9 | 93.2 | 92.5 | 96.2 | 96.6 | 99.3 | 97.3 | 91.2 | 98.2 | 96.4
Method | (4, 5) | (0, 7) | (1, 3) | (2, 6) | (8, 9) | (2, 9) | (0, 8) | (0, 1) | (2, 3) | (4, 9) | Mean
DAE | 88.8 | 94.1 | 98.2 | 90.3 | 86.8 | 91.8 | 91.1 | 99.7 | 90.0 | 97.3 | 92.8
ARAE | 91.7 | 96.0 | 99.1 | 94.7 | 91.4 | 94.5 | 93.1 | 99.7 | 91.2 | 97.3 | 94.9
ARAE-A | 95.0 | 97.1 | 97.4 | 95.7 | 91.5 | 92.6 | 94.3 | 98.8 | 94.3 | 97.4 | 95.4
Figure 4: $F_{1}$ scores on the MNIST dataset using protocol 1, by taking the
normal class as the positive one. Figure 5: FPR at $99.5\%$ TPR on the MNIST
dataset using protocol 2.
We train a DAE, as a baseline method, with a random uniform noise between 0
and $0.1$ using the same network as the one that is used in our approach.
Furthermore, in addition to the $\ell_{\infty}$ perturbation set, we consider
$\ell_{2}$, and also rotation and translation perturbation sets. We need to
solve a similar optimization to the one in Eq. 5, with the only difference
being the perturbation sets (Engstrom et al. 2017). Specifically, we solve
this optimization problem on $\ell_{2}$-bounded perturbations for each sample
$X\in S$ through PGD (Madry et al. 2017) again. We next solve this
optimization on rotation and translation perturbation sets for each sample
$X\in S$ by quantizing the parameter space, and performing a grid search on
the quantized space and choosing the one with the highest latent loss. This is
the most reliable approach for solving rotation and translation perturbations
that is mentioned in (Engstrom et al. 2017). Following the approach in (Tramèr
and Boneh 2019), we use the union of these perturbation sets to make the
attack even stronger, so as to avoid, as much as possible, the brittle
features that the model might use (Ilyas et al. 2019). We present our results
on MNIST using protocol 2, in
Table 6. This variant of our method is denoted as ARAE-A. Notably, in this
variant, the AUC on the most challenging MNIST class, $8$, improves further,
from $92.4$ with the $\ell_{\infty}$ attack alone to $95.6$ with the union of
the mentioned attacks. Despite this improvement, the average AUC is still the
same as in the original ARAE method.
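A hedged sketch of the grid-search attack over the quantized rotation/translation space is shown below, assuming `encoder` accepts a batch of 2-D images; the specific grid of angles and shifts is our choice, not one given in the text.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def spatial_latent_attack(encoder, x, angles=(-30, -15, 0, 15, 30),
                          shifts=(-3, 0, 3)):
    """Return the rotated/translated copy of image `x` (shape (H, W)) with
    the largest latent loss, via exhaustive search on a quantized grid."""
    z_clean = np.asarray(encoder(x[None]))[0]
    best_x, best_loss = x, -np.inf
    for angle in angles:
        for dy in shifts:
            for dx in shifts:
                x_t = shift(rotate(x, angle, reshape=False, order=1),
                            (dy, dx), order=1)
                z_t = np.asarray(encoder(x_t[None]))[0]
                loss = float(np.sum((z_t - z_clean) ** 2))
                if loss > best_loss:
                    best_x, best_loss = x_t, loss
    return best_x
```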
Instead of designing the attack based on the latent layer, one could directly
use the reconstruction loss to do so. We denote this variant as ARAE-R.
However, we observed that a model that is robust to the latter attack yields a
lower improvement compared to ARAE (see Table 6). To justify this effect, we
note that an AE model that is robust based on the latter attack does not
necessarily have a stable latent layer. This stems from the fact that the
encoder and decoder are almost inverse functions by construction, and a
destabilization of the latent encoding by an attack could be repressed by the
decoder. In summary, an attack based on the latent layer is stronger than an
attack based on the reconstruction error, and hence the former promotes more
robust features.
We also report AUC values on MNIST by taking pairs of classes as the normal
ones, in Table 6. These values show the improvement yielded by both of the
ARAE variants. Note that when having multiple classes as normal, one should
tune the $\epsilon$ parameter based on the diversity and complexity of the
training data.
## Visualization
In the experiments section, we showed that our method improves the AE
performance and surpasses other state-of-the-art methods. In order to
demonstrate the reasons behind this improvement, we show that ARAE learns more
semantically meaningful features than DAE by interpreting these two
approaches.
Figure 6: ARAE and DAE reconstructions and saliency maps for ten random inputs
from MNIST and Fashion-MNIST datasets. (Columns per dataset: Input, DAE rec.,
DAE map, ARAE rec., ARAE map.)
Figure 7: Local minima of inputs of ARAE and DAE, obtained by initializing the
input with random noise and optimizing the reconstruction loss with respect to
the input. ARAE produces more realistic $8$ digits compared to DAE.
### Interpreting with Occlusion-1
In this method, we measure the effect of each part of the input on the output,
by occluding it and observing the difference in the output. Finally, we
visualize these differences as a saliency map (Zeiler and Fergus 2014; Ancona
et al. 2017). In the occlusion-1 method, we iteratively set each pixel to
black and then observe the reconstruction error. If it increases, we set the
corresponding pixel in the saliency map to blue, and otherwise, we set it to
red. The intensity of a pixel is determined by the amount that the
reconstruction error has changed. We compare ARAE and DAE reconstructions and
saliency maps on MNIST and Fashion-MNIST datasets, in Figure 6.
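The following sketch implements occlusion-1 for a single grayscale image, assuming the model accepts `(N, H, W)` inputs; batching all occluded copies into one `predict` call is an efficiency choice of ours.

```python
import numpy as np

def occlusion1_saliency(autoencoder, x):
    """Occlusion-1 saliency for one (H, W) image: the signed change in
    reconstruction error when each pixel is blacked out in turn."""
    h, w = x.shape
    base = float(np.mean((x - np.asarray(autoencoder.predict(x[None]))[0]) ** 2))
    batch = np.repeat(x[None], h * w, axis=0)
    rows, cols = np.unravel_index(np.arange(h * w), (h, w))
    batch[np.arange(h * w), rows, cols] = 0.0      # one pixel off per copy
    recon = np.asarray(autoencoder.predict(batch))
    errs = np.mean((batch - recon) ** 2, axis=(1, 2))
    # Positive entries ("blue") mean occluding the pixel hurt reconstruction.
    return (errs - base).reshape(h, w)
```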
For the MNIST dataset, the model has been trained on the class $8$ and noisy
inputs are obtained by adding a uniform noise in the interval $[0,0.4]$. The
outputs and saliency maps of ARAE and DAE are shown for five random inputs in
the normal class. It is evident that DAE focuses too much on the random noise
and has a poorer reconstruction than our model.
Similar to MNIST, we carry out the occlusion-1 method on the dress class of
the Fashion-MNIST dataset. For Fashion-MNIST, it is also obvious that random
noise has a larger effect on the output of DAE. Furthermore, DAE
reconstructions are less accurate than those of ARAE. These observations are
consistent with the known fact that adversarial robustness can increase the
model interpretability (Tsipras et al. 2018) by avoiding the learning of
brittle features (Ilyas et al. 2019).
### Local Minima Visualization
We expect from an ideal model that is trained on the MNIST class $8$, to have
a lower reconstruction error as the input gets more similar to a typical $8$.
With this motivation, we start from random noise and iteratively modify it in
order to minimize the reconstruction error using gradient descent. The results
achieved by our model and DAE are shown in Figure 7. This figure demonstrates
that inputs that lead to local minima in ARAE are much more similar to $8$,
compared to DAE.
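A sketch of this input-space descent is given below; the learning rate, step count, and plain SGD are illustrative assumptions.

```python
import tensorflow as tf

def descend_to_local_minimum(encoder, decoder, shape, steps=500, lr=0.1, seed=0):
    """Start from uniform noise and minimize the reconstruction error with
    respect to the *input*, keeping the trained AE fixed."""
    tf.random.set_seed(seed)
    x = tf.Variable(tf.random.uniform((1,) + shape))
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(x - decoder(encoder(x))))
        opt.apply_gradients([(tape.gradient(loss, x), x)])
        x.assign(tf.clip_by_value(x, 0.0, 1.0))   # keep a valid image
    return x.numpy()[0]
```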
## Conclusions
We introduced a variant of AE based on the robust adversarial training for
novelty detection. This is motivated by the goal of learning representations
of the input that are almost robust to small irrelevant adversarial changes in
the input. A series of novelty detection experiments were performed to
evaluate the proposed AE. Our experimental results show that the proposed ARAE
model achieves state-of-the-art performance on four publicly available benchmark
datasets and two real-world medical datasets. This suggests that the benefits
of adversarial robustness indeed go beyond security. Furthermore, by
performing an ablation study, we discussed the effect of multiple perturbation
sets on the model. Future work inspired by this observation could investigate
the effect of other types of adversarial attacks in the proposed framework.
## References
* Abati et al. (2019) Abati, D.; Porrello, A.; Calderara, S.; and Cucchiara, R. 2019. Latent space autoregression for novelty detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 481–490.
* Akcay, Atapour-Abarghouei, and Breckon (2018) Akcay, S.; Atapour-Abarghouei, A.; and Breckon, T. P. 2018. Ganomaly: Semi-supervised anomaly detection via adversarial training. In _Asian Conference on Computer Vision_ , 622–637. Springer.
* Ancona et al. (2017) Ancona, M.; Ceolini, E.; Öztireli, C.; and Gross, M. 2017. Towards better understanding of gradient-based attribution methods for deep neural networks. _arXiv preprint arXiv:1711.06104_ .
* Bergmann et al. (2019) Bergmann, P.; Löwe, S.; Fauser, M.; Sattlegger, D.; and Steger, C. 2019. Improving Unsupervised Defect Segmentation by Applying Structural Similarity To Autoencoders. _International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP)_ 5: 372–380.
* Breunig et al. (2000) Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; and Sander, J. 2000. LOF: identifying density-based local outliers. In _Proceedings of the 2000 ACM SIGMOD international conference on Management of data_ , 93–104.
* Chakrabarty (2019) Chakrabarty, N. 2019. Brain MRI Images for Brain Tumor Detection. https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection.
* Chalapathy and Chawla (2019) Chalapathy, R.; and Chawla, S. 2019. Deep learning for anomaly detection: A survey. _arXiv preprint arXiv:1901.03407_ .
* Chen, Zhou, and Huang (2001) Chen, Y.; Zhou, X. S.; and Huang, T. S. 2001. One-class SVM for learning in image retrieval. In _Proceedings 2001 International Conference on Image Processing (Cat. No. 01CH37205)_ , volume 1, 34–37. IEEE.
* Chollet (2015) Chollet, F. 2015. keras. https://github.com/fchollet/keras.
* Engstrom et al. (2017) Engstrom, L.; Tran, B.; Tsipras, D.; Schmidt, L.; and Madry, A. 2017. Exploring the landscape of spatial robustness. _arXiv preprint arXiv:1712.02779_ .
* Gong et al. (2019) Gong, D.; Liu, L.; Le, V.; Saha, B.; Mansour, M. R.; Venkatesh, S.; and Hengel, A. v. d. 2019. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In _Proceedings of the IEEE International Conference on Computer Vision_ , 1705–1714.
* Goodfellow (2016) Goodfellow, I. 2016. NIPS 2016 tutorial: Generative adversarial networks. _arXiv preprint arXiv:1701.00160_ .
* Hasan et al. (2016) Hasan, M.; Choi, J.; Neumann, J.; Roy-Chowdhury, A. K.; and Davis, L. S. 2016. Learning temporal regularity in video sequences. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 733–742.
* Hawkins (1980) Hawkins, D. M. 1980. _Identification of outliers_ , volume 11. Springer.
* Ilyas et al. (2019) Ilyas, A.; Santurkar, S.; Tsipras, D.; Engstrom, L.; Tran, B.; and Madry, A. 2019\. Adversarial examples are not bugs, they are features. In _Advances in Neural Information Processing Systems_ , 125–136.
* Kingma and Welling (2013) Kingma, D. P.; and Welling, M. 2013. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_ .
* Kitamura (2018) Kitamura, F. 2018. Head CT - hemorrhage. https://www.kaggle.com/felipekitamura/head-ct-hemorrhage.
* Kodali et al. (2017) Kodali, N.; Abernethy, J.; Hays, J.; and Kira, Z. 2017. On convergence and stability of gans. _arXiv preprint arXiv:1705.07215_ .
* Krizhevsky (2009) Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
* Larsen et al. (2015) Larsen, A. B. L.; Sønderby, S. K.; Larochelle, H.; and Winther, O. 2015. Autoencoding beyond pixels using a learned similarity metric. _arXiv preprint arXiv:1512.09300_ .
* LeCun, Cortes, and Burges (2010) LeCun, Y.; Cortes, C.; and Burges, C. 2010. MNIST handwritten digit database.
* Madry et al. (2017) Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. _arXiv preprint arXiv:1706.06083_ .
* Mao, Shen, and Yang (2016) Mao, X.-J.; Shen, C.; and Yang, Y.-B. 2016. Image restoration using convolutional auto-encoders with symmetric skip connections. _arXiv preprint arXiv:1606.08921_ .
* Martin and Lon (2017) Martin, A.; and Lon, B. 2017. Towards principled methods for training generative adversarial networks. In _NIPS 2016 Workshop on Adversarial Training. In review for ICLR_ , volume 2016.
* Nene, Nayar, and Murase (1996) Nene, S. A.; Nayar, S. K.; and Murase, H. 1996. Columbia Object Image Library (COIL-100). Technical report.
* Perera, Nallapati, and Xiang (2019) Perera, P.; Nallapati, R.; and Xiang, B. 2019. Ocgan: One-class novelty detection using gans with constrained latent representations. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2898–2906.
* Pidhorskyi, Almohsen, and Doretto (2018) Pidhorskyi, S.; Almohsen, R.; and Doretto, G. 2018. Generative probabilistic novelty detection with adversarial autoencoders. In _Advances in neural information processing systems_ , 6822–6833.
* Ruff et al. (2018) Ruff, L.; Vandermeulen, R.; Goernitz, N.; Deecke, L.; Siddiqui, S. A.; Binder, A.; Müller, E.; and Kloft, M. 2018. Deep one-class classification. In _International conference on machine learning_ , 4393–4402.
* Sabokrou et al. (2018) Sabokrou, M.; Khalooei, M.; Fathy, M.; and Adeli, E. 2018. Adversarially learned one-class classifier for novelty detection. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 3379–3388.
* Salimans et al. (2016) Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; and Chen, X. 2016. Improved techniques for training gans. In _Advances in neural information processing systems_ , 2234–2242.
* Schlegl et al. (2017) Schlegl, T.; Seeböck, P.; Waldstein, S. M.; Schmidt-Erfurth, U.; and Langs, G. 2017. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In _International conference on information processing in medical imaging_ , 146–157. Springer.
* Soltanolkotabi, Candes et al. (2012) Soltanolkotabi, M.; Candes, E. J.; et al. 2012. A geometric analysis of subspace clustering with outliers. _The Annals of Statistics_ 40(4): 2195–2238.
* Sultani, Chen, and Shah (2018) Sultani, W.; Chen, C.; and Shah, M. 2018. Real-world anomaly detection in surveillance videos. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 6479–6488.
* Tramèr and Boneh (2019) Tramèr, F.; and Boneh, D. 2019. Adversarial training and robustness for multiple perturbations. In _Advances in Neural Information Processing Systems_ , 5858–5868.
* Tsakiris and Vidal (2018) Tsakiris, M. C.; and Vidal, R. 2018. Dual principal component pursuit. _The Journal of Machine Learning Research_ 19(1): 684–732.
* Tsipras et al. (2018) Tsipras, D.; Santurkar, S.; Engstrom, L.; Turner, A.; and Madry, A. 2018. Robustness may be at odds with accuracy. _arXiv preprint arXiv:1805.12152_ .
* Wald (1945) Wald, A. 1945. Statistical decision functions which minimize the maximum risk. _Annals of Mathematics_ 265–280.
* Wang, Sun, and Yu (2019) Wang, J.; Sun, S.; and Yu, Y. 2019. Multivariate Triangular Quantile Maps for Novelty Detection. In _Advances in Neural Information Processing Systems_ , 5061–5072.
* Xia et al. (2015) Xia, Y.; Cao, X.; Wen, F.; Hua, G.; and Sun, J. 2015. Learning discriminative reconstructions for unsupervised outlier removal. In _Proceedings of the IEEE International Conference on Computer Vision_ , 1511–1519.
* Xiao, Rasul, and Vollgraf (2017) Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_ .
* Xu, Caramanis, and Sanghavi (2010) Xu, H.; Caramanis, C.; and Sanghavi, S. 2010. Robust PCA via outlier pursuit. In _Advances in Neural Information Processing Systems_ , 2496–2504.
* Zeiler and Fergus (2014) Zeiler, M. D.; and Fergus, R. 2014. Visualizing and understanding convolutional networks. In _European conference on computer vision_ , 818–833. Springer.
* Zhai et al. (2016) Zhai, S.; Cheng, Y.; Lu, W.; and Zhang, Z. 2016. Deep structured energy based models for anomaly detection. _arXiv preprint arXiv:1605.07717_ .
* Zhao et al. (2017) Zhao, Y.; Deng, B.; Shen, C.; Liu, Y.; Lu, H.; and Hua, X.-S. 2017. Spatio-temporal autoencoder for video anomaly detection. In _Proceedings of the 25th ACM international conference on Multimedia_ , 1933–1941.
* Zhou and Paffenroth (2017) Zhou, C.; and Paffenroth, R. C. 2017. Anomaly detection with robust deep autoencoders. In _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , 665–674.
* Zhu, Zeng, and Wang (2010) Zhu, W.; Zeng, N.; and Wang, N. 2010. Sensitivity, Specificity, Accuracy, Associated Confidence Interval and ROC Analysis with Practical SAS Implementations. _Health Care and Life Sciences_ .
* Zong et al. (2018) Zong, B.; Song, Q.; Min, M. R.; Cheng, W.; Lumezanu, C.; Cho, D.; and Chen, H. 2018. Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In _International Conference on Learning Representations_.
*[AE]: Autoencoders
*[DAE]: Denoising Autoencoder
*[ARAE]: Adversarially Robust trained Autoencoder
*[GAN]: Generative Adversarial Network
*[MSE]: Mean Squared Error
*[PGD]: Projected Gradient Descent
*[SGD]: Stochastic Gradient Descent